{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:07.049423Z" }, "title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "", "affiliation": {}, "email": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Gustavo", "middle": [ "A" ], "last": "Gim\u00e9nez-Lugo", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart \u03c8 University of Zurich", "location": {} }, "email": "" }, { "first": "Ivan", "middle": [ "Vladimir" ], "last": "Meza Ruiz", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "", "affiliation": {}, 
"email": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved ChrF scores 12.97 points higher than the baseline, when averaged across languages.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "This paper presents the results of the 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. The shared task featured two independent tracks, and participants submitted machine translation systems for up to 10 indigenous languages. Overall, 8 teams participated with a total of 214 submissions. We provided training sets consisting of data collected from various sources, as well as manually translated sentences for the development and test sets. An official baseline trained on this data was also provided. 
Team submissions featured a variety of architectures, including both statistical and neural models, and for the majority of languages, many teams were able to considerably improve over the baseline. The best performing systems achieved ChrF scores 12.97 points higher than the baseline, when averaged across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many of the world's languages, including languages native to the Americas, receive worryingly little attention from NLP researchers. According to Glottolog (Nordhoff and Hammarstr\u00f6m, 2012) , 86 language families and 95 language isolates can be found in the Americas, and many of them are labeled as endangered. From an NLP perspective, the development of language technologies has the potential to help language communities and activists in the documentation, promotion and revitalization of their languages (Mager et al., 2018b; Galla, 2016) . There have been recent initiatives to promote research on languages of the Americas (Fern\u00e1ndez et al., 2013; Coler and Homola, 2014; Gutierrez-Vasques, 2015; Mager and Meza, 2018; Ortega et al., 2020; Zhang et al., 2020; Schwartz et al., 2020; Barrault et al., 2020) . 
*The first three authors contributed equally.", "cite_spans": [ { "start": 156, "end": 188, "text": "(Nordhoff and Hammarstr\u00f6m, 2012)", "ref_id": "BIBREF43" }, { "start": 508, "end": 529, "text": "(Mager et al., 2018b;", "ref_id": "BIBREF36" }, { "start": 530, "end": 542, "text": "Galla, 2016)", "ref_id": "BIBREF19" }, { "start": 629, "end": 653, "text": "(Fern\u00e1ndez et al., 2013;", "ref_id": "BIBREF15" }, { "start": 654, "end": 677, "text": "Coler and Homola, 2014;", "ref_id": "BIBREF8" }, { "start": 678, "end": 702, "text": "Gutierrez-Vasques, 2015;", "ref_id": "BIBREF21" }, { "start": 703, "end": 724, "text": "Mager and Meza, 2018;", "ref_id": "BIBREF37" }, { "start": 725, "end": 745, "text": "Ortega et al., 2020;", "ref_id": "BIBREF44" }, { "start": 746, "end": 765, "text": "Zhang et al., 2020;", "ref_id": "BIBREF58" }, { "start": 766, "end": 788, "text": "Schwartz et al., 2020;", "ref_id": "BIBREF53" }, { "start": 789, "end": 811, "text": "Barrault et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The AmericasNLP 2021 Shared Task on Open Machine Translation (OMT) aimed to bring research on indigenous and endangered languages more into the focus of the NLP community. As the official shared task training sets, we provided a collection of publicly available parallel corpora ( \u00a73). Additionally, all participants were allowed to use other existing datasets or create their own resources for training in order to improve their systems. Each language pair used in the shared task consisted of an indigenous language and a high-resource language (Spanish). The languages belong to a diverse set of language families: Aymaran, Arawak, Chibchan, Tupi-Guarani, Uto-Aztecan, Oto-Manguean, Quechuan, and Panoan. 
The ten language pairs included in the shared task are: Quechua-Spanish, Wixarika-Spanish, Shipibo-Konibo-Spanish, Ash\u00e1ninka-Spanish, Rar\u00e1muri-Spanish, Nahuatl-Spanish, Otom\u00ed-Spanish, Aymara-Spanish, Guarani-Spanish, and Bribri-Spanish. For development and testing, we used parallel sentences belonging to a new natural language inference dataset for the 10 indigenous languages featured in our shared task, which is a manual translation of the Spanish version of the multilingual XNLI dataset (Conneau et al., 2018) . For a complete description of this dataset we refer the reader to Ebrahimi et al. (2021) .", "cite_spans": [ { "start": 1203, "end": 1225, "text": "(Conneau et al., 2018)", "ref_id": "BIBREF9" }, { "start": 1294, "end": 1316, "text": "Ebrahimi et al. (2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Together with the data, we also provided: a simple baseline based on the small transformer architecture (Vaswani et al., 2017) proposed together with the FLORES dataset (Guzm\u00e1n et al., 2019) ; and a description of challenges and particular characteristics for all provided resources 1 . We established two tracks: one where models may be trained on the development set after hyperparameter tuning (Track 1), and one where models may not be trained directly on the development set (Track 2) .", "cite_spans": [ { "start": 104, "end": 126, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF56" }, { "start": 169, "end": 190, "text": "(Guzm\u00e1n et al., 2019)", "ref_id": "BIBREF23" }, { "start": 484, "end": 493, "text": "(Track 2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Machine translation for indigenous languages often presents unique challenges. 
As many indigenous languages do not have a strong written tradition, orthographic rules are not well defined or standardized, and even if they are regulated, oftentimes native speakers do not follow them or create their own adapted versions. Simply normalizing the data is generally not a viable option, as even the definition of what constitutes a morpheme or an orthographic word is frequently ill-defined. Furthermore, the huge dialectal variability among those languages, even from one village to the next, adds further complexity to the task. We describe the particular challenges for each language in \u00a73.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Eight teams participated in the AmericasNLP 2021 Shared Task on OMT. Most teams submitted systems in both tracks and for all 10 language pairs, yielding a total of 214 submissions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the limited availability of resources and the significant dialectal, orthographic and domain challenges, we designed our task as an unrestricted machine translation shared task: we called it open machine translation to emphasize that participants were free to use any resources they could find. Possible resources could, for instance, include existing or newly created parallel data, dictionaries, tools, or pretrained models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Open Machine Translation", "sec_num": "2.1" }, { "text": "We invited submissions to two different tracks: Systems in Track 1 were allowed to use the development set as part of the training data, since this is a common practice in the machine translation community. 
Systems in Track 2 were not allowed to be trained directly on the development set, mimicking a more realistic low-resource setting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Open Machine Translation", "sec_num": "2.1" }, { "text": "In order to be able to evaluate a large number of systems on all 10 languages, we used automatic metrics for our primary evaluation. Our main metric, which determined the official ranking of systems, was ChrF (Popovi\u0107, 2015) . We made this choice due to certain properties of our languages, such as word boundaries not being standardized for all languages and many languages being polysynthetic, resulting in a small number of words per sentence. We further reported BLEU scores (Papineni et al., 2002) for all systems and languages.", "cite_spans": [ { "start": 209, "end": 224, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF49" }, { "start": 479, "end": 502, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF46" } ], "ref_spans": [], "eq_spans": [], "section": "Primary Evaluation", "sec_num": "2.2" }, { "text": "To gain additional insight into the strengths and weaknesses of the top-performing submissions, we further performed a supplementary manual evaluation for two language pairs and a limited number of systems, using a subset of the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Evaluation", "sec_num": "2.3" }, { "text": "We asked our annotators to provide ratings of system outputs using separate 5-point scales for adequacy and fluency. The annotation was performed by the translator who created the test datasets. The expert received the source sentence in Spanish, the reference in the indigenous language, and an anonymized system output. In addition to the baseline, we considered the 3 highest ranked systems according to our main metric, and randomly selected 100 sentences for each language. 
The following were the descriptions of the ratings as provided to the expert annotator in Spanish (translated into English here for convenience):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Evaluation", "sec_num": "2.3" }, { "text": "Adequacy The output sentence expresses the meaning of the reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Evaluation", "sec_num": "2.3" }, { "text": "The original meaning is not contained at all. 2. Bad: Some words or phrases allow the reader to guess the content. 3. Neutral. 4. Sufficiently good: The original meaning is understandable, but some parts are unclear or incorrect. 5. Excellent: The meaning of the output is the same as that of the reference.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extremely bad:", "sec_num": "1." }, { "text": "Fluency The output sentence is easily readable and looks like a human-produced text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extremely bad:", "sec_num": "1." }, { "text": "The output text does not belong to the target language. 2. Bad: The output sentence is hardly readable. 3. Neutral. 4. Sufficiently good: The output seems like a human-produced text in the target language, but contains weird mistakes. 5. Excellent: The output seems like a human-produced text in the target language, and is readable without issues. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extremely bad:", "sec_num": "1." }, { "text": "In this section, we will present the languages and datasets featured in our shared task. 
Figure 1 additionally provides an overview of the languages, their linguistic families, and the number of parallel sentences with Spanish.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Languages and Datasets", "sec_num": "3" }, { "text": "For system development and testing, we leveraged individual pairs of parallel sentences from AmericasNLI (Ebrahimi et al., 2021) . This dataset is a translation of the Spanish version of XNLI (Conneau et al., 2018) into our 10 indigenous languages. It was not publicly available until after the conclusion of the competition, preventing participants from accidentally including test data in their training sets. For more information regarding the creation of the dataset, we refer the reader to (Ebrahimi et al., 2021) .", "cite_spans": [ { "start": 106, "end": 129, "text": "(Ebrahimi et al., 2021)", "ref_id": null }, { "start": 503, "end": 526, "text": "(Ebrahimi et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Development and Test Sets", "sec_num": "3.1" }, { "text": "We collected publicly available datasets in all 10 languages and provided them to the shared task participants as a starting point. We will now introduce the languages and the training datasets, explaining similarities and differences between training sets on the one hand and development and test sets on the other.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "Spanish-Wixarika Wixarika (also known as Huichol) with ISO code hch is spoken in Mexico and belongs to the Uto-Aztecan linguistic family. The training, development and test sets all belong to the same dialectal variation, Wixarika of Zoquipan, and use the same orthography. 
However, word boundaries are not always marked according to the same criteria in the development/test sets and the training set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "The training data (Mager et al., 2018a ) is a translation of the fairy tales of Hans Christian Andersen and contains loanwords and code-switching.", "cite_spans": [ { "start": 18, "end": 38, "text": "(Mager et al., 2018a", "ref_id": "BIBREF35" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "Spanish-Nahuatl Nahuatl is a Uto-Aztecan language spoken in Mexico and El Salvador, with a wide dialectal variation (around 30 variants). For each main dialect a specific ISO 639-3 code is available. 2 There is a lack of consensus regarding the orthographic standard. This is very noticeable in the training data: the train corpus (Gutierrez-Vasques et al., 2016) has dialectal, domain, orthographic and diachronic variation (on the Nahuatl side). However, the majority of entries are closer to a Classical Nahuatl orthographic \"standard\". The development and test datasets were translated into modern Nahuatl. In particular, the translations belong to Nahuatl Central/Nahuatl de la Huasteca (Hidalgo y San Luis Potos\u00ed) dialects. To bring them closer to the training corpus, an orthographic normalization was applied: a simple rule-based approach based on the most predictable orthographic changes between modern varieties and Classical Nahuatl.", "cite_spans": [ { "start": 332, "end": 364, "text": "(Gutierrez-Vasques et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "Spanish-Guarani Guarani is mostly spoken in Paraguay, Bolivia, Argentina and Brazil. It belongs to the Tupian language family (ISO gnw, gun, gug, gui, grn, nhd). 
The training corpus for Guarani (Chiruzzo et al., 2020) was collected from web sources (blogs and news articles) that contained a mix of dialects, from pure Guarani to Jopara, a mixed variety that combines Guarani with Spanish neologisms. The development and test corpora, on the other hand, are in standard Paraguayan Guarani.", "cite_spans": [ { "start": 194, "end": 217, "text": "(Chiruzzo et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "Spanish-Bribri Bribri is a Chibchan language spoken in southern Costa Rica (ISO code bzd). The training set for Bribri was extracted from six sources (Feldman and Coto-Solano, 2020; Margery, 2005; Jara Murillo, 2018a; Constenla et al., 2004 ; Jara Murillo and Garc\u00eda Segura, 2013; Jara Murillo, 2018b; Flores Sol\u00f3rzano, 2017), including a dictionary, a grammar, two language learning textbooks, one storybook and the transcribed sentences from 2 ISO 639-3 codes for the Nahuatl languages: nci, nhn, nch, ncx, naz, nln, nhe, ngu, azz, nhq, nhk, nhx, nhp, ncl, nhm, nhy, ncj, nht, nlv, ppl, nhz, npl, nhc, nhv, nhi, nhg, nuz, nhw, nsu, and xpo. one spoken corpus. 
The sentences belong to three major dialects: Amubri, Coroma and Salitre.", "cite_spans": [ { "start": 150, "end": 181, "text": "(Feldman and Coto-Solano, 2020;", "ref_id": "BIBREF14" }, { "start": 182, "end": 196, "text": "Margery, 2005;", "ref_id": "BIBREF38" }, { "start": 197, "end": 217, "text": "Jara Murillo, 2018a;", "ref_id": "BIBREF27" }, { "start": 218, "end": 240, "text": "Constenla et al., 2004", "ref_id": null }, { "start": 464, "end": 781, "text": "Nahuatl languages: nci, nhn, nch, ncx, naz, nln, nhe, ngu, azz, nhq, nhk, nhx, nhp, ncl, nhm, nhy, ncj, nht, nlv, ppl, nhz, npl, nhc, nhv, nhi, nhg, nuz, nhw, nsu, and xpo.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "There are numerous sources of variation in the Bribri data (Feldman and Coto-Solano, 2020): 1) There are several different orthographies, which use different diacritics for the same words. 2) The Unicode encoding of visually similar diacritics differs among authors. 3) There is phonetic and lexical variation across dialects. 4) There is considerable idiosyncratic variation between writers, including variation in word boundaries (e.g. ik\u00ede vs. i kie \"it is called\"). In order to build a standardized training set, an intermediate orthography was used to make these different forms comparable and learning easier. All of the training sentences are comparable in domain; they come from either traditional stories or language learning examples. Because of the nature of the texts, there is very little code-switching into Spanish. This is different from regular Bribri conversation, which would contain more borrowings from Spanish and more code-switching. 
The development and test sentences were translated by a speaker of the Amubri dialect and transformed into the intermediate orthography.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training Data", "sec_num": "3.2" }, { "text": "Rar\u00e1muri is a Uto-Aztecan language, spoken in northern Mexico (ISO: tac, twr, tar, tcu, thh). Training data for Rar\u00e1muri consists of phrases extracted from the Rar\u00e1muri dictionary of Brambila (1976) . However, we could not find any description of the dialectal variation to which these examples belong. The development and test sets are translations from Spanish into the highlands Rar\u00e1muri variant (tar), and may differ from the training set. As with many polysynthetic languages, challenges can arise because morpheme and word boundaries are not clear-cut and lack consensus. Even with a standard orthography and within the same dialectal variant, native speakers may apply different standards to define word boundaries.", "cite_spans": [ { "start": 189, "end": 204, "text": "Brambila (1976)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-Rar\u00e1muri", "sec_num": null }, { "text": "Spanish-Quechua Quechua is a family of languages spoken in Argentina, Bolivia, Colombia, Ecuador, Peru, and Chile, with many ISO codes for its varieties (quh, cqu, qvn, qvc, qur, quy, quk, qvo, qve, and quf). The development and test sets are translated into the standard version of Southern Quechua, specifically the Quechua Chanka (Ayacucho, code: quy) variety. This variety is spoken in different regions of Peru, and it can be understood in different areas of other countries, such as Bolivia or Argentina. This is the variant used on the Quechua Wikipedia, and by Microsoft in its translations of software into Quechua. Southern Quechua includes different Quechua variants, such as Quechua Cuzco (quz) and Quechua Ayacucho (quy). Training datasets are provided for both variants. 
These datasets were created from JW300 (Agi\u0107 and Vuli\u0107, 2019) , which consists of Jehovah's Witness texts, sentences extracted from the official dictionary of the Ministry of Education (MINEDU), and miscellaneous dictionary entries and samples which have been collected and reviewed by Huarcaya Taquiri (2020).", "cite_spans": [ { "start": 824, "end": 846, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-Rar\u00e1muri", "sec_num": null }, { "text": "Spanish-Aymara Aymara is an Aymaran language spoken in Bolivia, Peru, and Chile (ISO codes aym, ayr, ayc). The development and test sets are translated into the Central Aymara variant (ayr), specifically Aymara La Paz jilata, the largest variant. This is similar to the variant of the available training set, which is obtained from Global Voices (Prokopidis et al., 2016) (and published in OPUS (Tiedemann, 2012) ), a news portal translated by volunteers. However, the texts may exhibit different writing styles and are not necessarily edited.", "cite_spans": [ { "start": 394, "end": 411, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-Rar\u00e1muri", "sec_num": null }, { "text": "Spanish--Shipibo-Konibo Shipibo-Konibo is a Panoan language spoken in Peru (ISO shp and kaq). The training sets for Shipibo-Konibo have been obtained from different sources and translators: sources include translations of a sample from the Tatoeba dataset (G\u00f3mez Montoya et al., 2019) , translated sentences from books for bilingual education (Galarreta et al., 2017) , and dictionary entries and examples (Loriot et al., 1993) . The translated text was created by a bilingual teacher and follows the most recent guidelines of the Ministry of Education in Peru; the third source, however, consists of parallel sentences extracted from an old dictionary. 
The development and test sets were created following the same official convention as the translated training sets.", "cite_spans": [ { "start": 263, "end": 284, "text": "Montoya et al., 2019)", "ref_id": "BIBREF20" }, { "start": 343, "end": 367, "text": "(Galarreta et al., 2017)", "ref_id": "BIBREF18" }, { "start": 406, "end": 427, "text": "(Loriot et al., 1993)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-Rar\u00e1muri", "sec_num": null }, { "text": "Ash\u00e1ninka is an Arawakan language (ISO: cni) spoken in Peru and Brazil. Training data was created by collecting texts from different domains such as traditional stories, educational texts, and environmental laws for the Amazonian region (Ortega et al., 2020; Romano, Rub\u00e9n and Richer, Sebasti\u00e1n, 2008; Mihas, 2011) . Not all of the texts are translated into Spanish; a small fraction is translated into Portuguese, because a dialect of pan-Ashaninka is also spoken in the state of Acre in Brazil. The texts come from different pan-Ashaninka dialects and have been normalized using the AshMorph (Ortega et al., 2020) . Many neologisms have not spread to speakers in different communities. The translator of the development and test sets only translated the words and concepts that are well known in the communities, whereas other terms are preserved in Spanish. 
Moreover, the development and test sets were created following the official writing convention proposed by the Peruvian Government and taught in bilingual schools.", "cite_spans": [ { "start": 237, "end": 258, "text": "(Ortega et al., 2020;", "ref_id": "BIBREF44" }, { "start": 259, "end": 301, "text": "Romano, Rub\u00e9n and Richer, Sebasti\u00e1n, 2008;", "ref_id": null }, { "start": 302, "end": 314, "text": "Mihas, 2011)", "ref_id": "BIBREF41" }, { "start": 736, "end": 757, "text": "(Ortega et al., 2020)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-Ash\u00e1ninka", "sec_num": null }, { "text": "Spanish--Otom\u00ed Otom\u00ed (also known as H\u00f1\u00e4h\u00f1u, H\u00f1\u00e4h\u00f1o, \u00d1hato, \u00d1\u00fbhm\u00fb, depending on the region) is an Oto-Manguean language spoken in Mexico (ISO codes: ott, otn, otx, ote, otq, otz, otl, ots, otm). The training set 3 was collected from a set of different sources, which implies that the text contains more than one dialectal variation and orthographic standard; most texts, however, belong to the Valle del Mezquital dialect (ote). This was especially challenging for the translation task, since the development and test sets are from the \u00d1\u00fbhm\u00fb de Ixtenco, Tlaxcala, variant (otz), which also has its own orthographic system. This variant is especially endangered, as fewer than 100 elders still speak it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanish-Ash\u00e1ninka", "sec_num": null }, { "text": "In addition to the provided datasets, participants also used additional publicly available parallel data, monolingual corpora or newly collected data sets. The most common datasets were JW300 (Agi\u0107 and Vuli\u0107, 2019) and the Bible's New Testament (Mayer and Cysouw, 2014; Christodouloupoulos and Steedman, 2015; McCarthy et al., 2020) . Besides those, GlobalVoices (Prokopidis et al., 2016) and datasets available at OPUS (Tiedemann, 2012) were added. 
New datasets were extracted from constitutions, dictionaries, and educational books. For monolingual text, Wikipedia was most commonly used, provided one was available for the language.", "cite_spans": [ { "start": 192, "end": 214, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 245, "end": 269, "text": "(Mayer and Cysouw, 2014;", "ref_id": "BIBREF39" }, { "start": 270, "end": 309, "text": "Christodouloupoulos and Steedman, 2015;", "ref_id": "BIBREF7" }, { "start": 310, "end": 332, "text": "McCarthy et al., 2020)", "ref_id": "BIBREF40" }, { "start": 363, "end": 388, "text": "(Prokopidis et al., 2016)", "ref_id": "BIBREF50" }, { "start": 420, "end": 437, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF55" } ], "ref_spans": [], "eq_spans": [], "section": "External Data Used by Participants", "sec_num": "3.3" }, { "text": "3 Otom\u00ed online corpus: https://tsunkua.elotl.mx/about/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "External Data Used by Participants", "sec_num": "3.3" }, { "text": "We will now describe our baseline as well as all submitted systems. An overview of all teams and the main ideas going into their submissions is shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Baseline and Submitted Systems", "sec_num": "4" }, { "text": "Our baseline system was a transformer-based sequence-to-sequence model (Vaswani et al., 2017) . We employed the hyperparameters proposed by Guzm\u00e1n et al. (2019) for a low-resource scenario. We implemented the model using Fairseq. The implementation of the baseline can be found in the official shared task repository. 4", "cite_spans": [ { "start": 71, "end": 93, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF56" }, { "start": 140, "end": 160, "text": "Guzm\u00e1n et al. 
(2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Baseline", "sec_num": "4.1" }, { "text": "The team of the University of British Columbia (UBC-NLP; Billah-Nagoudi et al., 2021) participated for all ten language pairs and in both tracks. They used an encoder-decoder transformer model based on T5 (Raffel et al., 2020) . This model was pretrained on a dataset consisting of 10 indigenous languages and Spanish, which the team collected from different sources such as the Bible and Wikipedia, totaling 1.17 GB of text. However, given that some of the languages have more available data than others, this dataset is unbalanced in favor of languages like Nahuatl, Guarani, and Quechua. The team also proposed a two-stage fine-tuning method: first fine-tuning on the entire dataset, and then only on the target languages.", "cite_spans": [ { "start": 205, "end": 226, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "University of British Columbia", "sec_num": "4.2" }, { "text": "The University of Helsinki (Helsinki; V\u00e1zquez et al., 2021) participated for all ten language pairs in both tracks. This team did an extensive exploration of the existing datasets, and collected additional resources both from commonly used sources such as the Bible and Wikipedia, as well as other minor sources such as constitutions. Monolingual data was used to generate paired sentences through back-translation, and these parallel examples were added to the existing dataset. Then, a normalization process was performed using existing tools, and the aligned data was further filtered. The quality of the data was also considered, and each dataset was assigned a weight depending on an estimate of its noisiness. The team used a transformer sequence-to-sequence model trained in two steps. 
For their main submission, they first trained on data that was 90% Spanish-English and 10% indigenous languages, and then changed the data proportion to 50% Spanish-English and 50% indigenous languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Helsinki", "sec_num": "4.3" }, { "text": "The team of the University of Copenhagen (CoAStaL) submitted systems for both tracks (Bollmann et al., 2021) . They focused on additional data collection and tried to improve the results with low-resource techniques. The team found that even generating correct words in the output was difficult, and that phrase-based statistical machine translation (PB-SMT) systems work well compared to state-of-the-art neural models. Interestingly, the team introduced a baseline that mimicked the target language using a character-trigram distribution and length constraints, without any knowledge of the source sentence. This random text generation achieved even better results than some of the other submitted systems. The team also reported failed experiments, in which character-based neural machine translation (NMT), pretrained transformers, language model priors, and graph convolution encoders using UD annotations did not produce any meaningful results.", "cite_spans": [ { "start": 85, "end": 108, "text": "(Bollmann et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "CoAStaL", "sec_num": "4.4" }, { "text": "The team of the Pontificia Universidad Cat\u00f3lica del Per\u00fa (REPUcs; Moreno, 2021) submitted systems for the Spanish-Quechua language pair in both tracks. The team collected external data from three different sources and analyzed the domain disparity between this training data and the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "REPUcs", "sec_num": "4.5" }, { "text": "To solve the problem of domain mismatch, they decided to collect additional data that could be a better match for the target domain.
They used data from a handbook (Iter and Ortiz-C\u00e1rdenas, 2019), a lexicon, 5 and poems from the web (Duran, 2010) . 6 Their model is a transformer encoder-decoder architecture with SentencePiece (Kudo and Richardson, 2018) tokenization. Together with the existing parallel corpora, the new paired data was used for fine-tuning on top of a pretrained Spanish-English translation model. The team submitted two versions of their system: the first was fine-tuned only on JW300+ data, while the second additionally leveraged the newly collected dataset.", "cite_spans": [ { "start": 230, "end": 242, "text": "(Duran, 2010", "ref_id": "BIBREF12" }, { "start": 325, "end": 352, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "REPUcs", "sec_num": "4.5" }, { "text": "The team of the University of Tokyo (UTokyo; Zheng et al., 2021) submitted systems for all languages and both tracks. A multilingual pretrained encoder-decoder model (mBART; was used, implemented with the Fairseq toolkit Table 3 : Results of Track 1 (development set used for training) for all systems and language pairs. The results are ranked by the official metric of the shared task: ChrF. One team decided to send an anonymous submission (Anonym).
Best results are shown in bold; they are significantly better than those of the second-place team (for each language pair) according to the Wilcoxon signed-rank test and Pitman's permutation test with p<0.05 (Dror et al., 2018).", "cite_spans": [ { "start": 45, "end": 64, "text": "Zheng et al., 2021)", "ref_id": "BIBREF59" }, { "start": 656, "end": 675, "text": "(Dror et al., 2018)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 221, "end": 228, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "UTokyo", "sec_num": "4.6" }, { "text": "The model was pretrained on various high-resource languages and then fine-tuned for each target language using the officially provided data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "UTokyo", "sec_num": "4.6" }, { "text": "The team of the National Research Council Canada (NRC-CNRC; Knowles et al., 2021) submitted systems for the Spanish to Wix\u00e1rika, Nahuatl, Rar\u00e1muri and Guarani language pairs for both tracks. Due to ethical considerations, the team decided not to use external data, and restricted themselves to the data provided for the shared task. All data was preprocessed with standard Moses tools (Koehn et al., 2007) . The submitted systems were based on a Transformer model, and used BPE for tokenization. The team experimented with multilingual models pretrained on either 3 or 4 languages, finding that the 4-language model achieved higher performance.", "cite_spans": [ { "start": 60, "end": 81, "text": "Knowles et al., 2021)", "ref_id": "BIBREF30" }, { "start": 385, "end": 405, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "NRC-CNRC", "sec_num": "4.7" }, { "text": "Additionally, the team trained a Translation Memory (Simard and Fujita, 2012) using half of the examples of the development set.
Surprisingly, despite its small amount of training data, this system outperformed the team's Track 2 submission for Rar\u00e1muri.", "cite_spans": [ { "start": 51, "end": 76, "text": "(Simard and Fujita, 2012)", "ref_id": "BIBREF54" } ], "ref_spans": [], "eq_spans": [], "section": "NRC-CNRC", "sec_num": "4.7" }, { "text": "The team Tamalli 7 (Parida et al., 2021) participated in Track 1 for all 10 language pairs. The team used an IBM Model 2 for SMT, and a transformer model for NMT. The team's NMT models were trained in two settings: one-to-one, with one model being trained per target language, and one-to-many, where decoder weights were shared across languages and a language embedding layer was added to the decoder. They submitted 5 systems per language, which differed in their hyperparameter choices and training setup.", "cite_spans": [ { "start": 19, "end": 40, "text": "(Parida et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Tamalli", "sec_num": "4.8" }, { "text": "The complete results for all systems submitted to Track 1 are shown in Table 3 . Submission 2 of the Helsinki team achieved first place for all language pairs. Interestingly, for all language pairs, the Helsinki team also achieved the second-best result with their Submission 1. Submission 3 was less successful, achieving third place on three pairs. The NRC-CNRC team achieved third place for Wix\u00e1rika, Nahuatl, and Rar\u00e1muri, and fourth for Guarani. The lower automatic scores of their systems may also be partly due to the team not using additional datasets. The REPUcs system obtained the third-best result for Quechua, the only language pair they participated in. CoAStaL's first system, a PB-SMT model, achieved third place for Bribri, Otom\u00ed, and Shipibo-Konibo, and fourth place for Ashaninka. This suggests that SMT is still competitive for low-resource languages. UTokyo and UBC-NLP were less successful than the other approaches.
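The evaluation behind these rankings, segment-level ChrF plus a paired significance test, can be approximated in pure Python. This is a simplified sketch: the shared task used an established ChrF implementation, and the exact test settings may differ.

```python
import random
from collections import Counter

def chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Simplified sentence-level ChrF: character n-gram F-score with
    recall weighted by beta, averaged over n-gram orders 1..max_n."""
    hyp, ref = hyp.replace(" ", ""), ref.replace(" ", "")
    scores = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        if not hyp_ngrams or not ref_ngrams:
            continue  # segment shorter than n characters
        overlap = sum((hyp_ngrams & ref_ngrams).values())
        prec = overlap / sum(hyp_ngrams.values())
        rec = overlap / sum(ref_ngrams.values())
        if prec + rec == 0:
            scores.append(0.0)
        else:
            scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return 100 * sum(scores) / len(scores) if scores else 0.0

def permutation_test(scores_a, scores_b, trials=10000, seed=0):
    """Paired (Pitman-style) permutation test: p-value for the null
    hypothesis that the two systems' segment scores are exchangeable."""
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    observed = abs(sum(diffs))
    hits = sum(
        abs(sum(d if rng.random() < 0.5 else -d for d in diffs)) >= observed
        for _ in range(trials)
    )
    return hits / trials
```

Two submissions are then compared by computing `chrf` per test segment and running `permutation_test` on the two resulting score lists.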
Finally, we attribute the poor performance of the anonymous submission to a possible bug. Since our baseline system was not trained on the development set, no specific baseline was available for this track.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Track 1", "sec_num": "5.1" }, { "text": "All results for Track 2, including those of our baseline system, are shown in Table 5 . Most submissions outperformed the baseline by a large margin. As in Track 1, the best system was from the Helsinki team (submission 5), winning 9 out of 10 language pairs. REPUcs achieved the best score for Spanish-Quechua, the only language pair they submitted results for. Their pretraining on Spanish-English and the newly collected dataset proved to be successful.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Track 2", "sec_num": "5.2" }, { "text": "Second places were more diverse for Track 2 than for Track 1. The NRC-CNRC team achieved second place for two languages (Wixarika and Guarani), UTokyo achieved second place for three languages (Aymara, Nahuatl and Otom\u00ed), and the Helsinki team came in second for Quechua. Tamalli only participated in Track 2, with 4 systems per language. Their most successful one was submission 1, a word-based SMT system. An interesting submission for this track was CoAStaL's submission 2, which produced randomly generated output mimicking the target language distribution.
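This babbling baseline is simple enough to sketch: collect character-trigram statistics from target-language text, then sample characters up to a length budget while ignoring the source sentence entirely. The smoothing and length heuristics below are our own illustrative assumptions, not CoAStaL's exact recipe.

```python
import random
from collections import Counter, defaultdict

def train_trigrams(text: str):
    """Count which character follows each two-character history."""
    counts = defaultdict(Counter)
    padded = "^^" + text  # '^' marks the start-of-text history
    for i in range(len(padded) - 2):
        counts[padded[i:i + 2]][padded[i + 2]] += 1
    return counts

def babble(counts, length: int, seed: int = 0) -> str:
    """Sample `length` characters from the trigram distribution,
    with no conditioning on the source sentence at all."""
    rng = random.Random(seed)
    out, history = [], "^^"
    for _ in range(length):
        dist = counts.get(history) or counts["^^"]  # back off to start
        chars, weights = zip(*dist.items())
        ch = rng.choices(chars, weights=weights)[0]
        out.append(ch)
        history = history[1] + ch
    return "".join(out)
```

In practice the length budget would be tied to the source sentence length, which is the only signal such a baseline uses.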
This system consistently outperformed the official baseline and even surpassed other approaches for most languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Track 2", "sec_num": "5.2" }, { "text": "As explained in \u00a72, we also conducted a small human evaluation of system outputs based on adequacy and fluency on a 5-point scale, which was performed by a professional translator for two language pairs: Spanish to Shipibo-Konibo and Table 4 : Results of the NLI analysis. * indicates that the average score is not directly comparable as the number of languages differs for the given system. Otom\u00ed. 8 This evaluation was performed given the extremely low automatic evaluation scores, and the natural question about the usefulness of the outputs of MT systems at the current state of the art. While we selected two languages as a sample to get a better approximation to this question, further studies are needed to draw stronger conclusions. Figure 1 shows the adequacy and fluency scores annotated for the Spanish-Shipibo-Konibo and Spanish-Otom\u00ed language pairs, considering the baseline and the three highest-ranked systems according to ChrF. For both languages, we observe that the adequacy scores are similar between all systems except for Helsinki, the best-ranked submission according to the automatic evaluation metric, which has more variance than the others. However, the average score is low, around 2, which means that only a few words or phrases express the meaning of the reference.", "cite_spans": [ { "start": 393, "end": 401, "text": "Otom\u00ed. 8", "ref_id": null } ], "ref_spans": [ { "start": 235, "end": 242, "text": "Table 4", "ref_id": null }, { "start": 742, "end": 750, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Supplementary Evaluation Results", "sec_num": "5.3" }, { "text": "Looking at fluency, there is less similarity between the Shipibo-Konibo and Otom\u00ed annotations.
For Shipibo-Konibo, there is no clear difference between the systems in terms of their average scores. We note that Tamalli's system received the largest share of the relatively highest scores. For Otom\u00ed, the three submitted systems are at least slightly better than the baseline on average, though only by one level of the scale. The scores for fluency are similar to those for adequacy in this case. Moreover, according to the annotations, the output translations in Shipibo-Konibo were closer to human-produced texts than those in Otom\u00ed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supplementary Evaluation Results", "sec_num": "5.3" }, { "text": "We also show the relationship between ChrF and the adequacy and fluency scores in Figure 2 . However, there does not seem to be a correlation between the automatic metric and the manually assigned scores.", "cite_spans": [], "ref_spans": [ { "start": 82, "end": 90, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Supplementary Evaluation Results", "sec_num": "5.3" }, { "text": "One approach for zero-shot transfer learning of a sequence classification task is the translate-train approach, where a translation system is used to translate high-resource labeled training data into the target language. In the case of pretrained multilingual models, these machine-translated examples are then used for fine-tuning. For our analysis, we used various shared task submissions to create different sets of translated training data. We then trained a natural language inference (NLI) model using this translated data, and used the downstream NLI performance as an extrinsic evaluation of translation quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Analysis: NLI", "sec_num": "5.4" }, { "text": "Our experimental setup was identical to that of Ebrahimi et al. (2021) . We focused only on submissions from Track 2, and analyzed the Helsinki-5 and NRC-CNRC-1 systems. We present results in Table 4 .
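The translate-train pipeline can be outlined in a few lines. Here `translate` is a hypothetical stand-in for any of the submitted MT systems, and the toy example is invented for illustration:

```python
def translate(text: str, target_lang: str) -> str:
    """Hypothetical MT stand-in; a real setup would call one of the
    shared task systems here."""
    return f"[{target_lang}] {text}"

def build_translate_train(spanish_nli, target_lang: str):
    """Translate premise and hypothesis of each labeled Spanish NLI
    example into the target language, keeping the gold label."""
    return [
        (translate(prem, target_lang), translate(hyp, target_lang), label)
        for prem, hyp, label in spanish_nli
    ]

es_nli = [("Un hombre camina.", "Una persona se mueve.", "entailment")]
quy_train = build_translate_train(es_nli, "quy")
# quy_train is then used to fine-tune a multilingual NLI model; its
# downstream accuracy serves as an extrinsic measure of MT quality.
```

The better the translations, the less label noise this process introduces, which is why downstream NLI accuracy can serve as a proxy for translation quality.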
NLI performance using the Helsinki system's translations far outperforms the baseline on average, and using the NRC-CNRC system also improves over the baseline. For the four languages covered by all systems, we can see that the ranking of NLI performance matches that of the automatic ChrF evaluation. Between the Helsinki and Baseline systems, this ranking also holds for every other language except Bribri, where the Baseline achieves around 3 percentage points higher accuracy. Overall, this evaluation both confirms the ranking created by the ChrF scores and provides strong evidence supporting the use of translation-based approaches for zero-shot tasks.", "cite_spans": [ { "start": 40, "end": 62, "text": "Ebrahimi et al. (2021)", "ref_id": null } ], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Analysis: NLI", "sec_num": "5.4" }, { "text": "To extend the analysis in the previous sections, Tables 6 and 7 show output samples from the best-ranked system (Helsinki-5) for Shipibo-Konibo and Otom\u00ed, respectively. In each table, we present the top-3 outputs ranked by ChrF and the top-3 ranked by adequacy and fluency.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "For Shipibo-Konibo, in Table 6 , we observe that the first three outputs (with the highest ChrF) are quite close to the reference. Surprisingly, the adequacy annotation of the first sample is relatively low. We can also observe that many subwords are present in both the reference and the system's output, but not entire words, which shows why BLEU may not be a useful metric to evaluate performance. However, the subwords appear in a different order and are concatenated with different morphemes, which impacts the fluency.
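Simple string diagnostics make such observations reproducible. A stdlib sketch of two checks (verbatim token overlap with the Spanish source as a rough proxy for loanwords and code-switching, plus a sentence-final punctuation check; both are our illustrative assumptions, not the organizers' tooling):

```python
def source_overlap_rate(src: str, ref: str) -> float:
    """Fraction of reference tokens that appear verbatim in the
    Spanish source -- a rough proxy for loanwords and code-switching."""
    src_tokens = set(src.lower().split())
    ref_tokens = ref.lower().split()
    if not ref_tokens:
        return 0.0
    return sum(tok in src_tokens for tok in ref_tokens) / len(ref_tokens)

def final_punct_consistent(src: str, ref: str) -> bool:
    """True if source and reference agree on whether the segment ends
    with sentence-final punctuation."""
    def ends_punct(s: str) -> bool:
        s = s.rstrip()
        return bool(s) and s[-1] in ".?!"
    return ends_punct(src) == ends_punct(ref)
```

Running these over the released development and test sets would quantify how often references copy Spanish material and how consistent the punctuation is.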
Concerning the most adequate and fluent samples, we still observe a high proportion of correct subwords in the output, and we can infer that the different ordering or concatenation of morphemes did not affect the original meaning of the sentence.", "cite_spans": [], "ref_spans": [ { "start": 23, "end": 30, "text": "Table 6", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "For Otom\u00ed, in Table 7 , the scenario was less positive, as the ChrF scores are lower than for Shipibo-Konibo, on average. This was echoed in the top-3 outputs, which are very short and contain words or phrases that are preserved in Spanish in the reference translation. Concerning the most adequate and fluent outputs, we observed very little subword overlap (less than for Shipibo-Konibo), which suggests that the outputs preserve part of the meaning of the source but express it differently from the reference. Moreover, we noticed some inconsistencies in the punctuation, which affects the overall ChrF score.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "6" }, { "text": "In summary, there are some elements to explore further in the rest of the outputs: How many loanwords, or how much code-switched text from Spanish, are present in the reference translations? Is the punctuation, e.g., a period at the end of a segment, consistent across all source and reference sentences?
However, most teams achieved substantial improvements over the official baseline. We found that text cleaning and normalization, as well as domain adaptation, played large roles in the best-performing systems. The best NMT systems were multilingual approaches of limited size (rather than massively multilingual models). SMT models also performed well, outperforming larger pretrained submissions. OUT: N'a ra b\u00e4tsi bi du ko ya kut'a. C: 13.9 SRC: \u00c9l recibe ayuda con sus comidas y ropa. A: 4 REF: na di hi\u00e2ni m\u00e2hte nen ynu yn\u00f1uni xi \u00e1hxo F: 4
Top-3 samples have the highest ChrF (C) scores, whereas the bottom-3 have the best adequacy (A) and fluency (F) values.", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "213", "sec_num": null }, { "text": "https://github.com/AmericasNLP/americasnlp2021/ blob/main/data/information_datasets.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/AmericasNLP/americasnlp2021", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.inkatour.com/dico/ 6 https://lyricstranslate.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Participating universities: Idiap Research Institute, City University of New York, BITS-India, Universidad Aut\u00f3noma Metropolitana-M\u00e9xico, Ghent University, and Universidad Polit\u00e9cnica de Tulancingo-M\u00e9xico", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In the WMT campaigns, it is common to perform a crowdsourced evaluation with several annotators. However, we cannot follow that procedure given the low chance of finding native speakers of indigenous languages on crowdsourcing platforms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the translators of the test and development sets, who made this shared task possible: Francisco Morales (Bribri), Feliciano Torres R\u00edos and Esau Zumaeta Rojas (Ash\u00e1ninka), Perla Alvarez Britez (Guarani), Silvino Gonz\u00e1lez de la Cr\u00faz (Wixarika), Giovany Mart\u00ednez Sebasti\u00e1n, Pedro Kapoltitan, and Jos\u00e9 Antonio (Nahuatl), Jos\u00e9 Mateo Lino Cajero Vel\u00e1zquez (Otom\u00ed), Liz Ch\u00e1vez (Shipibo-Konibo), and Mar\u00eda del C\u00e1rmen Sotelo Holgu\u00edn (Rar\u00e1muri).
We also thank our sponsors for their financial support: Facebook AI Research, Microsoft Research, Google Research, the Institute of Computational Linguistics at the University of Zurich, the NAACL Emerging Regions Funding, Comunidad Elotl, and Snorkel AI. Additionally, we want to thank all participants for their submissions and their effort to advance NLP research for the indigenous languages of the Americas. Manuel Mager received financial support from a DAAD Doctoral Research Grant for this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "JW300: A Wide-Coverage Parallel Corpus for Low-Resource Languages", "authors": [ { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3204--3210", "other_ids": { "DOI": [ "10.18653/v1/P19-1310" ] }, "num": null, "urls": [], "raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A Wide-Coverage Parallel Corpus for Low-Resource Languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy.
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Proceedings of the Fifth Conference on Machine Translation", "authors": [ { "first": "Lo\u00efc", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "Magdalena", "middle": [], "last": "Biesialska", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Marta", "middle": [ "R" ], "last": "Costa-Juss\u00e0", "suffix": "" }, { "first": "Christian", "middle": [], "last": "Federmann", "suffix": "" }, { "first": "Yvette", "middle": [], "last": "Graham", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Grundkiewicz", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Eric", "middle": [], "last": "Joanis", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Ljube\u0161i\u0107", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" }, { "first": "Makoto", "middle": [], "last": "Morishita", "suffix": "" }, { "first": "Masaaki", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Toshiaki", "middle": [], "last": "Nakazawa", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "1--55", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Santanu Pal, Matt Post, and Marcos Zampieri. 2020.
Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "IndT5: A Text-to-Text Transformer for 10 Indigenous Languages", "authors": [], "year": null, "venue": "Proceedings of the AmericasNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "El Moatez Billah-Nagoudi, Wei-Rui Chen, Muhammad Abdul-Mageed, and Hasan Cavusoglu. 2021. IndT5: A Text-to-Text Transformer for 10 Indigenous Languages. In Proceedings of the AmericasNLP 2021", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Miryam de Lhoneux, and Anders S\u00f8gaard. 2021. Moses and the character-based random babbling baseline: CoAStaL at AmericasNLP 2021 shared task", "authors": [ { "first": "Marcel", "middle": [], "last": "Bollmann", "suffix": "" }, { "first": "Rahul", "middle": [], "last": "Aralikatte", "suffix": "" }, { "first": "H\u00e9ctor", "middle": [], "last": "Murrieta-Bello", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Hershcovich", "suffix": "" } ], "year": null, "venue": "Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online.
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcel Bollmann, Rahul Aralikatte, H\u00e9ctor Murrieta-Bello, Daniel Hershcovich, Miryam de Lhoneux, and Anders S\u00f8gaard. 2021. Moses and the character-based random babbling baseline: CoAStaL at AmericasNLP 2021 shared task. In Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Diccionario rar\u00e1muri-castellano (tarahumar)", "authors": [ { "first": "David", "middle": [], "last": "Brambila", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Brambila. 1976. Diccionario rar\u00e1muri-castellano (tarahumar). Obra Nacional de la buena Prensa.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Development of a Guarani-Spanish parallel corpus", "authors": [ { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Amarilla", "suffix": "" }, { "first": "Adolfo", "middle": [], "last": "R\u00edos", "suffix": "" }, { "first": "Gustavo", "middle": [ "Gim\u00e9nez" ], "last": "Lugo", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2629--2633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis Chiruzzo, Pedro Amarilla, Adolfo R\u00edos, and Gustavo Gim\u00e9nez Lugo. 2020. Development of a Guarani-Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2629-2633, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A massively parallel corpus: the bible in 100 languages.
Language resources and evaluation", "authors": [ { "first": "Christos", "middle": [], "last": "Christodouloupoulos", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2015, "venue": "", "volume": "49", "issue": "", "pages": "375--395", "other_ids": {}, "num": null, "urls": [], "raw_text": "Christos Christodouloupoulos and Mark Steedman. 2015. A massively parallel corpus: the bible in 100 languages. Language resources and evaluation, 49(2):375-395.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Rule-based machine translation for Aymara", "authors": [ { "first": "Matthew", "middle": [], "last": "Coler", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Homola", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "67--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Coler and Petr Homola. 2014. Rule-based machine translation for Aymara, pages 67-80. Cambridge University Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "XNLI: Evaluating cross-lingual sentence representations", "authors": [ { "first": "Alexis", "middle": [], "last": "Conneau", "suffix": "" }, { "first": "Ruty", "middle": [], "last": "Rinott", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Adina", "middle": [], "last": "Williams", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Bowman", "suffix": "" }, { "first": "Holger", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2475--2485", "other_ids": { "DOI": [ "10.18653/v1/D18-1269" ] }, "num": null, "urls": [], "raw_text": "Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018.
XNLI: Evaluating cross-lingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "The hitchhiker's guide to testing statistical significance in natural language processing", "authors": [ { "first": "Rotem", "middle": [], "last": "Dror", "suffix": "" }, { "first": "Gili", "middle": [], "last": "Baumer", "suffix": "" }, { "first": "Segev", "middle": [], "last": "Shlomov", "suffix": "" }, { "first": "Roi", "middle": [], "last": "Reichart", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1383--1392", "other_ids": { "DOI": [ "10.18653/v1/P18-1128" ] }, "num": null, "urls": [], "raw_text": "Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Lengua general de los incas", "authors": [ { "first": "Maximiliano", "middle": [], "last": "Duran", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "2021--2024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximiliano Duran. 2010. Lengua general de los incas. http://quechua-ayacucho.org/es/index_es.php. Accessed: 2021-03-15.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Ngoc Thang Vu, and Katharina Kann. 2021.
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages", "authors": [ { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vladimir", "suffix": "" }, { "first": "Gustavo", "middle": [ "A" ], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. 
AmericasNLI: Evaluating Zero-shot Natural Language Understanding of Pretrained Multilingual Models in Truly Low-resource Languages.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri", "authors": [ { "first": "Isaac", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3965--3976", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.351" ] }, "num": null, "urls": [], "raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3965-3976, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Design and implementation of an \"Web API\" for the automatic translation Colombia's language pairs: Spanish-Wayuunaiki case", "authors": [ { "first": "Ornela", "middle": [ "Quintero" ], "last": "Dayana Iguar\u00e1n Fern\u00e1ndez", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Gamboa", "suffix": "" }, { "first": "Oscar El\u00edas Herrera", "middle": [], "last": "Molina Atencia", "suffix": "" }, { "first": "", "middle": [], "last": "Bedoya", "suffix": "" } ], "year": 2013, "venue": "Communications and Computing (COLCOM)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dayana Iguar\u00e1n Fern\u00e1ndez, Ornela Quintero Gamboa, Jose Molina Atencia, and Oscar El\u00edas Herrera Bedoya. 2013.
Design and implementation of an \"Web API\" for the automatic translation Colombia's language pairs: Spanish-Wayuunaiki case. In Communications and Computing (COLCOM), 2013", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Corpus oral pandialectal de la lengua bribri", "authors": [ { "first": "", "middle": [], "last": "Sof\u00eda Flores Sol\u00f3rzano", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sof\u00eda Flores Sol\u00f3rzano. 2017. Corpus oral pandialectal de la lengua bribri. http://bribri.net.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo", "authors": [ { "first": "Ana-Paula", "middle": [], "last": "Galarreta", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Melgar", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "238--244", "other_ids": { "DOI": [ "10.26615/978-954-452-049-6_033" ] }, "num": null, "urls": [], "raw_text": "Ana-Paula Galarreta, Andr\u00e9s Melgar, and Arturo Oncevay. 2017. Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria.
INCOMA Ltd.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Indigenous language revitalization, promotion, and education: Function of digital technology", "authors": [ { "first": "Candace", "middle": [], "last": "Kaleimamoowahinekapu", "suffix": "" }, { "first": "Galla", "middle": [], "last": "", "suffix": "" } ], "year": 2016, "venue": "Computer Assisted Language Learning", "volume": "29", "issue": "7", "pages": "1137--1151", "other_ids": {}, "num": null, "urls": [], "raw_text": "Candace Kaleimamoowahinekapu Galla. 2016. Indigenous language revitalization, promotion, and education: Function of digital technology. Computer Assisted Language Learning, 29(7):1137-1151.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A continuous improvement framework of machine translation for Shipibo-konibo", "authors": [ { "first": "H\u00e9ctor Erasmo G\u00f3mez", "middle": [], "last": "Montoya", "suffix": "" }, { "first": "Kervy Dante Rivas", "middle": [], "last": "Rojas", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Erasmo G\u00f3mez Montoya, Kervy Dante Rivas Rojas, and Arturo Oncevay. 2019. A continuous improvement framework of machine translation for Shipibo-konibo. In Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, pages 17-23, Dublin, Ireland.
European Association for Machine Translation.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Bilingual lexicon extraction for a distant language pair using a small parallel corpus", "authors": [ { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop", "volume": "", "issue": "", "pages": "154--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ximena Gutierrez-Vasques. 2015. Bilingual lexicon extraction for a distant language pair using a small parallel corpus. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 154-160.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Axolotl: a web accessible parallel corpus for Spanish-Nahuatl", "authors": [ { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Hernandez Pompa", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4210--4214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4210-4214, Portoro\u017e, Slovenia.
European Language Resources Association (ELRA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English", "authors": [ { "first": "Francisco", "middle": [], "last": "Guzm\u00e1n", "suffix": "" }, { "first": "Peng-Jen", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Pino", "suffix": "" }, { "first": "Guillaume", "middle": [], "last": "Lample", "suffix": "" }, { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Marc'aurelio", "middle": [], "last": "Ranzato", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "6098--6111", "other_ids": { "DOI": [ "10.18653/v1/D19-1632" ] }, "num": null, "urls": [], "raw_text": "Francisco Guzm\u00e1n, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana.
Bachelor's thesis", "authors": [ { "first": "Diego", "middle": [], "last": "Huarcaya", "suffix": "" }, { "first": "Taquiri", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Huarcaya Taquiri. 2020. Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana. Bachelor's thesis, Universidad Peruana Uni\u00f3n.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Runasimita yachasun. M\u00e9todo de quechua", "authors": [ { "first": "Cesar", "middle": [], "last": "Iter", "suffix": "" }, { "first": "Zenobio", "middle": [], "last": "Ortiz-C\u00e1rdenas", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cesar Iter and Zenobio Ortiz-C\u00e1rdenas. 2019. Runasimita yachasun. M\u00e9todo de quechua, 1", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Instituto Franc\u00e9s de Estudios Andinos", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "edition. Instituto Franc\u00e9s de Estudios Andinos, Lima.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Gram\u00e1tica de la Lengua Bribri", "authors": [ { "first": "Carla", "middle": [], "last": "Victoria", "suffix": "" }, { "first": "Jara", "middle": [], "last": "Murillo", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carla Victoria Jara Murillo. 2018a. Gram\u00e1tica de la Lengua Bribri.
EDigital.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "I Tt\u00e8 Historias Bribris", "authors": [ { "first": "Carla", "middle": [], "last": "Victoria", "suffix": "" }, { "first": "Jara", "middle": [], "last": "Murillo", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carla Victoria Jara Murillo. 2018b. I Tt\u00e8 Historias Bribris, second edition. Editorial de la Universidad de Costa Rica.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Se' tt\u00f6' bribri ie Hablemos en bribri. EDigital", "authors": [ { "first": "Carla", "middle": [], "last": "Victoria", "suffix": "" }, { "first": "Jara", "middle": [], "last": "Murillo", "suffix": "" }, { "first": "Al\u00ed Garc\u00eda", "middle": [], "last": "Segura", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carla Victoria Jara Murillo and Al\u00ed Garc\u00eda Segura. 2013. Se' tt\u00f6' bribri ie Hablemos en bribri. EDigital.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "NRC-CNRC Machine Translation Systems for the 2021 AmericasNLP Shared Task", "authors": [ { "first": "Rebecca", "middle": [], "last": "Knowles", "suffix": "" }, { "first": "Darlene", "middle": [], "last": "Stewart", "suffix": "" }, { "first": "Samuel", "middle": [], "last": "Larkin", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Knowles, Darlene Stewart, Samuel Larkin, and Patrick Littell. 2021. NRC-CNRC Machine Translation Systems for the 2021 AmericasNLP Shared Task.
In Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th annual meeting of the association for computational linguistics companion", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation.
In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177-180.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Multilingual Denoising Pre-training for Neural Machine Translation.
Transactions of the Association for", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Computational Linguistics", "volume": "8", "issue": "", "pages": "726--742", "other_ids": { "DOI": [ "10.1162/tacl_a_00343" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual Denoising Pre-training for Neural Machine Translation. Transactions of the Association for Computational Linguistics, 8:726-742.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Probabilistic finite-state morphological segmenter for wixarika (huichol) language", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Di\u00f3nico", "middle": [], "last": "Carrillo", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza", "suffix": "" } ], "year": 2018, "venue": "Journal of Intelligent & Fuzzy Systems", "volume": "34", "issue": "5", "pages": "3081--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Di\u00f3nico Carrillo, and Ivan Meza. 2018a. Probabilistic finite-state morphological segmenter for wixarika (huichol) language.
Journal of Intelligent & Fuzzy Systems, 34(5):3081-3087.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Challenges of language technologies for the indigenous languages of the Americas", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza-Ruiz", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "55--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018b. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th International Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Association for Computational Linguistics.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Hacia la traducci\u00f3n autom\u00e1tica de las lenguas ind\u00edgenas de m\u00e9xico", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Digital Humanities Conference. The Association of Digital Humanities Organizations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager and Ivan Meza. 2018. Hacia la traducci\u00f3n autom\u00e1tica de las lenguas ind\u00edgenas de m\u00e9xico. In Proceedings of the 2018 Digital Humanities Conference.
The Association of Digital Humanities Organizations.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Diccionario Fraseol\u00f3gico Bribri-Espa\u00f1ol Espa\u00f1ol-Bribri", "authors": [ { "first": "Enrique", "middle": [ "Margery" ], "last": "", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Enrique Margery. 2005. Diccionario Fraseol\u00f3gico Bribri-Espa\u00f1ol Espa\u00f1ol-Bribri, second edition. Editorial de la Universidad de Costa Rica.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Creating a massively parallel bible corpus", "authors": [ { "first": "Thomas", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cysouw", "suffix": "" } ], "year": 2014, "venue": "Oceania", "volume": "135", "issue": "273", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel bible corpus.
Oceania, 135(273):40.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration", "authors": [ { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Wicks", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "David", "middle": [], "last": "Post", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2884--2892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nicolai, Matt Post, and David Yarowsky. 2020. The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9", "authors": [ { "first": "Elena", "middle": [], "last": "Mihas", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Mihas. 2011. A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9.
WI:Clarks Graphics.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "The REPU CS' spanish-quechua submission to the AmericasNLP 2021 shared task on open machine translation", "authors": [ { "first": "Oscar", "middle": [], "last": "Moreno", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oscar Moreno. 2021. The REPU CS' spanish-quechua submission to the AmericasNLP 2021 shared task on open machine translation. In Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Glottolog/Langdoc:Increasing the visibility of grey literature for low-density languages", "authors": [ { "first": "Sebastian", "middle": [], "last": "Nordhoff", "suffix": "" }, { "first": "Harald", "middle": [], "last": "Hammarstr\u00f6m", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "3289--3294", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Nordhoff and Harald Hammarstr\u00f6m. 2012. Glottolog/Langdoc:Increasing the visibility of grey literature for low-density languages. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 3289-3294, Istanbul, Turkey.
European Language Resources Association (ELRA).", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Overcoming resistance: The normalization of an Amazonian tribal language", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Richard", "middle": [ "Alexander" ], "last": "Castro-Mamani", "suffix": "" }, { "first": "Jaime Rafael Montoya", "middle": [], "last": "Samame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020. Overcoming resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Computational Linguistics.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "fairseq: A Fast, Extensible Toolkit for Sequence Modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "48--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael
Auli. 2019. fairseq: A Fast, Extensible Toolkit for Sequence Modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48-53.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Yashvardhan Sharma, and Petr Motlicek. 2021.
Open Machine Translation for Low Resource South American Languages (AmericasNLP 2021 Shared Task Contribution)", "authors": [ { "first": "Shantipriya", "middle": [], "last": "Parida", "suffix": "" }, { "first": "Subhadarshi", "middle": [], "last": "Panda", "suffix": "" }, { "first": "Amulya", "middle": [], "last": "Dash", "suffix": "" }, { "first": "Esau", "middle": [], "last": "Villatoro-Tello", "suffix": "" }, { "first": "A", "middle": [], "last": "Seza", "suffix": "" }, { "first": "Rosa", "middle": [ "M" ], "last": "Dogru\u00f6z", "suffix": "" }, { "first": "Amadeo", "middle": [], "last": "Ortega-Mendoza", "suffix": "" }, { "first": "", "middle": [], "last": "Hern\u00e1ndez", "suffix": "" } ], "year": null, "venue": "Proceedings of the AmericasNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shantipriya Parida, Subhadarshi Panda, Amulya Dash, Esau Villatoro-Tello, A. Seza Dogru\u00f6z, Rosa M. Ortega-Mendoza, Amadeo Hern\u00e1ndez, Yashvardhan Sharma, and Petr Motlicek. 2021. Open Machine Translation for Low Resource South American Languages (AmericasNLP 2021 Shared Task Contribution). In Proceedings of the AmericasNLP 2021
Association for Computational Linguistics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": { "DOI": [ "10.18653/v1/W15-3049" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Parallel Global Voices: a Collection of Multilingual Corpora with Citizen Media Stories", "authors": [ { "first": "Prokopis", "middle": [], "last": "Prokopidis", "suffix": "" }, { "first": "Vassilis", "middle": [], "last": "Papavassiliou", "suffix": "" }, { "first": "Stelios", "middle": [], "last": "Piperidis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "900--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prokopis Prokopidis, Vassilis Papavassiliou, and Stelios Piperidis. 2016. Parallel Global Voices: a Collection of Multilingual Corpora with Citizen Media Stories. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 900-905, Portoro\u017e, Slovenia.
European Language Resources Association (ELRA).", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer.
Journal of Machine Learning Research, 21:1-67.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Neural polysynthetic language modelling", "authors": [ { "first": "Lane", "middle": [], "last": "Schwartz", "suffix": "" }, { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "Lori", "middle": [], "last": "Levin", "suffix": "" }, { "first": "Christo", "middle": [], "last": "Kirov", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "" }, { "first": "Chi-Kiu", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Prud'hommeaux", "suffix": "" }, { "first": "Hayley", "middle": [], "last": "Hyunji", "suffix": "" }, { "first": "Kenneth", "middle": [], "last": "Park", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Steimel", "suffix": "" }, { "first": "", "middle": [], "last": "Knowles", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.05477" ] }, "num": null, "urls": [], "raw_text": "Lane Schwartz, Francis Tyers, Lori Levin, Christo Kirov, Patrick Littell, Chi-kiu Lo, Emily Prud'hommeaux, Hyunji Hayley Park, Kenneth Steimel, Rebecca Knowles, et al. 2020. Neural polysynthetic language modelling. arXiv preprint arXiv:2005.05477.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "A poor man's translation memory using machine translation evaluation metrics", "authors": [ { "first": "Michel", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 10th Biennial Conference of the Association for Machine Translation in the Americas (AMTA)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Simard and Atsushi Fujita. 2012. A poor man's translation memory using machine translation evaluation metrics.
In Proceedings of the 10th Biennial Conference of the Association for Machine Translation in the Americas (AMTA).", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Parallel Data, Tools and Interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel Data, Tools and Interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), Istanbul, Turkey. European Language Resources Association (ELRA).", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Attention is All you Need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008.
Curran Associates, Inc.", "links": null }, "BIBREF57": { "ref_id": "b57", "title": "The Helsinki submission to the AmericasNLP shared task", "authors": [ { "first": "Ra\u00fal", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ra\u00fal V\u00e1zquez, Yves Scherrer, Sami Virpioja, and J\u00f6rg Tiedemann. 2021. The Helsinki submission to the AmericasNLP shared task. In Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null }, "BIBREF58": { "ref_id": "b58", "title": "ChrEn: Cherokee-English machine translation for endangered language revitalization", "authors": [ { "first": "Shiyue", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Benjamin", "middle": [], "last": "Frey", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "577--595", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.43" ] }, "num": null, "urls": [], "raw_text": "Shiyue Zhang, Benjamin Frey, and Mohit Bansal. 2020. ChrEn: Cherokee-English machine translation for endangered language revitalization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 577-595, Online.
Association for Computational Linguistics.", "links": null }, "BIBREF59": { "ref_id": "b59", "title": "Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining", "authors": [ { "first": "Francis", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Machel", "middle": [], "last": "Reid", "suffix": "" }, { "first": "Edison", "middle": [], "last": "Marrese-Taylor", "suffix": "" }, { "first": "Yutaka", "middle": [], "last": "Matsuo", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis Zheng, Machel Reid, Edison Marrese-Taylor, and Yutaka Matsuo. 2021. Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining. In Proceedings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas, Online. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Adequacy and fluency distribution scores for Shipibo-Konibo and Otom\u00ed.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Relationship between ChrF scores and annotations for adequacy (left) and fluency (right).", "uris": null, "type_str": "figure" }, "TABREF1": { "num": null, "type_str": "table", "content": "", "text": "The languages featured in the AmericasNLP 2021 Shared Task on OMT, their ISO codes, language families and dataset statistics. For the origins of the datasets, please refer to the text.", "html": null }, "TABREF3": { "num": null, "type_str": "table", "content": "
", "text": "", "html": null }, "TABREF5": { "num": null, "type_str": "table", "content": "
System aym bzd cni gn hch nah oto quy shp tar Avg.
Baseline 49.33 52.00 42.80 55.87 41.07 54.07 36.50 59.87 52.00 43.73 48.72
Helsinki-5 57.60 48.93 55.33 62.40 55.33 62.33 49.33 60.80 65.07 58.80 57.59
NRC-CNRC-1 - - - 57.20 50.40 58.94 - - - 53.47 55.00 *
", "text": "", "html": null }, "TABREF7": { "num": null, "type_str": "table", "content": "
Figure panels: (a) Shipibo-Konibo: Adequacy; (b) Otom\u00ed: Adequacy; (c) Shipibo-Konibo: Fluency; (d) Otom\u00ed: Fluency. Each panel shows the percentage distribution of scores (1-5) for the Baseline, Helsinki.5, Tamalli.1, and UTokyo.3/UTokyo.4 systems.
", "text": "", "html": null }, "TABREF8": { "num": null, "type_str": "table", "content": "
Scores Sentences
C: 66.7 SRC: Un ni\u00f1o muri\u00f3 de los cinco.
A: 1 REF: Westiora bakera mawata iki pichika batiayax.
F: 4 OUT: Westiora bakera pichika mawata iki.
C: 60.9 SRC: S\u00e9 que no puedes o\u00edrme.
A: 4 REF: Eanra onanke min ea ninkati atipanyama.
F: 3 OUT: Minra ea ninkati atipanyamake.
C: 60.1 SRC: Necesito un minuto para recoger mis pensamientos.
A: 4 REF: Eara westiora minuto kenai nokon shinanbo biti kopi.
F: 3 OUT: Westiora serera ea kenai nokon shinanbo biti.
C: 57.1 SRC: Hoy no he ido, as\u00ed que no lo he visto.
A: 5 REF: Ramara ea kama iki, jakopira en oinama iki.
F: 5 OUT: Ramara ea kayamake, jaskarakopira en oinyamake
C: 53.6 SRC: El U2 tom\u00f3 mucha pel\u00edcula.
A: 5 REF: Nato U2ninra kikin icha pel\u00edcula bike.
F: 5 OUT: U2ninra icha pelicula bike.
C: 48.3 SRC: No ten\u00edamos televisi\u00f3n.
A: 5 REF: Noara televisi\u00f3nma ika iki.
F: 5 OUT: Televisi\u00f3nmara noa iwanke.
", "text": "Translation outputs of the best system (Helsinki) for Shipibo-Konibo. Top-3 samples have the highest ChrF (C) scores, whereas the bottom-3 have the best adequacy (A) and fluency (F) values.", "html": null }, "TABREF9": { "num": null, "type_str": "table", "content": "
Scores Sentences
C: 49.6 SRC: Locust Hill oh claro, s\u00ed, genial
A: 1 REF: Locust Hill handa h\u00e2
F: 4 OUT: Locust Hill ohbuho j\u00e4'i
C: 42.2 SRC: Kennedy habl\u00f3 con los pilotos.
A: 4 REF: Kennedy bi \u00f1ama nen ya pilotos.
F: 3 OUT: Kennedy bi \u00f1\u00e4ui ya pihnyo.
C: 32.2 SRC: \u00bfTe gustan los libros de Harry Potter o no?
A: 4 REF: \u00bf di ho-y ya ynttothoma on Harry Potter a hin?
F: 3 OUT: \u00bf Gi pefihu na r\u00e4 libro ra Harry Potter o hina?
C: 13.1 SRC: Un ni\u00f1o muri\u00f3 de los cinco.
A: 5 REF: n\u0101 mehtzi bid\u00fb on ya qda
F: 5
", "text": "Translation outputs of the best system (Helsinki) for Otom\u00ed. Top-3 samples have the highest ChrF (C) scores, whereas the bottom-3 have the best adequacy (A) and fluency (F) values.", "html": null } } } }