{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:20.095352Z"
},
"title": "Low-Resource Machine Translation Using Cross-Lingual Language Model Pretraining",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Zheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "francis@weblab.t.u-tokyo.ac.jp"
},
{
"first": "Machel",
"middle": [],
"last": "Reid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "machelreid@weblab.t.u-tokyo.ac.jp"
},
{
"first": "Edison",
"middle": [],
"last": "Marrese-Taylor",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Tokyo",
"location": {}
},
"email": "matsuo@weblab.t.u-tokyo.ac.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Ash\u00e1ninka, Guaran\u00ed, Wixarika, N\u00e1huatl, H\u00f1\u00e4h\u00f1u, Quechua, Shipibo-Konibo, and Rar\u00e1muri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes UTokyo's submission to the AmericasNLP 2021 Shared Task on machine translation systems for indigenous languages of the Americas. We present a low-resource machine translation system that improves translation accuracy using cross-lingual language model pretraining. Our system uses an mBART implementation of FAIRSEQ to pretrain on a large set of monolingual data from a diverse set of high-resource languages before finetuning on 10 low-resource indigenous American languages: Aymara, Bribri, Ash\u00e1ninka, Guaran\u00ed, Wixarika, N\u00e1huatl, H\u00f1\u00e4h\u00f1u, Quechua, Shipibo-Konibo, and Rar\u00e1muri. On average, our system achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural machine translation (NMT) systems have produced translations of commendable accuracy under large-data training conditions but are data-hungry (Zoph et al., 2016) and perform poorly in low-resource languages, where parallel data is lacking (Koehn and Knowles, 2017) .",
"cite_spans": [
{
"start": 149,
"end": 168,
"text": "(Zoph et al., 2016)",
"ref_id": "BIBREF33"
},
{
"start": 247,
"end": 272,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many of the indigenous languages of the Americas lack adequate amounts of parallel data, so existing NMT systems have difficulty producing accurate translations for these languages. Additionally, many of these indigenous languages exhibit linguistic properties that are uncommon in the high-resource languages, such as English or Chinese, that are typically used to train NMT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One striking feature of many indigenous American languages is their polysynthesis (Brinton, 1885; Payne, 2014) . Polysynthetic languages display high levels of inflection and are morphologically complex. However, NMT systems are weak in translating \"low-frequency words belonging to highly-inflected categories (e.g. verbs)\" (Koehn and Knowles, 2017) . Quechua, a low-resource, polysynthetic American language, has on average twice as many morphemes per word compared to English (Ortega et al., 2020b) , which makes machine translation difficult. Mager et al. (2018b) show that information is often lost when translating polysynthetic languages into Spanish due to a misalignment of morphemes. Thus, existing NMT systems are not appropriate for indigenous American languages, which are low-resource, polysynthetic languages.",
"cite_spans": [
{
"start": 82,
"end": 97,
"text": "(Brinton, 1885;",
"ref_id": "BIBREF2"
},
{
"start": 98,
"end": 110,
"text": "Payne, 2014)",
"ref_id": "BIBREF25"
},
{
"start": 325,
"end": 350,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF14"
},
{
"start": 479,
"end": 501,
"text": "(Ortega et al., 2020b)",
"ref_id": "BIBREF22"
},
{
"start": 547,
"end": 567,
"text": "Mager et al. (2018b)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Despite the scarcity of parallel data for these indigenous languages, some are spoken widely and have a pressing need for improved machine translation. For example, Quechua is spoken by more than 10 million people in South America, but some Quechua speakers are not able to access health care due to a lack of Spanish ability (Freire, 2011) .",
"cite_spans": [
{
"start": 326,
"end": 340,
"text": "(Freire, 2011)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Other languages lack a large population of speakers and may appear to have relatively low demand for translation, but many of these languages are also crucial in many domains such as health care, the maintenance of cultural history, and international security (Klavans, 2018) . Improved translation techniques for low-resource, polysynthetic languages are thus of great value.",
"cite_spans": [
{
"start": 260,
"end": 275,
"text": "(Klavans, 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In light of this, we participated in the AmericasNLP 2021 Shared Task to help further the development of new approaches to low-resource machine translation of polysynthetic languages, which are not commonly studied in natural language processing. The task consisted of producing translations from Spanish to 10 different indigenous American languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe our system designed for the AmericasNLP 2021 Shared Task, which achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline on average. Our system improves translation accuracy by using monolingual data to improve understanding of natural language before finetuning for each of the 10 indigenous languages. We selected a variety of widely-spoken languages across the Americas, Asia, Europe, Africa, and Oceania for the monolingual data we used during our pretraining, allowing our model to learn from a wide range of language families and linguistic features. These monolingual data were acquired from CC100 1 . We use these monolingual data as part of our pretraining, as this has been shown to improve results with smaller parallel datasets (Conneau and Lample, 2019; Liu et al., 2020; Song et al., 2019) .",
"cite_spans": [
{
"start": 811,
"end": 837,
"text": "(Conneau and Lample, 2019;",
"ref_id": "BIBREF5"
},
{
"start": 838,
"end": 855,
"text": "Liu et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 856,
"end": 874,
"text": "Song et al., 2019)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The parallel data between Spanish and the indigenous American languages were provided by AmericasNLP 2021 (Mager et al., 2021) . We have summarized some important details of the training data and development/test sets (Ebrahimi et al., 2021) below. More details about these data can be found in the AmericasNLP 2021 official repository 2 .",
"cite_spans": [
{
"start": 106,
"end": 126,
"text": "(Mager et al., 2021)",
"ref_id": "BIBREF19"
},
{
"start": 218,
"end": 241,
"text": "(Ebrahimi et al., 2021)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Aymara The Aymara-Spanish data came from translations by Global Voices and Facebook AI. The training data came primarily from Global Voices 3 (Prokopidis et al., 2016; Tiedemann, 2012) , but because translations were done by volunteers, the texts have potentially different writing styles. The development and test sets came from translations of Spanish texts into Aymara La Paz jilata, a Central Aymara variant.",
"cite_spans": [
{
"start": 142,
"end": 167,
"text": "(Prokopidis et al., 2016;",
"ref_id": "BIBREF28"
},
{
"start": 168,
"end": 184,
"text": "Tiedemann, 2012)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Bribri The Bribri-Spanish data (Feldman and Coto-Solano, 2020) came from six different sources (a dictionary, a grammar, two language learning textbooks, one storybook, and transcribed sentences from a spoken corpus) and three major dialects (Amubri, Coroma, and Salitre). Two different orthographies are widely used for Bribri, so an intermediate representation was used to facilitate training.",
"cite_spans": [
{
"start": 31,
"end": 62,
"text": "(Feldman and Coto-Solano, 2020)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Ash\u00e1ninka The Ash\u00e1ninka-Spanish data 4 were extracted and pre-processed by Richard Castro (Cushimariano Romano and Sebasti\u00e1n Q., 2008; Ortega et al., 2020a; Mihas, 2011) . Though the texts came from different pan-Ashaninka dialects, they were normalized using AshMorph (Ortega et al., 2020a) . The development and test sets came from translations of Spanish texts done by Feliciano Torres R\u00edos.",
"cite_spans": [
{
"start": 90,
"end": 134,
"text": "(Cushimariano Romano and Sebasti\u00e1n Q., 2008;",
"ref_id": "BIBREF6"
},
{
"start": 135,
"end": 156,
"text": "Ortega et al., 2020a;",
"ref_id": "BIBREF21"
},
{
"start": 157,
"end": 169,
"text": "Mihas, 2011)",
"ref_id": "BIBREF20"
},
{
"start": 269,
"end": 291,
"text": "(Ortega et al., 2020a)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Guaran\u00ed The Guaran\u00ed-Spanish data (Chiruzzo et al., 2020) consisted of training data from web sources (blogs and news articles) written in a mix of dialects and development and test sets written in pure Guaran\u00ed. Translations were provided by Perla Alvarez Britez.",
"cite_spans": [
{
"start": 33,
"end": 56,
"text": "(Chiruzzo et al., 2020)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Wixarika The Wixarika-Spanish data came from Mager et al. (2018a) . The training, development, and test sets all used the same dialect (Wixarika of Zoquipan) and orthography, though word boundaries were not consistent between the development/test and training sets. Translations were provided by Silvino Gonz\u00e1lez de la Cr\u00faz.",
"cite_spans": [
{
"start": 45,
"end": 65,
"text": "Mager et al. (2018a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "N\u00e1huatl The N\u00e1huatl-Spanish data came from Gutierrez-Vasques et al. (2016) . N\u00e1huatl has a wide dialectal variation and no standard orthography, but most of the training data were close to a Classical N\u00e1huatl orthographic \"standard.\" The development and test sets came from translations made from Spanish into modern N\u00e1huatl. An orthographic normalization was applied to these translations to make them closer to the Classical N\u00e1huatl orthography found in the training data. This normalization was done by employing a rule-based approach based on predictable orthographic changes between modern varieties and Classical N\u00e1huatl. Translations were provided by Giovany Martinez Sebasti\u00e1n, Jos\u00e9 Antonio, and Pedro Kapoltitan.",
"cite_spans": [
{
"start": 43,
"end": 74,
"text": "Gutierrez-Vasques et al. (2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "H\u00f1\u00e4h\u00f1u The H\u00f1\u00e4h\u00f1u-Spanish training data came from H\u00f1\u00e4h\u00f1u texts drawn from a set of different sources 5 , translated into Spanish. Most of these texts are in the Valle del Mezquital dialect. The development and test sets are in the \u00d1\u00fbhm\u00fb de Ixtenco, Tlaxcala variant. Translations were done by Jos\u00e9 Mateo Lino Cajero Vel\u00e1zquez.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "Quechua The training set for Quechua-Spanish data (Agi\u0107 and Vuli\u0107, 2019) came from Jehovah's Witnesses texts (available in OPUS), sentences extracted from the official dictionary of the Ministry of Education (MINEDU) in Peru for Quechua Ayacucho, and dictionary entries and samples collected and reviewed by Diego Huarcaya. Training sets were provided in both the Quechua Cuzco and Quechua Ayacucho variants, but our system only employed Quechua Ayacucho data during training. The development and test sets came from translations of Spanish text into Quechua Ayacucho, a standard version of Southern Quechua. Translations were provided by Facebook AI.",
"cite_spans": [
{
"start": 50,
"end": 72,
"text": "(Agi\u0107 and Vuli\u0107, 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Parallel Data",
"sec_num": "2.1.2"
},
{
"text": "The training set of the Shipibo-Konibo-Spanish data (Galarreta et al., 2017) was obtained from translations of flashcards and translations of sentences from books for bilingual education done by a bilingual teacher. Additionally, parallel sentences from a dictionary were used as part of the training data. The development and test sets came from translations from Spanish into Shipibo-Konibo done by Liz Ch\u00e1vez.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Galarreta et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shipibo-Konibo",
"sec_num": null
},
{
"text": "Rar\u00e1muri The training set of the Rar\u00e1muri-Spanish data came from a dictionary (Brambila, 1976) . The development and test sets came from translations from Spanish into the highlands Rar\u00e1muri by Mar\u00eda del C\u00e1rmen Sotelo Holgu\u00edn. The training set and development/test sets use different orthographies.",
"cite_spans": [
{
"start": 78,
"end": 94,
"text": "(Brambila, 1976)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shipibo-Konibo",
"sec_num": null
},
{
"text": "We tokenized all of our data together using SentencePiece (Kudo and Richardson, 2018) in preparation for our multilingual model. We used a vocabulary size of 8000 and a character coverage of 0.9995, as the wide variety of languages covered carries a rich character set.",
"cite_spans": [
{
"start": 58,
"end": 85,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
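{
"text": "As a minimal illustrative sketch (ours, not from the paper), the settings above map onto the SentencePiece Python API roughly as follows, assuming the combined multilingual corpus has been written to a hypothetical file all_languages.txt:\nimport sentencepiece as spm\n# Train one joint subword model over all languages with the reported settings.\nspm.SentencePieceTrainer.train(input='all_languages.txt', model_prefix='joint_spm', vocab_size=8000, character_coverage=0.9995)\n# Load the trained model and segment a sentence into subword pieces.\nsp = spm.SentencePieceProcessor(model_file='joint_spm.model')\nprint(sp.encode('Kamisaraki, kullakita.', out_type=str))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},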
{
"text": "Then, we sharded our data for faster processing. With our SentencePiece model and vocabulary, we used FAIRSEQ 6 (Ott et al., 2019) to build vocabularies and binarize our data.",
"cite_spans": [
{
"start": 112,
"end": 130,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "2.2"
},
{
"text": "We pretrained our model on the 20 languages described in Section 2.1 with an mBART (Liu et al., 2020) implementation of FAIRSEQ (Ott et al., 2019) . We pretrained on 32 NVIDIA V100 GPUs for three hours.",
"cite_spans": [
{
"start": 83,
"end": 101,
"text": "(Liu et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 128,
"end": 146,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretraining",
"sec_num": "2.3"
},
{
"text": "Due to the large variability in text data size between different languages, we used the exponential sampling technique used in Conneau and Lample (2019) ; Liu et al. (2020) , where the text is resampled according to smoothing parameter \u03b1 as follows:",
"cite_spans": [
{
"start": 127,
"end": 152,
"text": "Conneau and Lample (2019)",
"ref_id": "BIBREF5"
},
{
"start": 155,
"end": 172,
"text": "Liu et al. (2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing data across languages",
"sec_num": null
},
{
"text": "q_i = \\frac{p_i^{\\alpha}}{\\sum_{j=1}^{N} p_j^{\\alpha}} \\qquad (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing data across languages",
"sec_num": null
},
{
"text": "In equation 1, q_i refers to the resample probability for language i, given the multinomial distribution {q_i}_{i=1...N} with original sampling probability p_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing data across languages",
"sec_num": null
},
{
"text": "As we want our model to work well with the low-resource languages, we chose a smoothing parameter of \u03b1 = 0.25 (compared with \u03b1 = 0.7 used in mBART (Liu et al., 2020 )) to alleviate model bias towards the higher proportion of data from high-resource languages.",
"cite_spans": [
{
"start": 147,
"end": 164,
"text": "(Liu et al., 2020",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing data across languages",
"sec_num": null
},
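{
"text": "The resampling in equation 1 is straightforward to compute; the following is a minimal sketch (our own illustration, not code from the paper), assuming per-language example counts are available:\nimport numpy as np\n\ndef resample_probs(counts, alpha=0.25):\n    # p_i: each language's share of the total data (original sampling probability).\n    p = np.asarray(counts, dtype=float) / np.sum(counts)\n    # Exponentiate by the smoothing parameter alpha and renormalize (equation 1).\n    return p ** alpha / np.sum(p ** alpha)\n\n# With alpha = 0.25, a 100:1 imbalance in raw data shrinks to roughly 3:1 under q.\nprint(resample_probs([1000000, 10000], alpha=0.25))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Balancing data across languages",
"sec_num": null
},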
{
"text": "We used a six-layer Transformer with a hidden dimension of 512 and a feed-forward size of 2048. We set the maximum sequence length to 512, with a batch size of 1024. We optimized the model with Adam (Kingma and Ba, 2015), using hyperparameters \u03b2 = (0.9, 0.98) and \u03b5 = 10\u207b\u2076. We used a learning rate of 6 \u00d7 10\u207b\u2074 over 10,000 iterations. For regularization, we used a dropout rate of 0.5 and a weight decay of 0.01. We also experimented with lower dropout rates but found that the higher dropout rate gave us a model that produced better translations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": null
},
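{
"text": "For illustration only, the optimizer configuration above corresponds to the following PyTorch call (a sketch with a stand-in module, not the paper's training script):\nimport torch\n# Stand-in for the six-layer Transformer described above (hypothetical).\nmodel = torch.nn.Linear(512, 512)\noptimizer = torch.optim.Adam(model.parameters(), lr=6e-4, betas=(0.9, 0.98), eps=1e-6, weight_decay=0.01)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameters",
"sec_num": null
},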
{
"text": "Using our pretrained model, we performed finetuning on each of the 10 indigenous American languages with the same hyperparameters used during pretraining. For each language, we conducted our finetuning using four NVIDIA V100 GPUs for three hours. [Table 1 column legend: (1) Baseline: test results provided by AmericasNLP 2021, from a system where the development set was not used for training; (2) Dev: our own results on the development set; (3) Test1: our official test results for our system where the development set was used for training; (4) Test2: our official test results for our system where the development set was not used for training.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning",
"sec_num": "2.4"
},
{
"text": "Using the SacreBLEU library 7 (Post, 2018) , we evaluated our system outputs with detokenized BLEU (Papineni et al., 2002; Post, 2018) . Due to the polysynthetic nature of the languages involved in this task, we also used CHRF (Popovi\u0107, 2015) to measure performance at the character level and better see how well morphemes or parts of morphemes were translated, rather than whole words. For these reasons, we focused on optimizing the CHRF score.",
"cite_spans": [
{
"start": 30,
"end": 42,
"text": "(Post, 2018)",
"ref_id": "BIBREF27"
},
{
"start": 99,
"end": 122,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF24"
},
{
"start": 123,
"end": 134,
"text": "Post, 2018)",
"ref_id": "BIBREF27"
},
{
"start": 227,
"end": 242,
"text": "(Popovi\u0107, 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.5"
},
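{
"text": "As a brief sketch of this evaluation (our illustration; the hypothesis and reference lists below are placeholders, not task data), the sacrebleu Python package exposes both metrics:\nimport sacrebleu\nhypotheses = ['translated sentence one', 'translated sentence two']\nreferences = [['reference sentence one', 'reference sentence two']]\n# Corpus-level BLEU on detokenized text and character n-gram F-score (CHRF).\nbleu = sacrebleu.corpus_bleu(hypotheses, references)\nchrf = sacrebleu.corpus_chrf(hypotheses, references)\nprint(bleu.score, chrf.score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "2.5"
},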
{
"text": "We describe our results in Table 1 . Our test results (Test1 and Test2) show considerable improvements over the baseline provided by AmericasNLP 2021. We also included our own results on the development set (Dev) for comparison. The trends we saw in the Dev results parallel our test results; languages for which our system achieved high scores in Dev (e.g. Wixarika and Guaran\u00ed) also demonstrated high scores in Test1 and Test2. Likewise, languages for which our system performed relatively poorly in Dev (e.g. Rar\u00e1muri, whose poor performance may be attributed to the difference in orthographies between the training set and development/test sets) also performed poorly in Test1 and Test2. This matches the trend seen in the baseline scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 27,
"end": 34,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "The baseline results and Test2 results were both produced using the same test set and by systems where the development set was not used for training. Thus, the baseline results and Test2 results can be directly compared. On average, our system used to produce the Test2 results achieved BLEU scores that were 1.54 higher and CHRF scores that were 0.0725 higher than the baseline. On the same test set, our Test1 system produced higher BLEU and CHRF scores for nearly every language. This is expected, as the system used to produce Test1 was trained on slightly more data; it used the development set of the indigenous American languages provided by AmericasNLP 2021 in addition to the training set. If we factor our Test1 results in alongside our Test2 results, we achieved BLEU scores that were 1.64 higher and CHRF scores that were 0.0749 higher than the baseline on average. Overall, we attribute this improvement in scores primarily to the cross-lingual language model pretraining (Conneau and Lample, 2019) we performed, which allowed our model to learn about natural language from the monolingual data before finetuning on each of the 10 indigenous languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "3"
},
{
"text": "We described our system to improve low-resource machine translation for the AmericasNLP 2021 Shared Task. We constructed a system using the mBART implementation of FAIRSEQ to translate from Spanish to 10 different low-resource indigenous languages from the Americas. We demonstrated strong improvements over the baseline by pretraining on a large amount of monolingual data before finetuning our model for each of the low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "We are interested in using dictionary augmentation techniques and creating pseudo-monolingual data to use during the pretraining process, as we have seen improved results with these two techniques when translating several low-resource African languages. We can also incorporate these two techniques in an iterative pretraining procedure (Tran et al., 2020) to produce more pseudo-monolingual data and further train our pretrained model for potentially better results.",
"cite_spans": [
{
"start": 337,
"end": 356,
"text": "(Tran et al., 2020)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "Future research should also explore using probabilistic finite-state morphological segmenters, which may improve translations by exploiting regular agglutinative patterns without the need for much linguistic knowledge (Mager et al., 2018a) and thus may work well with the low-resource, polysynthetic languages dealt with in this paper.",
"cite_spans": [
{
"start": 218,
"end": 239,
"text": "(Mager et al., 2018a)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "4"
},
{
"text": "http://data.statmt.org/cc-100/ 2 https://github.com/AmericasNLP/americasnlp2021/blob/main/data/information_datasets.pdf 3 https://opus.nlpl.eu/GlobalVoices.php",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/hinantin/AshaninkaMT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://tsunkua.elotl.mx/about/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/pytorch/fairseq",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mjpost/sacrebleu",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "JW300: A widecoverage parallel corpus for low-resource languages",
"authors": [
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1310"
]
},
"num": null,
"urls": [],
"raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Diccionario Raramuri-Castellano (Tarahumara)",
"authors": [
{
"first": "David",
"middle": [],
"last": "Brambila",
"suffix": ""
}
],
"year": 1976,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Brambila. 1976. Diccionario Raramuri- Castellano (Tarahumara). Obra Nacional de la Buena Prensa, M\u00e9xico.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "On Polysynthesis and Incorporation: As Characteristics of American Languages. McCalla & Stavely",
"authors": [
{
"first": "D",
"middle": [
"G"
],
"last": "Brinton",
"suffix": ""
}
],
"year": 1885,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.G. Brinton. 1885. On Polysynthesis and Incorpo- ration: As Characteristics of American Languages. McCalla & Stavely.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Development of a Guarani -Spanish parallel corpus",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Amarilla",
"suffix": ""
},
{
"first": "Adolfo",
"middle": [],
"last": "R\u00edos",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Gim\u00e9nez"
],
"last": "Lugo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2629--2633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Chiruzzo, Pedro Amarilla, Adolfo R\u00edos, and Gus- tavo Gim\u00e9nez Lugo. 2020. Development of a Guarani -Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2629-2633, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "8440--8451",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.747"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "7057--7067",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In Advances in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057-7067.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "\u00d1aantsipeta ash\u00e1ninkaki birakochaki. diccionario ash\u00e1ninka-castellano",
"authors": [
{
"first": "Rub\u00e9n",
"middle": [],
"last": "Cushimariano Romano",
"suffix": ""
},
{
"first": "Richer",
"middle": [
"C."
],
"last": "Sebasti\u00e1n Q.",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rub\u00e9n Cushimariano Romano and Richer C. Se- basti\u00e1n Q. 2008. \u00d1aantsipeta ash\u00e1ninkaki bi- rakochaki. diccionario ash\u00e1ninka-castellano. versi\u00f3n preliminar. http://www.lengamer.org/ publicaciones/diccionarios/.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models",
"authors": [
{
"first": "Abteen",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vladimir",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"A"
],
"last": "Gim\u00e9nez-Lugo",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Rolando",
"middle": [
"A."
],
"last": "Coto Solano",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot nat- ural language understanding of pretrained multilin- gual models in truly low-resource languages.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Rolando",
"middle": [],
"last": "Coto-Solano",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3965--3976",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.351"
]
},
"num": null,
"urls": [],
"raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neu- ral machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3965-3976, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Perspectivas en salud ind\u00edgena: cosmovisi\u00f3n, enfermedad y pol\u00edticas p\u00fablicas. Ediciones Abya-Yala",
"authors": [
{
"first": "Germ\u00e1n",
"middle": [],
"last": "Freire",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Germ\u00e1n Freire. 2011. Perspectivas en salud ind\u00edgena: cosmovisi\u00f3n, enfermedad y pol\u00edticas p\u00fablicas. Edi- ciones Abya-Yala.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo",
"authors": [
{
"first": "Ana-Paula",
"middle": [],
"last": "Galarreta",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Melgar",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "238--244",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_033"
]
},
"num": null,
"urls": [],
"raw_text": "Ana-Paula Galarreta, Andr\u00e9s Melgar, and Arturo On- cevay. 2017. Corpus creation and initial SMT ex- periments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Axolotl: a web accessible parallel corpus for Spanish-Nahuatl",
"authors": [
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Hernandez Pompa",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "4210--4214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4210-4214, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederik",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Computational challenges for polysynthetic languages",
"authors": [
{
"first": "Judith",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages",
"volume": "",
"issue": "",
"pages": "1--11",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Judith L. Klavans. 2018. Computational challenges for polysynthetic languages. In Proceedings of the Workshop on Computational Modeling of Polysyn- thetic Languages, pages 1-11, Santa Fe, New Mex- ico, USA. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Six challenges for neural machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {
"DOI": [
"10.18653/v1/W17-3204"
]
},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six chal- lenges for neural machine translation. In Proceed- ings of the First Workshop on Neural Machine Trans- lation, pages 28-39, Vancouver. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "66--71",
"other_ids": {
"DOI": [
"10.18653/v1/D18-2012"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multilingual denoising pre-training for neural machine translation",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "726--742",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00343"
]
},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transac- tions of the Association for Computational Linguis- tics, 8:726-742.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probabilistic finite-state morphological segmenter for Wixarika (Huichol) language",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Di\u00f3nico",
"middle": [],
"last": "Carrillo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Intelligent & Fuzzy Systems",
"volume": "34",
"issue": "5",
"pages": "3081--3087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Di\u00f3nico Carrillo, and Ivan Meza. 2018a. Probabilistic finite-state morphological seg- menter for Wixarika (Huichol) language. Journal of Intelligent & Fuzzy Systems, 34(5):3081-3087.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Lost in translation: Analysis of information loss during machine translation between polysynthetic and fusional languages",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Alfonso",
"middle": [],
"last": "Medina-Urrea",
"suffix": ""
},
{
"first": "Ivan Vladimir Meza",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages",
"volume": "",
"issue": "",
"pages": "73--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Elisabeth Mager, Alfonso Medina- Urrea, Ivan Vladimir Meza Ruiz, and Katharina Kann. 2018b. Lost in translation: Analysis of in- formation loss during machine translation between polysynthetic and fusional languages. In Proceed- ings of the Workshop on Computational Modeling of Polysynthetic Languages, pages 73-83, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
},
{
"first": "Abteen",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Gim\u00e9nez-Lugo",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Ivan Vladimir Meza",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Rolando",
"middle": [],
"last": "Coto-Solano",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the The First Workshop on NLP for Indigenous Languages of the Americas, Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Xi- mena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Gim\u00e9nez-Lugo, Ricardo Ramos, Anna Currey, Vishrav Chaudhary, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager, Ngoc Thang Vu, Graham Neubig, and Katha- rina Kann. 2021. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. In Proceed- ings of the The First Workshop on NLP for Indige- nous Languages of the Americas, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9",
"authors": [
{
"first": "Elena",
"middle": [],
"last": "Mihas",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elena Mihas. 2011. A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9. Milwaukee, WI: Clarks Graphics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Overcoming resistance: The normalization of an Amazonian tribal language",
"authors": [
{
"first": "John",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"Alexander"
],
"last": "Castro-Mamani",
"suffix": ""
},
{
"first": "Jaime Rafael Montoya",
"middle": [],
"last": "Samame",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages",
"volume": "",
"issue": "",
"pages": "1--13",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020a. Overcom- ing resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Neural machine translation with a polysynthetic low resource language",
"authors": [
{
"first": "John",
"middle": [
"E"
],
"last": "Ortega",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Castro Mamani",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "Machine Translation",
"volume": "34",
"issue": "4",
"pages": "325--346",
"other_ids": {
"DOI": [
"10.1007/s10590-020-09255-9"
]
},
"num": null,
"urls": [],
"raw_text": "John E. Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020b. Neural machine trans- lation with a polysynthetic low resource language. Machine Translation, 34(4):325-346.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Morphological Characteristics of Lowland South American Languages",
"authors": [
{
"first": "D",
"middle": [
"L"
],
"last": "Payne",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D.L. Payne. 2014. Morphological Characteristics of Lowland South American Languages. University of Texas Press.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "chrF: character n-gram F-score for automatic MT evaluation",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "392--395",
"other_ids": {
"DOI": [
"10.18653/v1/W15-3049"
]
},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6319"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Parallel Global Voices: a collection of multilingual corpora with citizen media stories",
"authors": [
{
"first": "Prokopis",
"middle": [],
"last": "Prokopidis",
"suffix": ""
},
{
"first": "Vassilis",
"middle": [],
"last": "Papavassiliou",
"suffix": ""
},
{
"first": "Stelios",
"middle": [],
"last": "Piperidis",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "900--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prokopis Prokopidis, Vassilis Papavassiliou, and Ste- lios Piperidis. 2016. Parallel Global Voices: a col- lection of multilingual corpora with citizen media stories. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 900-905, Portoro\u017e, Slovenia. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "MASS: masked sequence to sequence pre-training for language generation",
"authors": [
{
"first": "Kaitao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 36th International Conference on Machine Learning, ICML 2019",
"volume": "97",
"issue": "",
"pages": "5926--5936",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. MASS: masked sequence to se- quence pre-training for language generation. In Pro- ceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Pro- ceedings of Machine Learning Research, pages 5926-5936. PMLR.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Parallel data, tools and interfaces in OPUS",
"authors": [
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
"volume": "",
"issue": "",
"pages": "2214--2218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Cross-lingual retrieval for iterative self-supervised training",
"authors": [
{
"first": "Chau",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Xian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2020,
"venue": "34th Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chau Tran, Yuqing Tang, Xian Li, and Jiatao Gu. 2020. Cross-lingual retrieval for iterative self-supervised training. 34th Conference on Neural Informa- tion Processing Systems (NeurIPS 2020), Vancouver, Canada.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "CCNet: Extracting high quality monolingual datasets from web crawl data",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Marie-Anne",
"middle": [],
"last": "Lachaux",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "4003--4012",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm\u00e1n, Ar- mand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Transfer learning for low-resource neural machine translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1163"
]
},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"TABREF2": {
"text": "Results",
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null
}
}
}
}