{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:13:17.547562Z"
},
"title": "Open Machine Translation for Low Resource South American Languages (AmericasNLP 2021 Shared Task Contribution)",
"authors": [
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"settlement": "Martigny",
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Subhadarshi",
"middle": [],
"last": "Panda",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "City University of New York",
"location": {
"country": "USA"
}
},
"email": "spanda@gradcenter.cuny.edu"
},
{
"first": "Amulya",
"middle": [
"Ratna"
],
"last": "Dash",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"settlement": "Pilani",
"country": "India"
}
},
"email": "yash@pilani.bits-pilani.ac.in"
},
{
"first": "Esa\u00fa",
"middle": [],
"last": "Villatoro-Tello",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"settlement": "Martigny",
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "A",
"middle": [],
"last": "Seza Dogru\u00f6z",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ghent University",
"location": {
"country": "Belgium"
}
},
"email": ""
},
{
"first": "Rosa",
"middle": [
"M"
],
"last": "Ortega-Mendoza",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Tulancingo",
"location": {
"settlement": "Hidalgo",
"country": "Mexico"
}
},
"email": ""
},
{
"first": "Amadeo",
"middle": [],
"last": "Hern\u00e1ndez",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Universidad Polit\u00e9cnica de Tulancingo",
"location": {
"settlement": "Hidalgo",
"country": "Mexico"
}
},
"email": "amadeo.hernandez1911001@upt.edu.mx"
},
{
"first": "Yashvardhan",
"middle": [],
"last": "Sharma",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "BITS",
"location": {
"settlement": "Pilani",
"country": "India"
}
},
"email": ""
},
{
"first": "Petr",
"middle": [],
"last": "Motlicek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Idiap Research Institute",
"location": {
"settlement": "Martigny",
"country": "Switzerland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes the team (\"Tamalli\")'s submission to AmericasNLP2021 shared task on Open Machine Translation for low resource South American languages. Our goal was to evaluate different Machine Translation (MT) techniques, statistical and neural-based, under several configuration settings. We obtained the second-best results for the language pairs \"Spanish-Bribri\", \"Spanish-Ash\u00e1ninka\", and \"Spanish-Rar\u00e1muri\" in the category \"Development set not used for training\". Our performed experiments will serve as a point of reference for researchers working on MT with low-resource languages.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes the team (\"Tamalli\")'s submission to AmericasNLP2021 shared task on Open Machine Translation for low resource South American languages. Our goal was to evaluate different Machine Translation (MT) techniques, statistical and neural-based, under several configuration settings. We obtained the second-best results for the language pairs \"Spanish-Bribri\", \"Spanish-Ash\u00e1ninka\", and \"Spanish-Rar\u00e1muri\" in the category \"Development set not used for training\". Our performed experiments will serve as a point of reference for researchers working on MT with low-resource languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The main challenges in automatic Machine Translation (MT) are the acquisition and curation of parallel data and the allocation of hardware resources for training and inference purposes. This situation has become more evident for Neural Machine Translation (NMT) techniques, where their translation quality depends strongly on the amount of available training data when offering translation for a language pair. However, there is only a handful of languages that have available large-scale parallel corpora, or collections of sentences in both the source language and corresponding translations. Thus, applying recent NMT approaches to low-resource languages represent a challenging scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we describe the participation of our team (aka, Tamalli) in the Shared Task on Open Machine Translation held in the First Workshop on NLP for Indigenous Languages of the Americas (AmericasNLP) (Mager et al., 2021) . 1 The main goal of the shared task was to encourage the development of machine translation systems for indigenous languages of the Americas, categorized as low-resources languages. This year 8 different teams participated with 214 submissions.",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "(Mager et al., 2021)",
"ref_id": "BIBREF20"
},
{
"start": 231,
"end": 232,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Accordingly, our main goal was to evaluate the performance of traditional statistical MT techniques, as well as some recent NMT techniques under different configuration settings. Overall, our results outperformed the baseline proposed by the shared task organizers, and reach promising results for many of the considered pair languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The paper is organized as follows: Section 2 briefly describes some related work; Section 3 depicts the methodology we followed for performing our experiments. Section 4 provides the dataset descriptions. Section 5 provides the details from our different settings, and finally Section 6 depict our main conclusions and future work directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Machine Translation (Garg and Agarwal, 2018 ) is a field in NLP that aims to translate natural lan-guages. Particularly, the development of (MT) systems for indigenous languages in both South and North America, faces different challenges such as a high morphological richness, agglutination, polysynthesis, and orthographic variation (Mager et al., 2018b; Llitj\u00f3s et al., 2005) . In general, MT systems for these languages in the state-of-theart have been addressed by the sub-fields of machine translation: rule-based (Monson et al., 2006) , statistical (Mager Hois et al., 2016) and neuralbased approaches (Ortega et al., 2020; Le and Sadat, 2020). Recently, NMT approaches (Stahlberg, 2020) have gained prominence; they commonly are based on sequence-to-sequence models using encoder-decoder architectures and attention mechanisms (Yang et al., 2020) . From this perspective, different morphological segmentation techniques have been explored (Kann et al., 2018; Ortega et al., 2020) for Indigenous American languages.",
"cite_spans": [
{
"start": 20,
"end": 43,
"text": "(Garg and Agarwal, 2018",
"ref_id": "BIBREF8"
},
{
"start": 334,
"end": 355,
"text": "(Mager et al., 2018b;",
"ref_id": "BIBREF19"
},
{
"start": 356,
"end": 377,
"text": "Llitj\u00f3s et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 519,
"end": 540,
"text": "(Monson et al., 2006)",
"ref_id": null
},
{
"start": 555,
"end": 580,
"text": "(Mager Hois et al., 2016)",
"ref_id": "BIBREF21"
},
{
"start": 676,
"end": 693,
"text": "(Stahlberg, 2020)",
"ref_id": "BIBREF31"
},
{
"start": 834,
"end": 853,
"text": "(Yang et al., 2020)",
"ref_id": "BIBREF33"
},
{
"start": 946,
"end": 965,
"text": "(Kann et al., 2018;",
"ref_id": "BIBREF10"
},
{
"start": 966,
"end": 986,
"text": "Ortega et al., 2020)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "It is known that the NMT approaches are based on big amounts of parallel corpora as source knowledge. To date, important efforts toward creating parallel corpora have been carried out for specific indigenous languages of America. For example, for Spanish-Nahuatl (Gutierrez-Vasques et al., 2016), Wixarika-Spanish (Mager et al., 2020) and Quechua-Spanish (Llitj\u00f3s et al., 2005) which includes morphological information. Also, the JHU Bible Corpus, a parallel text, has been extended by adding translations in more than 20 Indigenous North American languages (Nicolai et al., 2021) . The usability of the corpus was demonstrated by using multilingual NMT systems.",
"cite_spans": [
{
"start": 314,
"end": 334,
"text": "(Mager et al., 2020)",
"ref_id": "BIBREF18"
},
{
"start": 339,
"end": 377,
"text": "Quechua-Spanish (Llitj\u00f3s et al., 2005)",
"ref_id": null
},
{
"start": 558,
"end": 580,
"text": "(Nicolai et al., 2021)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "Since the data sizes are small in most language pairs as shown in Table 1 , we used a statistical machine translation model. We also used NMT models. In the following sections, we describe the details of each of these approaches.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 73,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "For statistical MT, we relied on an IBM model 2 (Brown et al., 1993) which comprises a lexical translation model and an alignment model. In addition to the word-level translation probability, it models the absolute distortion in the word positioning between source and the target languages by introducing an alignment probability, which enables to handle word reordering.",
"cite_spans": [
{
"start": 48,
"end": 68,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical MT",
"sec_num": "3.1"
},
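To make the preceding description concrete, the standard IBM Model 2 factorization from Brown et al. (1993) can be written as follows; this is the textbook formulation, not an equation reproduced from the submission itself:

```latex
% IBM Model 2: joint probability of a target sentence f = (f_1, ..., f_m)
% and an alignment a = (a_1, ..., a_m), given a source sentence e = (e_1, ..., e_l).
% t(. | .) is the lexical translation probability; a(. | j, l, m) is the
% alignment (distortion) probability over source positions for target position j.
P(f, a \mid e) \;=\; \epsilon \prod_{j=1}^{m}
    \underbrace{t(f_j \mid e_{a_j})}_{\text{lexical translation}}\;
    \underbrace{a(a_j \mid j, l, m)}_{\text{alignment / distortion}}
```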
{
"text": "For NMT, we first tokenized the text using sentence piece BPE tokenization (Kudo and Richardson, 2018) . 2 The translation model architecture we used for NMT is the transformer model (Vaswani et al., 2017) . We trained the model in two different setups as outlined below.",
"cite_spans": [
{
"start": 75,
"end": 102,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 105,
"end": 106,
"text": "2",
"ref_id": null
},
{
"start": 183,
"end": 205,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural MT",
"sec_num": "3.2"
},
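As a concrete illustration of the tokenization step, here is a minimal sketch using the SentencePiece library; the file names and the 8k vocabulary size are illustrative placeholders (Section 5 describes the {8k, 16k, 32k} runs):

```python
import sentencepiece as spm

# Train a BPE model on the combined source+target training text.
# "train.es-xx.txt" is a hypothetical file with one sentence per line.
spm.SentencePieceTrainer.train(
    input="train.es-xx.txt",
    model_prefix="bpe_es_xx",
    vocab_size=8000,        # 16k and 32k variants were also tried
    model_type="bpe",
)

# Encode a Spanish sentence into BPE subword pieces.
sp = spm.SentencePieceProcessor(model_file="bpe_es_xx.model")
print(sp.encode("el perro corre en el campo", out_type=str))
```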
{
"text": "One-to-one: In this setup, we trained the model using the data from one source language and one target language only. In the AmericasNLP2021 3 shared task, the source language is always Spanish (es). We trained the transformer model using Spanish as the source language and one of the indigenous languages as the target language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neural MT",
"sec_num": "3.2"
},
{
"text": "One-to-many: Since the source language (Spanish) is constant for all the language pairs, we considered sharing the NMT parameters across language pairs to obtain gains in translation performance as shown in previous work (Dabre et al., 2020) . For this, we trained a one-to-many model by sharing the decoder parameters across all the indigenous languages. Since the model needs to generate the translation in the intended target language, we provided that information as a target language tag in the input (Lample and Conneau, 2019) . The token level representation is obtained by the sum of token embedding, positional embedding, and language embedding.",
"cite_spans": [
{
"start": 221,
"end": 241,
"text": "(Dabre et al., 2020)",
"ref_id": "BIBREF3"
},
{
"start": 506,
"end": 532,
"text": "(Lample and Conneau, 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural MT",
"sec_num": "3.2"
},
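The sum of embeddings described above can be sketched in PyTorch as follows; the dimensions and names are illustrative, not the exact implementation used in the submission:

```python
import torch
import torch.nn as nn

class MultilingualInputEmbedding(nn.Module):
    """Token representation = token + positional + target-language embedding."""

    def __init__(self, vocab_size=8000, max_len=512, n_langs=10, d_model=128):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.lang = nn.Embedding(n_langs, d_model)

    def forward(self, token_ids, lang_id):
        # token_ids: (batch, seq_len); lang_id: (batch,) target-language tag
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.tok(token_ids)                # token embedding
                + self.pos(positions)[None, :, :]  # positional embedding
                + self.lang(lang_id)[:, None, :])  # language embedding
```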
{
"text": "For training and evaluating our different configurations, we used the official datasets provided by the organizers of the shared task. It is worth mentioning that we did not use additional datasets or resources for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "A brief description of the dataset composition is shown in Table 1 . For all the language pairs, the task was to translate from Spanish to some of the following indigenous languages: H\u00f1\u00e4h\u00f1u (oto), Wixarika (wix), Nahuatl (nah), Guaran\u00ed (gn), Bribri (bzd), Rar\u00e1muri (tar), Quechua (quy), Aymara (aym), Shipibo-Konibo (shp), Ash\u00e1ninka (cni). For the sake of brevity, we do not provide all the characteristics of every pair of languages. The interested reader is referred to (Gutierrez-Vasques et al., ",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4"
},
{
"text": "We used 5 settings for all the 10 pair translations. The output of each set is named as version [1] [2] [3] [4] [5] and submitted for evaluation (shown under column Submission# in Table 2 ). Among the 5 versions, version [1] is based on statistical MT, and version [2-5] is based on NMT with different model configurations. For model evaluation, organizers provided a script that uses the metrics BLEU and ChrF for machine translation evaluation. The versions and their configuration details are explained below. We included the best results only from all the versions [1-5] in Table 2 .",
"cite_spans": [
{
"start": 96,
"end": 99,
"text": "[1]",
"ref_id": null
},
{
"start": 100,
"end": 103,
"text": "[2]",
"ref_id": null
},
{
"start": 104,
"end": 107,
"text": "[3]",
"ref_id": null
},
{
"start": 108,
"end": 111,
"text": "[4]",
"ref_id": null
},
{
"start": 112,
"end": 115,
"text": "[5]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 578,
"end": 585,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
{
"text": "Version 1: Version 1 uses the statistical MT. The source and target language text were first tokenized using Moses tokenizer setting the language to Spanish. Then we trained the IBM translation model 2 (Brown et al., 1993) implemented in nltk.translate api. After obtaining the translation target tokens, the detokenization was carried out using the Moses Spanish detokenizer.",
"cite_spans": [
{
"start": 202,
"end": 222,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
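A minimal sketch of the version-1 pipeline, assuming NLTK for IBM Model 2 and the sacremoses package for the Moses (de)tokenizer; the training pairs and the greedy word-by-word decoder are illustrative placeholders, since nltk's IBMModel2 exposes only the trained probability tables:

```python
from nltk.translate import AlignedSent, IBMModel2
from sacremoses import MosesDetokenizer, MosesTokenizer

mt = MosesTokenizer(lang="es")    # Moses tokenizer, language set to Spanish
md = MosesDetokenizer(lang="es")  # Moses Spanish detokenizer

# Hypothetical (source, target) sentence pairs; real data comes from the task.
pairs = [("la casa es grande", "t1 t2"), ("la casa", "t1")]
bitext = [AlignedSent(mt.tokenize(tgt), mt.tokenize(src)) for src, tgt in pairs]

ibm2 = IBMModel2(bitext, 5)  # 5 EM iterations
target_vocab = {tok for sent in bitext for tok in sent.words}

def translate(src_sentence):
    # Crude decoder: pick the most probable target token per source token.
    out = [max(target_vocab, key=lambda t: ibm2.translation_table[t][s])
           for s in mt.tokenize(src_sentence)]
    return md.detokenize(out)

print(translate("la casa es grande"))
```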
{
"text": "Version 2: This version uses the one-to-one NMT model. First, we learned sentence piece BPE tokenization (Kudo and Richardson, 2018) by combining the source and target language text. We set the maximum vocabulary size to {8k, 16k, 32k} in different runs and we considered the run that produced the best BLEU score on the dev set. The transformer model (Vaswani et al., 2017) was implemented using PyTorch (Paszke et al., 2019) . The number of encoder and decoder layers was set to 3 each and the number of heads in those layers was set to 8. The hidden dimension of the self-attention layer was set to 128 and the position-wise feedforward layer's dimension was set to 256. We used a dropout of 0.1 in both the encoder and the decoder. The encoder and decoder embedding layers were not tied. We trained the model using early stopping with a patience of 5 epochs, that is, we stop training if the validation loss does not improve for 5 consecutive epochs. We used greedy decoding for generating the translations during inference. The training and translation were done using one GPU.",
"cite_spans": [
{
"start": 105,
"end": 132,
"text": "(Kudo and Richardson, 2018)",
"ref_id": "BIBREF13"
},
{
"start": 352,
"end": 374,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 405,
"end": 426,
"text": "(Paszke et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
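A sketch of the version-2 architecture in PyTorch with the hyperparameters listed above; the training loop (early stopping on validation loss with patience 5) and the greedy decoder are only indicated, not fully reproduced:

```python
import torch.nn as nn

d_model, n_heads, n_layers, d_ff, dropout = 128, 8, 3, 256, 0.1

model = nn.Transformer(
    d_model=d_model,
    nhead=n_heads,
    num_encoder_layers=n_layers,
    num_decoder_layers=n_layers,
    dim_feedforward=d_ff,
    dropout=dropout,
    batch_first=True,
)

# Untied source/target embeddings, as in our setup (8k BPE vocab shown;
# 16k and 32k were also tried).
src_embed = nn.Embedding(8000, d_model)
tgt_embed = nn.Embedding(8000, d_model)
```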
{
"text": "Version 3: This version uses the one-to-many NMT model. For tokenization, we learned sentence piece BPE tokenization (Kudo and Richardson, 2018 ) by combining the source and target language text from all the languages (11 languages in total). We set the maximum shared vocabulary size to {8k, 16k, 32k} in different runs and we considered the run that produced the best BLEU score on the dev set. The transformer model's hyperparameters were the same as in version 2. The language embedding dimension in the decoder was set to 128. The encoder and decoder embedding layers were not tied. We first trained the one-to-many model till convergence using early stopping with the patience of 5 epochs, considering the concatenation of the dev data from all the language pairs. Then we fine-tuned the best checkpoint using each language pair's data separately. The fine-tuning process was also done using early stopping with patience of 5 epochs. Finally, we used greedy decoding for generating the translations during inference. The training and translation were done using one GPU.",
"cite_spans": [
{
"start": 117,
"end": 143,
"text": "(Kudo and Richardson, 2018",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
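The early-stopping loop used in both stages (one-to-many training, then per-language fine-tuning) can be sketched as a generic helper; `train_one_epoch` and `validation_loss` are caller-supplied callables with hypothetical names, for illustration only:

```python
import copy

def train_until_converged(model, train_one_epoch, validation_loss, patience=5):
    """Train until validation loss stops improving for `patience` epochs,
    then restore the best checkpoint. A sketch, not the exact code used."""
    best_loss, best_state, bad_epochs = float("inf"), None, 0
    while bad_epochs < patience:
        train_one_epoch(model)          # one pass over the training data
        loss = validation_loss(model)   # loss on the (concatenated) dev data
        if loss < best_loss:
            best_loss, bad_epochs = loss, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            bad_epochs += 1
    model.load_state_dict(best_state)
    return model
```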
{
"text": "Version 4: This version is based on one-to-one NMT. We have used the Transformer model as implemented in OpenNMT-py (PyTorch version) (Klein et al., 2017) . 4 . To train the model, we used a single GPU and followed the standard \"Noam\" learning rate decay, 5 see (Vaswani et al., 2017; Popel and Bojar, 2018) for more details. Our starting learning rate was 0.2 and we used 8000 warmup steps. The model es-nah trained up to 100K iterations and the model checkpoint at 35K was 4 http://opennmt.net/ 5 https://nvidia.github.io/OpenSeq2Seq/ html/api-docs/optimizers.html selected based on the evaluation score (BLEU) on the development set.",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 157,
"end": 158,
"text": "4",
"ref_id": null
},
{
"start": 262,
"end": 284,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 285,
"end": 307,
"text": "Popel and Bojar, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
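The "Noam" schedule referenced above (linear warmup followed by inverse-square-root decay) can be sketched as follows; the d_model value is an assumption (a typical Transformer default), while the 0.2 scale and 8000 warmup steps are the settings reported above:

```python
def noam_lr(step, d_model=512, warmup=8000, scale=0.2):
    """Learning rate at a given step under the Noam schedule
    (Vaswani et al., 2017): warm up linearly, then decay ~ 1/sqrt(step)."""
    step = max(step, 1)
    return scale * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# Rises until step 8000, then decays.
for s in (1, 4000, 8000, 35000, 100000):
    print(s, round(noam_lr(s), 6))
```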
{
"text": "Version 5: This version is based on One-to-One NMT. We have used the Transformer model as implemented in OpenNMT-tf (Tensorflow version) (Klein et al., 2017) . To train the model, we used a single GPU and followed the standard \"Noam\" learning rate decay, 6 see (Vaswani et al., 2017; Popel and Bojar, 2018) for more details. We used 8K shared vocab size for the models and the model checkpoints were saved at an interval of 2500 steps. The starting learning rate was 0.2 and 8000 warmup steps were used for model training. The earlystopping criterion was 'less than 0.01 improvement in BLEU score' for 5 consecutive saved model checkpoints. The model es-gn was trained up to 37.5K iterations and the model checkpoint at 35K was selected based on evaluation scores on the development set. The model es-quy was trained up to 40K iterations and the model checkpoint at 32.5K was selected based on evaluation scores on the development set.",
"cite_spans": [
{
"start": 137,
"end": 157,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 261,
"end": 283,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF32"
},
{
"start": 284,
"end": 306,
"text": "Popel and Bojar, 2018)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
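The stated stopping rule can be expressed as a small check over the BLEU scores of successive saved checkpoints; this is a sketch under our reading of the criterion, not OpenNMT-tf's actual implementation:

```python
def should_stop(bleu_history, min_delta=0.01, patience=5):
    """Stop when none of the last `patience` checkpoints improved BLEU by
    at least `min_delta` over the best score seen before them."""
    if len(bleu_history) <= patience:
        return False
    best_before = max(bleu_history[:-patience])
    return all(b < best_before + min_delta for b in bleu_history[-patience:])

# Checkpoints were saved every 2500 steps; BLEU is measured on the dev set.
print(should_stop([1.0, 1.5, 1.9, 1.9, 1.9, 1.9, 1.9, 1.9]))  # True
```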
{
"text": "We report the official automatic evaluation results in Table 2 . The machine translation evaluation matrices BLEU (Papineni et al., 2002) and ChrF (Popovi\u0107, 2017) used by the organizers to evaluate the submissions. Based on our observation, the statistical approach performed well as compared to NMT for many language pairs as shown in the Table 2 (Parida et al., 2019) . Also, among NMT model settings one-to-one and oneto-many perform well based on the language pairs.",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF25"
},
{
"start": 147,
"end": 162,
"text": "(Popovi\u0107, 2017)",
"ref_id": "BIBREF29"
},
{
"start": 348,
"end": 369,
"text": "(Parida et al., 2019)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "5"
},
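For reference, both metrics can be computed with the sacrebleu package; this is a generic sketch with placeholder sentences, not the organizers' exact evaluation script:

```python
import sacrebleu

hypotheses = ["translated sentence one", "translated sentence two"]
references = [["reference sentence one", "reference sentence two"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}  ChrF = {chrf.score:.3f}")
```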
{
"text": "Our participation aimed at analyzing the performance of recent NMT techniques on translating indigenous languages of the Americas, low-resource languages. Our future work directions include: i) investigating corpus filtering and iterative augmentation for performance improvement (Dandapat and Federmann, 2018) , ii) review already existing extensive analyses of these low-resource languages from a linguistic point of view and adapt our methods for each language accordingly, iii) exploring transfer learning approach by training the model on a high resource language and later transfer it to a low resource language (Kocmi et al., 2018) .",
"cite_spans": [
{
"start": 280,
"end": 310,
"text": "(Dandapat and Federmann, 2018)",
"ref_id": "BIBREF4"
},
{
"start": 618,
"end": 638,
"text": "(Kocmi et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "http://turing.iimas.unam.mx/ americasnlp/st.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also compared the BPE subword tokenization to wordlevel tokenization using Moses tokenizer and character level tokenization. We found that the best results were obtained using the BPE subword tokenization.3 http://turing.iimas.unam.mx/ americasnlp/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://nvidia.github.io/OpenSeq2Seq/ html/api-docs/optimizers.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors Shantipriya Parida and Petr Motlicek were supported by the European Union's Horizon 2020 research and innovation program under grant agreement No. 833635 (project ROXANNE: Realtime network, text, and speaker analytics for combating organized crime, 2019-2022). Author Esa\u00fa Villatoro-Tello, was supported partially by Idiap Research Institute, SNI-CONACyT, and UAM-Cuajimalpa Mexico during the elaboration of this work. Author Rosa M. Ortega-Mendoza was supported partially by SNI-CONACyT.The authors do not see any significant ethical or privacy concerns that would prevent the processing of the data used in the study. The datasets do contain personal data, and these are processed in compliance with the GDPR and national law.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "JW300: A widecoverage parallel corpus for low-resource languages",
"authors": [
{
"first": "\u017deljko",
"middle": [],
"last": "Agi\u0107",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vuli\u0107",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3204--3210",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1310"
]
},
"num": null,
"urls": [],
"raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The mathematics of statistical machine translation: Parameter estimation",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"A Della"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The math- ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263- 311.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Development of a Guarani -Spanish parallel corpus",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Amarilla",
"suffix": ""
},
{
"first": "Adolfo",
"middle": [],
"last": "R\u00edos",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Gim\u00e9nez"
],
"last": "Lugo",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "2629--2633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Chiruzzo, Pedro Amarilla, Adolfo R\u00edos, and Gus- tavo Gim\u00e9nez Lugo. 2020. Development of a Guarani -Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2629-2633, Marseille, France. Euro- pean Language Resources Association.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A survey of multilingual neural machine translation",
"authors": [
{
"first": "Raj",
"middle": [],
"last": "Dabre",
"suffix": ""
},
{
"first": "Chenhui",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "Anoop",
"middle": [],
"last": "Kunchukuttan",
"suffix": ""
}
],
"year": 2020,
"venue": "ACM Comput. Surv",
"volume": "",
"issue": "5",
"pages": "",
"other_ids": {
"DOI": [
"10.1145/3406095"
]
},
"num": null,
"urls": [],
"raw_text": "Raj Dabre, Chenhui Chu, and Anoop Kunchukuttan. 2020. A survey of multilingual neural machine translation. ACM Comput. Surv., 53(5).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Iterative data augmentation for neural machine translation: a low resource case study for english-telugu",
"authors": [
{
"first": "Sandipan",
"middle": [],
"last": "Dandapat",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 21st Annual Conference of the European Association for Machine Translation",
"volume": "",
"issue": "",
"pages": "287--292",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandipan Dandapat and Christian Federmann. 2018. It- erative data augmentation for neural machine trans- lation: a low resource case study for english-telugu. In Proceedings of the 21st Annual Conference of the European Association for Machine Translation: 28- 30 May 2018, Universitat d'Alacant, Alacant, Spain, pages 287-292. European Association for Machine Translation.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models",
"authors": [
{
"first": "Abteen",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vladimir",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"A"
],
"last": "Gim\u00e9nez-Lugo",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Mager",
"suffix": ""
}
],
"year": null,
"venue": "Ngoc Thang Vu, and Katharina Kann. 2021",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot nat- ural language understanding of pretrained multilin- gual models in truly low-resource languages.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri",
"authors": [
{
"first": "Isaac",
"middle": [],
"last": "Feldman",
"suffix": ""
},
{
"first": "Rolando",
"middle": [],
"last": "Coto-Solano",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3965--3976",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.351"
]
},
"num": null,
"urls": [],
"raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neu- ral machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3965-3976, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo",
"authors": [
{
"first": "Ana-Paula",
"middle": [],
"last": "Galarreta",
"suffix": ""
},
{
"first": "Andr\u00e9s",
"middle": [],
"last": "Melgar",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "238--244",
"other_ids": {
"DOI": [
"10.26615/978-954-452-049-6_033"
]
},
"num": null,
"urls": [],
"raw_text": "Ana-Paula Galarreta, Andr\u00e9s Melgar, and Arturo On- cevay. 2017. Corpus creation and initial SMT ex- periments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria. INCOMA Ltd.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Machine translation: A literature review. arXiv",
"authors": [
{
"first": "Ankush",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Agarwal",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ankush Garg and Mayank Agarwal. 2018. Machine translation: A literature review. arXiv.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Axolotl: A web accessible parallel corpus for Spanish-Nahuatl",
"authors": [
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Hernandez",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation, LREC 2016",
"volume": "",
"issue": "",
"pages": "4210--4214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez. 2016. Axolotl: A web accessible par- allel corpus for Spanish-Nahuatl. Proceedings of the 10th International Conference on Language Re- sources and Evaluation, LREC 2016, pages 4210- 4214.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fortification of neural morphological segmentation models for polysynthetic minimal-resource languages",
"authors": [
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
},
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza-Ruiz",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2018,
"venue": "NAACL HLT 2018 -2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies -Proceedings of the Conference",
"volume": "1",
"issue": "",
"pages": "47--57",
"other_ids": {
"DOI": [
"10.18653/v1/n18-1005"
]
},
"num": null,
"urls": [],
"raw_text": "Katharina Kann, Manuel Mager, Ivan Meza-Ruiz, and Hinrich Sch\u00fctze. 2018. Fortification of neural mor- phological segmentation models for polysynthetic minimal-resource languages. In NAACL HLT 2018 - 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies -Proceedings of the Conference, volume 1, pages 47-57.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "OpenNMT: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P17-4012"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Cuni nmt system for wat 2018 translation tasks",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation: 5th Workshop on Asian Translation: 5th Workshop on Asian Translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi, Shantipriya Parida, and Ond\u0159ej Bojar. 2018. Cuni nmt system for wat 2018 translation tasks. In Proceedings of the 32nd Pacific Asia Con- ference on Language, Information and Computation: 5th Workshop on Asian Translation: 5th Workshop on Asian Translation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Richardson",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.06226"
]
},
"num": null,
"urls": [],
"raw_text": "Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Crosslingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Low-Resource NMT: an Empirical Study on the Effect of Rich Morphological Word Segmentation on Inuktitut. Proceedings of the 14th Conference of the Association for Machine Translation in the",
"authors": [
{
"first": "Ngoc",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Fatiha",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sadat",
"suffix": ""
}
],
"year": 2012,
"venue": "Research Track)",
"volume": "1",
"issue": "",
"pages": "165--172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tan Ngoc Le and Fatiha Sadat. 2020. Low-Resource NMT: an Empirical Study on the Effect of Rich Mor- phological Word Segmentation on Inuktitut. Pro- ceedings of the 14th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), 1(2012):165-172.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Building Machine translation systems for indigenous languages",
"authors": [
{
"first": "Lori",
"middle": [],
"last": "Ariadna Font Llitj\u00f3s",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Levin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Aranovich",
"suffix": ""
}
],
"year": 2005,
"venue": "Communities",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ariadna Font Llitj\u00f3s, Lori Levin, and Roberto Ara- novich. 2005. Building Machine translation systems for indigenous languages. Communities.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probabilistic finite-state morphological segmenter for wixarika (huichol) language",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Di\u00f3nico",
"middle": [],
"last": "Carrillo",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Intelligent & Fuzzy Systems",
"volume": "34",
"issue": "5",
"pages": "3081--3087",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Di\u00f3nico Carrillo, and Ivan Meza. 2018a. Probabilistic finite-state morphological seg- menter for wixarika (huichol) language. Journal of Intelligent & Fuzzy Systems, 34(5):3081-3087.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Wixarika-Spanish Parallel Corpus The Wixarika-Spanish Parallel Corpus",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Carrillo",
"middle": [],
"last": "Dionico",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Carrillo Dionico, and Ivan Meza. 2020. The Wixarika-Spanish Parallel Corpus The Wixarika-Spanish Parallel Corpus. (August 2018).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Challenges of language technologies for the indigenous languages of the Americas",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Gerardo",
"middle": [],
"last": "Sierra",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza-Ruiz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "55--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018b. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 55-69, Santa Fe, New Mexico, USA. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas",
"authors": [
{
"first": "Manuel",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Arturo",
"middle": [],
"last": "Oncevay",
"suffix": ""
},
{
"first": "Abteen",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Annette",
"middle": [],
"last": "Rios",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Ximena",
"middle": [],
"last": "Gutierrez-Vasques",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Chiruzzo",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Gim\u00e9nez-Lugo",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Ivan Vladimir Meza",
"middle": [],
"last": "Ruiz",
"suffix": ""
},
{
"first": "Rolando",
"middle": [],
"last": "Coto-Solano",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Elisabeth",
"middle": [],
"last": "Mager",
"suffix": ""
},
{
"first": "Ngoc",
"middle": [
"Thang"
],
"last": "Vu",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Kann",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of theThe First Workshop on NLP for Indigenous Languages of the Americas, Online. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Xi- mena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Gim\u00e9nez-Lugo, Ricardo Ramos, Anna Currey, Vishrav Chaudhary, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager, Ngoc Thang Vu, Graham Neubig, and Katha- rina Kann. 2021. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. In Proceed- ings of theThe First Workshop on NLP for Indige- nous Languages of the Americas, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Traductor estad\u00edstico wixarika-espa\u00f1ol usando descomposici\u00f3n morfol\u00f3gica",
"authors": [
{
"first": "Jesus",
"middle": [],
"last": "Manuel Mager",
"suffix": ""
},
{
"first": "Carlos",
"middle": [
"Barr\u00f3n"
],
"last": "Hois",
"suffix": ""
},
{
"first": "Ivan Vladimir Meza",
"middle": [],
"last": "Romero",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ruiz",
"suffix": ""
}
],
"year": 2016,
"venue": "Comtel",
"volume": "",
"issue": "",
"pages": "63--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jesus Manuel Mager Hois, Carlos Barr\u00f3n Romero, and Ivan Vladimir Meza Ruiz. 2016. Traductor estad\u00eds- tico wixarika-espa\u00f1ol usando descomposici\u00f3n mor- fol\u00f3gica. Comtel, pages 63-68.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Carbonell, and A. Lavie. 2006. Building nlp systems for two resource-scarce indigenous languages : Mapudungun and quechua",
"authors": [
{
"first": "C",
"middle": [],
"last": "Monson",
"suffix": ""
},
{
"first": "Ariadna",
"middle": [
"Font"
],
"last": "Llitj\u00f3s",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Aranovich",
"suffix": ""
},
{
"first": "Lori",
"middle": [
"S"
],
"last": "Levin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Brown",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Peterson",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Jaime",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Monson, Ariadna Font Llitj\u00f3s, Roberto Aranovich, Lori S. Levin, R. Brown, E. Peterson, Jaime G. Car- bonell, and A. Lavie. 2006. Building nlp systems for two resource-scarce indigenous languages : Ma- pudungun and quechua.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Expanding the JHU Bible Corpus for Machine Translation of the Indigenous Languages of",
"authors": [
{
"first": "Garrett",
"middle": [],
"last": "Nicolai",
"suffix": ""
},
{
"first": "Edith",
"middle": [],
"last": "Coates",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Miikka",
"middle": [],
"last": "Silfverberg",
"suffix": ""
}
],
"year": 2021,
"venue": "North America",
"volume": "1",
"issue": "",
"pages": "1--5",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Garrett Nicolai, Edith Coates, Ming Zhang, and Mi- ikka Silfverberg. 2021. Expanding the JHU Bible Corpus for Machine Translation of the Indigenous Languages of North America. 1:1-5.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Neural machine translation with a polysynthetic low resource language. Machine Translation",
"authors": [
{
"first": "E",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Ortega",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Castro Mamani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "34",
"issue": "",
"pages": "325--346",
"other_ids": {
"DOI": [
"10.1007/s10590-020-09255-9"
]
},
"num": null,
"urls": [],
"raw_text": "John E Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020. Neural machine translation with a polysynthetic low resource language. Ma- chine Translation, 34(4):325-346.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Odiencorp: Odia-english and odiaonly corpus for machine translation",
"authors": [
{
"first": "Shantipriya",
"middle": [],
"last": "Parida",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Satya Ranjan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dash",
"suffix": ""
}
],
"year": 2019,
"venue": "Smart Intelligent Computing and Applications: Proceedings of the Third International Conference on Smart Computing and Informatics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shantipriya Parida, Ond\u0159ej Bojar, and Satya Ranjan Dash. 2019. Odiencorp: Odia-english and odia- only corpus for machine translation. In Smart Intel- ligent Computing and Applications: Proceedings of the Third International Conference on Smart Com- puting and Informatics, Volume 1, volume 159, page 495. Springer Nature.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pytorch: An imperative style, high-performance deep learning library",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Massa",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Killeen",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Natalia",
"middle": [],
"last": "Gimelshein",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Kopf",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Raison",
"suffix": ""
},
{
"first": "Alykhan",
"middle": [],
"last": "Tejani",
"suffix": ""
},
{
"first": "Sasank",
"middle": [],
"last": "Chilamkurthy",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Steiner",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Bai",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "8024--8035",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024-8035. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Training tips for the transformer model",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "110",
"issue": "1",
"pages": "43--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Popel and Ond\u0159ej Bojar. 2018. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43-70.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "chrf++: words helping character n-grams",
"authors": [
{
"first": "Maja",
"middle": [],
"last": "Popovi\u0107",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the second conference on machine translation",
"volume": "",
"issue": "",
"pages": "612--618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maja Popovi\u0107. 2017. chrf++: words helping character n-grams. In Proceedings of the second conference on machine translation, pages 612-618.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Parallel Global Voices: a collection of multilingual corpora with citizen media stories",
"authors": [
{
"first": "Prokopis",
"middle": [],
"last": "Prokopidis",
"suffix": ""
},
{
"first": "Vassilis",
"middle": [],
"last": "Papavassiliou",
"suffix": ""
},
{
"first": "Stelios",
"middle": [],
"last": "Piperidis",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "900--905",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prokopis Prokopidis, Vassilis Papavassiliou, and Ste- lios Piperidis. 2016. Parallel Global Voices: a col- lection of multilingual corpora with citizen media stories. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 900-905, Portoro\u017e, Slovenia. Eu- ropean Language Resources Association (ELRA).",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural machine translation: A review",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Stahlberg",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Artificial Intelligence Research",
"volume": "69",
"issue": "",
"pages": "343--418",
"other_ids": {
"DOI": [
"10.1613/JAIR.1.12007"
]
},
"num": null,
"urls": [],
"raw_text": "Felix Stahlberg. 2020. Neural machine translation: A review. Journal of Artificial Intelligence Research, 69:343-418.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998-6008.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A Survey of Deep Learning Techniques for Neural Machine Translation. arXiv e-prints",
"authors": [
{
"first": "Shuoheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuxin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xiaowen",
"middle": [],
"last": "Chu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.07526"
]
},
"num": null,
"urls": [],
"raw_text": "Shuoheng Yang, Yuxin Wang, and Xiaowen Chu. 2020. A Survey of Deep Learning Techniques for Neu- ral Machine Translation. arXiv e-prints, page arXiv:2002.07526.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"content": "<table><tr><td>Task</td><td colspan=\"7\">Baseline BLEU CharF Submission# BLEU CharF BLEU CharF Tamalli Best Competitor</td></tr><tr><td>es-aym</td><td>0.01</td><td>0.157</td><td>4</td><td>0.03</td><td>0.202</td><td>2.29</td><td>0.283</td></tr><tr><td>es-bzd</td><td>0.01</td><td>0.068</td><td>3</td><td>1.09</td><td>0.132</td><td>2.39</td><td>0.165</td></tr><tr><td>es-cni</td><td>0.01</td><td>0.102</td><td>1</td><td>0.01</td><td>0.253</td><td>3.05</td><td>0.258</td></tr><tr><td>es-gn</td><td>0.12</td><td>0.193</td><td>5</td><td>1.9</td><td>0.207</td><td>6.13</td><td>0.336</td></tr><tr><td>es-hch</td><td>2.2</td><td>0.126</td><td>1</td><td>0.01</td><td>0.214</td><td>9.63</td><td>0.304</td></tr><tr><td>es-nah</td><td>0.01</td><td>0.157</td><td>1</td><td>0.03</td><td>0.218</td><td>2.38</td><td>0.266</td></tr><tr><td>es-oto</td><td>0</td><td>0.054</td><td>1</td><td>0.01</td><td>0.118</td><td>1.69</td><td>0.147</td></tr><tr><td>es-quy</td><td>0.05</td><td>0.304</td><td>5</td><td>0.96</td><td>0.273</td><td>2.91</td><td>0.346</td></tr><tr><td>es-shp</td><td>0.01</td><td>0.121</td><td>1</td><td>0.06</td><td>0.204</td><td>5.43</td><td>0.329</td></tr><tr><td>es-tar</td><td>0</td><td>0.039</td><td>1</td><td>0.04</td><td>0.155</td><td>1.07</td><td>0.184</td></tr></table>",
"text": "Statistics of the official dataset. The statistics include the number of sentences and tokens (train/dev/test) for each language pair.",
"num": null,
"type_str": "table"
},
"TABREF2": {
"html": null,
"content": "<table><tr><td>2016; Mager et al., 2018a; Chiruzzo et al., 2020;</td></tr><tr><td>Feldman and Coto-Solano, 2020; Agi\u0107 and Vuli\u0107,</td></tr><tr><td>2019; Prokopidis et al., 2016; Galarreta et al., 2017;</td></tr><tr><td>Ebrahimi et al., 2021) for knowing these details.</td></tr></table>",
"text": "Evaluation Results. All results are from the \"Track2: Development Set Not Used for Training\". For all the tasks, the source language is Spanish. The table contains the best results of our team against the best score by the competitor in its track.",
"num": null,
"type_str": "table"
}
}
}
}