{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:08.013091Z" }, "title": "NRC-CNRC Machine Translation Systems for the 2021 AmericasNLP Shared Task", "authors": [ { "first": "Rebecca", "middle": [], "last": "Knowles", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Darlene", "middle": [], "last": "Stewart", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Samuel", "middle": [], "last": "Larkin", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Patrick", "middle": [], "last": "Littell", "suffix": "", "affiliation": {}, "email": "patrick.littell@nrc-cnrc.gc.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We describe the NRC-CNRC systems submitted to the AmericasNLP shared task on machine translation. We submitted systems translating from Spanish into Wix\u00e1rika, Nahuatl, Rar\u00e1muri, and Guaran\u00ed. Our best neural machine translation systems used multilingual pretraining, ensembling, finetuning, training on parts of the development data, and subword regularization. We also submitted translation memory systems as a strong baseline.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We describe the NRC-CNRC systems submitted to the AmericasNLP shared task on machine translation. We submitted systems translating from Spanish into Wix\u00e1rika, Nahuatl, Rar\u00e1muri, and Guaran\u00ed. Our best neural machine translation systems used multilingual pretraining, ensembling, finetuning, training on parts of the development data, and subword regularization. We also submitted translation memory systems as a strong baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper describes experiments on translation from Spanish into Wix\u00e1rika, Nahuatl, Rar\u00e1muri, and Guaran\u00ed, as part of the First Workshop on Natural Language Processing (NLP) for Indigenous Languages of the Americas (AmericasNLP) 2021", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Shared Task on open-ended machine translation. Our approach to this task was to explore the application of simple, known methods of performing neural machine translation (NMT) for low-resource languages to a subset of the task languages. Our initial experiments were primarily focused on the following questions: (1) How well does multilingual NMT work in these very low resource settings? (2) Is it better to build multilingual NMT systems using only closely-related languages or does it help to add data from additional languages?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(3) Is applying subword regularization helpful?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As we progressed through the task, it raised questions regarding domain and about use cases for low-resource machine translation. 
The approaches that we used for this task are not entirely language-agnostic; they might be more appropriately characterized as \"language na\u00efve\" in that we applied some simple language-specific pre- and post-processing, but did not incorporate any tools that required in-depth knowledge of the language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We submitted four systems, including ensembles, single systems, and a translation memory baseline. Our best system (S.0) consisted of an ensemble of systems incorporating multilingual training and finetuning (including on development data as pseudo-in-domain data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The shared task provided data for 10 language pairs, all with the goal of translating from Spanish. We chose to start with Wix\u00e1rika (hch; Mager et al., 2018) , Nahuatl (nah; Gutierrez-Vasques et al., 2016) , and Rar\u00e1muri (tar; Brambila, 1976) as our main three languages of interest, all of which are languages in the Uto-Aztecan family indigenous to Mexico. We added Guaran\u00ed (gn; Chiruzzo et al., 2020) as an unrelated language (as spoken in Paraguay), to explore building multilingual NMT systems within and across language families. Ebrahimi et al. (2021) describes work on collecting development and test sets for the languages in the shared task. The datasets vary in size, dialect and orthographic variation/consistency, and level of domain match to the development and test data. Due to space considerations, we direct readers to the task page and the dataset information page for more information on the languages and on the datasets provided for the task. 1 Given the size of the data (Table 1) , additional data collection (particularly of data in the domain of interest) is likely one of the most effective ways to improve machine translation quality. However, noting both ethical (Lewis et al., 2020) and quality (Caswell et al., 2021) concerns when it comes to collecting or using data for Indigenous languages without community collaboration, we limited our experiments to data provided for the shared task.", "cite_spans": [ { "start": 138, "end": 157, "text": "Mager et al., 2018)", "ref_id": "BIBREF15" }, { "start": 160, "end": 173, "text": "Nahuatl (nah;", "ref_id": null }, { "start": 174, "end": 205, "text": "Gutierrez-Vasques et al., 2016)", "ref_id": "BIBREF6" }, { "start": 227, "end": 242, "text": "Brambila, 1976)", "ref_id": "BIBREF1" }, { "start": 381, "end": 403, "text": "Chiruzzo et al., 2020)", "ref_id": "BIBREF3" }, { "start": 536, "end": 558, "text": "Ebrahimi et al. (2021)", "ref_id": null }, { "start": 1192, "end": 1212, "text": "(Lewis et al., 2020)", "ref_id": null }, { "start": 1225, "end": 1247, "text": "(Caswell et al., 2021)", "ref_id": null } ], "ref_spans": [ { "start": 994, "end": 1003, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Data and Preprocessing", "sec_num": "2" }, { "text": "We used standard preprocessing scripts from Moses (Koehn et al., 2007) : clean-corpus-n.perl (on training data only), normalize-punctuation.perl, and tokenizer.perl (applied to all text, regardless of whether it already appeared tokenized). 2 The only language-specific preprocessing we performed was to replace \"+\" with an alternative character (reverted in postprocessing) for Wix\u00e1rika text to prevent the tokenizer from oversegmenting the text. 
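As an illustration, this \"+\" protection step can be sketched as follows (a minimal sketch, not our exact scripts; per Appendix B, U+0268 is the placeholder character, and the sketch assumes that character does not otherwise occur in the data):

def protect_plus(line):
    # Replace '+' before Moses tokenization so the tokenizer does not split on it.
    return line.replace('+', '\u0268')  # U+0268 LATIN SMALL LETTER I WITH STROKE

def restore_plus(line):
    # Revert the placeholder after detokenization (see Appendix B).
    return line.replace('\u0268', '+')
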
We note that the 13a tokenizer used by sacrebleu (Post, 2018) tokenizes \"+\", meaning that scores that incorporate word n-grams, like BLEU (Papineni et al., 2002) , are artificially inflated for Wix\u00e1rika.", "cite_spans": [ { "start": 50, "end": 70, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF13" }, { "start": 241, "end": 242, "text": "2", "ref_id": null }, { "start": 497, "end": 509, "text": "(Post, 2018)", "ref_id": "BIBREF19" }, { "start": 586, "end": 609, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing and Postprocessing", "sec_num": "2.1" }, { "text": "We detokenize (after unBPEing) the text and perform a small amount of language-specific postprocessing, which we found to have minimal effect on CHRF (Popovi\u0107, 2015) and some effect on BLEU on development data.", "cite_spans": [ { "start": 150, "end": 165, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing and Postprocessing", "sec_num": "2.1" }, { "text": "Following (Ding et al., 2019) , we sweep a range of byte-pair encoding (BPE; Sennrich et al., 2016) vocabulary sizes: 500, 1000, 2000, 4000, and 8000 merges (we do not go beyond this, because of sparsity/data size concerns, though some results suggest we should consider larger sizes).", "cite_spans": [ { "start": 10, "end": 29, "text": "(Ding et al., 2019)", "ref_id": "BIBREF4" }, { "start": 77, "end": 99, "text": "Sennrich et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "BPE and BPE-Dropout", "sec_num": "2.2" }, { "text": "For each language pair or multilingual grouping, we learned a BPE model jointly from the concatenation of the source and target sides of the parallel data using subword-nmt (Sennrich et al., 2016) , and then extracted separate source- and target-side vocabularies. We then applied the joint BPE model, filtered by the source or target vocabulary, to the corresponding data.", "cite_spans": [ { "start": 173, "end": 196, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "BPE and BPE-Dropout", "sec_num": "2.2" }, { "text": "We apply BPE-dropout (Provilkov et al., 2020 ) in part to assist with data sparsity and in part because it may be an effective way of handling orthographic variation (as a generalization of the spelling errors that it helps systems become more robust to). Usually, BPE-dropout would be performed during training as mini-batches are generated, but we opted to generate 10 BPE-dropout versions of the training corpus using a dropout rate of 0.1 as part of our preprocessing. We then simply concatenate all 10 alternate versions to form the training corpus.", "cite_spans": [ { "start": 21, "end": 44, "text": "(Provilkov et al., 2020", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "BPE and BPE-Dropout", "sec_num": "2.2" }, { "text": "We report CHRF (Popovi\u0107, 2015) scores computed with sacrebleu (Post, 2018) .", "cite_spans": [ { "start": 15, "end": 30, "text": "(Popovi\u0107, 2015)", "ref_id": "BIBREF18" }, { "start": 62, "end": 74, "text": "(Post, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Models and Experiments", "sec_num": "3" }, { "text": "We trained Transformer (Vaswani et al., 2017) models using Sockeye-1.18.115 (Hieber et al., 2018) and cuda-10.1. 
We used the default values of 6 encoder/decoder layers, 8 attention heads, the Adam (Kingma and Ba, 2015) optimizer, label smoothing of 0.1, a cross-entropy loss, a model size of 512 units with an FFN size of 2048, and unshared source/target vocabularies. We performed early stopping after 32 checkpoints without improvement. We chose custom checkpoint intervals yielding approximately two checkpoints per epoch. We optimized for CHRF instead of BLEU and used the whole validation set during validation. The batch size was set to 8192 tokens, and the maximum sequence length for both source and target was set to 200 tokens. We did not use weight tying, but we set gradient clipping to absolute and lowered the initial learning rate to 0.0001.", "cite_spans": [ { "start": 23, "end": 45, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" }, { "start": 76, "end": 97, "text": "(Hieber et al., 2018)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.1" }, { "text": "We performed preliminary experiments decreasing the number of encoder and decoder layers in our bilingual systems to 3 each, but did not observe improvements. Nevertheless, a wider search of architecture parameters, as in Araabi and Monz (2020) , could yield improvements. After submission, we performed some additional experiments, building multilingual models with a range of numbers of decoder heads (1, 2, 4, 8), finding that a smaller number of decoder heads (e.g., 2) may be a promising avenue to explore in future work. Other approaches from Araabi and Monz (2020) also appear to show promise in our preliminary post-submission experiments, including a 4-layer encoder with a 6-layer decoder and changing layer normalization from pre to post, demonstrating that there are additional ways to improve upon our submitted systems.", "cite_spans": [ { "start": 222, "end": 244, "text": "Araabi and Monz (2020)", "ref_id": "BIBREF0" }, { "start": 549, "end": 571, "text": "Araabi and Monz (2020)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3.1" }, { "text": "For each of the four language pairs, we built baseline systems translating out of Spanish. The best baseline systems with their respective BPE sizes are shown in Table 2 . All of our baseline CHRF scores are higher than the official baselines released during the shared task, 3 likely due in part to more consistent tokenization between training and development/test (see Appendix C for additional discussion of training and development/test mismatch). For all languages except Rar\u00e1muri, adding BPE-dropout improved performance.", "cite_spans": [], "ref_spans": [ { "start": 162, "end": 169, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "MT Baselines", "sec_num": "3.2" }, { "text": "Both Johnson et al. (2017) and Rikters et al. (2018) train multilingual systems by prepending a special token at the start of the source sentence to indicate the language into which the text should be translated. For example, a Nahuatl-specific token prepended (space-separated) to a Spanish source sentence indicates that the text should be translated into Nahuatl.", "cite_spans": [ { "start": 28, "end": 49, "text": "Rikters et al. (2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual Systems", "sec_num": "3.3" }, { "text": "To train such a model, we concatenate all training data after adding these special tokens; the development data is similarly the concatenation of all development data. 
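As a sketch, the tagging step can be implemented as follows (the literal tag strings here are illustrative assumptions; this extraction of the paper does not preserve the exact token form we used):

LANG_TAGS = {'hch': '<2hch>', 'nah': '<2nah>', 'tar': '<2tar>', 'gn': '<2gn>'}  # hypothetical tag forms

def tag_source(lines, tgt_lang):
    # Prepend the (space-separated) target-language token to each Spanish source line.
    return [LANG_TAGS[tgt_lang] + ' ' + line for line in lines]
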
We do not perform any upsampling or downsampling to even out the distribution of languages in our training or development data (rather, we rely on language finetuning, as described in Section 3.4, to improve translation quality). One of our initial questions was whether language relatedness mattered for building multilingual systems, so we first built a three-language (Wix\u00e1rika, Nahuatl, Rar\u00e1muri) model, Multilingual-3, and then built a four-language (Guaran\u00ed, Wix\u00e1rika, Nahuatl, Rar\u00e1muri) model, Multilingual-4. The Multilingual-4 system had consistently higher scores for all languages than the Multilingual-3 system, so we moved forward with experiments on Multilingual-4. Adding BPE-dropout to Multilingual-4 appeared to improve performance for all languages, but in the case of Wix\u00e1rika (the language with the smallest amount of data), it was nearly identical to the baseline. Within the scope of this paper, we do not experiment with a wider range of languages (i.e., the remaining 6 languages), though it would not be surprising to find that additional language resources might also be beneficial. For the Multilingual-3 and Multilingual-4 models, the vocabulary is trained and extracted from the respective concatenated training corpus, so the target vocabulary is shared by all target languages as a single embedding matrix. Where languages share subwords, these are shared in the vocabulary (i.e., the language-specific tags are applied at the sentence level, not at the token level). The consequence of this is that each particular target language may not use the full multilingual vocabulary; we expect the system to learn which vocabulary items to associate (or not associate) with each language. For example, with a vocabulary produced through 8k merges, the full Multilingual-4 target-side training corpus contains 7431 unique subwords, but the language-specific subcorpora that combine to make it only use subsets of that: Guaran\u00ed training data contains 5936 unique subwords, while Wix\u00e1rika contains only 1389 (the overlap between Guaran\u00ed and Wix\u00e1rika subwords is 1089 subwords). Table 3 shows the number of unique subwords in the target language training corpus for the Multilingual-4 setting. Our systems are free to generate any subword from the full combined vocabulary of target subwords since there is no explicit restriction during decoding. Thus, in some cases, our multilingual systems do generate subwords that were not seen in a specific language's training data vocabulary subset; while some of these could result in translation errors, a preliminary qualitative analysis suggests that many of them may be either source language words (being copied) or numerical tokens, both of which point to potential benefits of having used the larger concatenated multilingual corpus.", "cite_spans": [], "ref_spans": [ { "start": 2264, "end": 2271, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Multilingual Systems", "sec_num": "3.3" }, { "text": "We can then finetune 4 the multilingual models to be language-specific models. 5 The intuition here is that the multilingual model may be able to encode useful information about the source language, terms that should be copied (e.g., names/numbers), target grammar, or other useful properties, and can then be specialized for a specific language, while still retaining the most relevant or most general information learned from all of the training languages. 
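The mechanics of this parent-to-child step are sketched below (a hedged sketch: the --params warm-start flag and the paths reflect our understanding of Sockeye 1.18, not our exact commands):

import subprocess

def finetune_child(parent_dir, train_src, train_tgt, dev_src, dev_tgt, out_dir):
    # Continue training, initializing from the multilingual parent's best parameters.
    subprocess.run([
        'python', '-m', 'sockeye.train',
        '--source', train_src, '--target', train_tgt,
        '--validation-source', dev_src, '--validation-target', dev_tgt,
        '--params', parent_dir + '/params.best',  # warm start from the parent model
        '--output', out_dir,
    ], check=True)
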
We perform this finetuning by continuing training on each language's training data, using that language's development data for validation, building a new child system for each language from the parent Multilingual-4 system (with or without dropout). 6 When we do this, we no longer use the language-specific tags used during multilingual model training.", "cite_spans": [ { "start": 79, "end": 80, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Language Finetuning", "sec_num": "3.4" }, { "text": "Language finetuning appears to produce improvements, with some languages performing better with dropout and some better without, as seen in the final two lines of Table 2 . Rar\u00e1muri appears to have a drop in performance after language finetuning with dropout. However, all Rar\u00e1muri scores are extremely low; it is likely that many of the decisions we make on Rar\u00e1muri do not represent real improvements or performance drops, but rather noise, so we have very low confidence in the generalizability of the choices (Mathur et al., 2020) .", "cite_spans": [ { "start": 503, "end": 524, "text": "(Mathur et al., 2020)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 153, "end": 160, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Language Finetuning", "sec_num": "3.4" }, { "text": "Noting that the development data was of a different domain, and sometimes even a different dialect or orthography than the training data, we followed an approach used in Knowles et al. (2020) : we divided the development set (in this case, in half), performing finetuning with half of it and using the remainder for early stopping (and evaluation). We acknowledge that, given the very small sizes of the development sets, minor differences we observe are likely to be noise rather than true improvements (or true drops in performance); while we made choices about what systems to submit based on those, we urge caution in generalizing these results or drawing strong conclusions.", "cite_spans": [ { "start": 170, "end": 191, "text": "Knowles et al. (2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Development Finetuning", "sec_num": "3.5" }, { "text": "We show performance of models finetuned on the first half of the development set (performance measured on the second half of the development set), both with and without first finetuning for language, in Table 4 . We also compare these against the best systems we trained without training on development data, as well as with the translation memory approach (Section 4.3). We submitted single systems (not ensembled) that were trained using the first half of the development set (labeled S.2 in submission). They were selected based on the highest scores on the second half of the development set (see Table 4 for scores and vocabulary sizes). For Guaran\u00ed, Wix\u00e1rika, and Nahuatl, we selected systems of the type Multi.-4 + BPE Dr.; Lang. finetuning; 1/2 Dev. finetuning. For Rar\u00e1muri, we selected a system with only 1/2 dev. finetuning (Multi.-4 + BPE Dr.; 1/2 Dev. Ft.).", "cite_spans": [], "ref_spans": [ { "start": 203, "end": 210, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 597, "end": 604, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Development Finetuning", "sec_num": "3.5" }, { "text": "Our best systems were ensembles (labeled S.0 in submission) of the systems described above and their corresponding systems trained with the second half of the development set. 
For Guaran\u00ed, we also submitted a four-system ensemble (S.4): the two Multi.-4 + BPE Dr.; Lang. finetuning; 1/2 Dev finetuning systems and the two Multi.-4 + BPE Dr.; 1/2 Dev Ft. systems. It performed similarly to the two-system ensemble.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Submitted Systems", "sec_num": "4" }, { "text": "We also submitted systems that were not trained on development data. For these, we were able to select the best system from our experiments, based on its CHRF score on the full development set. For Guaran\u00ed and Nahuatl, these were Multi.-4 + BPE Dr.; Lang. ft. systems; for Rar\u00e1muri, it was the Multi.-4 + BPE Dr.; Lang. ft. (no dr.) system; and for Wix\u00e1rika, it was an ensemble of the two.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Systems without Dev. (S.1)", "sec_num": "4.2" }, { "text": "Noting the very low automatic metric scores across languages, and lacking the target language expertise to determine whether the output is fluent but not adequate, adequate but not fluent, or neither, we decided to build a translation memory submission. In computer aided translation (CAT), a \"translation memory\" (TM) is a database of prior source-target translation pairs produced by human translators. It can be used in CAT as follows: when a new sentence arrives to be translated, the system finds the closest source-language \"fuzzy match\" (similarity is typically a proprietary measure, but it could be as simple as Levenshtein distance) and returns its translation (possibly with annotations about the areas where the sentences differed) to the translator for them to \"post-edit\" (modify until it is a valid translation of the new sentence to be translated).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Memory (S.3)", "sec_num": "4.3" }, { "text": "With the understanding that the development and test sets are closer to one another in terms of domain and dialect than they are to the training data, we treat the development set as a TM. Following Simard and Fujita (2012), we use an MT evaluation metric (CHRF) as the similarity score between the test source sentences and the TM source sentences, with the translation of the closest source development set sentence as the output (in the event of a tie, we chose the first translation). We validated this approach on the two halves of the development set (using the first half as a TM for the second half and vice versa). On half the development set, for all languages except Guaran\u00ed, the TM outperformed the system trained without", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Translation Memory (S.3)", "sec_num": "4.3" }, { "text": "any development data (S.1), highlighting the differences between the training and development/test data (Table 4) , which is particularly striking because the TM used for these experiments consisted of only half the development set (<500 lines) as compared to the full training set. 8
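The TM lookup itself is simple; the following is a minimal sketch using sacrebleu's sentence-level CHRF (it assumes the sentence_chrf convenience function, whose reference argument may be a single string or a list depending on the sacrebleu version; our exact metric parameters may differ):

from sacrebleu import sentence_chrf

def tm_translate(src_line, tm_sources, tm_targets):
    # Score the input against every TM source sentence and return the translation
    # of the best match; max() keeps the first candidate in the event of a tie.
    scores = [sentence_chrf(src_line, [tm_src]).score for tm_src in tm_sources]
    best = max(range(len(tm_sources)), key=lambda i: scores[i])
    return tm_targets[best]
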
On the test set, only the Rar\u00e1muri TM outperformed the best of our MT systems built without training on development.", "cite_spans": [], "ref_spans": [ { "start": 104, "end": 113, "text": "(Table 4)", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Translation Memory (S.3)", "sec_num": "4.3" }, { "text": "Our results consistently placed our submissions as the second-ranking team (behind Helsinki's top 2-3 submissions) in the with-development-set group, and the second- or third-ranking team (2nd, 3rd, or 4th submission) within the no-development-set cluster as measured by CHRF. For Wix\u00e1rika and Rar\u00e1muri particularly, our TM submission proved to be a surprisingly strong baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "We note that CHRF and BLEU are not strictly correlated, and for all languages, scores are low. This raises questions about goals, metrics, and use cases for very low resource machine translation. We provide a short discussion of this in Appendix A. It will require future work and human evaluation to determine whether such systems are useful or harmful in downstream tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "A When does it make sense to build MT systems?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Our recent participation in shared tasks has made us consider scenarios and use cases for low-resource MT, which we discuss in this appendix. In the WMT 2020 news translation task, the Inuktitut-English pair was arguably mid-resource (over a million lines of parallel legislative text), with the Hansard (legislative assembly) portion of the development and test set being a strong domain match to the training data. The news data in the development and test sets represented a domain mismatch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In the supervised low-resource task at WMT, the German-Upper Sorbian language pair was arguably low-resource (approximately 60,000 lines of parallel text). However, the test set was extremely well-matched to the training data (though it contained no exact duplicates), resulting in surprisingly high automatic metric scores (BLEU scores in the 50s and 60s).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In this AmericasNLP shared task, we observed perhaps the hardest scenario (outside of zero-shot): low resource with domain/dialect/orthographic mismatch. It should come as no surprise, then, that we observe extremely low automatic metric scores for this task.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "(Summary of settings: Low-Res. with domain match: Upper Sorbian; Low-Res. with domain mismatch: AmericasNLP; Mid-Res. with domain match: Inuktitut Hansard; Mid-Res. with domain mismatch: Inuktitut News.) For both the Inuktitut and Upper Sorbian systems, we know of community and/or government organizations that may be interested in using machine translation technology, for example as part of a computer aided translation (CAT) tool. 
10 Provided that human evaluation found the quality level of the machine translation output appropriately high (no human evaluation was performed in the Upper Sorbian task, and the Inuktitut human evaluation is ongoing), there appear to be clear suitable use cases here, such as part of a human translation workflow translating the Hansard as it is produced or translating more of the same-domain Upper Sorbian/German text. (Footnote 10: For example, the presentation of the Upper Sorbian-German machine translation tool sotra (https://soblex.de/sotra/) encourages users to proofread and correct the output where necessary: https://www.powtoon.com/online-presentation/cr2llmDWRR9/) It is less clear, where there is a domain mismatch, whether the quality is anywhere near high enough for use in a CAT setting. We know that the usefulness of machine translation in CAT tools varies by translator (Koehn and Germann, 2014) ; some find even relatively low-quality translations useful, while others benefit only from very high-quality translations, and so on. There are also potential concerns that MT may influence the way translators choose to translate text.", "cite_spans": [ { "start": 669, "end": 671, "text": "10", "ref_id": null }, { "start": 1204, "end": 1229, "text": "(Koehn and Germann, 2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "But what about this low-resource, domain-mismatch setting? While human evaluation would be the real test, we suspect that the output quality may be too low to be beneficial to most translators. As a brief example, we consider the CHRF scores that were generated between two Spanish sentences as a byproduct of the creation of our translation memory submission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "\u2022 Washington ha perdido todos los partidos.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "(Washington has lost all the games.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "\u2022 Continuaron visitando todos los d\u00edas. (They continued visiting every day.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "In part on the basis of the 10-character (spaces ignored) substring \"do todos los\" (for which \"todos los\" can be glossed as \"every\", but the string-initial \"do\" suffix belongs to two different verbs, one of which is in its past participle form and the other of which is in its present participle form), these sentences have a score of 0.366 CHRF (if we consider the first to be the \"system\" output and the second to be the \"reference\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "Here of course both sentences are grammatical, but they are clearly not semantic equivalents. Nevertheless, comparing the two produces a CHRF score comparable to the highest scores observed in this task. 11 We argue, then, that if the goal is CAT, it may be better to consider a TM-based approach, even though it has lower scores, given that CAT tools are well-equipped to handle TMs, and typically provide some sort of indication about the differences between the sentence to be translated and its fuzzy match from the TM as a guide for the translator. 
In an MT-based approach, the translator may be confronted with fluent text that is not semantically related to the source, ungrammatical language, or other types of problematic output.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "If the goal of these MT tools is not CAT, but rather for a reader to access text in their preferred language, we expect that neither the MT systems nor the TMs would provide the kind of quality that users of online MT systems have come to expect. This raises the question of how to alert potential users to the possibility of low-quality MT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "It is possible that there may be other use cases, in which case a downstream evaluation may be more appropriate than automatic metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Match", "sec_num": null }, { "text": "Training corpora (but not development or test corpora) were processed using the Moses clean-corpus-n.perl script (Koehn et al., 2007) , with a sentence length ratio of 15:1 and minimum and maximum lengths of 1 and 200, respectively. All corpora were preprocessed with the normalize-punctuation.perl script, with the language set to Spanish (since no language-specific rules are available for the other languages in this task), and all instances of U+FEFF ZERO WIDTH NO-BREAK SPACE were removed. The only additional language-specific preprocessing that we performed was to replace \"+\" with U+0268 LATIN SMALL LETTER I WITH STROKE in the Wix\u00e1rika text; this prevents the text from being oversegmented by the tokenizer, and is reverted in postprocessing. 12 We note that it might be desirable to perform a similar replacement of apostrophes with a modifier letter apostrophe, but because some of the training data was released in tokenized format we were not confident that we could guarantee consistency in such an approach. 13 All text is then tokenized with the Moses tokenizer tokenizer.perl, with aggressive hyphen splitting, language set to Spanish, and no HTML escaping. 14 Note that we apply the tokenization even to already-tokenized training data, in the hopes of making the different datasets as consistent as possible.", "cite_spans": [ { "start": 113, "end": 133, "text": "(Koehn et al., 2007)", "ref_id": "BIBREF13" }, { "start": 751, "end": 753, "text": "12", "ref_id": null }, { "start": 1022, "end": 1024, "text": "13", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "B Pre- and Post-processing Details", "sec_num": null }, { "text": "Postprocessing consists of unBPEing then detokenizing using Moses' detokenizer.perl. An extra step is needed for Wix\u00e1rika to revert back to the \"+\" character. 12 Note, however, that the 13a tokenizer used by sacrebleu (Post, 2018) tokenizes \"+\", meaning that BLEU scores and other scores that incorporate word n-grams are artificially inflated for Wix\u00e1rika.", "cite_spans": [ { "start": 140, "end": 142, "text": "12", "ref_id": null }, { "start": 199, "end": 211, "text": "(Post, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "B Pre- and Post-processing Details", "sec_num": null }, { "text": "13 With CHRF as the main metric, this is less of a concern than it would be were the main metric BLEU or human evaluation. 
We note that even the use of CHRF++, with its use of word bigrams, would make this a concern.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Pre- and Post-processing Details", "sec_num": null }, { "text": "14 tokenizer.perl -a -l es -no-escape. We also perform a small amount of extra language-specific postprocessing, which has limited effect on CHRF (it primarily involves tokenization) and some effect on BLEU. For example, for Guaran\u00ed, we delete spaces around apostrophes and replace sequences of three periods with U+2026 HORIZONTAL ELLIPSIS. For Wix\u00e1rika, we add a space after the \"\u00bf\" and \"\u00a1\" characters. For Nahuatl, we make sure that \"$\" is separated from alphabetic characters by a space. For Rar\u00e1muri, we replace three periods with the horizontal ellipsis, convert single apostrophes or straight quotation marks before \"u\" or \"U\" to U+2018 LEFT SINGLE QUOTATION MARK and remove the space between it and the letter, and then convert any remaining apostrophes or single straight quotes to U+2019 RIGHT SINGLE QUOTATION MARK as well as removing any surrounding spaces. These are all heuristics based on frequencies of those characters in the development data, and we note that their effect on BLEU scores and CHRF scores is minimal (as measured on development data).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Pre- and Post-processing Details", "sec_num": null }, { "text": "The Wix\u00e1rika and Guaran\u00ed data was provided untokenized, but Nahuatl and Rar\u00e1muri datasets contained training data that was tokenized while the development and test data was untokenized. Here we briefly illustrate the impact of the mismatch, through token and type coverage. In Table 7 , we show what percentage of target language development tokens (and types) were also observed in the training data, before and after applying tokenization. Table 8 shows the same for source language. Table 9 shows source coverage for the test data instead of the development data. Finally, Table 10 shows what percentage of the source test data is contained in the development set. Unsurprisingly, coverage is higher across the board for Spanish (source), which is less morphologically complex than the target languages. Spanish-Rar\u00e1muri has the lowest coverage in both source and target. Spanish-Nahuatl has the second-highest coverage on the source side, but not on the target side, perhaps due to the historical content in the training data and/or the orthographic conversions applied. Spanish-Guaran\u00ed has the highest coverage on both source and target. 
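As a sketch, the token and type coverage percentages reported in these tables can be computed as follows (assuming whitespace-tokenized input lines; this is an illustration, not our exact evaluation script):

def coverage(train_lines, dev_lines):
    # Percentage of dev tokens (and types) already seen in the training data.
    train_types = {tok for line in train_lines for tok in line.split()}
    dev_tokens = [tok for line in dev_lines for tok in line.split()]
    dev_types = set(dev_tokens)
    token_cov = 100.0 * sum(tok in train_types for tok in dev_tokens) / len(dev_tokens)
    type_cov = 100.0 * len(dev_types & train_types) / len(dev_types)
    return token_cov, type_cov
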
Applying BPE results in approximately 100% coverage, but it is still worth noting the low full-word coverage, as novel vocabulary may be hard for the systems to translate or to generate.", "cite_spans": [], "ref_spans": [ { "start": 277, "end": 284, "text": "Table 7", "ref_id": null }, { "start": 442, "end": 449, "text": "Table 8", "ref_id": null }, { "start": 486, "end": 493, "text": "Table 9", "ref_id": null }, { "start": 576, "end": 584, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "C Coverage", "sec_num": null }, { "text": "Task page: http://turing.iimas.unam.mx/americasnlp/, Dataset descriptions: https://github.com/AmericasNLP/americasnlp2021/blob/main/data/information_datasets.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Appendix B for details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/AmericasNLP/americasnlp2021/tree/main/baseline_system", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "In our tables, we use the following notation to indicate finetuning: \"[parent model]; [child finetuning]\" and this notation stacks, such that \"X; Y; Z\" indicates a parent model X, finetuned as Y, and then subsequently finetuned as Z. 5 We note that all finetuning experiments reported in this paper used BPE-dropout unless otherwise noted. 6 We note that some catastrophic forgetting may occur during this process; it may be worth considering modifying the learning rate for finetuning, but we leave this to future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "See Appendix C for additional detail on vocabulary coverage between training, development, and test data. 9 Full list for all languages available here: https://github.com/AmericasNLP/americasnlp2021/blob/main/data/information_datasets.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We acknowledge that this is an imperfect comparison, since the scores in this task are of course not on Spanish output and thus should not be compared directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We thank the translators who provided their expertise and time to make this work possible: 9 Giovany Martinez Sebasti\u00e1n, Pedro Kapoltitan, Jos\u00e9 Antonio (Nahuatl), Silvino Gonz\u00e1lez de la Cr\u00faz (Wix\u00e1rika), Perla Alvarez Britez (Guaran\u00ed), and Mar\u00eda del C\u00e1rmen Sotelo Holgu\u00edn (Rar\u00e1muri). We thank the anonymous reviewers and our colleagues Michel Simard, Gabriel Bernier-Colborne, and Chikiu Lo for their comments and suggestions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "For all languages except Guaran\u00ed, the first half of the development set had higher target language coverage on the second half of the development set, as compared to training target language coverage on the full development set (or second half of the development set), which may explain both the improved performance of systems that trained on development data and the quality of the translation memory system. 
That is, for raw es-hch data, 66.3% of target language tokens in the second half of the development set appeared somewhere in the first half of the development set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Optimizing transformer for low-resource neural machine translation", "authors": [ { "first": "Ali", "middle": [], "last": "Araabi", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3429--3435", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.304" ] }, "num": null, "urls": [], "raw_text": "Ali Araabi and Christof Monz. 2020. Optimizing transformer for low-resource neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3429-3435, Barcelona, Spain (Online). International Committee on Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Diccionario Raramuri-Castellano (Tarahumara)", "authors": [ { "first": "David", "middle": [], "last": "Brambila", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Brambila. 1976. Diccionario Raramuri-Castellano (Tarahumara). Obra Nacional de la Buena Prensa, M\u00e9xico.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Development of a Guarani-Spanish parallel corpus", "authors": [ { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Amarilla", "suffix": "" }, { "first": "Adolfo", "middle": [], "last": "R\u00edos", "suffix": "" }, { "first": "Gustavo", "middle": [ "Gim\u00e9nez" ], "last": "Lugo", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2629--2633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis Chiruzzo, Pedro Amarilla, Adolfo R\u00edos, and Gustavo Gim\u00e9nez Lugo. 2020. Development of a Guarani-Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 2629-2633, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A call for prudent choice of subword merge operations in neural machine translation", "authors": [ { "first": "Shuoyang", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Adithya", "middle": [], "last": "Renduchintala", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Duh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of Machine Translation Summit XVII", "volume": "1", "issue": "", "pages": "204--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shuoyang Ding, Adithya Renduchintala, and Kevin Duh. 2019. A call for prudent choice of subword merge operations in neural machine translation. In Proceedings of Machine Translation Summit XVII Volume 1: Research Track, pages 204-213, Dublin, Ireland. 
European Association for Machine Translation.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models", "authors": [ { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vladimir", "suffix": "" }, { "first": "Gustavo", "middle": [ "A" ], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" } ], "year": null, "venue": "Ngoc Thang Vu, and Katharina Kann. 2021", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models in truly low-resource languages.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Axolotl: a web accessible parallel corpus for Spanish-Nahuatl", "authors": [ { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Hernandez Pompa", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4210--4214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4210-4214, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "The sockeye neural machine translation toolkit at AMTA 2018", "authors": [ { "first": "Felix", "middle": [], "last": "Hieber", "suffix": "" }, { "first": "Tobias", "middle": [], "last": "Domhan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Denkowski", "suffix": "" }, { "first": "David", "middle": [], "last": "Vilar", "suffix": "" }, { "first": "Artem", "middle": [], "last": "Sokolov", "suffix": "" }, { "first": "Ann", "middle": [], "last": "Clifton", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 13th Conference of the Association for Machine Translation in the Americas", "volume": "1", "issue": "", "pages": "200--207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Hieber, Tobias Domhan, Michael Denkowski, David Vilar, Artem Sokolov, Ann Clifton, and Matt Post. 2018. 
The sockeye neural machine translation toolkit at AMTA 2018. In Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track), pages 200-207, Boston, MA. Association for Machine Translation in the Americas.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": { "DOI": [ "10.1162/tacl_a_00065" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Samuel Larkin, and Patrick Littell. 2020. NRC systems for the 2020", "authors": [ { "first": "Rebecca", "middle": [], "last": "Knowles", "suffix": "" }, { "first": "Darlene", "middle": [], "last": "Stewart", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rebecca Knowles, Darlene Stewart, Samuel Larkin, and Patrick Littell. 2020. NRC systems for the 2020", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Inuktitut-English news translation task", "authors": [], "year": null, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "156--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Inuktitut-English news translation task. In Proceedings of the Fifth Conference on Machine Translation, pages 156-170, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "The impact of machine translation quality on human postediting", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Ulrich", "middle": [], "last": "Germann", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation", "volume": "", "issue": "", "pages": "38--46", "other_ids": { "DOI": [ "10.3115/v1/W14-0307" ] }, "num": null, "urls": [], "raw_text": "Philipp Koehn and Ulrich Germann. 2014. The impact of machine translation quality on human postediting. In Proceedings of the EACL 2014 Workshop on Humans and Computer-assisted Translation, pages 38-46, Gothenburg, Sweden. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Moses: Open source toolkit for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Hoang", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Callison-Burch", "suffix": "" }, { "first": "Marcello", "middle": [], "last": "Federico", "suffix": "" }, { "first": "Nicola", "middle": [], "last": "Bertoldi", "suffix": "" }, { "first": "Brooke", "middle": [], "last": "Cowan", "suffix": "" }, { "first": "Wade", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Christine", "middle": [], "last": "Moran", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Zens", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Constantin", "suffix": "" }, { "first": "Evan", "middle": [], "last": "Herbst", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions", "volume": "", "issue": "", "pages": "177--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. 
Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Indigenous protocol and artificial intelligence position paper", "authors": [ { "first": "Jason", "middle": [ "Edward" ], "last": "Lewis", "suffix": "" }, { "first": "Angie", "middle": [], "last": "Abdilla", "suffix": "" }, { "first": "Noelani", "middle": [], "last": "Arista", "suffix": "" }, { "first": "Kaipulaumakaniolono", "middle": [], "last": "Baker", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Benesiinaabandan", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Melanie", "middle": [], "last": "Cheung", "suffix": "" }, { "first": "Meredith", "middle": [], "last": "Coleman", "suffix": "" }, { "first": "Ashley", "middle": [], "last": "Cordes", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Davison", "suffix": "" }, { "first": "K\u016bpono", "middle": [], "last": "Duncan", "suffix": "" }, { "first": "Sergio", "middle": [], "last": "Garzon", "suffix": "" }, { "first": "D", "middle": [ "Fox" ], "last": "Harrell", "suffix": "" }, { "first": "Peter-Lucas", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Kekuhi", "middle": [], "last": "Kealiikanakaoleohaililani", "suffix": "" }, { "first": "Megan", "middle": [], "last": "Kelleher", "suffix": "" }, { "first": "Suzanne", "middle": [], "last": "Kite", "suffix": "" }, { "first": "Olin", "middle": [], "last": "Lagon", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Leigh", "suffix": "" }, { "first": "Maroussia", "middle": [], "last": "Levesque", "suffix": "" } ], "year": null, "venue": "Aboriginal Territories in Cyberspace", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.11573/spectrum.library.concordia.ca.00986506" ] }, "num": null, "urls": [], "raw_text": "Jason Edward Lewis, Angie Abdilla, Noelani Arista, Kaipulaumakaniolono Baker, Scott Benesiinaabandan, Michelle Brown, Melanie Cheung, Meredith Coleman, Ashley Cordes, Joel Davison, K\u016bpono Duncan, Sergio Garzon, D. Fox Harrell, Peter-Lucas Jones, Kekuhi Kealiikanakaoleohaililani, Megan Kelleher, Suzanne Kite, Olin Lagon, Jason Leigh, Maroussia Levesque, Keoni Mahelona, Caleb Moses, Isaac ('Ika'aka) Nahuewai, Kari Noe, Danielle Olson, '\u014ciwi Parker Jones, Caroline Running Wolf, Michael Running Wolf, Marlee Silva, Skawennati Fragnito, and H\u0113mi Whaanga. 2020. Indigenous protocol and artificial intelligence position paper. Project Report 10.11573/spectrum.library.concordia.ca.00986506, Aboriginal Territories in Cyberspace, Honolulu, HI. Edited by Jason Edward Lewis.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Probabilistic finite-state morphological segmenter for wixarika (huichol) language", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Di\u00f3nico", "middle": [], "last": "Carrillo", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza", "suffix": "" } ], "year": 2018, "venue": "Journal of Intelligent & Fuzzy Systems", "volume": "34", "issue": "5", "pages": "3081--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Di\u00f3nico Carrillo, and Ivan Meza. 2018. Probabilistic finite-state morphological segmenter for wixarika (huichol) language. 
Journal of Intelligent & Fuzzy Systems, 34(5):3081-3087.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics", "authors": [ { "first": "Nitika", "middle": [], "last": "Mathur", "suffix": "" }, { "first": "Timothy", "middle": [], "last": "Baldwin", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4984--4997", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.448" ] }, "num": null, "urls": [], "raw_text": "Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2020. Tangled up in BLEU: Reevaluating the evaluation of automatic machine translation evaluation metrics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4984-4997, Online. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "chrF: character n-gram F-score for automatic MT evaluation", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "392--395", "other_ids": { "DOI": [ "10.18653/v1/W15-3049" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BPE-dropout: Simple and effective subword regularization", "authors": [ { "first": "Ivan", "middle": [], "last": "Provilkov", "suffix": "" }, { "first": "Dmitrii", "middle": [], "last": "Emelianenko", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Voita", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1882--1892", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.170" ] }, "num": null, "urls": [], "raw_text": "Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882-1892, Online. Association for Computational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Training and adapting multilingual NMT for less-resourced and morphologically rich languages", "authors": [ { "first": "Mat\u012bss", "middle": [], "last": "Rikters", "suffix": "" }, { "first": "M\u0101rcis", "middle": [], "last": "Pinnis", "suffix": "" }, { "first": "Rihards", "middle": [], "last": "Kri\u0161lauks", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mat\u012bss Rikters, M\u0101rcis Pinnis, and Rihards Kri\u0161lauks. 2018. Training and adapting multilingual NMT for less-resourced and morphologically rich languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A poor man's translation memory using machine translation evaluation metrics", "authors": [ { "first": "Michel", "middle": [], "last": "Simard", "suffix": "" }, { "first": "Atsushi", "middle": [], "last": "Fujita", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 10th Biennial Conference of the Association for Machine Translation in the Americas. Association for Machine Translation in the Americas", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Simard and Atsushi Fujita. 2012. A poor man's translation memory using machine translation evaluation metrics.
In Proceedings of the 10th Biennial Conference of the Association for Machine Translation in the Americas. Association for Machine Translation in the Americas.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 5998-6008.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Systems with Dev. (S.0, S.2, and S.4)", "uris": null, "type_str": "figure" }, "TABREF1": { "text": "Language, language family, and number of lines of training and development data.", "content": "", "html": null, "type_str": "table", "num": null }, "TABREF3": { "text": "System scores (CHRF) on the development set. Vocabulary size in parentheses.", "content": "
", "html": null, "type_str": "table", "num": null }, "TABREF5": { "text": "Number of unique subwords in each language's training corpus (target side) for 1k, 2k, 4k, and 8k BPE merges in a Multilingual-4 scenario.", "content": "
", "html": null, "type_str": "table", "num": null }, "TABREF6": { "text": "Dr.; Lang. Finetune; 1/2 Dev. Ft. 0.338 (4k) 0.368 (8k) 0.376", "content": "
System gn hch nah tar
Multi.-4 + Dropout 0.249 (8k) 0.228 (2k) 0.247 (8k) 0.145 (1k)
Multi.-4 + Dr.; Lang. Finetune 0.260 (8k) 0.261 (2k) 0.252 (2k) 0.137 (500)
Multi.-4 + Dr.; 1/2 Dev. Finetune 0.331 (4k) 0.367 (4k) 0.368 (8k) 0.289 (4k)
Multi.-4 + Dr.; Lang. Finetune; 1/2 Dev. Ft. 0.338 (4k) 0.368 (8k) 0.376 (8k) 0.280 (2k)
S.1 (no dev) 0.260 (8k) 0.266 (2k) 0.252 (2k) 0.150 (8k)
S.2 (1/2 dev, single system) 0.338 (4k) 0.368 (8k) 0.376 (8k) 0.289 (4k)
Translation Memory 0.257 (na) 0.273 (na) 0.285 (na) 0.246 (na)
", "html": null, "type_str": "table", "num": null }, "TABREF7": { "text": "System scores on the second half of the development set.", "content": "
System gn hch nah tar
S.0 0.304 0.327 0.277 0.247
S.4 0.303 - - -
S.2 0.288 0.315 0.273 0.239
S.3/TM 0.163 0.200 0.181 0.165
S.1/no dev 0.261 0.264 0.237 0.143
Helsinki 2 0.376 0.360 0.301 0.258
", "html": null, "type_str": "table", "num": null }, "TABREF8": { "text": "Submitted systems scores (CHRF) on test data.", "content": "
Final row shows best overall submitted system for each
language, Helsinki submission 2.
", "html": null, "type_str": "table", "num": null }, "TABREF9": { "text": "Comparison of recent shared tasks on lowresource machine translation.", "content": "", "html": null, "type_str": "table", "num": null } } } }