{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:21.600905Z" }, "title": "The Helsinki submission to the AmericasNLP shared task", "authors": [ { "first": "Ra\u00fal", "middle": [], "last": "V\u00e1zquez", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" }, { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Helsinki", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The University of Helsinki participated in the AmericasNLP shared task for all ten language pairs. Our multilingual NMT models reached the first rank on all language pairs in track 1, and first rank on nine out of ten language pairs in track 2. We focused our efforts on three aspects: (1) the collection of additional data from various sources such as Bibles and political constitutions, (2) the cleaning and filtering of training data with the OpusFilter toolkit, and (3) different multilingual training techniques enabled by the latest version of the OpenNMT-py toolkit to make the most efficient use of the scarce data. This paper describes our efforts in detail.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The University of Helsinki participated in the AmericasNLP shared task for all ten language pairs. Our multilingual NMT models reached the first rank on all language pairs in track 1, and first rank on nine out of ten language pairs in track 2. We focused our efforts on three aspects: (1) the collection of additional data from various sources such as Bibles and political constitutions, (2) the cleaning and filtering of training data with the OpusFilter toolkit, and (3) different multilingual training techniques enabled by the latest version of the OpenNMT-py toolkit to make the most efficient use of the scarce data. This paper describes our efforts in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The University of Helsinki participated in the AmericasNLP 2021 Shared Task on Open Machine Translation for all ten language pairs. The shared task is aimed at developing machine translation (MT) systems for indigenous languages of the Americas, all of them paired with Spanish (Mager et al., 2021) . Needless to say, these language pairs pose big challenges since none of them benefits from large quantities of parallel data and there is limited monolingual data. 
For our participation, we focused our efforts mainly on three aspects: (1) gathering additional parallel and monolingual data for each language, taking advantage in particular of the OPUS corpus collection (Tiedemann, 2012) , the JHU Bible corpus (McCarthy et al., 2020) and translations of political constitutions of various Latin American countries, (2) cleaning and filtering the corpora to maximize their quality with the OpusFilter toolbox (Aulamo et al., 2020) , and (3) contrasting different training techniques that could take advantage of the scarce data available.", "cite_spans": [ { "start": 278, "end": 298, "text": "(Mager et al., 2021)", "ref_id": "BIBREF15" }, { "start": 671, "end": 688, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF25" }, { "start": 712, "end": 735, "text": "(McCarthy et al., 2020)", "ref_id": "BIBREF18" }, { "start": 910, "end": 931, "text": "(Aulamo et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We pre-trained NMT systems to produce back-translations for the monolingual portions of the data. We also trained multilingual systems that make use of language labels on the source sentence to specify the target language (Johnson et al., 2017) . This has been shown to leverage the information available across different language pairs and to boost performance in low-resource scenarios.", "cite_spans": [ { "start": 221, "end": 243, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We submitted five runs for each language pair, three in track 1 (development set included in training) and two in track 2 (development set not included in training). The best-performing model is a multilingual Transformer pre-trained on Spanish-English data and fine-tuned on the ten indigenous languages. The (partial or complete) inclusion of the development set during training consistently led to substantial improvements.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The collected data sets and data processing code are available from our fork of the organizers' Git repository. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A major part of our effort went into finding relevant corpora that could help with the translation tasks, as well as into making the best use of the data provided by the organizers. In order to have an efficient procedure to maintain and process the data sets for all ten languages, we utilized the OpusFilter toolbox 2 (Aulamo et al., 2020) . It provides both ready-made and extensible methods for combining, cleaning, and filtering parallel and monolingual corpora. OpusFilter uses a configuration file that lists all the steps for processing the data; in order to make quick changes and extensions programmatically, we generated the configuration file with a Python script (sketched below). Figure 1 shows part of the applied OpusFilter workflow for a single language pair, Spanish-Raramuri, restricted to the primary training data. 
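A minimal sketch of this configuration-generation approach (step and filter names follow OpusFilter 2.0; file names, step selection and thresholds are simplified placeholders, not our exact settings):

```python
# Sketch: generate an OpusFilter configuration for one language pair.
# The real script also adds the per-corpus cleaning and normalization steps.
import yaml

src, tgt = "es", "tar"  # Spanish-Raramuri, as in Figure 1

steps = []
# Concatenate the provided training data with the additional parallel data.
for lang in (src, tgt):
    steps.append({"type": "concatenate", "parameters": {
        "inputs": [f"train.{lang}", f"extra.{lang}"],
        "output": f"combined.{lang}"}})
# Apply common normalizations to both sides.
steps.append({"type": "preprocess", "parameters": {
    "inputs": [f"combined.{src}", f"combined.{tgt}"],
    "outputs": [f"norm.{src}", f"norm.{tgt}"],
    "preprocessors": [{"WhitespaceNormalizer": {}}]}})
# Remove duplicate sentence pairs.
steps.append({"type": "remove_duplicates", "parameters": {
    "inputs": [f"norm.{src}", f"norm.{tgt}"],
    "outputs": [f"dedup.{src}", f"dedup.{tgt}"]}})
# Filter out noisy segments; the filter list varies per language pair.
steps.append({"type": "filter", "parameters": {
    "inputs": [f"dedup.{src}", f"dedup.{tgt}"],
    "outputs": [f"filtered.{src}", f"filtered.{tgt}"],
    "filters": [{"LengthRatioFilter": {"threshold": 4, "unit": "char"}}]}})

with open("config.yaml", "w") as f:
    yaml.safe_dump({"common": {"output_directory": "data"}, "steps": steps}, f)
# The resulting pipeline is run with: opusfilter config.yaml
```
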
The provided training set and (concatenated) additional parallel data are first independently normalized and cleaned (preprocess), then concatenated, preprocessed with common normalizations, deduplicated, and finally filtered to remove noisy segments.", "cite_spans": [ { "start": 326, "end": 347, "text": "(Aulamo et al., 2020)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 683, "end": 691, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Data preparation", "sec_num": "2" }, { "text": "We collected parallel and monolingual data from several sources. An overview of the resources, including references and URLs, is given in Tables 3 and 4 in the appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "2.1" }, { "text": "Organizer-provided resources The shared task organizers provided parallel datasets for training for all ten languages. These datasets are referred to as train in this paper. For some of the languages (Ashaninka, Wixarika and Shipibo-Konibo), the organizers pointed participants to repositories containing additional parallel or monolingual data. We refer to these resources as extra and mono, respectively. Furthermore, the organizers provided development and test sets for all ten language pairs of the shared task (Ebrahimi et al., 2021) .", "cite_spans": [ { "start": 515, "end": 538, "text": "(Ebrahimi et al., 2021)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "2.1" }, { "text": "OPUS The OPUS corpus collection (Tiedemann, 2012) provides only a few datasets for the relevant languages. Besides the resources for Aymara and Quechua provided by the organizers as official training data, we found an additional parallel dataset for Spanish-Quechua, and monolingual data for Aymara, Guarani, H\u00f1\u00e4h\u00f1u, Nahuatl and Quechua. These resources are also listed under extra and mono.", "cite_spans": [ { "start": 32, "end": 49, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "2.1" }, { "text": "Constitutions We found translations of the Mexican constitution into H\u00f1\u00e4h\u00f1u, Nahuatl, Raramuri and Wixarika, of the Bolivian constitution into Aymara and Quechua, and of the Peruvian constitution into Quechua. 3 We extracted the data from the HTML or PDF sources and aligned them with the Spanish version on paragraph and sentence levels. The latter was done using a standard length-based approach with lexical re-alignment, as in hunalign 4 (Varga et al., 2005) , using paragraph breaks as hard boundaries. They are part of the extra resources.", "cite_spans": [ { "start": 210, "end": 211, "text": "3", "ref_id": null }, { "start": 441, "end": 461, "text": "(Varga et al., 2005)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "2.1" }, { "text": "Bibles The JHU Bible corpus (McCarthy et al., 2020) covers all languages of the shared task with at least one Bible translation. We found that some translations were near-duplicates that only differed in tokenization, and removed them. For those languages for which several dialectal varieties were available, we attempted to select subsets based on the target varieties of the shared task, as specified by the organizers (see Tables 3 and 4 for details). All Spanish Bible translations in the JHUBC are limited to the New Testament. 
In order to maximize the amount of parallel data, we replaced them with full-coverage Spanish Bible translations from Mayer and Cysouw (2014). 5 Since we have multiple versions of the Bible in Spanish as well as in some of the target languages, we applied the product method in OpusFilter to randomly take at most 5 different versions of the same sentence (skipping empty and duplicate lines).", "cite_spans": [ { "start": 28, "end": 51, "text": "(McCarthy et al., 2020)", "ref_id": "BIBREF18" }, { "start": 678, "end": 679, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 427, "end": 441, "text": "Tables 3 and 4", "ref_id": null } ], "eq_spans": [], "section": "Data collection", "sec_num": "2.1" }, { "text": "We noticed that some of the corpora in the same language used different orthographic conventions and had other issues that would hinder NMT model training. We applied various data normalization and cleaning steps to improve the quality of the data, with the goal of making the training data more similar to the development data (which we expected to be similar to the test data). For Bribri, Raramuri and Wixarika, we found normalization scripts or guidelines on the organizers' Github page or sources referenced therein (cf. the norm entries in Tables 3 and 4 ). We reimplemented them as custom OpusFilter preprocessors.", "cite_spans": [], "ref_spans": [ { "start": 949, "end": 963, "text": "Tables 3 and 4", "ref_id": null } ], "eq_spans": [], "section": "Data normalization and cleaning", "sec_num": "2.2" }, { "text": "The Bribri, H\u00f1\u00e4h\u00f1u, Nahuatl, and Raramuri training sets were originally tokenized. Following our decision to use untokenized input for unsupervised word segmentation, we detokenized the respective corpora with the Moses detokenizer supported by OpusFilter, using the English patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data normalization and cleaning", "sec_num": "2.2" }, { "text": "Finally, for all datasets, we applied OpusFilter's WhitespaceNormalizer preprocessor, which replaces all sequences of whitespace characters with a single space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data normalization and cleaning", "sec_num": "2.2" }, { "text": "The organizer-provided and extra training data sets were concatenated before the filtering phase. Then all exact duplicates were removed using OpusFilter's duplicate removal step. After duplicate removal, we applied some predefined filters from OpusFilter. Not all filters were applied to all languages; instead, we selected the appropriate filters based on manual observation of the data and the proportion of sentences removed by the filter. 
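For instance, the filter list applied to the Aymara data could be written as follows (a sketch using the thresholds reported in Appendix A; filter names follow OpusFilter 2.0, but the exact parameter spellings here are assumptions):

```python
# Filter list in the style applied to the Aymara/Quechua data (see Appendix A).
aym_filters = [
    {"LengthFilter": {"max_length": 1000, "unit": "char"}},
    {"LengthRatioFilter": {"threshold": 4, "unit": "char"}},
    # Keep pairs with at least 90% Latin-script characters on both sides.
    {"CharacterScoreFilter": {"scripts": ["Latin", "Latin"],
                              "thresholds": [0.9, 0.9]}},
    {"TerminalPunctuationFilter": {"threshold": -2}},
    {"NonZeroNumeralsFilter": {"threshold": 0.5}},
]
```
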
Appendix A describes the filters in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data filtering", "sec_num": "2.3" }, { "text": "We translated all monolingual data to Spanish, using early versions of both Model A and Model B (see Section 3), in order to create additional synthetic parallel training data. A considerable amount of the back-translations produced by Model A ended up in a language other than Spanish, whereas some translations by Model B remained empty. We kept both outputs, but aggressively filtered them (see Appendix A), concatenated them, and removed exact duplicates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Back-translations", "sec_num": "2.4" }, { "text": "For most language pairs, the Bibles made up the largest portion of the data. Thus we decided to keep the Bibles separate from the other smaller, but likely more useful, training sources. Table 1 shows the sizes of the training datasets before and after filtering as well as the additional datasets. There is a difference of almost two orders of magnitude between the smallest (cni) and largest (quy) combined training data sets. The addition of the Bibles and back-translations evens out the differences to some extent.", "cite_spans": [], "ref_spans": [ { "start": 187, "end": 194, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Data sizes", "sec_num": "2.5" }, { "text": "Model B (see below) takes advantage of abundant parallel data for Spanish-English. These resources come exclusively from OPUS (Tiedemann, 2012) and include the following sources: OpenSubtitles, Europarl, JW300, GlobalVoices, News-Commentary, TED2020, Tatoeba, bible-uedin. All corpora are again filtered and deduplicated, yielding 17.5M sentence pairs from OpenSubtitles and 4.4M sentence pairs from the other sources taken together. During training, both parts are assigned the same weight to avoid overfitting on subtitle data. The Spanish-English WMT-News corpus, also from OPUS, is used for validation. ", "cite_spans": [ { "start": 126, "end": 143, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish-English data", "sec_num": "2.6" }, { "text": "We experimented with two major model setups, which we refer to as A and B below. Both are multilingual NMT models based on the Transformer architecture (Vaswani et al., 2017) and are implemented with OpenNMT-py 2.0 (Klein et al., 2017) . All models were trained on a single GPU. The training data is segmented using SentencePiece (Kudo and Richardson, 2018) subword models with 32k units, trained jointly on all languages. Following our earlier experience (Scherrer et al., 2020) , subword regularization (Kudo, 2018) is applied during training. Further details of the configurations are listed in Appendix B.", "cite_spans": [ { "start": 152, "end": 174, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF27" }, { "start": 215, "end": 235, "text": "(Klein et al., 2017)", "ref_id": "BIBREF11" }, { "start": 331, "end": 358, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF13" }, { "start": 457, "end": 480, "text": "(Scherrer et al., 2020)", "ref_id": "BIBREF23" }, { "start": 506, "end": 518, "text": "(Kudo, 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "3" }, { "text": "Model A is a multilingual translation model with 11 source languages (10 indigenous languages + Spanish) and the same 11 target languages. The target language is specified with a language label on the source sentence (Johnson et al., 2017) . 
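A sketch of what such label-prefixed training examples look like (the tag tokens and the pairing below are illustrative, not necessarily the exact format used in our systems):

```python
# Each parallel sentence pair yields one training example per direction,
# with a tag on the source side selecting the target language (sketch).
pair_es_tar = ("una frase de ejemplo", "<the corresponding Raramuri sentence>")
examples = [
    ("<2tar> " + pair_es_tar[0], pair_es_tar[1]),  # translate into Raramuri
    ("<2es> " + pair_es_tar[1], pair_es_tar[0]),   # translate into Spanish
]
```
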
It is trained on all available parallel data in both directions as well as all available monolingual data.", "cite_spans": [ { "start": 324, "end": 346, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Model A", "sec_num": "3.1" }, { "text": "The model was first trained for 200 000 steps, weighting the Bible data to occur only 0.3 times as often as all the other corpora. We picked the last checkpoint, since it attained the best accuracy and perplexity on the combined development set. This model constitutes submission A-0dev.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model A", "sec_num": "3.1" }, { "text": "Then, independently for each of the languages, we fine-tuned this model for another 2 500 steps on language-specific data, including 50% of the development set of the corresponding language. These models, one per language, constitute submission A-50dev.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model A", "sec_num": "3.1" }, { "text": "Model B is a multilingual translation model with one source language (Spanish) and 11 target languages (10 indigenous languages + English). It is trained on all available parallel data with Spanish on the source side, using target language labels. 6 The training takes place in two phases. In the first phase, the model is trained on 90% Spanish-English data and 1% data from each of the ten American languages. With this first phase, we aim to take advantage of the large amounts of data to obtain a good Spanish encoder. In the second phase, the proportion of Spanish-English data is reduced to 50%. 7 We train the first phase for 100k steps and pick the best intermediate savepoint according to the English-only validation set, which occurred after 72k steps. We then initialize two phase-2 models with this savepoint. For model B-0dev, we change the proportions of the training data and include the back-translations. For model B-50dev, we additionally include a randomly sampled 50% of each language's development set. We train both models up to 200 000 steps and pick the best intermediate savepoint according to an eleven-language validation set, consisting of WMT-News and the remaining halves of the ten development sets.", "cite_spans": [ { "start": 247, "end": 248, "text": "6", "ref_id": null }, { "start": 614, "end": 615, "text": "7", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Model B", "sec_num": "3.2" }, { "text": "Since the inclusion of development data showed massive improvements, we decided to continue training from the best savepoint of B-50dev (156k), also adding the remaining half of the development set to the training data. This model, referred to as B-100dev, was trained for an additional 14k steps until validation perplexity reached a local minimum.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model B", "sec_num": "3.2" }, { "text": "We submitted three systems to track 1 (development set allowed for training), namely A-50dev, B-50dev and B-100dev, and two systems to track 2 (development set not allowed for training), namely A-0dev and B-0dev. The results are shown in Table 2 . In track 1, our model B-100dev reached first rank and B-50dev reached second rank for all ten languages. Model A-50dev was ranked third to sixth, depending on the language. 
This shows that model B consistently outperformed model A, presumably thanks to its Spanish-English pre-training. Including the full development set in training (B-100dev) further improved the performance, although savepoint selection then becomes guesswork.", "cite_spans": [], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "For track 2, the tendency is similar. Model B-0dev was ranked first for nine out of ten languages, taking second rank for Spanish-Quechua. A-0dev was ranked second to fourth on all except Quechua. 8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "4" }, { "text": "We investigate the impact of our data selection strategies via an ablation study where we repeat the second training phase of model B with several variants of the B-0dev setup. In Figure 2 , we show intermediate evaluations on the concatenation of the 10 development sets every 2000 training steps.", "cite_spans": [], "ref_spans": [ { "start": 180, "end": 188, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Ablation study", "sec_num": "4.1" }, { "text": "8 After submission, we noticed that the Quechua back-translations were generated with the wrong model. This may explain the poor performance of our systems on this language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.1" }, { "text": "The green curve, which corresponds to the B-0dev model, obtains the highest maximum scores. The impact of the back-translations is considerable (blue vs. green curve) despite their presumed low quality. The addition of Bibles did not improve the chrF2 scores (blue vs. orange curve). We presume that this is due to the mismatch in linguistic varieties, spelling and genre. It would be instructive to break down this effect by language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.1" }, { "text": "The application of the OpusFilter pipeline to the train and extra data (yellow vs. orange curve) shows a positive effect at the beginning of the training, but this effect fades later.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.1" }, { "text": "Finally, and rather unsurprisingly, our corpus weighting strategy (50% English, 50% indigenous languages, blue curve) outperforms the weighting strategy employed during the first training phase (90% English, 10% indigenous languages, grey curve). It could be interesting to experiment with even lower proportions of English data, taking into account the risk of catastrophic forgetting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation study", "sec_num": "4.1" }, { "text": "In this paper, we described our submissions to the AmericasNLP shared task, covering all ten language pairs in both tracks. Our strongest system is the result of gathering additional relevant data, carefully filtering the data for each language pair, and pre-training a Transformer-based multilingual NMT system with large Spanish-English parallel data. 
Except for Spanish-Quechua in track 2, our best submissions ranked first in both tracks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "https://github.com/Helsinki-NLP/americasnlp2021-st 2 https://github.com/Helsinki-NLP/OpusFilter, version 2.0.0-beta.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Two additional resources, a translation of a Peruvian law into Shipibo-Konibo and a translation of the Paraguayan constitution into Guarani, are provided on our repository, but they became available too late to be included in the translation models. They are listed under extra* in Tables 3 and 4. 4 https://github.com/danielvarga/hunalign 5 We would like to thank Garrett Nicolai for helping us with the conversion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "To generate the back-translations, we used an analogous, but distinct model trained on 11 source languages and one target language. 7 We also experimented with language-specific second-phase training, but ultimately opted for a single run combining all eleven language pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Table 4: Data used for training (2). train refers to the official training data provided by the organizers, whereas extra refers to additional parallel non-Bible data. Corpora marked with extra* are available on our repository but were not used in the translation experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is part of the FoTran project, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement \u2116 771113).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "The following filters were used for the training data except for back-translated data and Bibles: \u2022 LengthFilter: Remove sentences longer than 1000 characters. Applied to Aymara, Nahuatl, Quechua, Raramuri. \u2022 LengthRatioFilter: Remove sentences with character length ratio of 4 or more. Applied to Ashaninka, Aymara, Guarani, H\u00f1\u00e4h\u00f1u, Nahuatl, Quechua, Raramuri, Wixarika. \u2022 CharacterScoreFilter: Remove sentences for which less than 90% of the characters are from the Latin alphabet. Applied to Aymara, Quechua, Raramuri. \u2022 TerminalPunctuationFilter: Remove sentences with dissimilar punctuation; threshold -2 (V\u00e1zquez et al., 2019) . Applied to Aymara, Quechua. \u2022 NonZeroNumeralsFilter: Remove sentences with dissimilar numerals; threshold 0.5 (V\u00e1zquez et al., 2019) . Applied to Aymara, Quechua, Raramuri, Wixarika. The Bribri and Shipibo-Konibo corpora seemed clean enough that we did not apply any filters to them. After generating the Bible data, we noticed that some of the lines contained only a single 'BLANK' string. 
The segments with these lines were removed afterwards. From the provided monolingual datasets, we filtered out sentences with more than 500 words. The back-translated data was filtered with the following filters: \u2022 LengthRatioFilter with threshold 2 and word units \u2022 CharacterScoreFilter with Latin script and threshold 0.9 on the Spanish side and 0.7 on the other side \u2022 LanguageIDFilter with a threshold of 0.8 for the Spanish side only.", "cite_spans": [ { "start": 599, "end": 621, "text": "(V\u00e1zquez et al., 2019)", "ref_id": "BIBREF28" }, { "start": 733, "end": 755, "text": "(V\u00e1zquez et al., 2019)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "A OpusFilter settings", "sec_num": null }, { "text": "Model A uses a 6-layered Transformer with 8 heads, 512 dimensions in the embeddings and 1024 dimensions in the feed-forward layers. The batch size is 4096 tokens, with an accumulation count of 8. The Adam optimizer is used with beta1=0.9 and beta2=0.998. The Noam decay method is used with a learning rate of 3.0 and 40000 warm-up steps. Subword sampling is applied during training (20 samples, \u03b1 = 0.1). Model B uses an 8-layered Transformer with 16 heads, 1024 dimensions in the embeddings and 4096 dimensions in the feed-forward layers. The batch size is 9200 tokens in phase 1 and 4600 tokens in phase 2, with an accumulation count of 4. The Adam optimizer is used with beta1=0.9 and beta2=0.997. The Noam decay method is used with a learning rate of 2.0 and 16000 warm-up steps. Subword sampling is applied during training (20 samples, \u03b1 = 0.1). As a post-processing step, we removed the tokens from the outputs of model B.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Hyperparameters", "sec_num": null }, { "text": "GlobalVoices (Tiedemann, 2012; Prokopidis et al., 2016); extra: BOconst: https://www.kas.de/c/document_library/get_file?uuid=8b51d469-63d2-f001-ef6f-9b561eb65ed4&groupId=288373; bibles: ayr-x-bible-2011-v1, ayr-x-bible-1997-v1; mono: Wikipedia crawls (Tiedemann, 2020). Bribri (bzd): train: (Feldman and Coto-Solano, 2020); bibles: bzd-x-bible-bzd-v1; norm: https://github.com/AmericasNLP/americasnlp2021/blob/main/data/bribri-spanish/orthographic-conversion.csv. Ashaninka (cni): train: https://github.com/hinantin/AshaninkaMT (Ortega et al., 2020; Cushimariano Romano and Sebasti\u00e1n Q., 2008; Mihas, 2011); bibles: cni-x-bible-cni-v1; mono: ShaShiYaYi (Bustamante et al., 2020): https://github.com/iapucp/multilingual-data-peru. Guarani (gn): train: (Chiruzzo et al., 2020); extra*: PYconst: http://ej.org.py/principal/constitucion-nacional-en-guarani/; bibles: gug-x-bible-gug-v1; mono: Wikipedia crawls (Tiedemann, 2020). Wixarika (hch): train: https://github.com/pywirrarika/wixarikacorpora (Mager et al., 2018); extra: MXConst: https://constitucionenlenguas.inali.gob.mx/; bibles: hch-x-bible-hch-v1; mono: https://github.com/pywirrarika/wixarikacorpora (Mager et al., 2018); norm: https://github.com/pywirrarika/wixnlp/blob/master/normwix.py (Mager Hois et al., 2016). Nahuatl (nah): train: Axolotl (Gutierrez-Vasques et al., 2016); extra: MXConst: https://constitucionenlenguas.inali.gob.mx/; bibles: nch-x-bible-nch-v1, ngu-x-bible-ngu-v1, nhe-x-bible-nhe-v1, nhw-x-bible-nhw-v1; mono: Wikipedia crawls (Tiedemann, 2020); mono: JW300 (Tiedemann, 2012; Agi\u0107 and Vuli\u0107, 2019). Quechua (quy): train: JW300 (quy+quz) (Agi\u0107 and Vuli\u0107, 2019), MINEDU + dict_misc: https://github.com/AmericasNLP/americasnlp2021/tree/main/data/quechua-spanish; extra: Tatoeba (Tiedemann, 2012), BOconst: https://www.kas.de/documents/252038/253252/7_dokument_dok_pdf_33453_4.pdf/9e3dfb1f-0e05-523f-5352-d2f9a44a21de?version=1.0&t=1539656169513, PEconst: https://www.wipo.int/edocs/lexdocs/laws/qu/pe/pe035qu.pdf; bibles: quy-x-bible-quy-v1, quz-x-bible-quz-v1; mono: Wikipedia crawls (Tiedemann, 2020). Shipibo-Konibo (shp): train: (Galarreta et al., 2017; Montoya et al., 2019); extra: Educational and Religious from http://chana.inf.pucp.edu.pe/resources/parallel-corpus/; extra*: LeyArtesano: https://cdn.www.gob.pe/uploads/document/file/579690/Ley_Artesano_Shipibo_Konibo_baja__1_.pdf; bibles: shp-SHPTBL; mono: ShaShiYaYi (Bustamante et al., 2020): https://github.com/iapucp/multilingual-data-peru. Raramuri (tar): train: (Brambila, 1976); extra: MXConst: https://constitucionenlenguas.inali.gob.mx/; bibles: tac-x-bible-tac-v1; norm: https://github.com/AmericasNLP/americasnlp2021/pull/5. Spanish: bibles: spa-x-bible-americas, spa-x-bible-hablahoi-latina, spa-x-bible-lapalabra, spa-x-bible-newworld, spa-x-bible-nuevadehoi, spa-x-bible-nuevaviviente, spa-x-bible-nuevointernacional, spa-x-bible-reinavaleracontemporanea", "cite_spans": [ { "start": 13, "end": 30, "text": "(Tiedemann, 2012;", "ref_id": "BIBREF25" }, { "start": 31, "end": 55, "text": "Prokopidis et al., 2016)", "ref_id": "BIBREF22" }, { "start": 248, "end": 265, "text": "(Tiedemann, 2020)", "ref_id": "BIBREF24" }, { "start": 283, "end": 314, "text": "(Feldman and Coto-Solano, 2020)", "ref_id": "BIBREF7" }, { "start": 511, "end": 532, "text": "(Ortega et al., 2020;", "ref_id": "BIBREF21" }, { "start": 533, "end": 576, "text": "Cushimariano Romano and Sebasti\u00e1n Q., 2008;", "ref_id": "BIBREF5" }, { "start": 577, "end": 589, "text": "Mihas, 2011)", "ref_id": "BIBREF19" }, { "start": 631, "end": 656, "text": "(Bustamante et al., 2020)", "ref_id": "BIBREF3" }, { "start": 725, "end": 748, "text": "(Chiruzzo et al., 2020)", "ref_id": "BIBREF4" }, { "start": 875, "end": 892, "text": "(Tiedemann, 2020)", "ref_id": "BIBREF24" }, { "start": 959, "end": 979, "text": "(Mager et al., 2018)", "ref_id": "BIBREF14" }, { "start": 1116, "end": 1136, "text": "(Mager et al., 2018)", "ref_id": "BIBREF14" }, { "start": 1204, "end": 1229, "text": "(Mager Hois et al., 2016)", "ref_id": "BIBREF16" }, { "start": 1256, "end": 1288, "text": "(Gutierrez-Vasques et al., 2016)", "ref_id": "BIBREF9" }, { "start": 1455, "end": 1472, "text": "(Tiedemann, 2020)", "ref_id": "BIBREF24" }, { "start": 1484, "end": 1501, "text": "(Tiedemann, 2012;", "ref_id": "BIBREF25" }, { "start": 1502, "end": 1523, "text": "Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 1558, "end": 1580, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 1693, "end": 1710, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF25" }, { "start": 1997, "end": 2014, "text": "(Tiedemann, 2020)", "ref_id": "BIBREF24" }, { "start": 2040, "end": 2064, "text": "(Galarreta et al., 2017;", "ref_id": "BIBREF8" }, { "start": 2065, "end": 2086, "text": "Montoya et al., 2019)", "ref_id": "BIBREF20" }, { "start": 2330, "end": 2355, "text": "(Bustamante et al., 2020)", "ref_id": "BIBREF3" }, { "start": 2426, "end": 2442, "text": "(Brambila, 1976)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Aymara aym train", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "JW300: A wide-coverage parallel corpus for low-resource languages", "authors": [ { "first": "\u017deljko", 
"middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3204--3210", "other_ids": { "DOI": [ "10.18653/v1/P19-1310" ] }, "num": null, "urls": [], "raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "OpusFilter: A configurable parallel corpus filtering toolbox", "authors": [ { "first": "Mikko", "middle": [], "last": "Aulamo", "suffix": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "150--156", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-demos.20" ] }, "num": null, "urls": [], "raw_text": "Mikko Aulamo, Sami Virpioja, and J\u00f6rg Tiedemann. 2020. OpusFilter: A configurable parallel corpus filtering toolbox. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics: System Demonstrations, pages 150-156, Online. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Diccionario Raramuri -Castellano (Tarahumara)", "authors": [ { "first": "David", "middle": [], "last": "Brambila", "suffix": "" } ], "year": 1976, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Brambila. 1976. Diccionario Raramuri -Castel- lano (Tarahumara). Obra Nacional de la Buena Prensa, M\u00e9xico.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru", "authors": [ { "first": "Gina", "middle": [], "last": "Bustamante", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zariquiey", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2914--2923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gina Bustamante, Arturo Oncevay, and Roberto Zariquiey. 2020. No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru. In Proceedings of The 12th Lan- guage Resources and Evaluation Conference, pages 2914-2923, Marseille, France. 
European Language Resources Association.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Development of a Guarani -Spanish parallel corpus", "authors": [ { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Amarilla", "suffix": "" }, { "first": "Adolfo", "middle": [], "last": "R\u00edos", "suffix": "" }, { "first": "Gustavo", "middle": [ "Gim\u00e9nez" ], "last": "Lugo", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2629--2633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Luis Chiruzzo, Pedro Amarilla, Adolfo R\u00edos, and Gus- tavo Gim\u00e9nez Lugo. 2020. Development of a Guarani -Spanish parallel corpus. In Proceedings of the 12th Language Resources and Evaluation Con- ference, pages 2629-2633, Marseille, France. Euro- pean Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "\u00d1aantsipeta ash\u00e1ninkaki birakochaki. diccionario ash\u00e1ninka-castellano", "authors": [ { "first": "Rub\u00e9n", "middle": [], "last": "Cushimariano Romano", "suffix": "" }, { "first": "Richer", "middle": [ "C." ], "last": "Sebasti\u00e1n Q.", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rub\u00e9n Cushimariano Romano and Richer C. Se- basti\u00e1n Q. 2008. \u00d1aantsipeta ash\u00e1ninkaki bi- rakochaki. diccionario ash\u00e1ninka-castellano. versi\u00f3n preliminar. http://www.lengamer.org/ publicaciones/diccionarios/.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "AmericasNLI: Evaluating zero-shot natural language understanding of pretrained multilingual models", "authors": [ { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vladimir", "suffix": "" }, { "first": "Gustavo", "middle": [ "A" ], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Rolando", "middle": [ "A" ], "last": "Coto Solano", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. 
Americasnli: Evaluating zero-shot nat- ural language understanding of pretrained multilin- gual models in truly low-resource languages.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Neural machine translation models with back-translation for the extremely low-resource indigenous language Bribri", "authors": [ { "first": "Isaac", "middle": [], "last": "Feldman", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "3965--3976", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.351" ] }, "num": null, "urls": [], "raw_text": "Isaac Feldman and Rolando Coto-Solano. 2020. Neu- ral machine translation models with back-translation for the extremely low-resource indigenous language Bribri. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3965-3976, Barcelona, Spain (Online). Interna- tional Committee on Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Corpus creation and initial SMT experiments between Spanish and Shipibo-konibo", "authors": [ { "first": "Ana-Paula", "middle": [], "last": "Galarreta", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Melgar", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "238--244", "other_ids": { "DOI": [ "10.26615/978-954-452-049-6_033" ] }, "num": null, "urls": [], "raw_text": "Ana-Paula Galarreta, Andr\u00e9s Melgar, and Arturo On- cevay. 2017. Corpus creation and initial SMT ex- periments between Spanish and Shipibo-konibo. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 238-244, Varna, Bulgaria. INCOMA Ltd.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Axolotl: a web accessible parallel corpus for Spanish-Nahuatl", "authors": [ { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Gerardo", "middle": [], "last": "Sierra", "suffix": "" }, { "first": "Isaac", "middle": [], "last": "Hernandez Pompa", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "4210--4214", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ximena Gutierrez-Vasques, Gerardo Sierra, and Isaac Hernandez Pompa. 2016. Axolotl: a web accessible parallel corpus for Spanish-Nahuatl. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4210-4214, Portoro\u017e, Slovenia. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "Melvin", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Quoc", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "Maxim", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Yonghui", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Zhifeng", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "Fernanda", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "Macduff", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": { "DOI": [ "10.1162/tacl_a_00065" ] }, "num": null, "urls": [], "raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi\u00e9gas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-4012" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Subword regularization: Improving neural network translation models with multiple subword candidates", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "66--75", "other_ids": { "DOI": [ "10.18653/v1/P18-1007" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple sub- word candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 66-75, Mel- bourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Probabilistic finite-state morphological segmenter for wixarika (huichol) language", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Di\u00f3nico", "middle": [], "last": "Carrillo", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Meza", "suffix": "" } ], "year": 2018, "venue": "Journal of Intelligent & Fuzzy Systems", "volume": "34", "issue": "5", "pages": "3081--3087", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Di\u00f3nico Carrillo, and Ivan Meza. 2018. Probabilistic finite-state morphological segmenter for wixarika (huichol) language. Journal of Intel- ligent & Fuzzy Systems, 34(5):3081-3087.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Currey", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Ivan Vladimir Meza", "middle": [], "last": "Ruiz", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on NLP for Indigenous Languages of the Americas, Online. 
Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Xi- mena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Gim\u00e9nez-Lugo, Ricardo Ramos, Anna Currey, Vishrav Chaudhary, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager, Ngoc Thang Vu, Graham Neubig, and Katha- rina Kann. 2021. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. In Proceed- ings of theThe First Workshop on NLP for Indige- nous Languages of the Americas, Online. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Traductor estad\u00edstico wixarika -espa\u00f1ol usando descomposici\u00f3n morfol\u00f3gica", "authors": [ { "first": "Jes\u00fas", "middle": [], "last": "Manuel Mager", "suffix": "" }, { "first": "Carlos", "middle": [ "Barron" ], "last": "Hois", "suffix": "" }, { "first": "Ivan Vladimir Meza", "middle": [], "last": "Romero", "suffix": "" }, { "first": "", "middle": [], "last": "Ru\u00edz", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jes\u00fas Manuel Mager Hois, Carlos Barron Romero, and Ivan Vladimir Meza Ru\u00edz. 2016. Traductor estad\u00eds- tico wixarika -espa\u00f1ol usando descomposici\u00f3n mor- fol\u00f3gica. COMTEL, 6.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Creating a massively parallel Bible corpus", "authors": [ { "first": "Thomas", "middle": [], "last": "Mayer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Cysouw", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)", "volume": "", "issue": "", "pages": "3158--3163", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Mayer and Michael Cysouw. 2014. Creating a massively parallel Bible corpus. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 3158- 3163, Reykjavik, Iceland. European Language Re- sources Association (ELRA).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration", "authors": [ { "first": "D", "middle": [], "last": "Arya", "suffix": "" }, { "first": "Rachel", "middle": [], "last": "Mccarthy", "suffix": "" }, { "first": "Dylan", "middle": [], "last": "Wicks", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Winston", "middle": [], "last": "Mueller", "suffix": "" }, { "first": "Oliver", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Garrett", "middle": [], "last": "Adams", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Nicolai", "suffix": "" }, { "first": "David", "middle": [], "last": "Post", "suffix": "" }, { "first": "", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2884--2892", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nico- lai, Matt Post, and David Yarowsky. 2020. 
The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceed- ings of the 12th Language Resources and Evaluation Conference, pages 2884-2892, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9", "authors": [ { "first": "Elena", "middle": [], "last": "Mihas", "suffix": "" } ], "year": 2011, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Mihas. 2011. A\u00f1aani katonkosatzi parenini, El idioma del alto Peren\u00e9. Milwaukee, WI: Clarks Graphics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A continuous improvement framework of machine translation for Shipibo-konibo", "authors": [ { "first": "H\u00e9ctor Erasmo G\u00f3mez", "middle": [], "last": "Montoya", "suffix": "" }, { "first": "Kervy Dante Rivas", "middle": [], "last": "Rojas", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "17--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "H\u00e9ctor Erasmo G\u00f3mez Montoya, Kervy Dante Rivas Rojas, and Arturo Oncevay. 2019. A continuous improvement framework of machine translation for Shipibo-konibo. In Proceedings of the 2nd Work- shop on Technologies for MT of Low Resource Lan- guages, pages 17-23, Dublin, Ireland. European As- sociation for Machine Translation.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Overcoming resistance: The normalization of an Amazonian tribal language", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Richard", "middle": [ "Alexander" ], "last": "Castro-Mamani", "suffix": "" }, { "first": "Jaime Rafael Montoya", "middle": [], "last": "Samame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020. Overcom- ing resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Compu- tational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Parallel Global Voices: a collection of multilingual corpora with citizen media stories", "authors": [ { "first": "Prokopis", "middle": [], "last": "Prokopidis", "suffix": "" }, { "first": "Vassilis", "middle": [], "last": "Papavassiliou", "suffix": "" }, { "first": "Stelios", "middle": [], "last": "Piperidis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "900--905", "other_ids": {}, "num": null, "urls": [], "raw_text": "Prokopis Prokopidis, Vassilis Papavassiliou, and Ste- lios Piperidis. 2016. Parallel Global Voices: a col- lection of multilingual corpora with citizen media stories. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 900-905, Portoro\u017e, Slovenia. 
Eu- ropean Language Resources Association (ELRA).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "The University of Helsinki and Aalto University submissions to the WMT 2020 news and low-resource translation tasks", "authors": [ { "first": "Yves", "middle": [], "last": "Scherrer", "suffix": "" }, { "first": "Stig-Arne", "middle": [], "last": "Gr\u00f6nroos", "suffix": "" }, { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1129--1138", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yves Scherrer, Stig-Arne Gr\u00f6nroos, and Sami Virpi- oja. 2020. The University of Helsinki and Aalto University submissions to the WMT 2020 news and low-resource translation tasks. In Proceedings of the Fifth Conference on Machine Translation, pages 1129-1138, Online. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The Tatoeba translation challenge -realistic data sets for low resource and multilingual MT", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Fifth Conference on Machine Translation", "volume": "", "issue": "", "pages": "1174--1182", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2020. The Tatoeba translation chal- lenge -realistic data sets for low resource and multi- lingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174-1182, Online. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Parallel data, tools and interfaces in OPUS", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in OPUS. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Parallel corpora for medium density languages", "authors": [ { "first": "D", "middle": [], "last": "Varga", "suffix": "" }, { "first": "L", "middle": [], "last": "N\u00e9meth", "suffix": "" }, { "first": "P", "middle": [], "last": "Hal\u00e1csy", "suffix": "" }, { "first": "A", "middle": [], "last": "Kornai", "suffix": "" }, { "first": "V", "middle": [], "last": "Tr\u00f3n", "suffix": "" }, { "first": "V", "middle": [], "last": "Nagy", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Recent Advances in Natural Language Processing (RANLP-2005)", "volume": "", "issue": "", "pages": "590--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Varga, L. N\u00e9meth, P. Hal\u00e1csy, A. Kornai, V. Tr\u00f3n, and V. Nagy. 2005. Parallel corpora for medium density languages. 
In Proceedings of Recent Ad- vances in Natural Language Processing (RANLP- 2005), pages 590-596.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, Long Beach, California, USA.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "The University of Helsinki submission to the WMT19 parallel corpus filtering task", "authors": [ { "first": "Ra\u00fal", "middle": [], "last": "V\u00e1zquez", "suffix": "" }, { "first": "Umut", "middle": [], "last": "Sulubacak", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Fourth Conference on Machine Translation", "volume": "3", "issue": "", "pages": "294--300", "other_ids": { "DOI": [ "10.18653/v1/W19-5441" ] }, "num": null, "urls": [], "raw_text": "Ra\u00fal V\u00e1zquez, Umut Sulubacak, and J\u00f6rg Tiedemann. 2019. The University of Helsinki submission to the WMT19 parallel corpus filtering task. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 3: Shared Task Papers, Day 2), pages 294-300, Florence, Italy. Association for Computa- tional Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "num": null, "uris": null, "text": "Diagram of the OpusFilter workflow used for Spanish (es) -Raramuri (tar) training data. Boxes are OpusFilter steps and arrows are data files." }, "FIGREF1": { "type_str": "figure", "num": null, "uris": null, "text": "ChrF2 scores obtained with different training configurations of model B. Note: to improve the readability of the graph, the plotted values are smoothed by averaging over three consecutive training steps." }, "TABREF2": { "text": "Numbers of segments in the data sets (train: training set provided by the organizers, extra: additional training data collected by the organizers and us, combined: combined training data, dedup: combined training without duplicates, filtered: training data filtered with all filters, bibles: generated Bible data segments after filtering, monoling: monolingual data after filtering, backtr: back-translations created from monolingual data after filtering, dev: development set)", "content": "language code train extra combined dedup filtered bibles monoling backtr dev; Ashaninka cni 3883 0 3883 3860 3858 38846 13195 17278 883; Aymara aym 6531 8970 15501 8889 8352 154520 16750 17886 996; Bribri bzd 7508 0 7508 7303 7303 38502 0 0 996; Guarani gn 26032 0 26032 14495 14483 39457 40516 62703 995; H\u00f1\u00e4h\u00f1u oto 4889 2235 7124 7056 7049 39726 537 366 [remaining rows truncated in the source]", "html": null, "num": null, "type_str": "table" }, "TABREF4": { "text": "chrF2 scores for the five submissions, computed on the development set and test set. Note that only 50% of the development set is used for evaluation for the 50dev submissions. 
The chrF2 scores for B-100dev on the development set are all above 0.98, but they are not meaningful since the development set was fully included in training. The Run column provides the numeric IDs under which our submissions are listed in the overview paper.", "content": "
", "html": null, "num": null, "type_str": "table" } } } }