{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:20.754375Z" }, "title": "The REPUcs' Spanish-Quechua Submission to the AmericasNLP 2021 Shared Task on Open Machine Translation", "authors": [ { "first": "Oscar", "middle": [], "last": "Moreno Veliz", "suffix": "", "affiliation": { "laboratory": "", "institution": "Pontificia Universidad Cat\u00f3lica del Per\u00fa", "location": {} }, "email": "omoreno@pucp.edu.pe" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present the submission of REPUcs 1 to the AmericasNLP machine translation shared task for the low resource language pair Spanish-Quechua. Our neural machine translation system ranked first in Track two (development set not used for training) and third in Track one (training includes development data). Our contribution is focused on: (i) the collection of new parallel data from different web sources (poems, lyrics, lexicons, handbooks), and (ii) using large Spanish-English data for pre-training and then fine-tuning the Spanish-Quechua system. This paper describes the new parallel corpora and our approach in detail.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present the submission of REPUcs 1 to the AmericasNLP machine translation shared task for the low resource language pair Spanish-Quechua. Our neural machine translation system ranked first in Track two (development set not used for training) and third in Track one (training includes development data). Our contribution is focused on: (i) the collection of new parallel data from different web sources (poems, lyrics, lexicons, handbooks), and (ii) using large Spanish-English data for pre-training and then fine-tuning the Spanish-Quechua system. This paper describes the new parallel corpora and our approach in detail.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "REPUcs participated in the AmericasNLP 2021 machine translation shared task (Mager et al., 2021) for the Spanish-Quechua language pair. Quechua is one of the most spoken languages in South America (Simons and Fenning, 2019) , with several variants, and for this competition, the target language is Southern Quechua. A disadvantage of working with indigenous languages is that there are few documents per language from which to extract parallel or even monolingual corpora. Additionally, most of these languages are traditionally oral, which is the case of Quechua. In order to compensate the lack of data we first obtain a collection of new parallel corpora to augment the available data for the shared task. In addition, we propose to use transfer learning (Zoph et al., 2016) using large Spanish-English data in a neural machine translation (NMT) model. To boost the performance of our transfer learning approach, we follow the work of Kocmi and Bojar (2018) , which demonstrated that sharing the source language and a vocabulary of subword 1 \"Research Experience for Peruvian Undergraduates -Computer Science\" is a program that connects Peruvian students with researchers worldwide. The author was part of the 2021 cohort: https://www.repuprogram.org/repu-cs. 
units can improve the performance of low resource languages.", "cite_spans": [ { "start": 76, "end": 96, "text": "(Mager et al., 2021)", "ref_id": "BIBREF11" }, { "start": 197, "end": 223, "text": "(Simons and Fenning, 2019)", "ref_id": "BIBREF22" }, { "start": 758, "end": 777, "text": "(Zoph et al., 2016)", "ref_id": "BIBREF26" }, { "start": 938, "end": 960, "text": "Kocmi and Bojar (2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Quechua is the most widespread language family in South America, with more than 6 millions speakers and several variants. For the AmericasNLP Shared Task, the development and test sets were prepared using the Standard Southern Quechua writing system, which is based on the Quechua Ayacucho (quy) variant (for simplification, we will refer to it as Quechua for the rest of the paper). This is an official language in Peru, and according to Zariquiey et al. (2019) it is labelled as endangered. Quechua is essentially a spoken language so there is a lack of written materials. Moreover, it is a polysynthetic language, meaning that it usually express large amount of information using several morphemes in a single word. Hence, subword segmentation methods will have to minimise the problem of addressing \"rare words\" for an NMT system.", "cite_spans": [ { "start": 439, "end": 462, "text": "Zariquiey et al. (2019)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish\u2192Quechua", "sec_num": "2" }, { "text": "To the best of our knowledge, Ortega et al. (2020b) is one of the few studies that employed a sequence-to-sequence NMT model for Southern Quechua, and they focused on transfer learning with Finnish, an agglutinative language similar to Quechua. Likewise, Huarcaya Taquiri (2020) used the Jehovah Witnesses dataset (Agi\u0107 and Vuli\u0107, 2019) , together with additional lexicon data, to train an NMT model that reached up to 39 BLEU points on Quechua. However, the results in both cases were high because the development and test set are split from the same distribution (domain) as the training set. On the other hand, Ortega and Pillaipakkamnatt (2018) improved alignments for Quechua by using Finnish(an agglutinative language) as the pivot language. The corpus source is the parallel treebank of Rios et al. (Rios et al., 2012) ., so we deduce that they worked with Quechua Cuzco (quz). (Ortega et al., 2020a) In the AmericasNLP shared task, new out-of-domain evaluation sets were released, and there were two tracks: using or not the validation set for training the final submission. 
We addressed both tracks by collecting more data and pre-training the NMT model with large Spanish-English data.", "cite_spans": [ { "start": 314, "end": 336, "text": "(Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 806, "end": 825, "text": "(Rios et al., 2012)", "ref_id": "BIBREF20" }, { "start": 885, "end": 907, "text": "(Ortega et al., 2020a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Spanish\u2192Quechua", "sec_num": "2" }, { "text": "3 Data and pre-processing", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanish\u2192Quechua", "sec_num": "2" }, { "text": "In this competition we are going to use the Ameri-casNLP Shared Task datasets and new corpora extracted from documents and websites in Quechua.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Spanish\u2192Quechua", "sec_num": "2" }, { "text": "For training, the available parallel data comes from dictionaries and Jehovah Witnesses dataset (JW300; Agi\u0107 and Vuli\u0107, 2019) . AmericasNLP also released parallel corpus aligned with English (en) and the close variant of Quechua Cusco (quz) to enhance multilingual learning. For validation, there is a development set made with 994 sentences from Spanish and Quechua (quy) (Ebrahimi et al., 2021) . Detailed information from all the available datasets with their corresponding languages is as follows:", "cite_spans": [ { "start": 104, "end": 125, "text": "Agi\u0107 and Vuli\u0107, 2019)", "ref_id": "BIBREF0" }, { "start": 373, "end": 396, "text": "(Ebrahimi et al., 2021)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "AmericasNLP datasets", "sec_num": "3.1" }, { "text": "\u2022 JW300 (quy, quz, en): texts from the religious domain available in OPUS (Tiedemann, 2012) . JW300 has 121k sentences. The problems with this dataset are misaligned sentences, misspelled words and blank translations. \u2022 MINEDU (quy): Sentences extracted from the official dictionary of the Ministry of Education in Peru (MINEDU). This dataset contains open-domain short sentences. A considerable number of sentences are related to the countryside. It only has 650 sentences. \u2022 Dict_misc (quy): Dictionary entries and samples collected and reviewed by Huarcaya Taquiri (2020). This dataset is made from 9k sentences, phrases and word translations. Furthermore, to examine the domain resemblance, it is important to analyse the similarity between the training and development. Table 1 shows the percentage of the development set tokens that overlap with the tokens in the training datasets on Spanish (es) and Quechua (quy) after deleting all types of symbols.", "cite_spans": [ { "start": 74, "end": 91, "text": "(Tiedemann, 2012)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 775, "end": 782, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "AmericasNLP datasets", "sec_num": "3.1" }, { "text": "We observe from Table 1 that the domain of the training and development set are different as the overlapping in Quechua does not even go above 50%. There are two approaches to address this Dataset % Dev overlapping es quy JW300 85% 45% MINEDU 15% 5% Dict_misc 40% 18% Table 1 : Word overlapping ratio between the development and the available training sets in AmericasNLP problem: to add part of the development set into the training or to obtain additional data from the same or a more similar domain. 
In this paper, we focus on the second approach.", "cite_spans": [], "ref_spans": [ { "start": 16, "end": 23, "text": "Table 1", "ref_id": null }, { "start": 268, "end": 275, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "AmericasNLP datasets", "sec_num": "3.1" }, { "text": "Sources of Quechua documents Even though Quechua is an official language in Peru, official government websites are not translated to Quechua or any other indigenous language, so it is not possible to perform web scrapping (Bustamante et al., 2020) . However, the Peruvian Government has published handbooks and lexicons for Quechua Ayacucho and Quechua Cusco, plus other educational resources to support language learning in indigenous communities. In addition, there are official documents such as the Political Constitution of Peru and the Regulation of the Amazon Parliament that are translated to the Quechua Cusco variant. We have found three unofficial sources to extract parallel corpora from Quechua Ayacucho (quy). The first one is a website, made by Maximiliano Duran (Duran, 2010) , that encourages the learning of Quechua Ayacucho. The site contains poems, stories, riddles, songs, phrases and a vocabulary for Quechua. The second one is a website for different lyrics of poems and songs which have available translations for both variants of Quechua (Lyrics translate, 2008). The third source is a Quechua handbook for the Quechua Ayacucho variant elaborated by Iter and C\u00e1rdenas (2019) .", "cite_spans": [ { "start": 222, "end": 247, "text": "(Bustamante et al., 2020)", "ref_id": "BIBREF1" }, { "start": 778, "end": 791, "text": "(Duran, 2010)", "ref_id": "BIBREF2" }, { "start": 1175, "end": 1199, "text": "Iter and C\u00e1rdenas (2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "New parallel corpora", "sec_num": "3.2" }, { "text": "Sources that were extracted but not used due to time constrains were the Political Constitution of Peru and the Regulation of the Amazon Parliament. Other non-extracted source is a dictionary for Quechua Ayacucho from a website called InkaTour 2 . This source was not used because we already had a dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New parallel corpora", "sec_num": "3.2" }, { "text": "Methodology for corpus creation The available vocabulary in Duran (2010) was extracted manually and transformed into parallel corpora using the first pair of parenthesis as separators. We will call this dataset \"Lexicon\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "New parallel corpora", "sec_num": "3.2" }, { "text": "All the additional sentences in Duran (2010) and a few poems from (Lyrics translate, 2008) were manually aligned to obtain the Web Miscellaneous (WebMisc) corpus. Likewise, translations from the Quechua educational handbook (Iter and C\u00e1rdenas, 2019) were manually aligned to obtain a parallel corpus (Handbook). 3 In the case of the official documents for Quechua Cusco, there was a specific format were the Spanish text was followed by the Quechua translation. After manually arranging the line breaks to separate each translation pair, we automatically constructed a parallel corpus for both documents. 
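As an illustration of how the Lexicon entries described above were turned into translation pairs, the following is a minimal sketch of the parenthesis-based splitting (the entry layout, the language order and the file name are assumptions, not the original script):

```python
# Sketch: split vocabulary entries of the form 'headword (translation)' on the
# first pair of parentheses, as described for the Lexicon corpus. The file name,
# the entry layout and which side is Quechua are assumptions.
src_side, tgt_side = [], []
with open('duran_vocabulary.txt', encoding='utf-8') as f:
    for line in f:
        line = line.strip()
        if '(' not in line or ')' not in line:
            continue                        # skip entries without a translation
        head, rest = line.split('(', 1)     # text before the first '('
        trans = rest.split(')', 1)[0]       # text inside the first parentheses
        if head.strip() and trans.strip():
            src_side.append(head.strip())
            tgt_side.append(trans.strip())
# src_side and tgt_side now hold the two sides of the Lexicon parallel corpus
```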
Paragraphs with more than 2 sentences that had the same number of sentences as their translation were split into small sentences and the unmatched paragraphs were deleted.", "cite_spans": [ { "start": 224, "end": 249, "text": "(Iter and C\u00e1rdenas, 2019)", "ref_id": null }, { "start": 312, "end": 313, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "New parallel corpora", "sec_num": "3.2" }, { "text": "We perform a large number or rare events (LNRE) modelling to analyse the WebMisc, Lexicon and Handbook datasets 4 . The values are shown in Table 2 : Corpora description: S = #sentences in corpus; N = number of tokens; V = vocabulary size; V1 = number of tokens occurring once (hapax); V/N = vocabulary growth rate; V1/N = hapax growth rate; mean = word frequency mean", "cite_spans": [], "ref_spans": [ { "start": 140, "end": 147, "text": "Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Corpora description", "sec_num": null }, { "text": "We notice that the vocabulary and hapax growth rate is similar for Quechua (quy) in WebMisc and Handbook even though the latter has more than twice the number of sentences. In addition, it was expected that the word frequency mean and the vocabulary size were lower for Quechua, as this demonstrates its agglutinative property. However, this does not happens in the Lexicon dataset, since is understandable as it is a dictionary that has one or two words for the translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpora description", "sec_num": null }, { "text": "Moreover, there is a high presence of tokens occurring only once in both languages. In other words, there is a possibility that our datasets have spelling errors or presence of foreign words (Nagata et al., 2018) . However, in this case this could be more related to the vast vocabulary, as the datasets are made of sentences from different domains (poems, songs, teaching, among others).", "cite_spans": [ { "start": 191, "end": 212, "text": "(Nagata et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Corpora description", "sec_num": null }, { "text": "Furthermore, it is important to examine the similarities between the new datasets and the development set. The percentage of the development set words that overlap with the words of the new datasets on Spanish (es) and Quechua (quy) after eliminating all symbols is shown in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 275, "end": 282, "text": "Table 3", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Corpora description", "sec_num": null }, { "text": "% Dev overlapping es quy WebMisc 18.6% 4% Lexicon 20% 3.4% Handbook 28%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "10.6% Although at first glance the analysis may show that there is not a significant similarity with the development set, we have to take into account that in Table 1 , JW300 has 121k sentences and Dict_misc is a dictionary, so it is easy to overlap some of the development set words at least once.However , in the case of WebMisc and Handbook datasets, the quantity of sentences are less than 3k per dataset and even so the percentage of overlapping in Spanish is quite good. 
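The overlap figures reported in Tables 1 and 3 amount to a comparison of word types after stripping symbols; a minimal sketch of that computation, with placeholder file names, is:

```python
# Sketch: share of development-set word types that also occur in a training
# corpus (as in Tables 1 and 3). File names are placeholders; tokenisation is a
# simple letter-sequence match, which may differ from the original preprocessing.
import re

def word_types(path):
    with open(path, encoding='utf-8') as f:
        text = f.read().lower()
    return set(re.findall(r'[^\W\d_]+', text))   # letter sequences only

dev_types = word_types('dev.quy')         # development set, Quechua side
train_types = word_types('handbook.quy')  # one training corpus, Quechua side
coverage = len(dev_types & train_types) / len(dev_types)
print(f'{coverage:.1%} of development word types appear in the training corpus')
```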
This result goes according to the contents of the datasets, as they contain common phrases and open domain sentences, which are the type of sentences that the development set has.", "cite_spans": [], "ref_spans": [ { "start": 159, "end": 166, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dataset", "sec_num": null }, { "text": "For pre-training, we used the EuroParl dataset for Spanish-English (1.9M sentences) (Koehn, 2005) and its development corpora for evaluation.", "cite_spans": [ { "start": 84, "end": 97, "text": "(Koehn, 2005)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "English-Spanish dataset", "sec_num": "3.3" }, { "text": "From the Europarl dataset, we extracted 3,000 sentences for validation. For testing we used the devel-opment set from the WMT2006 campaign (Koehn and Monz, 2006) .", "cite_spans": [ { "start": 139, "end": 161, "text": "(Koehn and Monz, 2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Approach used 4.1 Evaluation", "sec_num": "4" }, { "text": "In the case of Quechua, as the official development set contains only 1,000 sentences there was no split for the testing. Hence, validation results will be taken into account as testing ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach used 4.1 Evaluation", "sec_num": "4" }, { "text": "The main metric in this competition is chrF (Popovi\u0107, 2017) which evaluates character n-grams and is a useful metric for agglutinative languages such as Quechua. We also reported the BLEU scores (Papineni et al., 2002) . We used the implementations of sacreBLEU (Post, 2018) .", "cite_spans": [ { "start": 44, "end": 59, "text": "(Popovi\u0107, 2017)", "ref_id": "BIBREF18" }, { "start": 195, "end": 218, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF17" }, { "start": 262, "end": 274, "text": "(Post, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Approach used 4.1 Evaluation", "sec_num": "4" }, { "text": "Subword segmentation is a crucial process for the translation of polysinthetic languages such as Quechua. We used the Byte-Pair-Encoding (BPE; Sennrich et al., 2016) implementation in Sentence-Piece (Kudo and Richardson, 2018) with a vocabulary size of 32,000. To generate a richer vocabulary, we trained a segmentation model with all three languages (Spanish, English and Quechua), where we upsampled the Quechua data to reach a uniform distribution.", "cite_spans": [ { "start": 199, "end": 226, "text": "(Kudo and Richardson, 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Subword segmentation", "sec_num": "4.2" }, { "text": "For all experiments, we used a Transformer-based model (Vaswani et al., 2017) with default parameters from the Fairseq toolkit (Ott et al., 2019) . The criteria for early stopping was cross-entropy loss for 15 steps.", "cite_spans": [ { "start": 55, "end": 77, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" }, { "start": 127, "end": 145, "text": "(Ott et al., 2019)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "We first pre-trained a Spanish-English model on the Europarl dataset in order to obtain a good encoding capability on the Spanish side. Using this pre-trained model, we implemented two different versions for fine-tunning. 
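Before describing the two fine-tuning variants, the shared subword model of Section 4.2 can be sketched with the SentencePiece Python API (the file name and the exact upsampling of the Quechua side are assumptions; the BPE model type, the 32,000-token vocabulary and the joint Spanish-English-Quechua training follow the description above):

```python
import sentencepiece as spm

# Sketch: one BPE model trained over concatenated Spanish, English and Quechua
# text. 'joint_es_en_quy.txt' is a placeholder file in which the Quechua lines
# are assumed to have been repeated (upsampled) towards a uniform distribution.
spm.SentencePieceTrainer.train(
    input='joint_es_en_quy.txt',
    model_prefix='bpe_joint',
    model_type='bpe',
    vocab_size=32000,
    character_coverage=1.0,
)

sp = spm.SentencePieceProcessor(model_file='bpe_joint.model')
print(sp.encode('Texaspiqa sutillapas arma controlayqa', out_type=str))
```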
First, with the JW300 dataset, which was the largest Spanish-Quechua corpus, and the second one with all the available datasets (including the ones that we obtained) for Quechua.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "4.3" }, { "text": "The results from the transfer learning models and the baseline are shown in Table 4 . We observe that the best result on BLEU and chrF was obtained using the provided datasets together with the extracted datasets. This shows that the new corpora were helpful to improve translation performance.", "cite_spans": [], "ref_spans": [ { "start": 76, "end": 83, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and discussion", "sec_num": "5" }, { "text": "From Table 4 , we observe that using transfer learning showed a considerable improvement in comparison with the baseline (+0.56 in BLEU and .007 in chrF). Moreover, using transfer learning with all the available datasets obtained the best BLEU and chrF score. Specially, it had a 0.012 increase in chrF which is quite important as chrF is the metric that best evaluates translation in this case. Overall, the results do not seem to be good in terms of BLEU. However, a manual analysis of the sentences shows that the model is learning to translate a considerable amount of affixes.", "cite_spans": [], "ref_spans": [ { "start": 5, "end": 12, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Results and discussion", "sec_num": "5" }, { "text": "El control de armas probablemente no es popular en Texas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input (ES)", "sec_num": null }, { "text": "Weapon control is probably not popular in Texas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input (EN)", "sec_num": null }, { "text": "Texaspiqa sutillapas arma controlayqa manachusmi hinachu apakun Output", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference (QUY)", "sec_num": null }, { "text": "Texas llaqtapi armakuna controlayqa manam runakunapa runachu For instance, the subwords \"arma\", \"mana\", among others, have been correctly translated but are not grouped in the same words as in the reference. In addition, only the word \"controlayqa\" is translated correctly, which would explain the low results in BLEU. Decoding an agglutinative language is a very difficult task, and the low BLEU scores cannot suggest a translation with proper adequacy and/or fluency (as we can also observe this from the example). Nevertheless, BLEU works at word-level so other character-level metrics should be considered to inspect agglutinative languages. This would be the case of chrF (Popovi\u0107, 2017) were there is an increase of around 3% when using the AmericasNLP altogether with the new extracted corpora.", "cite_spans": [ { "start": 677, "end": 692, "text": "(Popovi\u0107, 2017)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Reference (QUY)", "sec_num": null }, { "text": "Translations using the transfer learning model trained with all available Quechua datasets were submitted for track 2 (Development set not used for Training). For the submission of track 1 (Development set used for Training) we retrained the best transfer learning model adding the validation to the training for 40 epochs. 
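The BLEU and chrF values in this paper were computed with sacreBLEU; a minimal sketch of the corresponding calls, reusing the example sentence pair above, is:

```python
import sacrebleu

# Sketch: corpus-level BLEU and chrF with sacreBLEU. Here a single hypothesis/
# reference pair (the example above) stands in for the full development set.
hyps = ['Texas llaqtapi armakuna controlayqa manam runakunapa runachu']
refs = ['Texaspiqa sutillapas arma controlayqa manachusmi hinachu apakun']

bleu = sacrebleu.corpus_bleu(hyps, [refs])
chrf = sacrebleu.corpus_chrf(hyps, [refs])
print(f'BLEU = {bleu.score:.2f}  chrF = {chrf.score:.3f}')
```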
The official results of the competition are shown in ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reference (QUY)", "sec_num": null }, { "text": "In this paper, we focused on extracting new datasets for Spanish-Quechua, which helped to improve the performance of our model. Moreover, we found that using transfer learning was beneficial to the results even without the additional data. By combining the new corpora in the fine-tuning step, we managed to obtain the first place on Track 2 and the third place on Track 1 of the AmericasNLP Shared Task. Due to time constrains, the Quechua Cusco data was not used, but it can be beneficial for further work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "In general, we found that the translating Quechua is a challenging task for two reasons. Firstly, there is a lack of data for all the variants of Quechua, and the available documents are hard to extract. In this research, all the new datasets were extracted and aligned mostly manually. Secondly, the agglutinative nature of Quechua motivates more research about effective subword segmentation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "https://www.inkatour.com/dico/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "All documents are published in: https://github.com/ Ceviche98/REPUcs-AmericasNLP20214 We used the LNRE calculator created by Kyle Gorman: https://gist.github.com/kylebgorman/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work could not be possible without the support of REPU Computer Science (Research Experience for Peruvian Undergraduates), a program that connects Peruvian students with researchers across the world. The author is thankful to the REPU's directors and members, and in particular, to Fernando Alva-Manchego and David Freidenson, who were part of the early discussions for the participation in the Shared Task. Furthermore, the author is grateful to the insightful feedback of Arturo Oncevay, Barry Haddow and Alexandra Birch, from the University of Edinburgh, where the author worked as an intern as part of the REPU's 2021 cohort.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": " Table 7 : Description of the corpora extracted, but not used, for Quechua Cusco (quz). S = #sentences in corpus; N = number of tokens; V = vocabulary size; V1 = number of tokens occurring once (hapax); V/N = vocabulary growth rate; V1/N = hapax growth rate; mean = word frequency mean", "cite_spans": [], "ref_spans": [ { "start": 1, "end": 8, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "JW300: A widecoverage parallel corpus for low-resource languages", "authors": [ { "first": "\u017deljko", "middle": [], "last": "Agi\u0107", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vuli\u0107", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "3204--3210", "other_ids": { "DOI": [ "10.18653/v1/P19-1310" ] }, "num": null, "urls": [], "raw_text": "\u017deljko Agi\u0107 and Ivan Vuli\u0107. 2019. 
JW300: A wide- coverage parallel corpus for low-resource languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3204-3210, Florence, Italy. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru", "authors": [ { "first": "Gina", "middle": [], "last": "Bustamante", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Zariquiey", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "2914--2923", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gina Bustamante, Arturo Oncevay, and Roberto Zariquiey. 2020. No data to crawl? monolingual corpus creation from PDF files of truly low-resource languages in Peru. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 2914-2923, Marseille, France. European Language Resources Association.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Lengua general de los Incas", "authors": [ { "first": "Maximiliano", "middle": [], "last": "Duran", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "2021--2024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maximiliano Duran. 2010. Lengua general de los In- cas. http://quechua-ayacucho.org/es/index_es.php. Accessed: 2021-03-15.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Americasnli: Evaluating zero-shot natural language understanding of pretrained multilingual models", "authors": [ { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vladimir", "suffix": "" }, { "first": "Gustavo", "middle": [ "A" ], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" } ], "year": null, "venue": "Ngoc Thang Vu, and Katharina Kann. 2021", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir, Gustavo A. Gim\u00e9nez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando A. Coto Solano, Ngoc Thang Vu, and Katharina Kann. 2021. Americasnli: Evaluating zero-shot nat- ural language understanding of pretrained multilin- gual models in truly low-resource languages.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana. 
Bachelor's thesis", "authors": [ { "first": "Diego", "middle": [], "last": "Huarcaya", "suffix": "" }, { "first": "Taquiri", "middle": [], "last": "", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diego Huarcaya Taquiri. 2020. Traducci\u00f3n autom\u00e1tica neuronal para lengua nativa peruana. Bachelor's the- sis, Universidad Peruana Uni\u00f3n.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Trivial transfer learning for low-resource neural machine translation", "authors": [ { "first": "Tom", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "244--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Bel- gium, Brussels. Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Europarl: A parallel corpus for statistical machine translation", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" } ], "year": 2005, "venue": "MT summit", "volume": "5", "issue": "", "pages": "79--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, vol- ume 5, pages 79-86. Citeseer.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Manual and automatic evaluation of machine translation between european languages", "authors": [ { "first": "Philipp", "middle": [], "last": "Koehn", "suffix": "" }, { "first": "Christof", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2006, "venue": "Proceedings on the Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "102--121", "other_ids": {}, "num": null, "urls": [], "raw_text": "Philipp Koehn and Christof Monz. 2006. Manual and automatic evaluation of machine translation between european languages. In Proceedings on the Work- shop on Statistical Machine Translation, pages 102- 121, New York City. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "John", "middle": [], "last": "Richardson", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "66--71", "other_ids": { "DOI": [ "10.18653/v1/D18-2012" ] }, "num": null, "urls": [], "raw_text": "Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Lyrics translate. 
https:// lyricstranslate", "authors": [], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "2021--2024", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lyrics translate. 2008. Lyrics translate. https:// lyricstranslate.com/. Accessed: 2021-03-15.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas", "authors": [ { "first": "Manuel", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "Abteen", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Ximena", "middle": [], "last": "Gutierrez-Vasques", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Chiruzzo", "suffix": "" }, { "first": "Gustavo", "middle": [], "last": "Gim\u00e9nez-Lugo", "suffix": "" }, { "first": "Ricardo", "middle": [], "last": "Ramos", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Currey", "suffix": "" }, { "first": "Vishrav", "middle": [], "last": "Chaudhary", "suffix": "" }, { "first": "Ivan Vladimir Meza", "middle": [], "last": "Ruiz", "suffix": "" }, { "first": "Rolando", "middle": [], "last": "Coto-Solano", "suffix": "" }, { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Elisabeth", "middle": [], "last": "Mager", "suffix": "" }, { "first": "Ngoc", "middle": [ "Thang" ], "last": "Vu", "suffix": "" }, { "first": "Graham", "middle": [], "last": "Neubig", "suffix": "" }, { "first": "Katharina", "middle": [], "last": "Kann", "suffix": "" } ], "year": 2021, "venue": "Proceedings of theThe First Workshop on NLP for Indigenous Languages of the Americas, Online. Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Xi- mena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Gim\u00e9nez-Lugo, Ricardo Ramos, Anna Currey, Vishrav Chaudhary, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager, Ngoc Thang Vu, Graham Neubig, and Katha- rina Kann. 2021. Findings of the AmericasNLP 2021 Shared Task on Open Machine Translation for Indigenous Languages of the Americas. In Proceed- ings of theThe First Workshop on NLP for Indige- nous Languages of the Americas, Online. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Exploring the Influence of Spelling Errors on Lexical Variation Measures", "authors": [ { "first": "Ryo", "middle": [], "last": "Nagata", "suffix": "" }, { "first": "Taisei", "middle": [], "last": "Sato", "suffix": "" }, { "first": "Hiroya", "middle": [], "last": "Takamura", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "2391--2398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryo Nagata, Taisei Sato, and Hiroya Takamura. 2018. Exploring the Influence of Spelling Errors on Lex- ical Variation Measures. 
Proceedings of the 27th International Conference on Computational Linguis- tics, (2012):2391-2398.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Overcoming resistance: The normalization of an Amazonian tribal language", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Richard", "middle": [ "Alexander" ], "last": "Castro-Mamani", "suffix": "" }, { "first": "Jaime Rafael Montoya", "middle": [], "last": "Samame", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--13", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega, Richard Alexander Castro-Mamani, and Jaime Rafael Montoya Samame. 2020a. Overcom- ing resistance: The normalization of an Amazonian tribal language. In Proceedings of the 3rd Workshop on Technologies for MT of Low Resource Languages, pages 1-13, Suzhou, China. Association for Compu- tational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Using morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource translation", "authors": [ { "first": "John", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Pillaipakkamnatt", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Ortega and Krishnan Pillaipakkamnatt. 2018. Us- ing morphemes from agglutinative languages like Quechua and Finnish to aid in low-resource trans- lation. In Proceedings of the AMTA 2018 Workshop on Technologies for MT of Low Resource Languages (LoResMT 2018), pages 1-11.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural machine translation with a polysynthetic low resource language", "authors": [ { "first": "E", "middle": [], "last": "John", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Ortega", "suffix": "" }, { "first": "Kyunghyun", "middle": [], "last": "Castro Mamani", "suffix": "" }, { "first": "", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2020, "venue": "Machine Translation", "volume": "34", "issue": "4", "pages": "325--346", "other_ids": {}, "num": null, "urls": [], "raw_text": "John E Ortega, Richard Castro Mamani, and Kyunghyun Cho. 2020b. Neural machine trans- lation with a polysynthetic low resource language. 
Machine Translation, 34(4):325-346.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Alexei", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Nathan", "middle": [], "last": "Ng", "suffix": "" }, { "first": "David", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)", "volume": "", "issue": "", "pages": "48--53", "other_ids": { "DOI": [ "10.18653/v1/N19-4009" ] }, "num": null, "urls": [], "raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Min- nesota. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "311--318", "other_ids": { "DOI": [ "10.3115/1073083.1073135" ] }, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Com- putational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "chrF++: words helping character n-grams", "authors": [ { "first": "Maja", "middle": [], "last": "Popovi\u0107", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the Second Conference on Machine Translation", "volume": "", "issue": "", "pages": "612--618", "other_ids": { "DOI": [ "10.18653/v1/W17-4770" ] }, "num": null, "urls": [], "raw_text": "Maja Popovi\u0107. 2017. chrF++: words helping charac- ter n-grams. In Proceedings of the Second Con- ference on Machine Translation, pages 612-618, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "Matt", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": { "DOI": [ "10.18653/v1/W18-6319" ] }, "num": null, "urls": [], "raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. 
In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Brussels, Belgium. Association for Computa- tional Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Parallel Treebanking Spanish-Quechua: how and how well do they align? Linguistic Issues in Language", "authors": [ { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" }, { "first": "Anne", "middle": [], "last": "G\u00f6hring", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Volk", "suffix": "" } ], "year": 2012, "venue": "Technology", "volume": "7", "issue": "1", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annette Rios, Anne G\u00f6hring, and Martin Volk. 2012. Parallel Treebanking Spanish-Quechua: how and how well do they align? Linguistic Issues in Lan- guage Technology, 7(1).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": { "DOI": [ "10.18653/v1/P16-1162" ] }, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany. Association for Computa- tional Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Ethnologue: Languages of the World. Twentysecond edition. Dallas Texas: SIL international. Online version", "authors": [ { "first": "F", "middle": [], "last": "Gary", "suffix": "" }, { "first": "Charles", "middle": [ "D" ], "last": "Simons", "suffix": "" }, { "first": "", "middle": [], "last": "Fenning", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gary F. Simons and Charles D. Fenning, editors. 2019. Ethnologue: Languages of the World. Twenty- second edition. Dallas Texas: SIL international. On- line version: http://www.ethnologue.com.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Parallel data, tools and interfaces in opus", "authors": [ { "first": "J\u00f6rg", "middle": [], "last": "Tiedemann", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and inter- faces in opus. In Proceedings of the Eight Interna- tional Conference on Language Resources and Eval- uation (LREC'12), Istanbul, Turkey. 
European Lan- guage Resources Association (ELRA).", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el Per\u00fa: hacia un estado de la cuesti\u00f3n", "authors": [ { "first": "Roberto", "middle": [], "last": "Zariquiey", "suffix": "" }, { "first": "Harald", "middle": [], "last": "Hammarstr\u00f6m", "suffix": "" }, { "first": "M\u00f3nica", "middle": [], "last": "Arakaki", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay", "suffix": "" }, { "first": "John", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2019, "venue": "Lexis", "volume": "43", "issue": "2", "pages": "271--337", "other_ids": { "DOI": [ "10.18800/lexis.201902.001" ] }, "num": null, "urls": [], "raw_text": "Roberto Zariquiey, Harald Hammarstr\u00f6m, M\u00f3nica Arakaki, Arturo Oncevay, John Miller, Aracelli Gar- c\u00eda, and Adriano Ingunza. 2019. Obsolescencia ling\u00fc\u00edstica, descripci\u00f3n gramatical y documentaci\u00f3n de lenguas en el Per\u00fa: hacia un estado de la cuesti\u00f3n. Lexis, 43(2):271-337.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Transfer learning for low-resource neural machine translation", "authors": [ { "first": "Barret", "middle": [], "last": "Zoph", "suffix": "" }, { "first": "Deniz", "middle": [], "last": "Yuret", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "May", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2016, "venue": "EMNLP 2016 -Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1568--1575", "other_ids": { "DOI": [ "10.18653/v1/d16-1163" ] }, "num": null, "urls": [], "raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. EMNLP 2016 -Con- ference on Empirical Methods in Natural Language Processing, Proceedings, pages 1568-1575.", "links": null } }, "ref_entries": { "TABREF0": { "type_str": "table", "content": "
        WebMisc        Lexicon        Handbook
        es      quy    es      quy    es      quy
S       985     985    6161    6161   2297    2297
N       5002    2996   7050    6288   15537   8522
V       1929    2089   3962    3361   4137    5604
V1      1358    1673   2460    1838   2576    4645
V/N     0.38    0.69   0.56    0.53   0.26    0.65
V1/N    0.27    0.55   0.34    0.29   0.16    0.54
mean    2.59    1.43   1.77    1.87   3.75    1.52
", "text": "The LNRE modelling for the Quechua Cusco datasets are shown in appendix as they are not used for the final submission.", "num": null, "html": null }, "TABREF1": { "type_str": "table", "content": "", "text": "Percentage of word overlapping between the development and the new extracted datasets", "num": null, "html": null }, "TABREF3": { "type_str": "table", "content": "
", "text": "Results of transfer learning experiments +0", "num": null, "html": null }, "TABREF4": { "type_str": "table", "content": "
", "text": "Subword analysis on translated and reference sentence", "num": null, "html": null }, "TABREF5": { "type_str": "table", "content": "
          Rank   Team       BLEU   chrF
Track 1   1      Helsinki   5.38   0.394
          3      REPUcs     3.1    0.358
Track 2   1      REPUcs     2.91   0.346
          2      Helsinki   3.63   0.343
", "text": "", "num": null, "html": null }, "TABREF6": { "type_str": "table", "content": "", "text": "Official results from AmericasNLP 2021 Shared Task competition on the two tracks.Track 1: Development set used for Training, Track 2: Development set not used for Training", "num": null, "html": null } } } }