{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:13:28.843566Z" }, "title": "A survey of part-of-speech tagging approaches applied to K'iche'", "authors": [ { "first": "Francis", "middle": [], "last": "Tyers", "suffix": "", "affiliation": { "laboratory": "", "institution": "Indiana University", "location": { "settlement": "Bloomington", "region": "IN" } }, "email": "ftyers@iu.edu" }, { "first": "Nick", "middle": [], "last": "Howell", "suffix": "", "affiliation": {}, "email": "nhowell@hse.ru" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We study the performance of several popular neural part-of-speech taggers from the Universal Dependencies ecosystem on Mayan languages using a small corpus of 1435 annotated K'iche' sentences consisting of approximately 10,000 tokens, with encouraging results: F 1 scores 93%+ on lemmatisation, partof-speech and morphological feature assignment. The high performance motivates a crosslanguage part-of-speech tagging study, where K'iche'-trained models are evaluated on two other Mayan languages, Kaqchikel and Uspanteko: performance on Kaqchikel is good, 63-85%, and on Uspanteko modest, 60-71%. Supporting experiments lead us to conclude the relative diversity of morphological features as a plausible explanation for the limiting factors in cross-language tagging performance, providing some direction for future sentence annotation and collection work to support these and other Mayan languages.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We study the performance of several popular neural part-of-speech taggers from the Universal Dependencies ecosystem on Mayan languages using a small corpus of 1435 annotated K'iche' sentences consisting of approximately 10,000 tokens, with encouraging results: F 1 scores 93%+ on lemmatisation, partof-speech and morphological feature assignment. The high performance motivates a crosslanguage part-of-speech tagging study, where K'iche'-trained models are evaluated on two other Mayan languages, Kaqchikel and Uspanteko: performance on Kaqchikel is good, 63-85%, and on Uspanteko modest, 60-71%. Supporting experiments lead us to conclude the relative diversity of morphological features as a plausible explanation for the limiting factors in cross-language tagging performance, providing some direction for future sentence annotation and collection work to support these and other Mayan languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "This paper presents a survey of approaches to partof-speech tagging for K'iche', a Mayan language spoken principally in Guatemala. The Mayan languages are a group of related languages spoken throughout Mesoamerica. K'iche' belongs to the Eastern branch, which contains 14 other languages, including Kaqchikel in the Quichean subgroup and Uspanteko which belongs to its own subgroup.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Part-of-speech tagging has wide usage in corpus and computational linguistics and natural language processing, and is often considered part of a toolkit for basic natural language processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the definition of part-of-speech tagging we subsume the tasks of determining the part of speech, morphological analysis and lemmatisation. 
{ "text": "A brief reading guide: prior work, on Mayan and other languages of the Americas and on cross-language part-of-speech tagging, is reviewed in section 2. Our experimental design, including the mathematical model used for analysing performance, is given in section 3. Universal Dependencies annotation for K'iche' and the systems tested are described in section 4, results are presented and analysed in section 5, and cross-language experiments on Kaqchikel and Uspanteko are reported in section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Palmer et al. (2010) explore morphological segmentation and analysis for the purpose of generating interlinearly glossed texts. They work with Uspanteko, a language of the Greater Quichean branch, and the closest language to K'iche' we were able to identify with published studies of computational morphology. They explore several different systems: inducing morphology from parallel texts, an unsupervised segmentation+clustering strategy, and an interactive training strategy with a linguist.", "cite_spans": [ { "start": 410, "end": 430, "text": "Palmer et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "In Sachse and Dürr (2016), a set of preliminary annotation conventions for Mayan languages in general, and K'iche' in particular, is proposed.", "cite_spans": [ { "start": 3, "end": 25, "text": "Sachse and Dürr (2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "A maximum-entropy part-of-speech tagger is presented in Kuhn and Mateo-Toledo (2004) for Q'anjob'al, which, like K'iche', is a Mayan language of Guatemala. They work with a custom selection of 60 tags and train on an annotated corpus of 4,100 words (no lemmatisation is performed). In contrast to the systems we study, Kuhn and Mateo-Toledo (2004) perform feature engineering and end up with F1 scores between 63% and 78%, depending on the features chosen.", "cite_spans": [ { "start": 56, "end": 84, "text": "Kuhn and Mateo-Toledo (2004)", "ref_id": "BIBREF9" }, { "start": 326, "end": 354, "text": "Kuhn and Mateo-Toledo (2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" },
(2017)", "ref_id": "BIBREF12" }, { "start": 257, "end": 282, "text": "Cardenas and Zeman (2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "In Rios (2010) and Rios (2015) , respectively, finite-state morphology and support vector machine-based tagging+parsing systems are described for Quechua. The latter uses a corpus that comprises 2k sentences.", "cite_spans": [ { "start": 3, "end": 14, "text": "Rios (2010)", "ref_id": "BIBREF14" }, { "start": 19, "end": 30, "text": "Rios (2015)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "Cross-language part-of-speech tagging through parallel corpora, sometimes called annotation projection, is well-studied; in Mayan languages, Palmer et al. (2010) use a parallel corpus as a bridge to a higher-resourced language for which a part-ofspeech tagger already exists.", "cite_spans": [ { "start": 141, "end": 161, "text": "Palmer et al. (2010)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "In the absence of such a corpus, so-called \"zeroshot\" methods are created from other (presumably higher-resourced) languages and applied to the target language. The main balance to strike is between specificity of resources (how closely-related are the other languages) and quantity of resources (how much linguistic data is accessible). UDify of Kondratyuk and Straka (2019) is an example of preferring the latter: a deep neural architecture is trained on all of the Universal Dependencies treebanks. The former strategy can be seen in Huck et al. (2019) , where in addition to annotation projection, authors attempt zero-shot tagging of Ukrainian with a model trained on Russian.", "cite_spans": [ { "start": 347, "end": 375, "text": "Kondratyuk and Straka (2019)", "ref_id": "BIBREF8" }, { "start": 537, "end": 555, "text": "Huck et al. (2019)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Prior work", "sec_num": "2" }, { "text": "We used a corpus of K'iche' 2 annotated with partof-speech tags and morphological features (Tyers and Henderson, 2021) . The corpus consisted of 1,435 sentences comprising approximately 10,000 tokens from a variety of text types and was annotated according to the guidelines of the Universal Dependencies (UD) project (Nivre et al., 2020) . An example of a sentence from the corpus can be seen in Table 1 .", "cite_spans": [ { "start": 102, "end": 118, "text": "Henderson, 2021)", "ref_id": "BIBREF20" }, { "start": 318, "end": 338, "text": "(Nivre et al., 2020)", "ref_id": "BIBREF10" } ], "ref_spans": [ { "start": 397, "end": 404, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We studied the performance of several popular part-of-speech taggers within the Universal Dependencies ecosystem; these are reviewed in section 4. Performance was computed as F 1 scores for lemmatisation, universal part-of-speech (UPOS), and universal morphological features (UFeats). We performed 10-fold cross validation to obtain mean and standard deviation of F 1 . We also recorded training time and model size to compare the resource consumption of the models in the training process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We selected the best-performing system and performed a convergence study (see section 5.3 for results). 
{ "text": "We selected the best-performing system and performed a convergence study (see section 5.3 for results). We decimated the training data of one of the test-train splits from the cross-validation, and plotted the performance of models trained on the decimations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We make the following assumption about performance: additional training data provides exponentially decreasing performance improvement. Under this assumption, we obtain the formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F_1(n) = F_1(\infty) - \Delta F_1 \cdot e^{-n/k}", "eq_num": "(1)" } ], "section": "Methodology", "sec_num": "3" }, { "text": "Here F1(n) is the performance of a model trained on n tokens, F1(∞) is the asymptotic performance, and ΔF1 is the gap between F1(∞) (estimated maximum performance) and F1(0) (zero-shot performance). The parameter k is the characteristic number of tokens; each additional k tokens of training data causes the remaining gap F1(∞) − F1(n) to shrink by a factor of 1/e ≈ 37%. This can be used to estimate the training data n required to meet a given performance target F1^target:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n = k \cdot \log \frac{\Delta F_1}{F_1(\infty) - F_1^{\mathrm{target}}}", "eq_num": "(2)" } ], "section": "Methodology", "sec_num": "3" }, { "text": "We fit this curve against our convergence data and estimate peak performance and characteristic number; a sketch of the fitting procedure is given below. Error propagation is used with the error in parameter estimation to compute the error bands in the graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "(\delta F_1)^2 = \sum_x \left( \frac{\partial F_1}{\partial x} \, \delta x \right)^2", "eq_num": "(3)" } ], "section": "Methodology", "sec_num": "3" }, { "text": "Here x runs over the parameters of F1(n): F1(∞), ΔF1 and k. We also studied the best performer in cross-language tagging on the related Kaqchikel and Uspanteko languages. The 10 models trained in cross-validation were all evaluated on small part-of-speech-tagged corpora of 157 (Kaqchikel) and 160 (Uspanteko) sentences. For results and overviews of the languages, see section 6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" },
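The following is a minimal sketch, under our assumptions, of how (1)-(3) can be implemented with standard scientific-Python tools; the decimation arrays are illustrative placeholders (not the paper's data), and the error band propagates the full fit covariance, of which the paper's (3) corresponds to the diagonal terms.

```python
import numpy as np
from scipy.optimize import curve_fit

def f1_curve(n, f1_inf, delta_f1, k):
    """Equation (1): F1(n) = F1(inf) - dF1 * exp(-n/k)."""
    return f1_inf - delta_f1 * np.exp(-n / k)

# Decimation measurements (illustrative placeholders, not the real data):
n_tokens = 9559 * np.array([0.1, 0.4, 0.7, 1.0])  # training sizes in tokens
f1_scores = np.array([85.0, 91.5, 93.2, 94.5])    # e.g. UPOS F1 in percent

popt, pcov = curve_fit(f1_curve, n_tokens, f1_scores, p0=(95.0, 20.0, 5000.0))
f1_inf, delta_f1, k = popt

def tokens_needed(f1_target):
    """Equation (2): training tokens required to reach f1_target."""
    return k * np.log(delta_f1 / (f1_inf - f1_target))

def error_band(n):
    """Equation (3) via first-order propagation of the fit errors (scalar n)."""
    grad = np.array([1.0,                                   # d/d f1_inf
                     -np.exp(-n / k),                       # d/d delta_f1
                     -delta_f1 * n * np.exp(-n / k) / k**2])  # d/d k
    return float(np.sqrt(grad @ pcov @ grad))
```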
{ "text": "We tested morphological analysis on three systems designed for Universal Dependencies treebanks: UDPipe (Straka et al., 2016), UDPipe 2 (Straka, 2018), and UDify (Kondratyuk and Straka, 2019). Of these, only UDPipe has a working tokeniser; for the other two taggers, we trained the UDPipe tokeniser together with the tagger. We thus present combined tokeniser-tagger systems.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Straka et al., 2016)", "ref_id": "BIBREF18" }, { "start": 137, "end": 151, "text": "(Straka, 2018)", "ref_id": "BIBREF19" }, { "start": 164, "end": 193, "text": "(Kondratyuk and Straka, 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4" }, { "text": "UDPipe (Straka et al., 2016) is a language-independent trainable tokeniser, lemmatiser, POS tagger, and dependency parser designed to train on and produce Universal Dependencies-format treebanks. It uses gated linear units for tokenisation, averaged perceptrons for part-of-speech tagging, and a neural network classifier for dependency parsing. It is the least resource-hungry model in our study by an order of magnitude or more, and we trained it from scratch using the K'iche' corpus of section 3.", "cite_spans": [ { "start": 7, "end": 27, "text": "(Straka et al., 2016)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4" }, { "text": "UDPipe 2 (Straka, 2018) is a Python prototype for a TensorFlow-based deep neural network POS tagger, lemmatiser, and dependency parser. It ranked highly in the CoNLL 2018 shared task on multilingual parsing, taking first place by one metric. Deep neural methods have achieved impressive performance results in recent years, but take considerable computational resources to train. We used UDPipe 2 without pretrained embeddings, and trained it from scratch using the K'iche' corpus of section 3.", "cite_spans": [ { "start": 9, "end": 22, "text": "(Straka, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4" }, { "text": "UDify (Kondratyuk and Straka, 2019) is an AllenNLP-based multilingual model using BERT pretrained embeddings and trained on the combined Universal Dependencies treebank collection; we fine-tuned this pretrained model on our K'iche' data. This was our most resource-intensive model, even though we only fine-tuned on K'iche'; our initialisation was the UDify-distributed BERT+UD model.", "cite_spans": [ { "start": 6, "end": 34, "text": "(Kondratyuk and Straka, 2019", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Systems", "sec_num": "4" }, { "text": "Resource utilisation for the three systems is summarised in Table 2. The energy used to produce each model is reported in kilojoules; these figures were estimated by taking the reported runtime and multiplying it by the thermal design power (TDP) of the reported hardware (a worked example is given below). Error could be introduced into these estimates from many sources: only the reported device is considered, ignoring many other components of the machine; devices are assumed to run at their TDP for the entire runtime; and the UDify numbers as reported by Kondratyuk and Straka (2019) are approximate.", "cite_spans": [ { "start": 515, "end": 543, "text": "Kondratyuk and Straka (2019)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 60, "end": 67, "text": "Table 2", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Energy efficiency", "sec_num": "5.1" },
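As a concrete instance of that estimate (our arithmetic; we assume the training times in Table 3 are in minutes and use the 65 W TDP that AMD specifies for the Ryzen 7 1700):

```python
# Energy ≈ TDP (W) × runtime (s), assuming the CPU runs at TDP throughout.
TDP_WATTS = 65  # AMD Ryzen 7 1700 (assumed)
for name, minutes in [("UDPipe", 12.5), ("UDPipe 2", 356)]:
    kilojoules = TDP_WATTS * minutes * 60 / 1000
    print(f"{name}: ~{kilojoules:.0f} kJ")
# -> UDPipe: ~49 kJ, UDPipe 2: ~1388 kJ, matching Table 2's 50 and 1400.
```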
{ "text": "We evaluated the performance of the models on five tasks: tokenisation (Tokens), word segmentation (Words), lemmatisation (Lemmas), part-of-speech tagging (UPOS) and morphological tagging (Features). The difference between tokenisation and word segmentation can be explained with reference to Table 1. The word chqawach 'to us' counts as a single token, but two syntactic words; the Tokens score thus measures how well the tokens are recovered, and the Words score how well the syntactic words are recovered.", "cite_spans": [], "ref_spans": [ { "start": 293, "end": 300, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Task performance", "sec_num": "5.2" }, { "text": "We performed 10-fold cross validation on the 1,435 analysed sentences, with F1 scores for lemmatisation, part-of-speech tagging, and morphological features computed using the evaluation scripts of Zeman et al. (2018), modified to not ignore language-specific morphological features. Results are summarised in Table 3; the winner is UDPipe 2.", "cite_spans": [], "ref_spans": [ { "start": 309, "end": 316, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Task performance", "sec_num": "5.2" }, { "text": "While both UDPipe 2 and UDify have deep neural architectures, it seems UDify is unable to overcome non-K'iche' biases from the BERT embeddings and the initial training on Universal Dependencies releases; neither of these components incorporates Mayan languages. We speculate that training on data with a better representation of languages of the Americas would enable UDify to surpass UDPipe 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task performance", "sec_num": "5.2" }, { "text": "The original UDPipe puts in an impressively resource-efficient performance: it obtains 95%, 97%, and 96% of the performance of UDPipe 2 on lemmatisation, part-of-speech tagging, and feature assignment, respectively, with 3.5% of the training time and 3.6% of the model size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task performance", "sec_num": "5.2" }, { "text": "We performed a convergence study on the best system, UDPipe 2. Results are shown in Figure 1. Asymptotic F1 scores are 95.4 ± 1.9%, 97.4 ± 2.2%, and 95.7 ± 2.1% for lemmatisation, part-of-speech tagging, and feature assignment, respectively. The gaps at full use of the 1,292-sentence, 9,559-token training set are 2.5%, 2.9%, and 3.8%, respectively, and the characteristic numbers are 4,700, 4,800 and 4,700 tokens. Using (2), we can compute how much more training data would be required to close these gaps; for example, to bring F1 to within 1% of its maximum, we would need to annotate an additional 4,400, 4,500, and 5,900 tokens, respectively (a worked instance of this computation is given below).", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 92, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Convergence", "sec_num": "5.3" },
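A worked instance of (2) for lemmatisation (our arithmetic, using the rounded values quoted above):

```python
import math
# Lemmatisation: remaining gap ≈ 2.5% at 9,559 tokens, k ≈ 4,700 tokens.
# Additional tokens needed to shrink the gap from 2.5% to 1%:
extra_tokens = 4700 * math.log(2.5 / 1.0)
print(round(extra_tokens))  # ~4306; the ~4400 quoted above reflects unrounded inputs
```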
{ "text": "Figure 1: Convergence of the F1 scores of the UDPipe 2 combined system for lemmas, universal part-of-speech, and universal feature tags, as a function of the total number of tokens in training. The plotted points (p, s) are the decimation data: measurements of F1 score p when given a training corpus of s tokens. Curves are obtained by constrained least-squares fitting of this data against (1). The shaded regions represent the propagation of the standard error (3) in the fit parameters through the curve; under the hypothesis of a normal distribution, ≈68% of observations are expected to lie within this region. The numbers in the legend are the asymptotic performance given by the fitting procedure; as more training data is supplied, model performance should converge to the asymptotic performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Convergence", "sec_num": "5.3" }, { "text": "There are around 32 Mayan languages spoken in Mesoamerica, in the countries of Guatemala, Mexico, Honduras, El Salvador and Belize. Given the impressive performance of the best-performing system on K'iche' data, we decided to test it on two related languages spoken in Guatemala: Kaqchikel and Uspanteko. UDify is also reported as being suited to zero-shot inference, so we include two UDify-based models: one fine-tuned on K'iche' (referred to as \"UDify-FT\") and the original UDify model (simply \"UDify\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-language tagging", "sec_num": "6" }, { "text": "Kaqchikel (ISO-639: cak; previously Cakchiquel) is a Mayan language of the Quichean branch. It is spoken in Guatemala, to the south and east of the K'iche'-speaking area (see Figure 2), and has around 450,000 speakers.", "cite_spans": [], "ref_spans": [ { "start": 175, "end": 183, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Kaqchikel", "sec_num": "6.1" }, { "text": "Some notable differences between Kaqchikel and K'iche' are the lack of status suffixes on verbs, the absence of pied-piping inversion (Broadwell, 2005), and SVO order in declarative sentences (Watanabe, 2017). For the Kaqchikel corpus, we extracted glossed example sentences from a number of published sources, including papers discussing topics in morphology and syntax (Henderson, 2007; Broadwell and Duncan, 2002; Broadwell, 2000) and grammar books (Garcia Matzar et al., 1999; Guaján, 2016). These sentences were then analysed with a morphological analyser (Richardson and Tyers, 2021) and manually disambiguated using the provided glosses.", "cite_spans": [ { "start": 341, "end": 358, "text": "(Broadwell, 2005)", "ref_id": "BIBREF1" }, { "start": 400, "end": 416, "text": "(Watanabe, 2017)", "ref_id": "BIBREF21" }, { "start": 580, "end": 597, "text": "(Henderson, 2007;", "ref_id": "BIBREF6" }, { "start": 598, "end": 625, "text": "Broadwell and Duncan, 2002;", "ref_id": "BIBREF2" }, { "start": 626, "end": 642, "text": "Broadwell, 2000)", "ref_id": "BIBREF0" }, { "start": 661, "end": 689, "text": "(Garcia Matzar et al., 1999;", "ref_id": "BIBREF4" }, { "start": 690, "end": 703, "text": "Guaján, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Kaqchikel", "sec_num": "6.1" }, { "text": "Uspanteko is spoken in an area adjacent to the K'iche'-speaking area in Guatemala. It has around 2,000 speakers and is one of the few Mayan languages to have developed contrastive tone.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Uspanteko", "sec_num": "6.2" }, { "text": "Palmer et al. (2010) present a large interlinearly glossed corpus of Uspanteko with approximately 3,400 sentences and 27,000 tokens. We selected 160 sentences from this corpus, totalling 1,003 tokens, and annotated them with part of speech, lemmas and morphological features. The lemmas were given by a morphological analyser3 created from a lexicon provided by OKMA.", "cite_spans": [ { "start": 0, "end": 20, "text": "Palmer et al. (2010)", "ref_id": "BIBREF11" }, { "start": 771, "end": 799, "text": "(Richardson and Tyers, 2021)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Uspanteko", "sec_num": "6.2" }, { "text": "The results of our cross-language tagging study are shown in Table 4; in general the winner is UDify-FT, the K'iche'-fine-tuned model, while the original UDify model performs very poorly. UDPipe 2 manages nearly as good performance as UDify-FT, which is especially impressive considering that it consumes three orders of magnitude less energy. For UDPipe 2 and UDify-FT, we used the ten models trained in cross-validation to provide the tagging performance and confidence. The original UDify system is a single model, so we are unable to provide confidence intervals.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 4", "ref_id": null }, { "start": 492, "end": 499, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.3" },
{ "text": "Table 4: Results for cross-lingual tagging on Kaqchikel and Uspanteko, using our UDPipe 2, UDify, and UDify-FT systems for part-of-speech tagging. We evaluated on our corpora lemmatised and annotated for part-of-speech and morphological features. Performance for the K'iche'-trained systems is quoted as the average and standard deviation over the same 10 trained models used in cross-validation for K'iche' (see section 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "We also studied convergence for the cross-language tagging task using our UDPipe 2 decimated K'iche' models; see figures 3a and 3b. We observe that for the given set of labels our models have essentially converged, with the exception of part-of-speech tagging for Uspanteko, which might benefit from additional examples of features already present in our K'iche' corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "In order to understand whether our K'iche' corpus covers a sufficient variety of labels (parts of speech, features, lemmatisation patterns), we selected two labels, one of high frequency and one of low frequency (see Table 5a), from our corpus with which to disable our model. For each label, new convergence runs were made using the 10%, 40%, and 70% subsets, omitting all sentences featuring the chosen label (a sketch of this filtering is given below).", "cite_spans": [], "ref_spans": [ { "start": 217, "end": 225, "text": "Table 5a", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "If our cross-language tagging models could not be improved by a more diverse K'iche' training corpus, we would expect these disabled data points to fall within error of the convergence trendlines. This is the case with the low-frequency label, first person. On the other hand, we see that the loss of the high-frequency label, perfective aspect, has a disproportionate impact on cross-tagging performance: removing this training data causes the convergence curve to change parameters, lowering asymptotic performance. This raises the possibility that we might improve the asymptotic performance of our cross-tagging models by locating labels which are high-frequency in our target language (Kaqchikel or Uspanteko) and extending our K'iche' corpus with sentences featuring those labels. See Table 5b for a sample of high-frequency labels which appear in our K'iche' corpus but not in our cross-tagging evaluation corpora.", "cite_spans": [], "ref_spans": [ { "start": 1106, "end": 1114, "text": "Table 5b", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "Figure 3b (Uspanteko): the characteristic numbers of tokens were 1900 (lemmatisation), 7900 (part-of-speech), and 4900 (features); part-of-speech tagging might see improvement from increased annotation of K'iche' data, but with such high uncertainty (over 10% in asymptotic performance) it is difficult to be sure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.3" },
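A minimal sketch of that sentence filtering (our illustration; sentences are lists of 10-column CoNLL-U token rows, with FEATS at index 5):

```python
def lacks_label(sentence, label):
    """True if no token row of a CoNLL-U sentence carries the given feature."""
    return all(label not in row[5].split("|") for row in sentence)

# train_sentences is a placeholder for one decimated training split:
disabled_train = [s for s in train_sentences if lacks_label(s, "Aspect=Perf")]
```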
{ "text": "These all indicate that the small test corpora of Kaqchikel and Uspanteko we annotated are not as diverse in terms of text type as the K'iche' corpus. For example, the test corpora contain no infinitive forms (for example the morpheme -ik in K'iche'), although these certainly exist in both Kaqchikel (see §2.7.2.6 in Garcia Matzar et al., 1999) and Uspanteko. Additionally, they contain no examples of the imperative mood, relative clauses introduced by relative pronouns, the formal second person, or reflexives. All of these features certainly exist in the languages, but not in the selection of sentences we annotated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "6.3" }, { "text": "We used an annotated corpus of 1,435 part-of-speech-tagged K'iche' sentences to survey a number of neural part-of-speech tagging systems from the Universal Dependencies ecosystem. We found the best performance was generally with UDPipe 2, a deep neural system integrating lemmatisation, part-of-speech and morphological feature assignment. Our UDPipe 2-trained system achieved F1 of 93% or better on all tasks, a very encouraging result for a relatively small corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding remarks", "sec_num": "7" }, { "text": "Convergence studies showed that on corpora of similar morphological composition even better performance is attainable, but closing the gap to within 1% of projected optimal performance requires roughly half again the amount of training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding remarks", "sec_num": "7" }, { "text": "The high performance on K'iche' led us to experiment with using our model to perform cross-language tagging on the related languages Kaqchikel and Uspanteko. Performance on the more closely related language, Kaqchikel, was still respectable, with F1 ranging from 63% to 85% across the tasks; on Uspanteko we observed more modest performance, 60-71%. The K'iche'-fine-tuned UDify model does show noticeably better performance, but possibly not worth the energy expenditure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding remarks", "sec_num": "7" }, { "text": "Our results after disabling our cross-language tagger by withholding some labels during training imply that cross-language performance could be improved by annotating more data with similar features to the Kaqchikel and Uspanteko evaluation corpora, and suggest that cross-language tagging is a path forward to greater availability of part-of-speech annotation for Mayan languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Concluding remarks", "sec_num": "7" }, { "text": "Label        Frequency (quc)  Frequency (evaluation)  Discrep.\nPerson=1     3%               1%                      0.12σ\nAspect=Perf  49%              62%                     −3.5σ\n\n(a) The two labels chosen for the label diversity study for our cross-language taggers. We studied the convergence of two additional models, whose training data alternately lacked the first person (Person=1) or the perfective aspect (Aspect=Perf). Frequency is the percentage of sentences in the corpus with the feature ('quc' is the K'iche' training corpus, 'evaluation' the cross-language evaluation corpora). We give the median discrepancy, computed as the performance gap between the disabled model and the prediction for a model trained on the same number of tokens, normalised by the uncertainty σ in that prediction. 
For the first-person label, we see a similar distribution with a very slight bias towards higher performance; perfective aspect seems to have an outsized effect, increasing the median discrepancy to 3.5σ.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Label", "sec_num": null }, { "text": "Label         Frequency (% sents.)\nVerbForm=Inf  6\nMood=Imp      3\nReflex=Yes    2\nPronType=Rel  2\nPolite=Form   2\n\n(b) The results of our label diversity study: a sample of the high-frequency labels in our K'iche' training corpus which do not appear in our Kaqchikel and Uspanteko evaluation corpora, along with their frequencies in the K'iche' corpus. See Table 5a for the impact missing high-frequency labels can have on cross-tagging performance.", "cite_spans": [], "ref_spans": [ { "start": 219, "end": 227, "text": "Table 5a", "ref_id": null } ], "eq_spans": [], "section": "Label", "sec_num": null }, { "text": "For example, for the VERB it would return Aspect=Imp, Number[obj]=Sing, Number[subj]=Sing, Person[obj]=3, Person[subj]=1, Subcat=Tran, VerbForm=Fin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/UniversalDependencies/UD_Kiche-IU", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/apertium/apertium-usp", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank the anonymous reviewers for their helpful comments. This article is an output of a research project implemented as part of the Basic Research Programme at the National Research University Higher School of Economics (HSE University).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Word order and markedness in Kaqchikel", "authors": [ { "first": "George", "middle": [ "Aaron" ], "last": "Broadwell", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the LFG00 Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Aaron Broadwell. 2000. Word order and markedness in Kaqchikel. In Proceedings of the LFG00 Conference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Pied-piping and optimal order in Kiche (K'iche')", "authors": [ { "first": "George", "middle": [ "Aaron" ], "last": "Broadwell", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Aaron Broadwell. 2005. Pied-piping and optimal order in Kiche (K'iche').", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A new passive in Kaqchikel. Linguistic Discovery", "authors": [ { "first": "George", "middle": [], "last": "", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Broadwell", "suffix": "" }, { "first": "Lachlan", "middle": [], "last": "Duncan", "suffix": "" } ], "year": 2002, "venue": "", "volume": "1", "issue": "", "pages": "26--43", "other_ids": { "DOI": [ "10.1349/PS1.1537-0852.A.161" ] }, "num": null, "urls": [], "raw_text": "George Aaron Broadwell and Lachlan Duncan. 2002. A new passive in Kaqchikel. 
Linguistic Discovery, 1:26- 43.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A morphological analyzer for Shipibo-konibo", "authors": [ { "first": "Ronald", "middle": [], "last": "Cardenas", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology", "volume": "", "issue": "", "pages": "131--139", "other_ids": { "DOI": [ "10.18653/v1/W18-5815" ] }, "num": null, "urls": [], "raw_text": "Ronald Cardenas and Daniel Zeman. 2018. A morpho- logical analyzer for Shipibo-konibo. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 131- 139, Brussels, Belgium. Association for Computa- tional Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Gram\u00e1tica del idioma Kaqchikel. PLFM", "authors": [ { "first": "Pedro", "middle": [ "Oscar" ], "last": "", "suffix": "" }, { "first": "Garcia", "middle": [], "last": "Matzar", "suffix": "" }, { "first": "Domingo Coc", "middle": [], "last": "Valerio Toj Cotzajay", "suffix": "" }, { "first": "", "middle": [], "last": "Tuiz", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Oscar Garcia Matzar, Valerio Toj Cotzajay, and Domingo Coc Tuiz. 1999. Gram\u00e1tica del idioma Kaqchikel. PLFM.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Rutz'ib'axik ri Kaqchikel -Manual de Redacci\u00f3n Kaqchikel", "authors": [ { "first": "B'alam Rodriguez", "middle": [], "last": "Pakal", "suffix": "" }, { "first": "", "middle": [], "last": "Guaj\u00e1n", "suffix": "" } ], "year": 2016, "venue": "Editorial Maya' Wuj", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pakal B'alam Rodriguez Guaj\u00e1n. 2016. Rutz'ib'axik ri Kaqchikel -Manual de Redacci\u00f3n Kaqchikel. Edito- rial Maya' Wuj.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Observations on the syntax of adjunct extraction in Kaqchikel", "authors": [ { "first": "Robert", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the CILLA III Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Henderson. 2007. Observations on the syntax of adjunct extraction in Kaqchikel. In Proceedings of the CILLA III Conference.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Cross-lingual annotation projection is effective for neural part-of-speech tagging", "authors": [ { "first": "Matthias", "middle": [], "last": "Huck", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Dutka", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Fraser", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects", "volume": "", "issue": "", "pages": "223--233", "other_ids": { "DOI": [ "10.18653/v1/W19-1425" ] }, "num": null, "urls": [], "raw_text": "Matthias Huck, Diana Dutka, and Alexander Fraser. 2019. Cross-lingual annotation projection is effec- tive for neural part-of-speech tagging. In Proceedings of the Sixth Workshop on NLP for Similar Languages, Varieties and Dialects, pages 223-233, Ann Arbor, Michigan. 
Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "75 languages, 1 model: Parsing universal dependencies universally", "authors": [ { "first": "Dan", "middle": [], "last": "Kondratyuk", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2779--2795", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Kondratyuk and Milan Straka. 2019. 75 lan- guages, 1 model: Parsing universal dependencies uni- versally. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2779- 2795, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Applying computational linguistic techniques in a documentary project for Q'anjob'al (Mayan, Guatemala)", "authors": [ { "first": "Jonas", "middle": [], "last": "Kuhn", "suffix": "" }, { "first": "B'alam Mateo-Toledo", "middle": [], "last": "", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonas Kuhn and B'alam Mateo-Toledo. 2004. Applying computational linguistic techniques in a documentary project for Q'anjob'al (Mayan, Guatemala). In Pro- ceedings of the 4th International Conference on Lan- guage Resources and Evaluation (LREC 2004), Lis- boa, Portugal.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Universal dependencies v2: An evergrowing multilingual treebank collection", "authors": [ { "first": "J", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "M.-C", "middle": [], "last": "De Marneffe", "suffix": "" }, { "first": "F", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "S", "middle": [], "last": "Pyysalo", "suffix": "" }, { "first": "S", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "F", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "D", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)", "volume": "", "issue": "", "pages": "4027--4036", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Nivre, M.-C. de Marneffe, F. Ginter, J. Haji\u010d, C. D. Manning, S. Pyysalo, S. Schuster, F. Tyers, and D. Zeman. 2020. Universal dependencies v2: An ev- ergrowing multilingual treebank collection. 
In Pro- ceedings of the 12th Conference on Language Re- sources and Evaluation (LREC 2020), pages 4027- 4036.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Computational strategies for reducing annotation effort in language documentation", "authors": [ { "first": "Alexis", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Taesun", "middle": [], "last": "Moon", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Baldridge", "suffix": "" }, { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2010, "venue": "Linguistic Issues in Language Technology", "volume": "3", "issue": "4", "pages": "1--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alexis Palmer, Taesun Moon, Jason Baldridge, Katrin Erk, Eric Campbell, and Telma Can. 2010. Computa- tional strategies for reducing annotation effort in lan- guage documentation. Linguistic Issues in Language Technology, 3(4):1-42.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Ship-LemmaTagger: Building an NLP toolkit for a Peruvian native language", "authors": [ { "first": "Jos\u00e9", "middle": [], "last": "Pereira-Noriega", "suffix": "" }, { "first": "Rodolfo", "middle": [], "last": "Mercado-Gonzales", "suffix": "" }, { "first": "Andr\u00e9s", "middle": [], "last": "Melgar", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Sobrevilla-Cabezudo", "suffix": "" }, { "first": "Arturo", "middle": [], "last": "Oncevay-Marcos", "suffix": "" } ], "year": 2017, "venue": "International Conference on Text, Speech, and Dialogue", "volume": "", "issue": "", "pages": "473--481", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jos\u00e9 Pereira-Noriega, Rodolfo Mercado-Gonzales, An- dr\u00e9s Melgar, Marco Sobrevilla-Cabezudo, and Arturo Oncevay-Marcos. 2017. Ship-LemmaTagger: Build- ing an NLP toolkit for a Peruvian native language. In International Conference on Text, Speech, and Dia- logue, pages 473-481. Springer.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A morphological analyser for K'iche'. Procesamiento de Lenguaje Natural", "authors": [ { "first": "Ivy", "middle": [], "last": "Richardson", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2021, "venue": "", "volume": "66", "issue": "", "pages": "99--109", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ivy Richardson and Francis M. Tyers. 2021. A mor- phological analyser for K'iche'. Procesamiento de Lenguaje Natural, 66:99-109.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Applying finite-state techniques to a native American language: Quechua. Lizentiatsarbeit", "authors": [ { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annette Rios. 2010. Applying finite-state techniques to a native American language: Quechua. Lizenti- atsarbeit, Institut f\u00fcr Computerlinguistik, Universit\u00e4t Z\u00fcrich.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A basic language technology toolkit for Quechua", "authors": [ { "first": "Annette", "middle": [], "last": "Rios", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annette Rios. 2015. A basic language technology toolkit for Quechua. Ph.D. 
thesis, Institut f\u00fcr Computerlin- guistik, Universit\u00e4t Z\u00fcrich.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Sergio Manuel Guarchaj Can, Catarina Marcela Tambriz Cotiy", "authors": [ { "first": "Sergio", "middle": [], "last": "Romero", "suffix": "" }, { "first": "Ignacio", "middle": [], "last": "Carvajal", "suffix": "" }, { "first": "Mareike", "middle": [], "last": "Sattler", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Manuel Tahay", "suffix": "" }, { "first": "Carl", "middle": [], "last": "Tzaj", "suffix": "" }, { "first": "Sarah", "middle": [], "last": "Blyth", "suffix": "" }, { "first": "Pat", "middle": [], "last": "Sweeney", "suffix": "" }, { "first": "Nathalie", "middle": [], "last": "Kyle", "suffix": "" }, { "first": "Diego", "middle": [ "Guarchaj" ], "last": "Steinfeld Childre", "suffix": "" }, { "first": "Lorenzo", "middle": [ "Ernesto" ], "last": "Tambriz", "suffix": "" }, { "first": "Maura", "middle": [], "last": "Tambriz", "suffix": "" }, { "first": "Lupita", "middle": [], "last": "Tahay", "suffix": "" }, { "first": "Gaby", "middle": [], "last": "Tahay", "suffix": "" }, { "first": "Jenny", "middle": [], "last": "Tahay", "suffix": "" }, { "first": "Santiago", "middle": [], "last": "Tahay", "suffix": "" }, { "first": "Elena", "middle": [ "Ixmata" ], "last": "Can", "suffix": "" }, { "first": "Enrique", "middle": [], "last": "Xum", "suffix": "" }, { "first": "", "middle": [], "last": "Guarchaj", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergio Romero, Ignacio Carvajal, Mareike Sattler, Juan Manuel Tahay Tzaj, Carl Blyth, Sarah Sweeney, Pat Kyle, Nathalie Steinfeld Childre, Diego Guarchaj Tambriz, Lorenzo Ernesto Tambriz, Maura Tahay, Lupita Tahay, Gaby Tahay, Jenny Tahay, Santiago Can, Elena Ixmata Xum, Enrique Guarchaj, Ser- gio Manuel Guarchaj Can, Catarina Marcela Tam- briz Cotiy, Telma Can, Tara Kingsley, Charlotte Hayes, Christopher J. Walker, Mar\u00eda Angelina Ixmat\u00e1 Sohom, Jacob Sandler, Silveria Guarchaj Ixmat\u00e1, Manuela Petronila Tahay, and Susan Smythe Kung. 2018. Chqeta'maj le qach'ab'al K'iche'! https: //tzij.coerll.utexas.edu/.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Morphological glossing of Mayan languages under XML: Preliminary results", "authors": [ { "first": "Frauke", "middle": [], "last": "Sachse", "suffix": "" }, { "first": "Michael", "middle": [], "last": "D\u00fcrr", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.20376/IDIOM-23665556.16.wp004.en" ] }, "num": null, "urls": [], "raw_text": "Frauke Sachse and Michael D\u00fcrr. 2016. Morpho- logical glossing of Mayan languages under XML: Preliminary results. Working Paper 4, Nordrhein- Westf\u00e4lische Akademie der Wissenschaften und der K\u00fcnste.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "UDPipe: trainable pipeline for processing CoNLL-U files performing tokenization, morphological analysis, POS tagging and parsing", "authors": [ { "first": "M", "middle": [], "last": "Straka", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "J", "middle": [], "last": "Strakov\u00e1", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Straka, J. 
Haji\u010d, and J. Strakov\u00e1. 2016. UDPipe: trainable pipeline for processing CoNLL-U files per- forming tokenization, morphological analysis, POS tagging and parsing. In Proceedings of the Tenth Inter- national Conference on Language Resources and Eval- uation (LREC'16), Paris, France. European Language Resources Association (ELRA).", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "UDPipe 2.0 prototype at CoNLL 2018 UD shared task", "authors": [ { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "197--207", "other_ids": { "DOI": [ "10.18653/v1/K18-2020" ] }, "num": null, "urls": [], "raw_text": "Milan Straka. 2018. UDPipe 2.0 prototype at CoNLL 2018 UD shared task. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 197-207, Brus- sels, Belgium. Association for Computational Lin- guistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A corpus of K'iche' annotated for morphosyntactic structure", "authors": [ { "first": "M", "middle": [], "last": "Francis", "suffix": "" }, { "first": "Robert", "middle": [], "last": "Tyers", "suffix": "" }, { "first": "", "middle": [], "last": "Henderson", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the First Workshop on NLP for Indigenous Languages of the Americas (Americas-NLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Francis M. Tyers and Robert Henderson. 2021. A cor- pus of K'iche' annotated for morphosyntactic struc- ture. In Proceedings of the First Workshop on NLP for Indigenous Languages of the Americas (Americas- NLP).", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The division of labor between syntax and morphology in the Kichean agent-focus construction", "authors": [ { "first": "Akira", "middle": [], "last": "Watanabe", "suffix": "" } ], "year": 2017, "venue": "Morphology", "volume": "27", "issue": "", "pages": "685--720", "other_ids": { "DOI": [ "10.1007/s11525-017-9312-0" ] }, "num": null, "urls": [], "raw_text": "Akira Watanabe. 2017. The division of labor between syntax and morphology in the Kichean agent-focus construction. Morphology, 27:685-720.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies", "authors": [ { "first": "Daniel", "middle": [], "last": "Zeman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Popel", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Milan", "middle": [], "last": "Straka", "suffix": "" }, { "first": "Filip", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "Joakim", "middle": [], "last": "Nivre", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies", "volume": "", "issue": "", "pages": "1--21", "other_ids": { "DOI": [ "10.18653/v1/K18-2001" ] }, "num": null, "urls": [], "raw_text": "Daniel Zeman, Jan Haji\u010d, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. 
CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "A map of Guatemala with approximate locations of speaker areas of Mayan languages. K'iche', Kaqchikel and Uspanteko are highlighted in purple (gridhatched), green (forward slash-hatched), and red (backward slash-hatched), respectively.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Convergence of our UDPipe 2 on Kaqchikel (3a) and Uspanteko (3b). The legends show projected asymptotic performance for each of universal part-of-speech tagging, universal feature assignment, and lemmatisation.", "type_str": "figure", "num": null, "uris": null }, "TABREF0": { "type_str": "table", "html": null, "text": "# sent_id = utexas:123.2 # text = Xuk\u02bcut le K\u02bciche\u02bc ch\u02bcab\u02bcal le al Nela chqawach. # text[spa] = Manuela nos ense\u00f1\u00f3 el idioma k\u02bciche\u02bc", "num": null, "content": "
# labels = tijonik-17 complete
1     Xukʼut    kʼut      VERB   _  […]1                     _  _  _  _
2     le        le        DET    _  _                        _  _  _  _
3     Kʼicheʼ   kʼicheʼ   ADJ    _  _                        _  _  _  _
4     chʼabʼal  chʼabʼal  NOUN   _  _                        _  _  _  _
5     le        le        DET    _  _                        _  _  _  _
6     al        ali       NOUN   _  Gender=Fem|NounType=Clf  _  _  _  _
7     Nela      Nela      PROPN  _  Gender=Fem               _  _  _  _
8-9   chqawach  _         _      _  _                        _  _  _  _
8     ch        chi       ADP    _  _                        _  _  _  _
9     qawach    wach      NOUN   _  […]2                     _  _  _  _
10    .         .         PUNCT  _  _                        _  _  _  _
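The Tokens/Words distinction in this example can be made concrete with a small sketch (ours): ranges such as 8-9 introduce a multiword token whose syntactic words follow on their own rows.

```python
def count_tokens_and_words(rows):
    """rows: CoNLL-U token lines split into columns (comment lines removed)."""
    tokens = words = 0
    covered = set()
    for cols in rows:
        if "-" in cols[0]:                       # multiword token, e.g. "8-9"
            start, end = map(int, cols[0].split("-"))
            covered.update(range(start, end + 1))
            tokens += 1
            words += end - start + 1
        elif cols[0].isdigit() and int(cols[0]) not in covered:
            tokens += 1                          # plain token = one word
            words += 1
    return tokens, words
# For the sentence above: 9 tokens, 10 syntactic words.
```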
" }, "TABREF1": { "type_str": "table", "html": null, "text": "", "num": null, "content": "
Model     Energy (kJ)
          UD       K'iche'
UDPipe    0        50
UDPipe 2  0        1400
UDify     540000   1300
" }, "TABREF2": { "type_str": "table", "html": null, "text": "", "num": null, "content": "" }, "TABREF3": { "type_str": "table", "html": null, "text": "Uspanteko (ISO-639: usp; also referred to as Uspantek, or Uspanteco) is a Mayan language of the Greater Quichean branch. The language is spoken", "num": null, "content": "
               UDPipe      UDPipe 2    UDify
Training time  12.5 ± 0.1  356 ± 4     323 ± 2
Model size     2.3M        64M         760M
Tokens         99.7 ± 0.4  -           -
Words          98.6 ± 0.5  -           -
Lemmas         88.3 ± 1.1  93.2 ± 0.6  88.3 ± 0.9
UPOS           91.4 ± 1.4  94.5 ± 0.8  94.2 ± 1.1
Features       88.8 ± 1.1  92.9 ± 0.8  89.2 ± 1.2
" }, "TABREF4": { "type_str": "table", "html": null, "text": "Results on tasks from tokenisation to morphological analysis. Standard deviation is obtained by running ten-fold cross validation. The columns are F 1 score: Tokens tokenisation; Words splitting syntactic words (e.g. contractions); Lemmas lemmatisation; UPOS universal part-of-speech tags; Features morphological features. Model size is in megabytes, training time is in mm:ss, as run on a machine with AMD Ryzen 7 1700 8-core CPU and 32GiB of memory.", "num": null, "content": "
[Plot: model convergence of the UDPipe 2 combined system. F1 (%) on the y-axis (75-100) against training corpus size in tokens on the x-axis (2000-10000); legend gives asymptotic F1: UPOS 97.4 ± 2.2, Features 95.7 ± 2.1, Lemmas 95.4 ± 1.9.]
" } } } }