{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:39.630084Z" }, "title": "Effort versus performance tradeoff in Uralic lemmatisers", "authors": [ { "first": "Nicholas", "middle": [], "last": "Howell", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research University Higher School of Economics", "location": { "settlement": "Moscow", "country": "Russia" } }, "email": "" }, { "first": "Maria", "middle": [], "last": "Bibaeva", "suffix": "", "affiliation": { "laboratory": "", "institution": "National Research University Higher School of Economics", "location": { "settlement": "Moscow", "country": "Russia" } }, "email": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "", "affiliation": { "laboratory": "National Research University Higher School of Economics", "institution": "Indiana University", "location": { "settlement": "Moscow, Bloomington", "region": "IN", "country": "Russia, United States" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Lemmatisers in Uralic languages are required for dictionary lookup, an important task for language learners. We explore how to decide which of the rule-based and unsupervised categories is more efficient to invest in. We present a comparison of rule-based and unsupervised lemmatisers, derived from the Giellatekno finite-state morphology project and the Morfessor surface segmenter trained on Wikipedia, respectively. The comparison spanned six Uralic languages, from relatively high-resource (Finnish) to extremely low-resource (Uralic languages of Russia). Performance is measured by dictionary lookup and vocabulary reduction tasks on the Wikipedia corpora. Linguistic input was quantified: for rule-based, as the quantity of source code and state machine complexity; for unsupervised, as the size of the training corpus. These are normalised against Finnish. 
Most languages show performance improving with linguistic input. Future work will produce quantitative estimates for the relationship between corpus size, ruleset size, and lemmatisation performance.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Lemmatisers in Uralic languages are required for dictionary lookup, an important task for language learners. We explore how to decide which of the rule-based and unsupervised categories is more efficient to invest in. We present a comparison of rule-based and unsupervised lemmatisers, derived from the Giellatekno finite-state morphology project and the Morfessor surface segmenter trained on Wikipedia, respectively. The comparison spanned six Uralic languages, from relatively high-resource (Finnish) to extremely low-resource (Uralic languages of Russia). Performance is measured by dictionary lookup and vocabulary reduction tasks on the Wikipedia corpora. Linguistic input was quantified: for rule-based, as the quantity of source code and state machine complexity; for unsupervised, as the size of the training corpus. These are normalised against Finnish. Most languages show performance improving with linguistic input. 
Future work will produce quantitative estimates for the relationship between corpus size, ruleset size, and lemmatisation performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Lemmatisation is the process of deinflecting a word (the surface form) to obtain a normalised, grammatically \"neutral\" form, called the lemma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A related task is stemming, the process of removing affix morphemes from a word, reducing it to the intersection of all surface forms of the same lemma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "These two operations have finer (meaning more informative) variants: morphological analysis (producing the lemma plus list of morphological tags) and surface segmentation (producing the stem plus list of affixes). Still, a given surface form may have several possible analyses and several possible segmentations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Uralic languages are highly agglutinative, that is, inflection is often performed by appending suffixes to the lemma. For such languages, stemming and lemmatisation agree, allowing one dimension of comparison between morphological analysers and surface segmenters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Such agglutinative languages typically do not have all surface forms listed in a dictionary; users wishing to look up a word must lemmatise before performing the lookup. 
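As an illustration (not the paper's implementation), the combined lemmatise-then-lookup operation can be sketched with a toy suffix-stripping lemmatiser; the dictionary entries and suffix list below are hypothetical stand-ins:

```python
# Toy illustration of combined lemmatise-and-lookup for a suffixing
# language: strip candidate suffixes until a dictionary entry is found.
# DICTIONARY and SUFFIXES are hypothetical stand-ins, not real data.
DICTIONARY = {"talo": "house", "kala": "fish"}
SUFFIXES = ["ssa", "lla", "n", "t", "a"]

def lookup_with_lemmatisation(surface):
    """Return the dictionary entry for a surface form, if any."""
    if surface in DICTIONARY:          # already a lemma
        return DICTIONARY[surface]
    # Try longer suffixes first so "ssa" wins over "a".
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if surface.endswith(suffix):
            candidate = surface[: -len(suffix)]
            if candidate in DICTIONARY:
                return DICTIONARY[candidate]
    return None                        # unanalysable: a lookup miss
```

A real tool would replace the suffix loop with a morphological analyser or trained segmenter; the control flow, however, is the same.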
Software tools (Johnson et al., 2013) are being developed to combine the lemmatisation and lookup operations.", "cite_spans": [ { "start": 185, "end": 207, "text": "(Johnson et al., 2013)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Further, most Uralic languages are low-resourced, meaning large corpora (necessary for the training of some analysers and segmenters) are not readily available. In such cases, software engineers, linguists and system designers must decide whether to invest effort in obtaining a large enough corpus for statistical methods or in writing rulesets for a rule-based system.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this article we explore this trade-off, comparing rule-based and statistical stemmers across several Uralic languages (with varying levels of resources), using a number of proxies for \"model effort\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For rule-based systems, we evaluate the Giellatekno (Moshagen et al., 2014) finite-state morphological transducers, exploring model effort through ruleset length and the number of states of the transducer.", "cite_spans": [ { "start": 52, "end": 75, "text": "(Moshagen et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For statistical systems, we evaluate Morfessor (Virpioja et al., 2013) surface segmenter models along with training corpus size.", "cite_spans": [ { "start": 47, "end": 70, "text": "(Virpioja et al., 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We hope to provide guidance on the question, \"given an agglutinative language with a corpus of N words, how much effort might a rule-based analyser require to be better than a 
statistical segmenter at lemmatisation?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most interesting results of this work are the figures shown in Section 5.4, where effort proxies are plotted against several measures of performance (normalised against Finnish). The efficient reader may wish to look at these first, looking up the various quantities afterwards.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reading Guide", "sec_num": "1.1" }, { "text": "For (brief) information on the languages involved, see Section 2; to read about the morphological analysers and statistical segmenters used, see Section 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reading Guide", "sec_num": "1.1" }, { "text": "Discussion and advice on directions for future work conclude the article in Section 6. The entire project is reproducible, and will be made available before publication.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reading Guide", "sec_num": "1.1" }, { "text": "The languages used for the experiments in this paper are all of the Uralic group. These languages are typologically agglutinative with predominantly suffixing morphology. The following paragraphs give a brief introduction to each of the languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Finnish (ISO-639-3 fin) is the majority and official (together with Swedish) language of Finland. It is in the Finnic group of Uralic languages, and has an estimated 6 million speakers worldwide. 
The language, like other Uralic languages spoken in the more western regions of the language area, has predominantly SVO word order and NP-internal agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Komi-Zyrian (ISO-639-3 kpv; often simply referred to as Komi) is one of the major varieties of the Komi macrolanguage of the Permic group of Uralic languages. It is spoken by the Komi-Zyrians, the most populous ethnic subgroup of the Komi peoples in the Uralic regions of the Russian Federation. Komi languages are spoken by an estimated 220,000 people, and are co-official with Russian in the Komi Republic and the Perm Krai territory of the Russian Federation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Moksha (ISO-639-3 mdf) is one of the two Mordvinic languages, the other being Erzya; the two share co-official status with Russian in the Mordovia Republic of the Russian Federation. There are an estimated 2,000 speakers of Moksha, and it is dominant in the western part of Mordovia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Meadow Mari (ISO-639-3 mhr, also known as Eastern Mari) is one of the minor languages of Russia belonging to the Finno-Volgaic group of the Uralic family. After Russian, it is the second-most spoken language of the Mari El Republic in the Russian Federation, with an estimated 500,000 speakers globally. Meadow Mari is co-official with Hill Mari and Russian in the Mari El Republic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Hill Mari (ISO-639-3 mrj; also known as Western Mari) is one of the minor languages of Russia belonging to the Finno-Volgaic group of the Uralic family, with an estimated 30,000 speakers. 
It is closely related to Meadow Mari (ISO-639-3 mhr, also known as Eastern Mari), and Hill Mari is sometimes regarded as a dialect of Meadow Mari. Both languages are co-official with Russian in the Mari El Republic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Erzya (ISO-639-3 myv) is one of the two Mordvinic languages, the other being Moksha, which are traditionally spoken in scattered villages throughout the Volga Region and former Russian Empire, by well over a million speakers at the beginning of the 20th century, down to approximately half a million according to the 2010 census. Together with Moksha and Russian, it shares co-official status in the Mordovia Republic of the Russian Federation. 1 North S\u00e1mi (ISO-639-3 sme) belongs to the Samic branch of the Uralic languages. It is spoken in the northern parts of Norway, Sweden and Finland by approximately 24,700 people, and it has, alongside the national language, some official status in the municipalities and counties where it is spoken. North S\u00e1mi speakers are bilingual in their mother tongue and in their respective national language; many also speak the neighbouring official language. It is primarily an SVO language with limited NP-internal agreement. Of all the languages studied it has the most complex phonological processes.", "cite_spans": [ { "start": 438, "end": 439, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Udmurt (ISO-639-3 udm) is a Uralic language in the Permic subgroup spoken in the Volga area of the Russian Federation. It is co-official with Russian in the Republic of Udmurtia. As of 2010 it has around 340,000 native speakers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Grammatically, as with the other languages, it is agglutinative, with 15 noun cases, seven of which are locative cases. 
It has two numbers, singular and plural, and a series of possessive suffixes which decline for three persons and two numbers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "In terms of word order typology, the language is SOV, like many of the other Uralic languages of the Russian Federation. There are a number of grammars of the language in Russian and in English, e.g. Winkler (2001) . 3 Lemmatisers", "cite_spans": [ { "start": 200, "end": 214, "text": "Winkler (2001)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Languages", "sec_num": "2" }, { "text": "Giellatekno is a research group working on language technology for the S\u00e1mi languages. It is based in Troms\u00f8, Norway, and works primarily on rule-based language technology, particularly finite-state morphological descriptions and constraint grammars. In addition to the S\u00e1mi languages, their open-source infrastructure also contains software and data for many other Uralic languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Giellatekno transducers", "sec_num": "3.1" }, { "text": "In particular, Giellatekno has produced (Moshagen et al., 2014) finite-state transducers for morphological analysis of our chosen Uralic languages; we use these to extract lemmas from surface forms. When multiple lemmatisations are offered, the highest-weighted one is chosen. Unaccepted words are treated as already-lemmatised.", "cite_spans": [ { "start": 40, "end": 63, "text": "(Moshagen et al., 2014)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Giellatekno transducers", "sec_num": "3.1" }, { "text": "Morfessor (Virpioja et al., 2013 ) is a class of unsupervised and semi-supervised trainable surface segmentation algorithms; it attempts to find a minimal dictionary of morphemes. 
We use Wikipedia as training data for this model.", "cite_spans": [ { "start": 10, "end": 32, "text": "(Virpioja et al., 2013", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Morfessor", "sec_num": "3.2" }, { "text": "The stemmers are applied to every word in the corpus, and the resulting stem is looked up in a dictionary. This mimics a user attempting to look up a highlighted word in a dictionary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary task", "sec_num": "4.1" }, { "text": "Bilingual dictionaries are taken from Giellatekno, with definitions in Russian, Finnish, English, or German. (The actual definitions are not used, just the presence of an entry; we take the union over all dictionaries.) Dictionary sizes are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 250, "end": 257, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Dictionary task", "sec_num": "4.1" }, { "text": "As baseline we take the percentage of words in the corpus which are already in the dictionary. Both token and type counts are provided.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary task", "sec_num": "4.1" }, { "text": "We apply the lemmatisers to each word of the corpus, and measure the reduction in tokens and types. 
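The type-reduction measure can be made concrete; a minimal sketch follows, where the corpus and the suffix-stripping lemmatise function are toy stand-ins for the models evaluated here:

```python
def type_reduction(tokens, lemmatise):
    """Reduction factor in vocabulary size: the number of types after
    lemmatisation divided by the number of types before.
    1.0 means no reduction; 0.5 means the vocabulary was halved."""
    return len({lemmatise(t) for t in tokens}) / len(set(tokens))

# Toy data: a real experiment would use the Wikipedia test subcorpus.
corpus = ["cats", "cat", "cats", "dogs", "dog"]
strip_s = lambda w: w[:-1] if w.endswith("s") else w  # toy "lemmatiser"
```

Here `type_reduction(corpus, strip_s)` collapses the four types to two, giving a reduction factor of 0.5; the identity lemmatiser gives 1.0.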
A lower diversity of post-lemmatisation tokens or types indicates that the lemmatiser identifies more words as sharing the same lemma.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "4.2" }, { "text": "The distinction between token reduction and type reduction corresponds to a notion of \"user experience\": from the perspective of our tasks, correctly lemmatising a more frequent token is more important than correctly lemmatising a less frequent one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "4.2" }, { "text": "The effort expended in producing a model is a subjective and qualitative measure; we claim only to provide coarse objective and quantitative proxies for this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "4.3" }, { "text": "In the case of statistical methods, total effort (which would include the effort of developing the algorithm) is not important for our purposes: we are comparing the specialisation of a statistical method to a particular language with the development of a rule-based model. (Indeed, to fairly compare total effort of the technique, a completely different and perhaps more academic question, we would need to include the general development of rule-based methods.) Thus for statistical methods we include only the size of the corpus used to train the system. In our experiments, this corpus is Wikipedia, which we use (for better or worse) as a proxy for general availability of corpora in a given language on the internet.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "4.3" }, { "text": "For rule-based systems, we must find a measure of the effort. In this article our rule-based systems are all finite-state transducers, compiled from rulesets written by linguists. 
We choose two proxies for invested effort: the lines of code in all rulesets used in compiling the transducer, and the number of states of the transducer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "4.3" }, { "text": "The former counts complex and simple rules alike; the latter may provide insight into this difference. Conversely, a highly powerful rule system may create a great number of states while being simple to write; in this case, the ruleset is a better proxy than the number of states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "4.3" }, { "text": "Wikipedia dumps from 20181201 are used as source corpus; the corpus is split into tokens at word boundaries, and tokens which are not purely alphabetical are dropped. Corpus size in tokens, post-processing, is shown in Table 2 .", "cite_spans": [], "ref_spans": [ { "start": 218, "end": 225, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Wikipedia", "sec_num": "4.4" }, { "text": "Corpora were randomly divided into training (90% of the corpus) and testing subcorpora (10%); Morfessor models are produced with the training subcorpus, and lemmatiser evaluation is performed only on the test subcorpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Wikipedia", "sec_num": "4.4" }, { "text": "Our study involves treating the Uralic language as an independent variable; the six languages we consider here do not provide for a very large sample. We attempt to mitigate this by using both traditional and robust statistics; potential \"outliers\" can then be quantitatively identified. Thus for every mean and standard deviation seen, we will also present the median and the median absolute deviation. For reference:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "suppose that {x_i} (i = 1, \u2026, N) is a finite set of numbers. 
If {y_i} (i = 1, \u2026, N)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "is the same collection, but sorted (so that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "y_1 \u2264 y_2 \u2264 \u2026 \u2264 y_N), then the median is med{x_i} = y_{(N+1)/2} if N is odd, and med{x_i} = mean{y_{N/2}, y_{N/2+1}} if N is even", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "and the median absolute deviation (or for brevity, \"median deviation\") is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "mad{x_i} = med{ |x_i \u2212 med{x_i}| }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "When we quote means, we will write them as \u00b5 \u00b1 \u03c3 where \u00b5 is the mean and \u03c3 the standard deviation of the data. Similarly, for medians we will write m \u00b1 d where m is the median and d the median deviation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Data with potential outliers can be identified by comparing the median/median deviation and the mean/standard deviation: if they are significantly different (for example, the mean is much further than one standard deviation away from the median, or the median deviation is much smaller than the standard deviation), then attention is likely warranted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Results of the dictionary lookup are presented in Table 3 .", "cite_spans": [], "ref_spans": [ { "start": 50, "end": 57, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Dictionary lookup", "sec_num": "5.1" }, { "text": "Cursory inspection shows that while the Giellatekno model for Finnish slightly outperforms the Wikipedia Morfessor model, on 
average Morfessor provides not only the greatest improvement in token lookup performance (average/median improvement of 1.6/1.5 versus Giellatekno's 1.4/1.3), but also the more consistent performance (standard/median deviation of 0.3/0.1 versus 0.4/0.3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary lookup", "sec_num": "5.1" }, { "text": "We see some limitations in the Morfessor model when projecting to type lookup performance: the value of Morfessor on type lookup is essentially random, hurting as often and as much as it helps: mean and median improvement factors are both 1.0. Compare with Giellatekno, where improvement mean and median are at least one deviation above baseline. We suggest this disparity could be due to our Morfessor model over-stemming rare words, and successfully stemming common words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dictionary lookup", "sec_num": "5.1" }, { "text": "Vocabulary reduction results are presented in Table 4 .", "cite_spans": [], "ref_spans": [ { "start": 46, "end": 53, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "5.2" }, { "text": "Generally, we see that Morfessor reduces the vocabulary much more aggressively: on average Morfessor reduces the vocabulary to 9% of its original size, versus Giellatekno's 15%; here North S\u00e1mi and Finnish again stand out, with Morfessor reducing to 7.2% and 6.5% respectively. Compare with Hill Mari, where reduction is to a mere 11%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "5.2" }, { "text": "While the performance of Giellatekno is much less dramatic, we still notice that North S\u00e1mi and Hill Mari are more than a standard deviation, or more than two median deviations, away from the mean performance. 
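The robust-statistics check described in Section 5 (comparing mean/standard deviation against median/median deviation) is straightforward to reproduce; a sketch using only the Python standard library, with an illustrative threshold that is our assumption rather than the paper's:

```python
import statistics

def mad(xs):
    """Median absolute deviation: med{ |x_i - med{x_i}| }."""
    m = statistics.median(xs)
    return statistics.median([abs(x - m) for x in xs])

def attention_warranted(xs, factor=3):
    """Flag a sample whose standard deviation dwarfs its median
    deviation -- the heuristic outlier signal of Section 5.
    The factor of 3 is an illustrative choice, not from the paper."""
    return statistics.stdev(xs) > factor * mad(xs)

# One far-off value (the 30) inflates the standard deviation
# but barely moves the median deviation.
scores = [10, 11, 12, 11, 30]
```

For `scores`, the standard deviation is roughly 8.5 while the median deviation is 1, so the sample is flagged; a tight cluster such as `[10, 11, 12, 11, 12]` is not.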
Otherwise, the clustering is fairly tight, with all languages besides North S\u00e1mi and Hill Mari within one standard deviation and 1.5 median deviations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "5.2" }, { "text": "The analysis above shows that our data are affected by outlier models; which of the two measures is nominally more representative of the overall performance landscape could be determined by increasing the sample size, i.e., the number of languages surveyed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vocabulary reduction", "sec_num": "5.2" }, { "text": "The effort quantification is presented in Table 5 . Transducer source code complexity, measured in number of transducer states per line of source code, is presented in Table 6 . Note that comments are included as part of the \"source code\"; we consider, for example, explanation of how the code works to count as some of the effort behind the development of the transducer. Some immediate observations: among the Uralic languages studied here, Finnish is high-resource, but not overwhelmingly: North S\u00e1mi is comparable in transducer size (number of states), at nearly 2.5 times the median. While Meadow Mari actually has a comparable amount of transducer source code (1.8 million lines of code, about 80% the size of the Finnish transducer), its transducer code is of extremely low complexity; see Table 6 . 
Finnish Wikipedia is approximately 2.5 times larger than the next largest, Hill Mari, and nearly 7 times larger than the median; under our assumption, this would indicate that Finnish written material is also much more accessible on the internet than that of the other Uralic languages studied.", "cite_spans": [], "ref_spans": [ { "start": 42, "end": 49, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 168, "end": 175, "text": "Table 6", "ref_id": "TABREF5" }, { "start": 793, "end": 800, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Effort", "sec_num": "5.3" }, { "text": "Among Giellatekno models, the Hill Mari transducer is uniformly the lowest-resource of the Uralic languages studied, with very few lines of below-average-complexity code written; contrast this with the Morfessor models, where Hill Mari has a respectable 350,000 tokens. The lowest-resource Morfessor model is Udmurt, with only 7,000 tokens; the Udmurt Giellatekno model is also significantly below average in resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "5.3" }, { "text": "While North S\u00e1mi has slightly below-median transducer source size, it has extremely high (eight deviations above median) state complexity, with more than one state for every two lines of code.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effort", "sec_num": "5.3" }, { "text": "See Figures 1, 2 , and 3 for plots of effort normalised against Finnish versus performance. Plots are coloured by language and marked by the effort quantification method. Note that since \"lines of code\" and \"number of states\" are two different measures of the same model, their performance is the same. Table 3 : Results of the dictionary lookup task for no-op (NOOP), Morfessor (MF), and Giellatekno transducer (GT). A \"hit\" means a successful dictionary lookup. Percentage hits (tokens or types) is the percentage of tokens or types in the corpus for which the lemmatiser produces a dictionary word. The \"no-op\" (NOOP) lemmatiser takes the surface form as-is, and is used as baseline; the last two columns are percentage hits normalised by this.", "cite_spans": [], "ref_spans": [ { "start": 4, "end": 16, "text": "Figures 1, 2", "ref_id": null }, { "start": 271, "end": 278, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Analysis", "sec_num": "5.4" }, { "text": "Table 4 : Vocabulary reduction results for no-op (NOOP), Morfessor (MF), and Giellatekno (GT) lemmatisers. The final column gives the reduction factor in vocabulary size: reduction of 1 corresponds to no reduction performed, while 0.01 corresponds to a 100-fold reduction in vocabulary (average of 100 types per lemma).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Lemmatiser", "sec_num": null }, { "text": "Note that there is no constraint that the \"lemmas\" produced are dictionary words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Language Lemmatiser", "sec_num": null }, { "text": "Figure 1 indicates that for the dictionary lookup task by-token, Morfessor with Wikipedia is more effort-efficient (relative to Finnish) for Komi-Zyrian, Udmurt, North S\u00e1mi, Erzya, and Meadow Mari, and Giellatekno is more effort-efficient for Hill Mari. 
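The improvement factor plotted here (hit rate normalised by the no-op baseline, as in Table 3) can be sketched as follows; the toy corpus, dictionary, and suffix-stripping lemmatiser are hypothetical stand-ins:

```python
def hit_rate(tokens, dictionary, lemmatise=lambda w: w):
    """Fraction of tokens whose (lemmatised) form has a dictionary entry.
    The default identity lemmatise is the no-op (NOOP) baseline."""
    return sum(lemmatise(t) in dictionary for t in tokens) / len(tokens)

def improvement_factor(tokens, dictionary, lemmatise):
    """Hit rate relative to the no-op baseline; assumes the baseline
    hit rate is nonzero."""
    return hit_rate(tokens, dictionary, lemmatise) / hit_rate(tokens, dictionary)

# Toy example: the baseline finds 2 of 4 tokens, the lemmatiser all 4.
dictionary = {"cat", "dog"}
tokens = ["cat", "cats", "dogs", "dog"]
strip_s = lambda w: w[:-1] if w.endswith("s") else w
```

With these toy inputs the baseline hit rate is 0.5 and the lemmatised hit rate is 1.0, so the improvement factor is 2.0.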
The remaining languages are Moksha, for which performance improvement scales with effort independently of the model, and Finnish.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 86, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Language Lemmatiser", "sec_num": null }, { "text": "Since we normalise effort against Finnish, we can only observe that the Finnish Giellatekno model performs slightly better than the Finnish Wikipedia Morfessor model; efficiency claims cannot be made. Figure 2 indicates that for the dictionary lookup task by-type, Morfessor with Wikipedia is more effort-efficient (relative to Finnish) for Komi-Zyrian only; Giellatekno remains more effort-efficient for Hill Mari. Meanwhile, Udmurt, North S\u00e1mi, Erzya, and Meadow Mari join Moksha in improvement scaling with effort; the spread in slopes (the rate at which performance improves as effort is increased) is, however, quite large. Figure 3 shows that, as with lookup performance for tokens, Morfessor dominates vocabulary reduction efficiency, with only Hill Mari scaling with relative effort.", "cite_spans": [], "ref_spans": [ { "start": 201, "end": 209, "text": "Figure 2", "ref_id": null }, { "start": 629, "end": 637, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Language Lemmatiser", "sec_num": null }, { "text": "There are many interesting things to notice in the effort-performance analysis.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Discussion", "sec_num": "6" }, { "text": "Focusing just on the dictionary task, we find that compared against the same technology for Finnish, the Giellatekno North S\u00e1mi (sme) transducer has very high performance for a relatively small ruleset, due to high rule complexity (the number of states is not correspondingly low). 
It is possible that North S\u00e1mi is simply easy to lemmatise, as Morfessor seems to do very well with a small corpus.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Discussion", "sec_num": "6" }, { "text": "Hill Mari (mrj) shows predictable performance: relative to Finnish, a small increase in resources (going from 20% or 30% of Finnish resources for the Giellatekno transducer to 40% resources for the Wikipedia corpus) gives a modest increase in performance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Discussion", "sec_num": "6" }, { "text": "Overall, we see that percent improvement in tasks scales with effort (relative to Finnish) in the type-lookup task; in the token-lookup and vocabulary reduction tasks, performance improvement favours Morfessor. (That is, the Morfessor model has a higher improvement-to-resource ratio, with resources relative to Finnish.) This might be explained by the dramatic spread in Wikipedia corpus sizes used in the Morfessor models: median corpus size is 1.5% \u00b1 0.5% the size of Finnish. Thus, improvement of 5% of the Morfessor model is increasing the nominal effort (kilotokens) by a factor of four, for the median corpus; compare with Giellatekno, where the median model is 20% or 40% the size of the corresponding Finnish model, depending on the metric used. See the following section for potential avenues to control for this.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion 6.1 Discussion", "sec_num": "6" }, { "text": "In the dictionary task, hits/words is lower than unique hits/words (see Section 5.1); this indicates that mislemmatised words are more frequent. Since irregular words are typically high-frequency, we might hypothesise that filtering these would close this gap. 
If not, it might point out areas for improvement in the lemmatisation algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "We would like to also try other methods of lemmatising. One of the problems with the finite-state transducers is that they have limited capacity for lemmatising words which are not found in the lexicon. It is possible to use guesser techniques such as those described in Lind\u00e9n (2009) , but the accuracy is substantially lower than for hand-written entries. We would like to approach the problem as in Silfverberg and Tyers (2018) and train a sequence-to-sequence LSTM to perform lemmatisation using the finite-state transducer to produce forms for the training process.", "cite_spans": [ { "start": 271, "end": 284, "text": "Lind\u00e9n (2009)", "ref_id": "BIBREF2" }, { "start": 402, "end": 430, "text": "Silfverberg and Tyers (2018)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "There are other statistical methods, in particular bytepair encoding and adaptor grammars (Johnson et al., 2006) , which should be added to the comparison, and addition of further languages should be straightforward.", "cite_spans": [ { "start": 90, "end": 112, "text": "(Johnson et al., 2006)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "A more refined understanding of the relationship between size of corpus and Morfessor would give a richer dataset; this could be achieved by decimating the Wikipedia corpus. 
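The proposed decimation could be realised as geometrically subsampled training corpora; a minimal sketch (the halving scheme and the fixed seed are our assumptions for illustration, not the paper's procedure):

```python
import random

def decimation_series(tokens, steps=4, seed=0):
    """Return geometrically shrinking subcorpora (1, 1/2, 1/4, ... of the
    corpus) for plotting training-corpus size against lemmatiser
    performance; each subcorpus would train its own Morfessor model."""
    rng = random.Random(seed)  # fixed seed so the series is reproducible
    shuffled = list(tokens)
    rng.shuffle(shuffled)
    return [shuffled[: max(1, len(shuffled) // 2 ** k)] for k in range(steps)]

# Illustrative corpus of 1000 placeholder tokens.
corpus = ["tok%d" % i for i in range(1000)]
subcorpora = decimation_series(corpus)
sizes = [len(s) for s in subcorpora]  # 1000, 500, 250, 125
```

Shuffling before truncation keeps each subcorpus a uniform sample rather than a prefix of the Wikipedia dump, which would otherwise bias small subcorpora toward a few articles.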
For truly low-resource languages, additional corpora may be necessary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "Similar refinement could be produced for the Giellatekno transducers using their version history: older versions of the transducers have had less work, and presumably have less source code. A dedicated researcher could compare various editions of the same transducer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "Cross-validation (in the case of Morfessor) and using multiple testing subcorpora would give some idea of the confidence of our performance measurements at the language-level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "Another interesting analysis, which we do not have the space to perform here, would be to normalise performance P , along the model axis m, for example for lan- Figure 1 : Improvement factor in hit rate in dictionary lookup (by tokens) (see Section 4.1; higher is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). In general, more effort-efficient models will appear to the upper-left of less effort-efficient models. Figure 2 : Improvement factor in hit rate in dictionary lookup (by types) (see Section 4.1; higher is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). In general, more effort-efficient models will appear to the upper-left of less effort-efficient models. Figure 3 : Vocabulary reduction performance in types (see Section 4.2; lower is better) vs. effort relative to Finnish (see Section 4.3; higher is more effort). 
In general, more effort-efficient models will appear to the lower-left of less effort-efficient models.", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 169, "text": "Figure 1", "ref_id": null }, { "start": 449, "end": 457, "text": "Figure 2", "ref_id": null }, { "start": 736, "end": 744, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "guage xxx (normalising against Giellatekno model performance):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "P *", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "xxx,m = P xxx,m \u2022 P fin,GT P fin,m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "This measure, P * , would always be fixed to 1.0 for Finnish, and would partially control for languageindependent performance variation between models. This would then allow study of the distribution over languages of marginal performance improvement with effort.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future work", "sec_num": "6.2" }, { "text": "https://efo.revues.org/1829", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Adaptor grammars: A framework for specifying compositional nonparametric bayesian models", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2006, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Gold- water. 2006. 
Adaptor grammars: A framework for specifying compositional nonparametric bayesian models. In Advances in neural information processing systems, pages 641-648.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using finite state transducers for making efficient reading comprehension dictionaries", "authors": [ { "first": "Ryan", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Lene", "middle": [], "last": "Antonsen", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Trosterud", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 19th Nordic Conference of Computational Linguistics", "volume": "85", "issue": "", "pages": "59--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ryan Johnson, Lene Antonsen, and Trond Trosterud. 2013. Using finite state transducers for making effi- cient reading comprehension dictionaries. In Proceed- ings of the 19th Nordic Conference of Computational Linguistics (NODALIDA 2013), 85, pages 59-71.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Guessers for finite-state transducer lexicons", "authors": [ { "first": "", "middle": [], "last": "Krister Lind\u00e9n", "suffix": "" } ], "year": 2009, "venue": "Computational Linguistics and Intelligent Text Processing 10th International Conference", "volume": "5449", "issue": "", "pages": "158--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Krister Lind\u00e9n. 2009. Guessers for finite-state trans- ducer lexicons. 
Computational Linguistics and Intel- ligent Text Processing 10th International Conference, CICLing 2009, 5449:158-169.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Open-source infrastructures for collaborative work on under-resourced languages", "authors": [ { "first": "Jack", "middle": [], "last": "Sjur N\u00f8rsteb\u00f8 Moshagen", "suffix": "" }, { "first": "Tommi", "middle": [], "last": "Rueter", "suffix": "" }, { "first": "Trond", "middle": [], "last": "Prinen", "suffix": "" }, { "first": "Francis", "middle": [ "Morton" ], "last": "Trosterud", "suffix": "" }, { "first": "", "middle": [], "last": "Tyers", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sjur N\u00f8rsteb\u00f8 Moshagen, Jack Rueter, Tommi Prinen, Trond Trosterud, and Francis Morton Tyers. 2014. Open-source infrastructures for collaborative work on under-resourced languages.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Datadriven morphological analysis for Uralic languages", "authors": [ { "first": "Miikka", "middle": [], "last": "Silfverberg", "suffix": "" }, { "first": "Francis", "middle": [ "M" ], "last": "Tyers", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 5th International Workshop on Computational Linguistics for the Uralic Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Miikka Silfverberg and Francis M. Tyers. 2018. Data- driven morphological analysis for Uralic languages. 
In Proceedings of the 5th International Workshop on Computational Linguistics for the Uralic Languages (IWCLUL 2018).", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Morfessor 2.0: Python Implementation and Extensions for Morfessor Baseline", "authors": [ { "first": "Sami", "middle": [], "last": "Virpioja", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Smit", "suffix": "" }, { "first": "Stig-Arne", "middle": [], "last": "Gr\u00f6nroos", "suffix": "" }, { "first": ",", "middle": [], "last": "", "suffix": "" }, { "first": "Mikko", "middle": [], "last": "Kurimo", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sami Virpioja, Peter Smit, Stig-Arne Gr\u00f6nroos, , and Mikko Kurimo. 2013. Morfessor 2.0: Python Im- plementation and Extensions for Morfessor Baseline. Technical report, Aalto University.", "links": null } }, "ref_entries": { "TABREF0": { "content": "
Language  Lexemes
fin       19012
kpv       43362
mdf       28953
mhr       53134
mrj       6052
myv       15401
sme       17605
udm       19639
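These dictionaries are the lookup targets for the evaluation task. A minimal sketch of the two hit-rate metrics (by token and by type); the toy data and the set-membership lookup are illustrative, not the paper's pipeline:

```python
from collections import Counter

def hit_rates(lemmas, dictionary):
    """Dictionary-lookup hit rate over all tokens and over unique types.

    `lemmas` holds one lemmatiser output per corpus token; a "hit" is a
    lemma found in the bilingual dictionary.
    """
    counts = Counter(lemmas)
    token_hits = sum(n for lemma, n in counts.items() if lemma in dictionary)
    type_hits = sum(1 for lemma in counts if lemma in dictionary)
    return token_hits / sum(counts.values()), type_hits / len(counts)

# A frequent mislemmatised form ("kalat" here) drags the by-token rate
# below the by-type rate.
by_token, by_type = hit_rates(
    ["kala", "kala", "kalat", "kalat", "kalat", "talo"],
    {"kala", "talo"})
```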
", "text": "Giellatekno bilingual dictionary sizes, in words.", "num": null, "type_str": "table", "html": null }, "TABREF1": { "content": "
Language  Tokens  Types
fin       897867  276761
mrj       352521  51420
mhr       15159   6468
myv       11177   5107
sme       9442    6552
udm       7503    4308
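The token and type counts above can be reproduced by tokenising and deduplicating the corpus; a minimal sketch, assuming "alphabetic words" means maximal runs of Unicode letters (the paper's exact tokenisation is not specified):

```python
import re

def corpus_size(text):
    """Token and type counts over alphabetic words.

    Splitting on anything that is not a Unicode letter is one plausible
    reading of "alphabetic words"; digits and punctuation are dropped.
    """
    tokens = [t.lower() for t in re.findall(r"[^\W\d_]+", text)]
    return len(tokens), len(set(tokens))

tokens, types = corpus_size("Kala ui. Kalat uivat vedessä, kala ui.")
# 7 tokens, 5 types: {kala, ui, kalat, uivat, vedessä}
```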
", "text": "Wikipedia corpus size by language, in alphabetic words.", "num": null, "type_str": "table", "html": null }, "TABREF4": { "content": "
: Effort quantification; the last column is normalised against Finnish. The group 'Mloc' is the number of lines of code, in millions, in the Giellatekno transducer source, including lexc, xfst, regular-expression, constraint-grammar, and twol code. The group 'kst' is the number of states, in thousands, in the Giellatekno transducer, and 'ktok' is the number of tokens, in thousands, in the Morfessor training corpus.
Lang.  Model  Effort  Quan.       % fin
fin    GT     kst     440         100
kpv    GT     kst     150         35
mdf    GT     kst     60          13
mhr    GT     kst     80          17
mrj    GT     kst     50          11
myv    GT     kst     110         25
sme    GT     kst     540         122
udm    GT     kst     60          15
avg.   GT     kst     190 ± 180   40 ± 40
med.   GT     kst     90 ± 40     20 ± 9
fin    GT     Mloc    2.3         100.0
kpv    GT     Mloc    0.7         30.0
mdf    GT     Mloc    0.9         40.0
mhr    GT     Mloc    1.8         80.0
mrj    GT     Mloc    0.5         20.0
myv    GT     Mloc    1.2         50.0
sme    GT     Mloc    0.9         40.0
udm    GT     Mloc    0.5         20.0
avg.   GT     Mloc    1.1 ± 0.6   50 ± 30
med.   GT     Mloc    0.9 ± 0.3   40 ± 10
fin    MORF   ktok    898.0       100.0
kpv    MORF   ktok    11.0        1.2
mdf    MORF   ktok    64.0        7.1
mhr    MORF   ktok    15.0        1.7
mrj    MORF   ktok    353.0       39.3
myv    MORF   ktok    11.0        1.2
sme    MORF   ktok    9.0         1.1
udm    MORF   ktok    7.0         0.8
avg.
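The '% fin' column is each language's effort quantity divided by the Finnish value for the same metric; a minimal sketch using the rounded Mloc figures from the table (so recomputed percentages can differ slightly from the published, separately rounded column):

```python
# Lines of Giellatekno transducer source, in millions (rounded table values).
mloc = {"fin": 2.3, "kpv": 0.7, "mdf": 0.9, "mhr": 1.8,
        "mrj": 0.5, "myv": 1.2, "sme": 0.9, "udm": 0.5}

def relative_effort(quantities, reference="fin"):
    """Normalise each effort quantity against the reference language."""
    ref = quantities[reference]
    return {lang: 100.0 * q / ref for lang, q in quantities.items()}

pct = relative_effort(mloc)  # e.g. kpv ≈ 30.4, mhr ≈ 78.3
```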
", "text": "", "num": null, "type_str": "table", "html": null }, "TABREF5": { "content": "
Lang.  LoC (M)    States (k)  Complex.
fin    2.3        440.0       0.19
kpv    0.7        150.0       0.21
mdf    0.9        60.0        0.06
mhr    1.8        80.0        0.04
mrj    0.5        50.0        0.09
myv    1.2        110.0       0.09
sme    0.9        540.0       0.63
udm    0.5        60.0        0.14
avg.   1.1 ± 0.6  200 ± 200   0.2 ± 0.2
med.
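The complexity column is compiled states per line of source code; a sketch recomputing it from the rounded table values (small discrepancies with the published column, e.g. for sme, come from the table's own rounding):

```python
# (lines of code in millions, compiled states in thousands),
# taken from the rounded table values for three languages.
data = {"fin": (2.3, 440.0), "kpv": (0.7, 150.0), "sme": (0.9, 540.0)}

def complexity(loc_m, states_k):
    """States per line of transducer source code."""
    return (states_k * 1_000) / (loc_m * 1_000_000)

ratios = {lang: round(complexity(*v), 2) for lang, v in data.items()}
```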
", "text": "Transducer source complexity, in number of states per line of transducer source code. The column \"LoC (M)\" gives the number of lines of source code, in millions, and \"States (k)\" the size, in thousands of states of the compiled transducer.", "num": null, "type_str": "table", "html": null } } } }