{ "paper_id": "I08-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:41:18.988253Z" }, "title": "A Hybrid Approach to the Induction of Underlying Morphology", "authors": [ { "first": "Michael", "middle": [], "last": "Tepper", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington Seattle", "location": { "postCode": "98195", "region": "WA" } }, "email": "mtepper@u.washington.edu" }, { "first": "Fei", "middle": [], "last": "Xia", "suffix": "", "affiliation": {}, "email": "fxia@u.washington.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a technique for refining a baseline segmentation and generating a plausible underlying morpheme segmentation by integrating handwritten rewrite rules into an existing state-of-the-art unsupervised morphological induction procedure. Performance on measures which consider surface-boundary accuracy and underlying morpheme consistency indicates this technique leads to improvements over baseline segmentations for English and Turkish word lists.", "pdf_parse": { "paper_id": "I08-1003", "_pdf_hash": "", "abstract": [ { "text": "We present a technique for refining a baseline segmentation and generating a plausible underlying morpheme segmentation by integrating handwritten rewrite rules into an existing state-of-the-art unsupervised morphological induction procedure. Performance on measures which consider surface-boundary accuracy and underlying morpheme consistency indicates this technique leads to improvements over baseline segmentations for English and Turkish word lists.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The primary goal of unsupervised morphological induction (UMI) is the simultaneous induction of a reasonable morphological lexicon as well as an optimal segmentation of a corpus of words, given that lexicon. 
The majority of existing approaches employ statistical modeling towards this goal, but they differ in how they learn or refine the morphological lexicon. While some approaches involve lexical priors, either internally motivated or motivated by the minimum description length (MDL) criterion, others utilize heuristics. Pure maximum likelihood (ML) approaches may refine the lexicon with heuristics in lieu of explicit priors (Creutz and Lagus, 2004), or make no categorical refinements at all concerning which morphs are included, only probabilistic refinements through a hierarchical EM procedure (Peng and Schuurmans, 2001). Approaches that optimize the lexicon with respect to priors come in several flavors. There are basic maximum a posteriori (MAP) approaches that try to maximize the probability of the lexicon against linguistically motivated priors (Deligne and Bimbot, 1997; Snover and Brent, 2001; Creutz and Lagus, 2005). As an alternative to MAP, MDL approaches use their own set of priors, motivated by complexity theory. 
These studies attempt to minimize lexicon complexity (bit-length in crude MDL) while simultaneously minimizing the complexity (by maximizing the probability) of the corpus given the lexicon (de Marcken, 1996; Goldsmith, 2001; Creutz and Lagus, 2002).", "cite_spans": [ { "start": 639, "end": 663, "text": "(Creutz and Lagus, 2004)", "ref_id": "BIBREF4" }, { "start": 814, "end": 841, "text": "(Peng and Schuurmans, 2001)", "ref_id": "BIBREF15" }, { "start": 1071, "end": 1097, "text": "(Deligne and Bimbot, 1997;", "ref_id": "BIBREF9" }, { "start": 1098, "end": 1120, "text": "Snover and Brent, 2001", "ref_id": "BIBREF20" }, { "start": 1123, "end": 1146, "text": "Creutz and Lagus, 2005)", "ref_id": "BIBREF5" }, { "start": 1437, "end": 1455, "text": "(de Marcken, 1996;", "ref_id": "BIBREF8" }, { "start": 1456, "end": 1472, "text": "Goldsmith, 2001;", "ref_id": "BIBREF11" }, { "start": 1473, "end": 1496, "text": "Creutz and Lagus, 2002)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Morphological Induction", "sec_num": "1.1" }, { "text": "Many of the approaches mentioned above utilize a simplistic unigram model of morphology to produce the segmentation of the corpus given the lexicon. Substrings in the lexicon are proposed as morphs within a word based on frequency alone, independently of phrase-, word-, and morph-surroundings (de Marcken, 1996; Peng and Schuurmans, 2001; Creutz and Lagus, 2002). There are many approaches, however, which further constrain the segmentation procedure. The work by Creutz and Lagus (2004; 2005) constrains segmentation by accounting for morphotactics, first assigning morphotactic categories (prefix, suffix, and stem) to baseline morphs, and then seeding and refining an HMM using those category assignments. Other, more structured models include Goldsmith's (2001) work, which, instead of inducing morphemes, induces morphological signatures like {\u00f8, s, ed, ing} for English regular verbs. 
Some techniques constrain possible analyses by employing approximations of morphological meaning or usage to prevent false derivations (like singed = sing + ed). There is work by Schone and Jurafsky (2000; 2001) where meaning is proxied by word- and morph-context, condensed via LSA. Yarowsky and Wicentowski (2000) and Yarowsky et al. (2001) use expectations on the relative frequency of aligned inflected-word/stem pairs, as well as POS context features, both of which approximate some sort of meaning.", "cite_spans": [ { "start": 292, "end": 310, "text": "(de Marcken, 1996;", "ref_id": "BIBREF8" }, { "start": 311, "end": 337, "text": "Peng and Schuurmans, 2001;", "ref_id": "BIBREF15" }, { "start": 338, "end": 361, "text": "Creutz and Lagus, 2002)", "ref_id": "BIBREF3" }, { "start": 464, "end": 487, "text": "Creutz and Lagus (2004;", "ref_id": "BIBREF4" }, { "start": 488, "end": 493, "text": "2005;", "ref_id": "BIBREF5" }, { "start": 745, "end": 763, "text": "Goldsmith's (2001)", "ref_id": "BIBREF11" }, { "start": 1069, "end": 1095, "text": "Schone and Jurafsky (2000;", "ref_id": null }, { "start": 1096, "end": 1101, "text": "2001)", "ref_id": "BIBREF11" }, { "start": 1172, "end": 1203, "text": "Yarowsky and Wicentowski (2000)", "ref_id": "BIBREF22" }, { "start": 1208, "end": 1230, "text": "Yarowsky et al. (2001)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Morphological Induction", "sec_num": "1.1" }, { "text": "Allomorphy, or allomorphic variation, is the process by which a morpheme varies (orthographically or phonologically) in particular contexts, as constrained by a grammar. To our knowledge, there is only a handful of work within UMI attempting to integrate allomorphy into morpheme discovery. 
A notable approach is the Wordframe model developed by Wicentowski (2002), which performs weighted edits on root forms, given context, as part of a larger similarity-alignment model for discovering inflection-root pairs.", "cite_spans": [ { "start": 346, "end": 364, "text": "Wicentowski (2002)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Allomorphy in UMI", "sec_num": "1.2" }, { "text": "Morphological complexity is fixed by a template; the original was designed for inflectional morphologies and thus constrained to finding an optional affix on either side of a stem. Such a template would be difficult to design for agglutinative morphologies like Turkish or Finnish, where stems are regularly inflected by chains of affixes. Still, it can be extended. A notable recent extension accounts for phenomena like infixation and reduplication in Filipino (Cheng and See, 2006).", "cite_spans": [ { "start": 463, "end": 484, "text": "(Cheng and See, 2006)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Allomorphy in UMI", "sec_num": "1.2" }, { "text": "In terms of allomorphy, the approach succeeds at generalizing allomorphic patterns, both stem-internally and at points of affixation. 
A major drawback is that, so far, it does not account for affix allomorphy involving character replacement, that is, anything beyond point-of-affixation epentheses or deletions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Allomorphy in UMI", "sec_num": "1.2" }, { "text": "Our approach aims to integrate a rule-based component consisting of hand-written rewrite rules into an otherwise unsupervised morphological induction procedure in order to refine the segmentations it produces.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Our Approach", "sec_num": "1.3" }, { "text": "The major contribution of this work is a rule-based component which enables simple encoding of context-sensitive rewrite rules for the analysis of induced morphs into plausible underlying morphemes. A rule has the general form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Sensitive Rewrite Rules", "sec_num": "1.3.1" }, { "text": "\u03b1 (underlying) \u2192 \u03b2 (surface) / \u03b3 (left context) _ \u03b4 (right context) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Sensitive Rewrite Rules", "sec_num": "1.3.1" }, { "text": "This is an SPE-style rewrite rule, part of the formal apparatus introduced by Chomsky and Halle (1968) to account for regularities in phonology. Here we use it to describe orthographic patterns. Mapping morphemes to underlying forms with context-sensitive rewrite rules allows us to peer through the fragmentation created by allomorphic variation. Our experiments will show that this has the effect of allowing for more unified, consistent morphemes while simultaneously making surface boundaries more transparent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Sensitive Rewrite Rules", "sec_num": "1.3.1" }, { "text": "For example, take the English multipurpose inflectional suffix \u2022s, normally written as \u2022s, but as \u2022es after sibilants (s, sh, ch, . . . 
). We can write the following SPE-style rule to account for its variation: \u00f8 \u2192 e / [+SIB] + _ s. This rule says, \"Insert an e (map nothing to e) following a character marked as a sibilant (+SIB) and a morphological boundary (+), at the focus position (_), immediately preceding an s.\" In short, it enables the mapping of the underlying form \u2022s to \u2022es by inserting an e before s where appropriate. When this rule is reversed to produce underlying analyses, the \u2022es variant in such words as glasses, matches, swishes, and buzzes can be identified with the \u2022s variant in words like plots, sits, quakes, and nips.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Context-Sensitive Rewrite Rules", "sec_num": "1.3.1" }, { "text": "Before the start of the procedure, there is a preprocessing step to derive an initial segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Procedure", "sec_num": "1.3.2" }, { "text": "This segmentation is fed to the EM Stage, the goal of which is to find the maximum-probability segmentation of a wordlist into underlying morphemes. First, analyses of initial segments are produced by rule. Then, their frequency is used to determine their likelihood as underlying morphemes. 
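As an illustration only (not the authors' implementation), the e-insertion rule above and its reversal can be mimicked with regular expressions; the sibilant class [+SIB] is simplified here to s, z, x, sh, ch:

```python
import re

# Simplified sibilant class for illustration; the paper's rules use a
# richer [+SIB] orthographic feature.
SIB = r"(?:sh|ch|s|z|x)"

def insert_e(segmented):
    """Surface direction: insert e between a sibilant-final morph and +s."""
    return re.sub(rf"({SIB})\+s\b", r"\1+es", segmented)

def analyze(segmented):
    """Reverse direction: strip the epenthetic e to recover underlying +s."""
    return re.sub(rf"({SIB})\+es\b", r"\1+s", segmented)

print(insert_e("glass+s"))   # glass+es
print(analyze("match+es"))   # match+s
print(insert_e("plot+s"))    # plot+s (no sibilant, rule does not fire)
```

Reversing the same pattern is what lets the \u2022es variant (glasses, matches) be identified with the \u2022s variant (plots, sits).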
Finally, the probability of a segmentation into underlying morphemes is maximized.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Procedure", "sec_num": "1.3.2" }, { "text": "The output segmentation feeds into the Split Stage, where heuristics are used to split large, high-frequency segments that fail to break into smaller underlying morphemes during the EM algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of Procedure", "sec_num": "1.3.2" }, { "text": "A flowchart of the procedure is given in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 41, "end": 49, "text": "Figure 1", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Procedure", "sec_num": "2" }, { "text": "Preprocessing. We use the Categories-MAP algorithm developed by Creutz and Lagus (2005) to produce an initial morphological segmentation. Here, a segmentation is optimized by a maximum a posteriori estimate given priors on the length, frequency, and usage of morphs stored in the model. Their procedure begins with morphological tags indicating basic morphotactics (prefix, stem, suffix, noise) being assigned heuristically to a baseline segmentation. 
That tag assignment is then used to seed an HMM.", "cite_spans": [ { "start": 63, "end": 86, "text": "Creutz and Lagus (2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Procedure", "sec_num": "2" }, { "text": "Optimal segmentation of a word is simultaneously the best tag and morph sequence given that word. The contents of the model are optimized with respect to length, frequency, and usage priors during splitting and joining phases. 
The final output is a tagged segmentation of the input word-list.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Rewrite Rules", "sec_num": null }, { "text": "The model we train is a modified version of the morphological HMM from the work of Creutz and Lagus (2004-2006), where a word w consists of a sequence of morphs generated by a morphological-category tag sequence. The difference between their HMM and ours is that theirs emits surface morphs, while ours emits underlying morphemes. Morphemes may either be analyses proposed by rule or surface morphs acting as morphemes. We do not modify the tags Creutz and Lagus use (prefix, stem, suffix, and noise).", "cite_spans": [ { "start": 83, "end": 100, "text": "Creutz and Lagus (2004-2006)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "We proceed by EM, initialized by the preprocessed segmentation. Rule-generated underlying analyses are produced (Step 1) and used to estimate the emission probability P(u_i|t_i) and transition probability P(t_i|t_{i-1}) (Step 2). In successive E-steps, Steps 1 and 2 are repeated. The M-step (Step 3) involves finding the maximum-probability decoding of each word according to Eq (6), i.e. the maximum-probability tag and morpheme sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Step 1 - Derive Underlying Analyses. In this step, handwritten context-sensitive rewrite rules derive context-relevant analyses for morphs in the preprocessed segmentation. (A morph is a linguistic morpheme as it occurs in production, i.e. as it occurs in a surface word.) These analyses are produced by a set of ordered rules that propose deletions, insertions, or substitutions when triggered by the proper characters around a segmentation boundary. A rule applies wherever contextually triggered, from left to right, and may apply more than once to the same word. To prevent the runaway application of certain rules, a rule may not apply to its own output. The result of applying a rule is a (possibly spelling-changed) segmented word, which is fed to the next rule. This enables multi-step analyses by using rules designed specifically to apply to the outputs of other rules. See Figure 2 for a small example.", "cite_spans": [], "ref_spans": [ { "start": 885, "end": 893, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Step 2 - Estimate HMM Probabilities. Transition probabilities P(t_i|t_{i-1}) are estimated by maximum likelihood, given a tagged input segmentation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Emission probabilities P(u_i|t_i) are also estimated by maximum likelihood, but the situation is slightly more complex; the probability of a morpheme u_i is estimated according to frequencies of association (co-indexation) with surface morphs s_i and tags t_i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Furthermore, an underlying morpheme can either be identical to its associated surface morph s_i when no rules apply, or be a rule-generated analysis. For the sake of clarity, we call the former u_i and the latter u\u2032_i, as defined below: Figure 2: Underlying analyses for a segmentation are generated by passing it through context-sensitive rewrite rules. Rules apply to some morphs (e.g., citi \u2192 city) but not to others (e.g., glass \u2192 glass). 
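The ordered application regime just described (left-to-right application, the output of one rule feeding the next, and no rule reapplying to its own output) can be sketched with a toy two-rule cascade; the rules and forms below are invented for illustration and are not the paper's actual rule set:

```python
import re

# Each rule is (pattern, replacement). A single re.sub pass per rule gives
# left-to-right application without the rule seeing its own output; the
# result is then fed to the next rule in order, enabling multi-step analyses.
RULES = [
    (r"i\+e", "y+e"),   # e.g. citi+es -> city+es (undo y -> i)
    (r"y\+es", "y+s"),  # fires only on the previous rule's output
]

def analyze(segmented_word):
    for pattern, repl in RULES:
        segmented_word = re.sub(pattern, repl, segmented_word)
    return segmented_word

print(analyze("citi+es"))   # city+s  (two-step analysis)
print(analyze("glass+es"))  # glass+es (no toy rule fires)
```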
The probability of u_i given tag t_i is calculated by summing, over all allomorphs s of u_i, the probability that u_i realizes s in the context of tag t_i:", "cite_spans": [], "ref_spans": [ { "start": 240, "end": 248, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "u_i (= s_i) if no rule applies; u\u2032_i otherwise", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "P(u_i|t_i) = \u2211_{s \u2208 allom.-of(u_i)} P(u_i, s|t_i) (3) = \u2211_{s \u2208 allom.-of(u_i)} P(u_i|s, t_i) P(s|t_i) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Both Eq (3) and Eq (4) are trivial to estimate by counting on our input from Step 1 (see Figure 2). We show (4) because it has the term P(u_i|s, t_i), which may be used for thresholding and discounting terms of the sum where u_i is rarely associated with a particular allomorph and tag. In the future, such discounting may be useful to filter out noise generated by noisy or permissive rules. 
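Eqs (3) and (4) amount to relative-frequency counting over the co-indexed (morpheme, surface morph, tag) triples produced in Step 1. A minimal sketch with invented toy counts (not the paper's data):

```python
from collections import Counter

# Toy co-indexed triples (underlying morpheme, surface allomorph, tag),
# as would be produced by rule analysis in Step 1.
triples = [
    ("s", "s", "SUF"), ("s", "s", "SUF"), ("s", "es", "SUF"),
    ("ed", "ed", "SUF"),
]

joint = Counter(triples)                       # c(u, s, t)
tag_total = Counter(t for _, _, t in triples)  # c(t)

def p_u_given_t(u, t):
    # Eq (3): P(u|t) = sum over allomorphs s of P(u, s|t) = sum_s c(u,s,t) / c(t)
    return sum(c for (u2, s, t2), c in joint.items() if u2 == u and t2 == t) / tag_total[t]

print(p_u_given_t("s", "SUF"))   # 0.75 -- 's' accounts for 3 of 4 SUF emissions
print(p_u_given_t("ed", "SUF"))  # 0.25
```

Counting the unified morpheme "s" across both of its allomorphs ("s" and "es") is what concentrates probability mass on a single underlying form.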
So far, this type of discounting has not improved results.", "cite_spans": [], "ref_spans": [ { "start": 91, "end": 100, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Step 3 - Re-segment Word List. Next we re-segment the word list into underlying morphemes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Searching for the best breakdown of a word w into morpheme sequence u and tag sequence t, we maximize the probability given by the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w, u, t) = P(w|u, t) P(u, t) = P(w|u, t) P(u|t) P(t)", "eq_num": "(5)" } ], "section": "EM Stage", "sec_num": "2.1" }, { "text": "To simplify, we assume that P(w|u, t) is equal to one. With this assumption in mind, Eq (5) reduces to P(u|t) P(t). With independence assumptions and a local time horizon, we estimate:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "argmax_{u,t} P(u|t) P(t) \u2248 argmax_{u,t} \u220f_{i=1}^{n} P(u_i|t_i) P(t_i|t_{i-1}) (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "In other words, we make the assumption that a sequence of underlying morphemes and tags corresponds to just one word. This assumption may need revision in cases where morphemes can optionally undergo the types of spelling changes we are trying to encode; this has not been the case for the languages under investigation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "The search for the maximum-probability tag and morph sequence in Eq (6) is carried out by a modified version of the Viterbi algorithm. 
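A generic first-order Viterbi decoder in the spirit of Eq (6); the tag set matches the paper's, but the transition and emission numbers below are invented, and the authors' modified version additionally searches over alternative segmentations of each word:

```python
import math

tags = ["PRE", "STM", "SUF"]

# Invented probabilities for illustration; unlisted entries get a floor value.
trans = {("<s>", "STM"): 0.7, ("<s>", "PRE"): 0.3,
         ("PRE", "STM"): 0.9, ("PRE", "PRE"): 0.1,
         ("STM", "SUF"): 0.8, ("STM", "STM"): 0.1,
         ("SUF", "SUF"): 0.5}
emit = {("STM", "quake"): 0.2, ("PRE", "quake"): 0.01, ("SUF", "quake"): 0.001,
        ("SUF", "s"): 0.6, ("PRE", "s"): 0.01, ("STM", "s"): 0.01}

FLOOR = 1e-12  # smoothing floor for unseen transitions/emissions

def viterbi(morphemes):
    """Best tag path maximizing prod_i P(u_i|t_i) P(t_i|t_{i-1}), cf. Eq (6)."""
    delta = {t: math.log(trans.get(("<s>", t), FLOOR)) +
                math.log(emit.get((t, morphemes[0]), FLOOR)) for t in tags}
    paths = {t: [t] for t in tags}
    for u in morphemes[1:]:
        new_delta, new_paths = {}, {}
        for t in tags:
            prev = max(tags, key=lambda p: delta[p] + math.log(trans.get((p, t), FLOOR)))
            new_delta[t] = (delta[prev] + math.log(trans.get((prev, t), FLOOR)) +
                            math.log(emit.get((t, u), FLOOR)))
            new_paths[t] = paths[prev] + [t]
        delta, paths = new_delta, new_paths
    return paths[max(tags, key=lambda t: delta[t])]

print(viterbi(["quake", "s"]))  # ['STM', 'SUF']
```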
The maximum-probability segmentation for a given word may be a mixture of both types of underlying morpheme, u_i and u\u2032_i. Also, wherever we have a choice between emitting u_i, identical to the surface form, or u\u2032_i, an analysis with rule-proposed changes, the higher-probability option of the two is always selected.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "EM Stage", "sec_num": "2.1" }, { "text": "Many times, large morphs have substructure and yet are too frequent to be split when segmented by the HMM in the EM Stage. To overcome this, we approximately follow the heuristic procedure laid out by Creutz and Lagus (2004), encouraging splitting of larger morphs into smaller underlying morphemes. This process risks introducing many false analyses, so first the segmentation must be re-tagged (Step 4) to identify which morphemes are noise and should not be used. Once we re-tag, we re-analyze morphs in the surface segmentation (Step 5) and re-estimate HMM probabilities (Step 6). (For Steps 5 and 6, refer to Steps 1 and 2.) Finally, we use these HMM probabilities to split morphs (Step 7).", "cite_spans": [ { "start": 203, "end": 226, "text": "Creutz and Lagus (2004)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Split Stage", "sec_num": "2.2" }, { "text": "Step 4 - Re-tag the Segmentation. To identify noise morphemes, we estimate a distribution P(CAT|u_i) over three true categories CAT (prefix, stem, or suffix) and one noise category; we then assign categories randomly according to this distribution. Stem probabilities are proportional to stem length, while affix probabilities are proportional to left- or right-perplexity. 
The probabilities of the true categories are also tied to the values of sigmoid-cutoff parameters, the most important of which is b, which thresholds the probability of both types of affix (prefix and suffix).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Split Stage", "sec_num": "2.2" }, { "text": "The probability of the noise category is inversely related to the product of the true-category probabilities; when the true categories are less probable, noise becomes more probable. Thus, adjusting parameters like b can increase or decrease the probability of noise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Split Stage", "sec_num": "2.2" }, { "text": "Step 7 - Split Morphs. In this step, we examine pairs in the segmentation to see if a split into sub-morphemes is warranted. We constrain this process by restricting splitting to stems (with the option to split affixes), and by splitting into restricted sequences of tags, particularly avoiding noise. We also use parameter b in Step 4 as a way to discourage excessive splitting by tagging more morphemes as noise. Stems are split into the sequence (PRE* STM SUF*). Affixes (prefixes and suffixes) are split into other affixes of the same category. Whether to split affixes depends on typological properties of the language. If a language has agglutinative suffixation, for example, we hand-set a parameter to allow suffix-splitting.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Split Stage", "sec_num": "2.2" }, { "text": "When examining a morph for splitting, we search over all segmentations with at least one split, and choose the one that is optimal according to Eq (6) and does not violate our constraints on which category sequences are allowed for its category. 
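The admissible category sequences for stem splitting can be checked with a simple regular expression over candidate tag sequences; a sketch with assumed tag names:

```python
import re

# A stem may only split into PRE* STM SUF*; any other sequence (including
# one containing a noise tag) is rejected.
STEM_SPLIT = re.compile(r"(PRE )*STM( SUF)*")

def admissible_stem_split(tag_seq):
    return STEM_SPLIT.fullmatch(" ".join(tag_seq)) is not None

print(admissible_stem_split(["PRE", "STM", "SUF", "SUF"]))  # True
print(admissible_stem_split(["STM", "STM"]))                # False
print(admissible_stem_split(["NOISE", "STM"]))              # False
```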
We end this step by returning to the EM Stage, where another cycle of EM is performed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Split Stage", "sec_num": "2.2" }, { "text": "In this section we report and discuss development results for English and Turkish. We also report final test results for both languages. Results for the preprocessed segmentation are consistently used as a baseline. In order to isolate the effect of the rewrite rules, we also compare against results from a parallel set of experiments, run with all the same parameters but without rule-generated underlying morphemes, i.e. without morphemes of type u\u2032_i. But before we get to these results, we describe the conditions of our experiments: first we introduce the evaluation metrics and data used, and then detail the parameters set during development.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments and Results", "sec_num": "3" }, { "text": "We use two procedures for evaluation, described in the Morpho Challenge '05 and '07 Competition Reports (Kurimo et al., 2006; Kurimo et al., 2007). Both procedures use gold-standards created with commercially available morphological analyzers for each language. 
Each procedure is associated with its own F-score-based measure.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Kurimo et al., 2006;", "ref_id": "BIBREF13" }, { "start": 126, "end": 146, "text": "Kurimo et al., 2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.1" }, { "text": "The first was used in Morpho Challenge '05, and measures the extent to which boundaries match between the surface layer of our segmentations and gold-standard surface segmentations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.1" }, { "text": "The second was used in Morpho Challenge '07 and measures the extent to which morphemes match between the underlying layer of our segmentations and gold-standard underlying analyses. The F-score here is not actually on matched morphemes, but instead on matched morpheme-sharing word-pairs. A point is given whenever a morpheme-sharing word-pair in the gold-standard segmentation also shares morphemes in the test segmentation (for recall), and vice versa for precision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation Metrics", "sec_num": "3.1" }, { "text": "Training Data. The data-sets used for training were provided by the Helsinki University of Technology in advance of the Morpho Challenge '07 and were downloaded by the authors from the contest website. According to the website, they were compiled from the University of Leipzig Wortschatz Corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.2" }, { "text": "English: 3 \u00d7 10^6 sentences, 6.22 \u00d7 10^7 tokens, 3.85 \u00d7 10^5 types. Turkish: 1 \u00d7 10^6 sentences, 1.29 \u00d7 10^7 tokens, 6.17 \u00d7 10^5 types. Test Data. For final testing, we use the gold-standard data reserved for final evaluation in the Morpho Challenge '07 contest. 
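The word-pair measure can be sketched as follows; the tiny segmentations are invented, and the real evaluation also computes precision symmetrically over sampled pairs:

```python
from itertools import combinations

def sharing_pairs(analyses):
    """All word pairs whose analyses share at least one morpheme."""
    return {(a, b) for a, b in combinations(sorted(analyses), 2)
            if set(analyses[a]) & set(analyses[b])}

gold = {"glasses": ["glass", "s"], "plots": ["plot", "s"], "sing": ["sing"]}
no_rules = {"glasses": ["glass", "es"], "plots": ["plot", "s"], "sing": ["sing"]}
with_rules = {"glasses": ["glass", "s"], "plots": ["plot", "s"], "sing": ["sing"]}

gold_pairs = sharing_pairs(gold)  # glasses/plots share the morpheme 's'
recall_no_rules = len(gold_pairs & sharing_pairs(no_rules)) / len(gold_pairs)
recall_with_rules = len(gold_pairs & sharing_pairs(with_rules)) / len(gold_pairs)
print(recall_no_rules, recall_with_rules)  # 0.0 1.0
```

The toy numbers illustrate why unifying allomorphs matters on this metric: leaving "es" distinct from "s" loses the morpheme-sharing pair entirely.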
The gold-standard consists of approximately 1.17 \u00d7 10^5 English and 3.87 \u00d7 10^5 Turkish analyzed words, roughly a tenth the size of the training word-lists. Word pairs that exist in both the training and gold-standard data are used for evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentences", "sec_num": null }, { "text": "There are two sets of parameters used in this experiment. First, there are parameters used to produce the initial segmentation. They were set as suggested in Creutz and Lagus (2005), with parameter b tuned on development data. Then there are parameters used for the main procedure. Here we have rewrite rules, numerical parameters, and one typology parameter. Rewrite rules and any orthographic features they use were culled from the linguistic literature. We currently have 6 rules for English and 10 for Turkish; see Appendix A.1 for the full set of English rules used. Numerical parameters were set as suggested in Creutz and Lagus (2004), and following their lead we tuned b on development data; we show development results for the following values: b = 100, 300, and 500 (see Figure 3). Finally, as introduced in Section 2.2, we have a hand-set typology parameter that allows us to split prefixes or suffixes if the language has an agglutinative morphology. 
Since Turkish has agglutinative suffixation, we set this parameter to split suffixes for Turkish.", "cite_spans": [ { "start": 158, "end": 181, "text": "Creutz and Lagus (2005)", "ref_id": null }, { "start": 615, "end": 638, "text": "Creutz and Lagus (2004)", "ref_id": null } ], "ref_spans": [ { "start": 779, "end": 785, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Parameters", "sec_num": "3.3" }, { "text": "Development results were obtained by evaluating English and Turkish segmentations at several stages, and with several values of parameter b, as shown in Figure 3.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 160, "text": "Figure 3", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Development Results", "sec_num": "3.4" }, { "text": "Overall, our development results were very positive. For the surface-level evaluation, the largest F-score improvement was observed for English (Figure 3, Chart 1), from 63.75% to 68.99%, a relative F-score gain of 8.2% over the baseline segmentation. The Turkish result also improves to a similar degree, but this is only achieved after the model has been refined by splitting. For English we observe the improvement earlier, after the EM Stage. For the underlying-level evaluation, the largest F-score improvement was observed for Turkish (Chart 4), from 31.37% to 54.86%, a relative F-score gain of over 74%.", "cite_spans": [], "ref_spans": [ { "start": 144, "end": 163, "text": "(Figure 3, Chart 1)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Development Results", "sec_num": "3.4" }, { "text": "In most experiments with rules to generate underlying analyses (With Rules), the successive applications of EM and splitting yield improved results. Without rule-generated forms (No Rules), the results tend to be negative compared to the baseline (see Figure 3, Chart 2), or mixed (Charts 1 and 4). 
When we look at recall and precision numbers directly, we observe that even without rules, the algorithm produces large recall boosts (especially after splitting). However, these boosts are accompanied by precision losses, which result in unchanged or lower F-scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Development Results", "sec_num": "3.4" }, { "text": "The exception is the underlying-level evaluation of English segmentations (Figure 3, Chart 3). Here we observe a near-parity of F-score gains for segmentations produced with and without underlying morphemes derived by rule. One explanation is that the English initial segmentation is conservative and that coverage gains are the main reason for improved English scores. Creutz and Lagus (2005) note that the Morfessor EM approach often has better coverage than the MAP approach we use to produce the initial segmentation.", "cite_spans": [ { "start": 371, "end": 394, "text": "Creutz and Lagus (2005)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 74, "end": 93, "text": "(Figure 3, Chart 3)", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Development Results", "sec_num": "3.4" }, { "text": "Also, in English, allomorphy is not as extensive as in Turkish (see Chart 4), where precision losses are greater without rules, i.e. when allomorphs are not represented by the same morpheme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Development Results", "sec_num": "3.4" }, { "text": "Table 2 (caption): \u2026 is Morfessor MAP, which was used as a reference method in the contest. MC Top is the top contestant. For our hybrid approach, we show the F-score obtained with and without using rewrite rules. The splitting parameter b was set to the best-performing value seen in development evaluations (Tr. b = 100, En. b = 500).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Final Test Results", "sec_num": null }, { "text": "Final test results, given in Table 2, are mixed. 
For English, though we improve on our baseline and on Morfessor MAP trained by Creutz and Lagus, we are beaten by the top unsupervised Morpho Challenge contestant, entered by Delphine Bernhard (2007). Bernhard's approach was purely unsupervised and did not explicitly account for allomorphic phenomena. There are several possible reasons why we were not the top performer here. Our splitting constraint for stems, which allows them to split into stems and chains of affixes, is suited to agglutinative morphologies; it does not seem particularly well suited to English morphology. Our rewrite rules might also be improved. Finally, there may be other, more pressing barriers (besides allomorphy) to improving morpheme induction in English, like ambiguity between homographic morphemes. For Turkish, the story is very different. We observe our baseline segmentation going from a 32.76% F-score to 54.54% when re-segmented using rules, a relative improvement of over 66%. Compared with the top unsupervised approach, Creutz and Lagus's Morfessor MAP, our F-score improvement is over 48%. The distance between our hybrid approach and unsupervised approaches underscores the problem that allomorphy poses for a language like Turkish. Turkish inflectional suffixes, for instance, regularly undergo multiple spelling rules and can have 10 or more variant forms. Knowing that these variants are all one morpheme makes a difference.", "cite_spans": [ { "start": 234, "end": 249, "text": "Bernhard (2007)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 29, "end": 36, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Final Test Results", "sec_num": "3.5" }, { "text": "In this work we showed that we can use a small amount of knowledge, in the form of context-sensitive rewrite rules, to improve unsupervised segmentations for Turkish and English. This improvement can be quite large.
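The relative gains quoted in this section follow directly from the absolute F-scores reported above; a quick check (our sketch, using the figures from the text):

```python
# Relative F-score improvement, in percent, from absolute scores.
def rel_gain(new, old):
    """Relative improvement of `new` over `old`, in percent."""
    return 100.0 * (new - old) / old

# Turkish, underlying-morpheme measure, final test: baseline 32.76 -> 54.54
print(round(rel_gain(54.54, 32.76), 1))  # 66.5, i.e. "over 66%"
# Turkish, underlying-level development evaluation: 31.37 -> 54.86
print(round(rel_gain(54.86, 31.37), 1))  # 74.9, i.e. "over 74%"
```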
On the morpheme-consistency measure used in the last Morpho Challenge, we observed an improvement of the Turkish segmentation of over 66% against the baseline, and of 48% against the top-of-the-line unsupervised approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Work in progress includes error analysis of the results to more closely examine the contribution of each rule, as well as developing rule sets for additional languages. This will help highlight various aspects of the most beneficial rules.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "There has been recent work on discovering allomorphic phenomena automatically (Dasgupta and Ng, 2007; Demberg, 2007). It is hoped that our work can inform these approaches, if only by showing what variation is possible, and what is relevant to particular languages. For example, variation in inflectional suffixes, driven by vowel harmony and other phenomena, should be captured for a language like Turkish.", "cite_spans": [ { "start": 78, "end": 101, "text": "(Dasgupta and Ng, 2007;", "ref_id": "BIBREF7" }, { "start": 102, "end": 116, "text": "Demberg, 2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Future work involves attempting to learn broad-coverage underlying morphology without the hand-coded element of the current work. This might involve employing aspects of the most beneficial rules as variable features in rule templates. It is hoped that we can start to derive underlying morphemes through processes (rules, constraints, etc.) suggested by these templates, and possibly learn instantiations of templates from seed corpora.
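As a concrete illustration of the kind of context-sensitive rewrite rule discussed in this work, ordered rules can be run in reverse over a surface segmentation to recover underlying morphemes. The sketch below is illustrative only: the rule inventory and its regex encoding are our own simplifications, not the actual rule set used in the experiments.

```python
import re

# Inverse orthographic rules applied in order to a segmented word,
# where '+' separates morphs. Illustrative rules only.
INVERSE_RULES = [
    # undo e-epenthesis after a vowel before the s suffix: citi + es -> citi + s
    (re.compile(r"([aeiou]) \+ es$"), r"\1 + s"),
    # undo e-epenthesis after a sibilant before the s suffix: glass + es -> glass + s
    (re.compile(r"(s|sh|ch|x|z) \+ es$"), r"\1 + s"),
    # undo y -> i at a suffix boundary: citi + s -> city + s
    (re.compile(r"i \+"), "y +"),
]

def underlying(surface_seg):
    """Map one surface segmentation to a candidate underlying analysis."""
    for pattern, repl in INVERSE_RULES:
        surface_seg = pattern.sub(repl, surface_seg)
    return surface_seg

print(underlying("citi + es"))   # city + s
print(underlying("glass + es"))  # glass + s
print(underlying("seat + s"))    # seat + s (no rule applies)
```

Because the inverse rules are ordered, the vowel-context e-deletion must fire before the i-to-y change, mirroring how the forward rules compose; this is the same ordering sensitivity that distinguishes ordered rules from two-level constraints.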
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "In this work we focus on orthographic allomorphy.2 Ordered rewrite rules, when restricted from applying to their own output, have similar expressive capabilities to Koskenniemi's two-level constraints. Both define regular relations on strings, both can be compiled into lexical transducers, and both have been used in finite-state analyzers(Karttunen and Beesley, 2001). We choose ordered rules because they are easier to write given our task and resources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Some special substitution rules, like vowel harmony in Turkish and Finnish, have a spreading effect, moving from syllable to syllable within and beyond morphboundaries. In our formulation, these rules differ from other rules by not being conditioned on a morphboundary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The main difference between our procedure andCreutz and Lagus (2004) is that we allow splitting into two or more morphemes (see Step 7) while they allow binary splits only.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://www.cis.hut.fi/morphochallenge2007/datasets.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Simple morpheme labeling in unsupervised morpheme analysis", "authors": [ { "first": "Delphine", "middle": [], "last": "Bernhard", "suffix": "" } ], "year": 2007, "venue": "Working Notes for the CLEF 2007 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Delphine Bernhard. 2007. Simple morpheme label- ing in unsupervised morpheme analysis. 
In Working Notes for the CLEF 2007 Workshop, Budapest, Hungary.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The revised wordframe model for the Filipino language", "authors": [ { "first": "K", "middle": [], "last": "Charibeth", "suffix": "" }, { "first": "Solomon", "middle": [ "L" ], "last": "Cheng", "suffix": "" }, { "first": "", "middle": [], "last": "See", "suffix": "" } ], "year": 2006, "venue": "Journal of Research in Science, Computing and Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charibeth K. Cheng and Solomon L. See. 2006. The revised wordframe model for the Filipino language. Journal of Research in Science, Computing and Engineering.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The Sound Pattern of English", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" }, { "first": "Morris", "middle": [], "last": "Halle", "suffix": "" } ], "year": 1968, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky and Morris Halle. 1968. The Sound Pattern of English. Harper & Row, New York.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Unsupervised discovery of morphemes", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2002, "venue": "Proc. Workshop on Morphological and Phonological Learning of ACL'02", "volume": "", "issue": "", "pages": "21--30", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proc. Workshop on Morphological and Phonological Learning of ACL'02, pages 21-30, Philadelphia.
Association for Computational Linguistics.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Induction of a simple morphology for highly inflecting languages", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2004, "venue": "Proc. 7th Meeting of the ACL Special Interest Group in Computational Phonology (SIG-PHON)", "volume": "", "issue": "", "pages": "43--51", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2004. Induction of a simple morphology for highly inflecting languages. In Proc. 7th Meeting of the ACL Special Interest Group in Computational Phonology (SIGPHON), pages 43-51, Barcelona.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Inducing the morphological lexicon of a natural language from unannotated text", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2005, "venue": "Proc. International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'05)", "volume": "", "issue": "", "pages": "106--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2005. Inducing the morphological lexicon of a natural language from unannotated text. In Proc. International and Interdisciplinary Conference on Adaptive Knowledge Representation and Reasoning (AKRR'05), pages 106-113, Espoo, Finland.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Morfessor in the Morpho Challenge", "authors": [ { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Krista", "middle": [], "last": "Lagus", "suffix": "" } ], "year": 2006, "venue": "Proc.
PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mathias Creutz and Krista Lagus. 2006. Morfessor in the Morpho Challenge. In Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes, Venice, Italy.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "High performance, language-independent morphological segmentation", "authors": [ { "first": "Sajib", "middle": [], "last": "Dasgupta", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "Proc. NAACL'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sajib Dasgupta and Vincent Ng. 2007. High performance, language-independent morphological segmentation. In Proc. NAACL'07.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Unsupervised Language Acquisition", "authors": [ { "first": "G", "middle": [], "last": "Carl", "suffix": "" }, { "first": "", "middle": [], "last": "De Marcken", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carl G. de Marcken. 1996. Unsupervised Language Acquisition. Ph.D. thesis, Massachusetts Institute of Technology, Boston.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Inference of variable-length linguistic and acoustic units by multigrams", "authors": [ { "first": "Sabine", "middle": [], "last": "Deligne", "suffix": "" }, { "first": "Fr\u00e9d\u00e9ric", "middle": [], "last": "Bimbot", "suffix": "" } ], "year": 1997, "venue": "Speech Communication", "volume": "23", "issue": "", "pages": "223--241", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Deligne and Fr\u00e9d\u00e9ric Bimbot. 1997. Inference of variable-length linguistic and acoustic units by multigrams.
Speech Communication, 23:223-241.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A language-independent unsupervised model for morphological segmentation", "authors": [ { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2007, "venue": "Proc. ACL'07", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vera Demberg. 2007. A language-independent unsupervised model for morphological segmentation. In Proc. ACL'07.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Unsupervised learning of the morphology of a natural language", "authors": [ { "first": "John", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "", "pages": "153--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27.2:153-198.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "A short history of two-level morphology", "authors": [ { "first": "Lauri", "middle": [], "last": "Karttunen", "suffix": "" }, { "first": "Kenneth", "middle": [ "R" ], "last": "Beesley", "suffix": "" } ], "year": 2001, "venue": "Proc. ESSLLI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lauri Karttunen and Kenneth R. Beesley. 2001. A short history of two-level morphology. In Proc.
ESSLLI 2001.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised segmentation of words into morphemes - Morpho Challenge 2005, an introduction and evaluation report", "authors": [ { "first": "Mikko", "middle": [], "last": "Kurimo", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Matti", "middle": [], "last": "Varjokallio", "suffix": "" }, { "first": "Ebru", "middle": [], "last": "Arisoy", "suffix": "" }, { "first": "Murat", "middle": [], "last": "Sara\u00e7lar", "suffix": "" } ], "year": 2006, "venue": "Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikko Kurimo, Mathias Creutz, Matti Varjokallio, Ebru Arisoy, and Murat Sara\u00e7lar. 2006. Unsupervised segmentation of words into morphemes - Morpho Challenge 2005, an introduction and evaluation report. In Proc. PASCAL Challenge Workshop on Unsupervised Segmentation of Words into Morphemes, Venice, Italy.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Unsupervised morpheme analysis evaluation by a comparison to a linguistic gold standard - Morpho Challenge", "authors": [ { "first": "Mikko", "middle": [], "last": "Kurimo", "suffix": "" }, { "first": "Mathias", "middle": [], "last": "Creutz", "suffix": "" }, { "first": "Matti", "middle": [], "last": "Varjokallio", "suffix": "" } ], "year": 2007, "venue": "Working Notes for the CLEF 2007 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mikko Kurimo, Mathias Creutz, and Matti Varjokallio. 2007. Unsupervised morpheme analysis evaluation by a comparison to a linguistic gold standard - Morpho Challenge 2007.
In Working Notes for the CLEF 2007 Workshop, Budapest, Hungary.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A hierarchical EM approach to word segmentation", "authors": [ { "first": "Fuchun", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Dale", "middle": [], "last": "Schuurmans", "suffix": "" } ], "year": 2001, "venue": "Proc. 4th Intl. Conference on Intel. Data Analysis (IDA)", "volume": "", "issue": "", "pages": "238--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fuchun Peng and Dale Schuurmans. 2001. A hierarchical EM approach to word segmentation. In Proc. 4th Intl. Conference on Intel. Data Analysis (IDA), pages 238-247.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Knowledge-free induction of morphology using latent semantic analysis", "authors": [], "year": null, "venue": "Proc. CoNLL'00 and LLL'00", "volume": "", "issue": "", "pages": "67--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knowledge-free induction of morphology using latent semantic analysis. In Proc. CoNLL'00 and LLL'00, pages 67-72, Lisbon.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Knowledge-free induction of inflectional morphologies", "authors": [], "year": null, "venue": "Proc. NAACL'01", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Knowledge-free induction of inflectional morphologies. In Proc. NAACL'01, Pittsburgh.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Bayesian model for morpheme and paradigm identification", "authors": [ { "first": "G", "middle": [], "last": "Matthew", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Snover", "suffix": "" }, { "first": "", "middle": [], "last": "Brent", "suffix": "" } ], "year": 2001, "venue": "Proc. ACL'01", "volume": "", "issue": "", "pages": "482--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew G. Snover and Michael R. Brent. 2001.
A Bayesian model for morpheme and paradigm identification. In Proc. ACL'01, pages 482-490, Toulouse, France.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework", "authors": [ { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Richard Wicentowski. 2002. Modeling and Learning Multilingual Inflectional Morphology in a Minimally Supervised Framework. Ph.D. thesis, Johns Hopkins University, Baltimore, Maryland.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Minimally supervised morphological analysis by multimodal alignment", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2000, "venue": "Proc. ACL'00", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proc. ACL'00.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Inducing multilingual text analysis tools via robust projection across aligned corpora", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" }, { "first": "Grace", "middle": [], "last": "Ngai", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Wicentowski", "suffix": "" } ], "year": 2001, "venue": "Proc. HLT'01", "volume": "01", "issue": "", "pages": "161--168", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proc.
HLT'01, pages 161-168, San Diego.", "links": null } }, "ref_entries": { "FIGREF2": { "num": null, "text": "Flowchart showing the entire procedure.", "type_str": "figure", "uris": null }, "FIGREF4": { "num": null, "text": "Development results for the preprocessed initial segmentation (Baseline), and segmentations produced by our approach, first after the EM Stage (EM) and again after the Split Stage (SPL), with different values of parameter b. Rules that generate underlying analyses have either been included (With Rules) or left out (No Rules).", "type_str": "figure", "uris": null }, "TABREF0": { "content": "
Surface Segmentation (Tags: STM SUF)	seat + s	citi + es	glass + es
Applicable Rule(s)	(none)	ø→e / [+VWL] + _s; y→i / _ + [+ANY]	ø→e / [+SIB] + _s
Underlying Analyses	seat + s	city + s	glass + s
Features: VWL = vowel; ANY = any char.; SIB = sibilant {s, sh, ch, ...}
", "num": null, "text": "When an underlying morpheme u i is associated to a surface morph s, we refer to s as an allomorph of", "type_str": "table", "html": null }, "TABREF1": { "content": "
Development Data: The development gold-standard for the surface metric was provided in advance of Morpho Challenge '05 and consists of surface segmentations for 532 English and 774 Turkish words. The development gold-standard for the underlying metric was provided in advance of Morpho Challenge '07 and consists of morphological analyses for 410 English and 593 Turkish words.
", "num": null, "text": "Training corpus sizes vary slightly, with 3 million English sentences and 1 million Turkish sentences.", "type_str": "table", "html": null }, "TABREF2": { "content": "
	MC Morf.	MC Top	Baseline	Hybrid: No Rules	Hybrid: With Rules
English	47.17	60.81	47.04	57.35	59.78
Turkish	37.10	29.23	32.76	31.10	54.54
", "num": null, "text": "Hybrid:After Split MC Morf. MC Top Baseline No Rules With Rules", "type_str": "table", "html": null }, "TABREF3": { "content": "", "num": null, "text": "Final test F-scores on the underlying morpheme measure used in Morpho Challenge '07. MC Morf.", "type_str": "table", "html": null }, "TABREF4": { "content": "
", "num": null, "text": "A.1 Rules Used For English e epenthesis before s suffix \u00f8 \u2192e / ..[+V] + _s \u00f8\u2192e / ..[+SIB] + _s long e deletion e \u2192\u00f8 / ..[+V][+C]_ + [+V] change y to i before suffix y \u2192i / ..[+C] +? _ + [+ANY] consonant gemination \u00f8 \u2192\u03b1[+STOP] / ..\u03b1[+STOP]_ + [+V] \u00f8 \u2192\u03b1[+STOP] / ..\u03b1[+STOP]_ + [+GLI]", "type_str": "table", "html": null }, "TABREF5": { "content": "
BaseEMSPL:b=300 SPL:b=500
happen shappen shapp e n shappen s
happierhappierhappi erhappi er
happiesthappiesthapp i esthappiest
happilyhappilyhappi lyhappi ly
happiness happiness happi nesshappiness
", "num": null, "text": "English RulesA.2 Example Segmentations", "type_str": "table", "html": null }, "TABREF6": { "content": "", "num": null, "text": "Surface segmentations after preprocessing (Base), EM Stage (EM), and Split Stage (SPL)", "type_str": "table", "html": null } } } }