{ "paper_id": "A92-1018", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:03:41.873135Z" }, "title": "A Practical Part-of-Speech Tagger", "authors": [ { "first": "Doug", "middle": [], "last": "Cutting", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Palo Alto Research Center", "location": { "addrLine": "3333 Coyote Hill Road", "postCode": "94304", "settlement": "Palo Alto", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Julian", "middle": [], "last": "Kupiec", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Palo Alto Research Center", "location": { "addrLine": "3333 Coyote Hill Road", "postCode": "94304", "settlement": "Palo Alto", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Jan", "middle": [], "last": "Pedersen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Palo Alto Research Center", "location": { "addrLine": "3333 Coyote Hill Road", "postCode": "94304", "settlement": "Palo Alto", "region": "CA", "country": "USA" } }, "email": "" }, { "first": "Penelope", "middle": [], "last": "Sibun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xerox Palo Alto Research Center", "location": { "addrLine": "3333 Coyote Hill Road", "postCode": "94304", "settlement": "Palo Alto", "region": "CA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96%. We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment.", "pdf_parse": { "paper_id": "A92-1018", "_pdf_hash": "", "abstract": [ { "text": "We present an implementation of a part-of-speech tagger based on a hidden Markov model. The methodology enables robust and accurate tagging with few resource requirements. Only a lexicon and some unlabeled training text are required. Accuracy exceeds 96%. We describe implementation strategies and optimizations which result in high-speed operation. Three applications for tagging are described: phrase recognition; word sense disambiguation; and grammatical function assignment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many words are ambiguous in their part of speech. For example, \"tag\" can be a noun or a verb. However, when a word appears in the context of other words, the ambiguity is often reduced: in '% tag is a part-of-speech label,\" the word \"tag\" can only be a noun. A part-of-speech tagger is a system that uses context to assign parts of speech to words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "Automatic text tagging is an important first step in discovering the linguistic structure of large text corpora. 
Part-of-speech information facilitates higher-level analysis, such as recognizing noun phrases and other patterns in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "For a tagger to function as a practical component in a language processing system, we believe it must be:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "Robust Text corpora contain ungrammatical constructions, isolated phrases (such as titles), and nonlinguistic data (such as tables). Corpora are also likely to contain words that are unknown to the tagger. It is desirable that a tagger deal gracefully with these situations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "Efficient If a tagger is to be used to analyze arbitrarily large corpora, it must be efficient, performing in time linear in the number of words tagged. Any training required should also be fast, enabling rapid turnaround with new corpora and new text genres.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "Accurate A tagger should attempt to assign the correct part-of-speech tag to every word encountered. Tunable A tagger should be able to take advantage of linguistic insights. One should be able to correct systematic errors by supplying appropriate a priori \"hints.\" It should be possible to give different hints for different corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Desiderata", "sec_num": "1" }, { "text": "The effort required to retarget a tagger to new corpora, new tagsets, and new languages should be minimal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reusable", "sec_num": null }, { "text": "Several different approaches have been used for building text taggers. Greene and Rubin used a rule-based approach in the TAGGIT program [Greene and Rubin, 1971] , which was an aid in tagging the Brown corpus [Francis and Kučera, 1982] . TAGGIT disambiguated 77% of the corpus; the rest was done manually over a period of several years. More recently, Koskenniemi also used a rule-based approach implemented with finite-state machines [Koskenniemi, 1990] .", "cite_spans": [ { "start": 137, "end": 161, "text": "[Greene and Rubin, 1971]", "ref_id": "BIBREF7" }, { "start": 209, "end": 235, "text": "[Francis and Kučera, 1982]", "ref_id": "BIBREF5" }, { "start": 435, "end": 454, "text": "[Koskenniemi, 1990]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 Background", "sec_num": "2" }, { "text": "Statistical methods have also been used (e.g., [DeRose, 1988] , [Garside et al., 1987] ). These provide the capability of resolving ambiguity on the basis of the most likely interpretation. A form of Markov model has been widely used that assumes that a word depends probabilistically on just its part-of-speech category, which in turn depends solely on the categories of the preceding two words. Two types of training (i.e., parameter estimation) have been used with this model. The first makes use of a tagged training corpus. Derouault and Merialdo use a bootstrap method for training [Derouault and Merialdo, 1986] . At first, a relatively small amount of text is manually tagged and used to train a partially accurate model. The model is then used to tag more text, and the tags are manually corrected and then used to retrain the model. 
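As a minimal sketch of the tagged-corpus training regime just described (assuming toy data: the names tagged_sentences, p_transition, and p_emission are invented for this illustration, smoothing is omitted, and a first-order transition is used for brevity, whereas the model described above conditions on the two preceding categories):

# Illustrative sketch: relative-frequency estimation of transition and
# emission counts from a small hand-tagged sample. Toy data only.
from collections import Counter, defaultdict

tagged_sentences = [
    [("the", "DET"), ("tag", "N"), ("is", "V"), ("a", "DET"), ("label", "N")],
    [("we", "PRO"), ("tag", "V"), ("text", "N")],
]

transition = defaultdict(Counter)   # counts of previous tag -> next tag
emission = defaultdict(Counter)     # counts of tag -> word

for sentence in tagged_sentences:
    prev = "<s>"                    # sentence-boundary pseudo-tag
    for word, tag in sentence:
        transition[prev][tag] += 1
        emission[tag][word.lower()] += 1
        prev = tag

def p_transition(prev_tag, tag):
    """Relative-frequency estimate of P(tag | prev_tag); no smoothing."""
    total = sum(transition[prev_tag].values())
    return transition[prev_tag][tag] / total if total else 0.0

def p_emission(tag, word):
    """Relative-frequency estimate of P(word | tag); no smoothing."""
    total = sum(emission[tag].values())
    return emission[tag][word.lower()] / total if total else 0.0

print(p_transition("DET", "N"), p_emission("N", "tag"))

For the toy counts above this prints 1.0 and roughly 0.33, i.e., the estimates of P(N | DET) and P('tag' | N).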
Church uses the tagged Brown corpus for training [Church, 1988] . These models involve probabilities for each word in the lexicon, so large tagged corpora are required for reliable estimation.", "cite_spans": [ { "start": 47, "end": 61, "text": "[DeRose, 1988]", "ref_id": "BIBREF4" }, { "start": 64, "end": 86, "text": "[Garside et al., 1987]", "ref_id": "BIBREF6" }, { "start": 584, "end": 614, "text": "[Derouault and Merialdo, 1986]", "ref_id": null }, { "start": 888, "end": 902, "text": "[Church, 1988]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 Background", "sec_num": "2" }, { "text": "The second method of training does not require a tagged training corpus. In this situation the Baum-Welch algorithm (also known as the forward-backward algorithm) can be used [Baum, 1972] . Under this regime the model is called a hidden Markov model (HMM), as state transitions (i.e., part-of-speech categories) are assumed to be unobservable. Jelinek has used this method for training a text tagger [Jelinek, 1985] . Parameter smoothing can be conveniently achieved using the method of deleted interpolation, in which weighted estimates are taken from second- and first-order models and a uniform probability distribution [Jelinek and Mercer, 1980] . Kupiec used word equivalence classes (referred to here as ambiguity classes) based on parts of speech to pool data from individual words [Kupiec, 1989b] . The most common words are still represented individually, as sufficient data exist for robust estimation.", "cite_spans": [ { "start": 175, "end": 187, "text": "[Baum, 1972]", "ref_id": "BIBREF1" }, { "start": 400, "end": 415, "text": "[Jelinek, 1985]", "ref_id": "BIBREF9" }, { "start": 620, "end": 646, "text": "[Jelinek and Mercer, 1980]", "ref_id": "BIBREF8" }, { "start": 787, "end": 802, "text": "[Kupiec, 1989b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 Background", "sec_num": "2" }, { "text": "However, all other words are represented according to the set of possible categories they can assume. In this manner, the vocabulary of 50,000 words in the Brown corpus can be reduced to approximately 400 distinct ambiguity classes [Kupiec, 1992] . To further reduce the number of parameters, a first-order model can be employed (this assumes that a word's category depends only on the immediately preceding word's category). In [Kupiec, 1989a] , networks are used to selectively augment the context in a basic first-order model, rather than using uniformly second-order dependencies.", "cite_spans": [ { "start": 231, "end": 245, "text": "[Kupiec, 1992]", "ref_id": "BIBREF12" }, { "start": 428, "end": 443, "text": "[Kupiec, 1989a]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Methodology 2.1 Background", "sec_num": "2" }, { "text": "We next describe how our choice of techniques satisfies the criteria listed in section 1. The use of an HMM permits complete flexibility in the choice of training corpora. Text from any desired domain can be used, and a tagger can be tailored for use with a particular text database by training on a portion of that database. Lexicons containing alternative tag sets can be easily accommodated without any need for re-labeling the training corpus, affording further flexibility in the use of specialized tags. 
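As a small sketch of the ambiguity-class idea described above (the toy lexicon and the names lexicon and ambiguity_class are assumed for illustration and do not reflect the tagger's actual lexicon format), words are pooled by the set of parts of speech they can assume:

# Illustrative sketch of ambiguity classes: a word's class is the set of
# parts of speech it can take, so parameters can be estimated per class
# rather than per word. The lexicon below is toy data for the example.
lexicon = {
    "tag":     {"noun", "verb"},
    "label":   {"noun", "verb"},
    "walk":    {"noun", "verb"},
    "the":     {"determiner"},
    "a":       {"determiner"},
    "quickly": {"adverb"},
}

# An ambiguity class is the (frozen) set of possible tags for a word.
ambiguity_class = {word: frozenset(tags) for word, tags in lexicon.items()}

# Many words collapse onto comparatively few classes.
classes = set(ambiguity_class.values())
print(len(lexicon), "words ->", len(classes), "ambiguity classes")

Pooling estimation over classes in this way is what reduces the 50,000-word Brown corpus vocabulary to roughly 400 distinct classes, as noted in the paragraph above.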
As the resources required are simply a lexicon and a suitably large sample of ordinary text, taggers can be built with minimal effort, even for other languages, such as French (e.g., [Kupiec, 1992] ). The use of ambiguity classes and a first-order model reduces the number of parameters to be estimated without significant reduction in accuracy (discussed in section 5). This also enables a tagger to be reliably trained using only moderate amounts of text. We have produced reasonable results training on as few as 3,000 sentences. Fewer parameters also reduce the time required for training. Relatively few ambiguity classes are sufficient for wide coverage, so it is unlikely that adding new words to the lexicon requires retraining, as their ambiguity classes are already accommodated. Vocabulary independence is achieved by predicting categories for words not in the lexicon, using both context and suffix information. Probabilities corresponding to category sequences that never occurred in the training data are assigned small, non-zero values, ensuring that the model will accept any sequence of tokens, while still providing the most likely tagging. By using the fact that words are typically associated with only a few part-of-speech categories, and carefully ordering the computation, the algorithms have linear complexity (section 3.3).", "cite_spans": [ { "start": 693, "end": 707, "text": "[Kupiec, 1992]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Our approach", "sec_num": "2.2" }, { "text": "The hidden Markov modeling component of our tagger is implemented as an independent module following the specification given in [Levinson et al., 1983] , with special attention to space and time efficiency issues. Only first-order modeling is addressed and will be presumed for the remainder of this discussion.", "cite_spans": [ { "start": 128, "end": 151, "text": "[Levinson et al., 1983]", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Hidden Markov Modeling", "sec_num": "3" }, { "text": "In brief, an HMM is a doubly stochastic process that generates a sequence of symbols", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Formalism", "sec_num": "3.1" }, { "text": "S = {S_1, S_2, ..., S_T}, S_i ∈ W, 1