{
"paper_id": "W01-0502",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:59:25.792452Z"
},
"title": "A Sequential Model for Multi-Class Classification",
"authors": [
{
"first": "Yair",
"middle": [],
"last": "Even-Zohar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": "evenzoha@uiuc.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach-a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"pdf_parse": {
"paper_id": "W01-0502",
"_pdf_hash": "",
"abstract": [
{
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach-a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A large number of important natural language inferences can be viewed as problems of resolving ambiguity, either semantic or syntactic, based on properties of the surrounding context. These, in turn, can all be viewed as classification problems in which the goal is to select a class label from among a collection of candidates. Examples include part-of speech tagging, word-sense disambiguation, accent restoration, word choice selection in machine translation, context-sensitive spelling correction, word selection in speech recognition and identifying discourse markers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Machine learning methods have become the most popular technique in a variety of classification problems of these sort, and have shown significant success. A partial list consists of Bayesian classifiers (Gale et al., 1993) , decision lists (Yarowsky, 1994) , Bayesian hybrids (Golding, 1995) , HMMs (Charniak, 1993) , inductive logic methods (Zelle and Mooney, 1996) , memory-\u00a3 This research is supported by NSF grants IIS-9801638, IIS-0085836 and SBR-987345. based methods (Zavrel et al., 1997) , linear classifiers (Roth, 1998; Roth, 1999) and transformationbased learning (Brill, 1995) .",
"cite_spans": [
{
"start": 203,
"end": 222,
"text": "(Gale et al., 1993)",
"ref_id": "BIBREF7"
},
{
"start": 240,
"end": 256,
"text": "(Yarowsky, 1994)",
"ref_id": "BIBREF28"
},
{
"start": 276,
"end": 291,
"text": "(Golding, 1995)",
"ref_id": "BIBREF9"
},
{
"start": 299,
"end": 315,
"text": "(Charniak, 1993)",
"ref_id": "BIBREF2"
},
{
"start": 342,
"end": 366,
"text": "(Zelle and Mooney, 1996)",
"ref_id": "BIBREF30"
},
{
"start": 474,
"end": 495,
"text": "(Zavrel et al., 1997)",
"ref_id": "BIBREF29"
},
{
"start": 517,
"end": 529,
"text": "(Roth, 1998;",
"ref_id": "BIBREF24"
},
{
"start": 530,
"end": 541,
"text": "Roth, 1999)",
"ref_id": "BIBREF25"
},
{
"start": 575,
"end": 588,
"text": "(Brill, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In many of these classification problems a significant source of difficulty is the fact that the number of candidates is very large -all words in words selection problems, all possible tags in tagging problems etc. Since general purpose learning algorithms do not handle these multi-class classification problems well (see below), most of the studies do not address the whole problem; rather, a small set of candidates (typically two) is first selected, and the classifier is trained to choose among these. While this approach is important in that it allows the research community to develop better learning methods and evaluate them in a range of applications, it is important to realize that an important stage is missing. This could be significant when the classification methods are to be embedded as part of a higher level NLP tasks such as machine translation or information extraction, where the small set of candidates the classifier can handle may not be fixed and could be hard to determine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we develop a general approach to the study of multi-class classifiers. We suggest a sequential learning model that utilizes (almost) general purpose classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidate set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our paradigm the sought after classifier has to choose a single class label (or a small set of labels) from among a large set of labels. It works by sequentially applying simpler classifiers, each of which outputs a probability distribution over the candidate labels. These distributions are multiplied and thresholded, resulting in that each classifier in the sequence needs to deal with a (significantly) smaller number of the candidate labels than the previous classifier. The classifiers in the sequence are selected to be simple in the sense that they typically work only on part of the feature space where the decomposition of feature space is done so as to achieve statistical independence. Simple classifier are used since they are more likely to be accurate; they are chosen so that, with high probability (w.h.p.), they have one sided error, and therefore the presence of the true label in the candidate set is maintained. The order of the sequence is determined so as to maximize the rate of decreasing the size of the candidate labels set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
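As a concrete illustration of the filtering just described, here is a minimal sketch; the function and variable names are ours, not the paper's, and the per-stage thresholds are assumed to be given.

```python
def sequential_filter(example, classifiers, thresholds, labels):
    """Sequentially restrict the candidate label set.

    `classifiers` is an ordered list of callables, each mapping
    (example, candidates) -> {label: probability}; `thresholds` holds one
    cutoff per classifier.  Both are illustrative assumptions.
    """
    candidates = set(labels)
    scores = {label: 1.0 for label in candidates}   # running product of the distributions
    for classify, theta in zip(classifiers, thresholds):
        dist = classify(example, candidates)        # distribution over surviving candidates
        for label in candidates:
            scores[label] *= dist.get(label, 0.0)
        # Threshold the product: only labels that pass are handed to the next classifier.
        candidates = {label for label in candidates if scores[label] >= theta}
        if len(candidates) <= 1:
            break
    return candidates, scores
```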
{
"text": "Beyond increased accuracy on multi-class classification problems , our scheme improves the computation time of these problems several orders of magnitude, relative to other standard schemes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work we describe the approach, discuss an experiment done in the context of part-of-speech (pos) tagging, and provide some theoretical justifications to the approach. Sec. 2 provides some background on approaches to multi-class classification in machine learning and in NLP. In Sec. 3 we describe the sequential model proposed here and in Sec. 4 we describe an experiment the exhibits some of its advantages. Some theoretical justifications are outlined in Sec. 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Several works within the machine learning community have attempted to develop general approaches to multi-class classification. One of the most promising approaches is that of error correcting output codes (Dietterich and Bakiri, 1995) ; however, this approach has not been able to handle well a large number of classes (over 10 or 15, say) and its use for most large scale NLP applications is therefore questionable. Statistician have studied several schemes such as learning a single classifier for each of the class labels (one vs. all) or learning a discriminator for each pair of class labels, and discussed their relative merits (Hastie and Tibshirani, 1998) . Although it has been argued that the latter should provide better results than others, experimental results have been mixed (Allwein et al., 2000) and in some cases, more involved schemes, e.g., learning a classifier for each set of three class labels (and deciding on the prediction in a tournament like fashion) were shown to perform better (Teow and Loe, 2000) . Moreover, none of these methods seem to be computationally plausible for large scale problems, since the number of classifiers one needs to train is, at least, quadratic in the number of class labels.",
"cite_spans": [
{
"start": 206,
"end": 235,
"text": "(Dietterich and Bakiri, 1995)",
"ref_id": "BIBREF5"
},
{
"start": 635,
"end": 664,
"text": "(Hastie and Tibshirani, 1998)",
"ref_id": "BIBREF10"
},
{
"start": 791,
"end": 813,
"text": "(Allwein et al., 2000)",
"ref_id": "BIBREF0"
},
{
"start": 1010,
"end": 1030,
"text": "(Teow and Loe, 2000)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
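To make the quadratic blow-up concrete, an illustrative count of ours (using the roughly 50 POS classes considered later in the paper; these figures are not reported by the authors):

```latex
\underbrace{m}_{\text{one-vs-all}} = 50
\qquad\text{vs.}\qquad
\underbrace{\binom{m}{2}}_{\text{all-pairs}} = \frac{50 \cdot 49}{2} = 1225 .
```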
{
"text": "Within NLP, several learning works have already addressed the problem of multi-class classification. In (Kudoh and Matsumoto, 2000) the methods of \"all pairs\" was used to learn phrase annotations for shallow parsing. More than \u00a4 \u00a6 \u00a5 \u00a7 \u00a5 different classifiers where used in this task, making it infeasible as a general solution. All other cases we know of, have taken into account some properties of the domain and, in fact, several of the works can be viewed as instantiations of the sequential model we formalize here, albeit done in an ad-hoc fashion.",
"cite_spans": [
{
"start": 104,
"end": 131,
"text": "(Kudoh and Matsumoto, 2000)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
{
"text": "In speech recognition, a sequential model is used to process speech signal. Abstracting away some details, the first classifier used is a speech signal analyzer; it assigns a positive probability only to some of the words (using Levenshtein distance (Levenshtein, 1966) or somewhat more sophisticated techniques (Levinson et al., 1990) ). These words are then assigned probabilities using a different contextual classifier e.g., a language model, and then, (as done in most current speech recognizers) an additional sentence level classifier uses the outcome of the word classifiers in a word lattice to choose the most likely sentence.",
"cite_spans": [
{
"start": 229,
"end": 269,
"text": "Levenshtein distance (Levenshtein, 1966)",
"ref_id": null
},
{
"start": 312,
"end": 335,
"text": "(Levinson et al., 1990)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
{
"text": "Several word prediction tasks make decisions in a sequential way as well. In spell correction confusion sets are created using a classifier that takes as input the word transcription and outputs a positive probability for potential words. In conventional spellers, the output of this classifier is then given to the user who selects the intended word. In context sensitive spelling correction (Golding and Roth, 1999; Mangu and Brill, 1997) an additional classifier is then utilized to predict among words that are supported by the first classifier, using contextual and lexical information of the surrounding words. In all studies done so far, however, the first classifier -the confusion sets -were constructed manually by the researchers.",
"cite_spans": [
{
"start": 393,
"end": 417,
"text": "(Golding and Roth, 1999;",
"ref_id": "BIBREF8"
},
{
"start": 418,
"end": 440,
"text": "Mangu and Brill, 1997)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
{
"text": "Other word predictions tasks have also constructed manually the list of confusion sets Dagan et al., 1999; Lee, 1999) and justifications where given as to why this is a reasonable way to construct it. (Even-Zohar and Roth, 2000) present a similar task in which the confusion sets generation was automated. Their study also quantified experimentally the advantage in using early classifiers to restrict the size of the confusion set.",
"cite_spans": [
{
"start": 87,
"end": 106,
"text": "Dagan et al., 1999;",
"ref_id": "BIBREF3"
},
{
"start": 107,
"end": 117,
"text": "Lee, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
{
"text": "Many other NLP tasks, such as pos tagging, name entity recognition and shallow parsing require multi-class classifiers. In several of these cases the number of classes could be very large (e.g., pos tagging in some languages, pos tagging when a finer proper noun tag is used). The sequential model suggested here is a natural solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-Class Classification",
"sec_num": "2"
},
{
"text": "We study the problem of learning a multi-class classifier,\u00a9 where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "\u00a5 \" ! # \" $ , % ' & ' ( \" 0 ) 1 ) 1 ) 2 3 & 5 4 6 # and 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "is typically large, on the order of ! \" \u00a5 9 8 A @ B ! \" \u00a5 9 C . We address this problem using the Sequential Model (SM) in which simpler classifiers are sequentially used to filter subsets of out of consideration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "The sequential model is formally defined as a D tuple:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "E G F % H I \" Q P R # S T 6 G U V W \u00a6 P # X Y P # a # S where b % d c T e P 1 f ( P is a decomposition of the do- main (not necessarily disjoint; it could be that g i h p P % q ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "b is the set of class labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "b U r % s u t S ( \" 3 t 8 0 ) 1 ) 1 ) 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "3 t e # determines the order in which the classifiers are learned and evaluated. For convenience we denote \u00a7( ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "v % \u1e85 x 3 y u 8 %\u00a8 x ' 0 ) \" ) \" ) b \u00a6 P #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "P % H ' & I s t hf P g & X h i v u \u1e81 P # S )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "The sequential process can be viewed as a multiplication of distributions. (Hinton, 2000) argues that a product of distributions (or, \"experts\", PoE) 1 The output of many classifiers can be viewed, after appropriate normalization, as a confidence measure that can be used as our",
"cite_spans": [
{
"start": 75,
"end": 89,
"text": "(Hinton, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 150,
"end": 151,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "x y . is an efficient way to make decisions in cases where several different constrains play a role, and is advantageous over additive models. In fact, due to the thresholding step, our model can be viewed as a selective PoE. The thresholding ensures that the SM has the following monotonicity property: . The rest of this paper presents a concrete instantiation of the SM, and then provides a theoretical analysis of some of its properties (Sec. 5). This work does not address the question of acquiring SM i.e., learning P # X U .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
{
"text": "' & I s t h R f P l & S h n v u \u1e81 P # A Q ' & z s t h { f P d ( u g & X h i v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential Model",
"sec_num": "3"
},
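The monotonicity formula itself is garbled in this parse; one plausible reading, consistent with the surrounding text (a reconstruction under our own notation, not a quote from the paper), is that thresholding the running product yields nested candidate sets:

```latex
C_i \;=\; \Bigl\{\, c \in C_{i-1} \;:\; \prod_{j \le i} P_j(c \mid x) \ \ge\ \theta_i \Bigr\},
\qquad\text{hence}\qquad
C_n \subseteq C_{n-1} \subseteq \dots \subseteq C_0 = C .
```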
{
"text": "This section describes a two part experiment of pos tagging in which we compare, under identical conditions, two classification models: A SM and a single classifier. Both are provided with the same input features and the only difference between them is the model structure. In the first part, the comparison is done in the context of assigning pos tags to unknown wordsthose words which were not presented during training and therefore the learner has no baseline knowledge about possible POS they may take. This experiment emphasizes the advantage of using the SM during evaluation in terms of accuracy. The second part is done in the context of pos tagging of known words. It compares processing time as well as accuracy of assigning pos tags to known words (that is, the classifier utilizes knowledge about possible POS tags the target word may take). This part exhibits a large reduction in training time using the SM over the more common one-vs-all method while the accuracy of the two methods is almost identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example: POS Tagging",
"sec_num": "4"
},
{
"text": "Two types of featureslexical features and contextual features may be used when learning how to tag words for pos. Contextual features capture the information in the surrounding context and the word lemma while the lexical features capture the morphology of the unknown word. 3 Several is-sues make the pos tagging problem a natural problem to study within the SM. (i) A relatively large number of classes (about 50). (ii) A natural decomposition of the feature space to contextual and lexical features. (iii) Lexical knowledge (for unknown words) and the word lemma (for known words) provide, w.h.p, one sided error (Mikheev, 1997) .",
"cite_spans": [
{
"start": 616,
"end": 631,
"text": "(Mikheev, 1997)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Example: POS Tagging",
"sec_num": "4"
},
{
"text": "The domain in our experiment is defined using the following set of features, all of which are computed relative to the target word P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Tagger Classifiers",
"sec_num": "4.1"
},
{
"text": "Contextual Features (as in (Brill, 1995; Roth and Zelenko, 1998) is an unknown word, the baseline is proper singular noun \"NNP\" for capitalized words and common singular noun \"NN\" otherwise. (This feature is introduced only in some of the experiments.) 9.The target word P . ",
"cite_spans": [
{
"start": 27,
"end": 40,
"text": "(Brill, 1995;",
"ref_id": "BIBREF1"
},
{
"start": 41,
"end": 64,
"text": "Roth and Zelenko, 1998)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Tagger Classifiers",
"sec_num": "4.1"
},
{
"text": "Let v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "! @ t ! \" .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "The SM is compared with a single classifier -either ' or . Notice that is a single classifier that uses the same information as used by the SM. illustrates the SM that was used in the experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "All the classifiers in the sequential model, as well as the single classifier, use the SNoW learning architecture (Roth, 1998) with the Winnow update rule. SNoW (Sparse Network of Winnows) is a multi-class classifier that is specifically tailored for learning in domains in which the potential number of features taking part in decisions is very large, but in which decisions actually depend on a small number of those features. SNoW works by learning a sparse network of linear functions over a pre-defined or incrementally learned feature space. SNoW has already been used successfully on several tasks in natural language processing (Roth, 1998; Roth and Zelenko, 1998; Golding and Roth, 1999; Punyakanok and Roth, 2001) .",
"cite_spans": [
{
"start": 114,
"end": 126,
"text": "(Roth, 1998)",
"ref_id": "BIBREF24"
},
{
"start": 636,
"end": 648,
"text": "(Roth, 1998;",
"ref_id": "BIBREF24"
},
{
"start": 649,
"end": 672,
"text": "Roth and Zelenko, 1998;",
"ref_id": "BIBREF23"
},
{
"start": 673,
"end": 696,
"text": "Golding and Roth, 1999;",
"ref_id": "BIBREF8"
},
{
"start": 697,
"end": 723,
"text": "Punyakanok and Roth, 2001)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "Specifically, for each class label SNoW learns a function' \u00a9 \u00a5 u ! that maps a feature based representation of the input instance to a number \" n \u00a5 u ! 5 which can be interpreted as the prob-ability of & being the class label corresponding to . At prediction time, given Q , SNoW outputs",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "E G t 9 \u00a1 n G % q 7 l i # X )",
"eq_num": "(1)"
}
],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "All functions -in our case, D 9 \u00a5 target nodes are used, one for each pos tag -reside over the same feature space, but can be thought of as autonomous functions (networks). That is, a given example is treated autonomously by each target subnetwork; an example labeled is considered as a positive example for the function learned for and as a negative example for the rest of the functions (target nodes). The network is sparse in that a target node need not be connected to all nodes in the input layer. For example, it is not connected to input nodes (features) that were never active with it in the same sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "Although SNoW is used with D 9 \u00a5 different targets, the SM utilizes by determining the confusion set dynamically. That is, in evaluation (prediction), the maximum in Eq. 1 is taken only over the currently applicable confusion set. Moreover, in training, a given example is used to train only target networks that are in the currently applicable confusion set. That is, an example that is positive for target , is viewed as positive for this target (if it is in the confusion set), and as negative for the other targets in the confusion set. All other targets do not see this example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
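A minimal sketch of the dynamic restriction described above (illustrative names only; this is not SNoW's actual API): prediction takes the maximum of Eq. 1 over the current confusion set, and training presents an example only to targets inside that set.

```python
def predict(example, target_functions, confusion_set):
    """Eq. 1 restricted to the currently applicable confusion set.

    `target_functions` maps each label to its learned scoring function
    (standing in for SNoW's per-target networks; an assumed interface).
    """
    return max(confusion_set, key=lambda label: target_functions[label](example))

def training_views(example, true_label, confusion_set):
    """Yield (label, is_positive) pairs; targets outside the confusion set never see the example."""
    for label in confusion_set:
        yield label, (label == true_label)
```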
{
"text": "The case of POS tagging of known words is handled in a similar way. In this case, all possible tags are known. In training, we record, for each word P , all pos tags with which it was tagged in the training corpus. During evaluation, whenever word P occurs, it is tagged with one of these pos tags. That is, in evaluation, the confusion set consists only of those tags observed with the target word in training, and the maximum in Eq. 1 is taken only over these. This is always the case when using' (or ), both in the SM and as a single classifier. In training, though, for the sake of this experiment, we treat\u00a8 ( ) differently depending on whether it is trained for the SM or as a single classifier. When trained as a single classifier (e.g., (Roth and Zelenko, 1998) ",
"cite_spans": [
{
"start": 745,
"end": 769,
"text": "(Roth and Zelenko, 1998)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "), '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
{
"text": "uses each -tagged example as a positive example for and a negative example for all other tags. On the other hand, the SM classifier is trained on a -tagged example of word , by using it as a positive example for and a negative example only for the effective confusion set. That is, those pos tags which have been observed as tags of in the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Features:",
"sec_num": null
},
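The known-word setup above amounts to a tag dictionary collected from the training corpus; a hedged sketch (the corpus format and names are assumptions of ours, not the paper's code):

```python
from collections import defaultdict

def build_tag_dictionary(tagged_corpus):
    """Record, for every training word, the set of POS tags it occurred with."""
    tag_dict = defaultdict(set)
    for word, tag in tagged_corpus:   # assumed format: iterable of (word, tag) pairs
        tag_dict[word].add(tag)
    return tag_dict

def confusion_set_for(word, tag_dict, all_tags):
    """Known words are restricted to their observed tags; unknown words keep all tags."""
    return tag_dict.get(word, set(all_tags))
```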
{
"text": "The data for the experiments was extracted from the Penn Treebank WSJ and Brown corpora. The training corpus consists of \u00a4 p S \u00a5 9 \u00a5 \u00a5 9 \u00a5 \u00a7 \u00a5 words. The test corpus consists of \u00a4 \u00a6 \u00a2 \u00a7 \u00a5 \u00a5 \u00a7 \u00a5 9 \u00a5 words of which D ! u \u00a4 are unknown words (that is, they do not occur in the training corpus. (Numbers (the pos \"CD\"), are not included among the unknown words).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4.2"
},
{
"text": "' + baseline baseline (accuracy in percent) . is a classifier that uses only contextual features,' + baseline is the same classifier with the addition of the baseline feature (\"NNP\" or \"NN\"). Table 1 summarizes the results of the experiments with a single classifier that uses only contextual features. Notice that adding the baseline POS significantly improves the results but not much is gained over the baseline. The reason is that the baseline feature is almost perfect ( ) S \u00a5 ) in the training data. For that reason, in the next experiments we do not use the baseline at all, since it could hide the phenomenon addressed. (In practice, one might want to use a more sophisticated baseline, as in (Dermatas and Kokkinakis, 1995) .) Table 2 summarizes the results of the main experiment in this part. It exhibits the advantage of using the SM (columns 3,4) over a single classifier that makes use of the same features set (column 2). In both cases, all features are used. In\u00ef , a classifier is trained on input that consists of all these features and chooses a label from among all class labels. In E G F k ( 8 the same features are used as input, but different classifiers are used sequentially -using only part of the feature space and restricting the set of possible outcomes available to the next classifier in the sequence -\u00a8P chooses only from among those left as candidates.",
"cite_spans": [
{
"start": 701,
"end": 732,
"text": "(Dermatas and Kokkinakis, 1995)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 22,
"end": 43,
"text": "(accuracy in percent)",
"ref_id": null
},
{
"start": 192,
"end": 199,
"text": "Table 1",
"ref_id": "TABREF4"
},
{
"start": 736,
"end": 743,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "'",
"sec_num": null
},
{
"text": "\u00a2 ) \u00a4 \u00a3 \u00a3 ! \u00a6 ) \u00a4 \u00a2 \u00a3 \u00a7 \u00a5 ) \u00a4 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'",
"sec_num": null
},
{
"text": "' i SM(\u1e8c( 0 8 ) SM( \u00a7( \" 8 ) \u00a2 ) \u00a4 \u00a3 D \u00a6 \u00a3 ) 1 ! \u00a3 \u00a7 D ) \u00a4 \u00a6 \u00a6 9 ) \u00a4 \u00a5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'",
"sec_num": null
},
{
"text": "It is interesting to note that further improvement can be achieved, as shown in the right most column. Given that the last stage in E G F k X ( 0 8 i is identical to the single classifier , this shows the contribution of the filtering done in the first two stages using\u00a8( and\u00a88 . In addition, this result shows that the input spaces of the classifiers need not be disjoint.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "'",
"sec_num": null
},
{
"text": "Essentially everyone who is learning a POS tagger for known words makes use of a \"sequential model\" assumption during evaluation -by restricting the set of candidates, as discussed in Sec 4.1). The focus of this experiment is thus to investigate the advantage of the SM during training. In this case, a single (one-vs-all) classifier trains each tag against all other tags, while a SM classifier trains it only against the effective confusion set (Sec 4.1). Table 3 compares the performance of the\u00a8 classifier trained using in a one-vs-all method to the same classifier trained the SM way. The results are only for known words and the results of Brill's tagger (Brill, 1995) are presented for comparison. Table 3 : POS Tagging of known words using contextual features (accuracy in percent). one-vs-all denotes training where example serves as positive example to the true tag and as negative example to all the other tags. SM| 2 \u00a9 P$ denotes training where example serves as positive example to the true tag and as a negative example only to a restricted set of tags in based on a previous classifier -here, a simple baseline restriction.",
"cite_spans": [
{
"start": 661,
"end": 674,
"text": "(Brill, 1995)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 458,
"end": 465,
"text": "Table 3",
"ref_id": null
},
{
"start": 705,
"end": 712,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS Tagging of Known Words",
"sec_num": null
},
{
"text": "one-vs-all SM| 2 \u00a9 P$ Brill \u00a3 ) \u00a4 \u00a2 9 \u00a2 \u00a3 ) \u00a4 \u00a2 9 \u00a3 \u00a3 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging of Known Words",
"sec_num": null
},
{
"text": "While, in principle, (see Sec 5) the SM should do better (an never worse) than the one-vs-all classifier, we believe that in this case SM does not have any performance advantages since the classifiers work in a very high dimensional feature space which allows the one-vs-all classifier to find a separating hyperplane that separates the positive examples many different kinds of negative examples (even irrelevant ones).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging of Known Words",
"sec_num": null
},
{
"text": "However, the key advantage of the SM in this case is the significant decrease in computation time, both in training and evaluation. Table 4 shows that in the pos tagging task, training using the SM is 6 times faster than with a one-vs-all method and 3000 faster than Brill's learner. In addition, the evaluation time of our tagger was about twice faster than that of Brill's tagger. Table 4 : Processing time for POS tagging of known words using contextual features (In CPU seconds). Train: training time over ! \" \u00a5 C sentences. Brill's learner was interrupted after 12 days of training (default threshold was used). Test: average number of seconds to evaluate a single sentence. All runs were done on the same machine.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 139,
"text": "Table 4",
"ref_id": null
},
{
"start": 383,
"end": 390,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "POS Tagging of Known Words",
"sec_num": null
},
{
"text": "one-vs-all SM| 1 p \u00a9 P$ Brill Train ! 0 \u00a2 X \u00a6 \u00a7 \u00a6 ) \u00a4 ! \" ) \u00aa D u H ! \" \u00a5 \u00a6 Test \u00a4 ) I \u00ab I ! 0 \u00a5 d ) \u00a4 a \u00ab I ! 0 \u00a5 d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "POS Tagging of Known Words",
"sec_num": null
},
{
"text": "In this section, we discuss some of the theoretical aspects of the SM and explain some of its advantages. In particular, we discuss the following issues:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "1. Domain Decomposition: When the input feature space can be decomposed, we show that it is advantageous to do it and learn several classifiers, each on a smaller domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "2. Range Decomposition: Reducing confusion set size is advantageous both in training and testing the classifiers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "(a) Test: Smaller confusion set is shown to yield a smaller expected error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "(b) Training: Under the assumptions that a small confusion set (determined dynamically by previous classifiers in the sequence) is used when a classifier is evaluated, it is shown that training the classifiers this way is advantageous.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "3. Expressivity: SM can be viewed as a way to generate an expressive classifier by building on a number of simpler ones. We argue that the SM way of generating an expressive classifier has advantages over other ways of doing it, such as decision tree. (Sec 5.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "In addition, SM has several significant computational advantages both in training and in test, since it only needs to consider a subset of the set of candidate class labels. We will not discuss these issues in detail here.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Sequential model: Theoretical Justification",
"sec_num": "5"
},
{
"text": "Decomposing the domain is not an essential part of the SM; it is possible that all the classifiers used actually use the same domain. As we shown below, though, when a decomposition is possible, it is advantageous to use it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "It is shown in Eq. 2-7 that when it is possible to decompose the domain to subsets that are conditionally independent given the class label, the SM with classifiers defined on these subsets is as accurate as the optimal single classifier. (In fact, this is shown for a pure product of simpler classifiers; the SM uses a selective product.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "In the following we assume that ( \" ) 0 ) \" ) 0 s e provide a decomposition of the domain (Sec. 3) and that ( \" ) \" ) 0 ) \" p e \u00ac v ( 0 ) \" ) \" ) 0 p e . By conditional independence we mean that , and \u00dd the hypothesis produced by \u00d7 . Then, for all \u00dd \u00db \u00c7 % \u00d2 \u00dd ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
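The precise statement of the conditional independence assumption is garbled in this parse; its standard form, which is what the derivation referred to in Eqs. 2-7 relies on (our reconstruction, not a quote from the paper), is:

```latex
P(x_1,\dots,x_n \mid c) \;=\; \prod_{i=1}^{n} P(x_i \mid c)
\quad\Longrightarrow\quad
P(c \mid x_1,\dots,x_n) \;\propto\; P(c)\,\prod_{i=1}^{n} P(x_i \mid c),
```

so a product of classifiers trained on the individual subsets recovers the decision of a single classifier trained on the full domain.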
{
"text": "g i h k \u00ae f i P \" ) 2 ) 1 ) 1 p \u00a7 h& \" G % j ' #",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "! 7 \u00e4 \u00e3 \u00c2 \u00a6 \u00e5 \u00da l \u00dc \u00dd n \u00cc ! 7 p \u00e3 \u00c2 \u00a6 \u00e5 \u00da l \u00dc \u00dd \u00b9 l i 3 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "In the limit, as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "7 ae q \u00e7 \u00d8 \u00c2 u \u00e8 i \u00e9 y { \u00ea \u00da \u00dc \u00dd l i 3 f \u00b9 l i p \u00de X \u00cc \u00d8 \u00c2 u \u00e8 i \u00e9 y { \u00ea \u00da l \u00dc \u00dd \u00b9 l i 3 f n \u00de X \u00bc )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "In particular this holds if \u00dd is a hypothesis produced by \u00d7 when trained on E , that is sampled according to \u00eb \u00e0 e ( \u00e2 8 \u00e2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the Domain",
"sec_num": "5.1"
},
{
"text": "The SM is a decision process that is conceptually similar to a decision tree processes (Rasoul and Landgrebe, 1991; Mitchell, 1997) , especially if one allows more general classifiers in the decision tree nodes. In this section we show that (i) the SM can express any DT. (ii) the SM is more compact than a decision tree even when the DT makes used of more expressive internal nodes (Murthy et al., 1994) . The next theorem shows that for a fixed set of functions (queries) over the input features, any binary decision tree can be represented as a SM. Extending the proof beyond binary decision trees is straight-forward. ",
"cite_spans": [
{
"start": 87,
"end": 115,
"text": "(Rasoul and Landgrebe, 1991;",
"ref_id": "BIBREF22"
},
{
"start": 116,
"end": 131,
"text": "Mitchell, 1997)",
"ref_id": "BIBREF19"
},
{
"start": 383,
"end": 404,
"text": "(Murthy et al., 1994)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "u 0 ) 1 ) 1 ) \u00a4 e",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "such that a classifier that is assigned to node \u00de is processed before any classifier that was assigned to any of the children of \u00de .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "4. Define each classifier\u00a8P that was assigned to node \u00de \u00ec to have an influence on the outcome iff node \u00de \u00f0 \u00ec lies in the path (\u00cf 9 3 \u00cf u ( \" \" ) 1 ) 2 ) 1 3 \u00cf\u00b0d ( ) from the root to the predicted class.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "5. Show that using steps 1-4, the predicted target of \u00ec and E are identical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "This completes that proof and shows that the resulting SM is of equivalent size to the original decision tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "We note that given a SM, it is also relatively easy (details omitted) to construct a decision tree that produces the same decisions as the final classifier of the SM. However, the simple construction results in a decision tree that is exponentially larger than the original SM. Theorem 4 shows that this difference in expressivity is inherent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Expressivity",
"sec_num": "5.3"
},
{
"text": "E and the number of internal nodes a in decision tree \u00ec . Let 7 be the set of classes in the output of E and also the maximum degree of the internal nodes in \u00ec . Denote by \u00f1 l \u00ec a j \u00f1 E the number of functions representable by \u00ec E respectively. Then, when 7 \u00f2 u A u , \u00f1 E is exponentially larger than \u00f1 l \u00ec a .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Theorem 4 Let be the number of classifiers in a sequential model",
"sec_num": null
},
{
"text": "The proof follows by counting the number of functions that can be represented using a decision tree with internal nodes (Wilf, 1994) , and the number of functions that can be represented using a sequential model on intermediate classifier. Given the exponential gap, it follows that one may need exponentially large decision trees to represent an equivalent predictor to an size SM.",
"cite_spans": [
{
"start": 120,
"end": 132,
"text": "(Wilf, 1994)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proof (Sketch):",
"sec_num": null
},
{
"text": "A wide range and a large number of classification tasks will have to be used in order to perform any high level natural language inference such as speech recognition, machine translation or question answering. Although in each instantiation the real conflict could be only to choose among a small set of candidates, the original set of candidates could be very large; deriving the small set of candidates that are relevant to the task at hand may not be immediate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "This paper addressed this problem by developing a general paradigm for multi-class classification that sequentially restricts the set of candidate classes to a small set, in a way that is driven by the data observed. We have described the method and provided some justifications for its advantages, especially in NLP-like domains. Preliminary experiments also show promise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Several issues are still missing from this work. In our experimental study the decomposition of the feature space was done manually; it would be nice to develop methods to do this automatically. Better understanding of methods for thresholding the probability distributions that the classifiers output, as well as principled ways to order them are also among the future directions of this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use the terms class and target interchangeably.3 Lexical features are used only when tagging unknown words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "where \u00b0 i s the input for the \u00b2 th classifier.(3)and therefore can be treated as a constant. Eq. 5 is derived by applying the independence assumption. Eq. 6 is derived by using the Bayes rule for each term f \u00b9 g & X h P separately.We note that although the conditional independence assumption is a strong one, it is a reasonable assumption in many NLP applications; in particular, when cross modality information is used, this assumption typically holds for decomposition that is done across modalities. For example, in POS tagging, lexical information is often conditionally independent of contextual information, given the true POS. (E.g., assume that word is a gerund; then the context is independent of the \"ing\" word ending.)In addition, decomposing the domain has significant advantages from the learning theory point of view (Roth, 1999) . Learning over domains of lower dimensionality implies better generalization bounds or, equivalently, more accurate classifiers for a fixed size training set.",
"cite_spans": [
{
"start": 832,
"end": 844,
"text": "(Roth, 1999)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "The SM attempts to reduce the size of the candidates set. We justify this by considering two cases: (i) Test: we will argue that prediction among a smaller set of classes has advantages over predicting among a large set of classes; (ii) Training: we will argue that it is advantageous to ignore irrelevant examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the range",
"sec_num": "5.2"
},
{
"text": "The following discussion formalizes the intuition that a smaller confusion set in preferred. Let\u00a1\u00a9 \u00bd wbe the true target function and f \u00b9 g & \u00a7 h n the probability assigned by the final classifier to class & \u00a7 \u00be given example \u00bf p . Assuming that the prediction is done, naturally, by choosing the most likely class label, we see that the expected error when using a confusion set of size \u00b2 is:Now we have: Claim 1 shows that reducing the size of the confusion set can only help; this holds under the assumption that the true class label is not eliminated from consideration by down stream classifiers, that is, under the one-sided error assumption. Moreover, it is easy to see that the proof of Claim 1 allows us to relax the one sided error assumption and assume instead that the previous classifiers err with a probability which is smaller than:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decomposing the range during Test",
"sec_num": "5.2.1"
},
{
"text": "We will assume now, as suggested by the previous discussion, that in the evaluation stage the smallest possible set of candidates will be considered by each classifier. Based on this assumption, Claim 2 shows that training this way is advantageous. That is, that utilizing the SM in training yields a better classifier. Let \u00d7 be a learning algorithm that is trained to minimize:where is an example, \u00dc r \u00df \u00a7 @ ! \u00a6 \u00d4 m ! ' # is the true class, \u00dd is the hypothesis, \u00da is a loss function and f \u00b9 l i is the probability of seeing example when s \u00e0 e (see (Allwein et al., 2000) ). (Notice that in this section we are using general loss function \u00da; we could use, in particular, binary loss function used in Sec 5.2.) We phrase and prove the next claim, w.l.o.g, the case of \u00a4 vs. class labels. H % H u & ( \" & 8 ",
"cite_spans": [
{
"start": 549,
"end": 571,
"text": "(Allwein et al., 2000)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 787,
"end": 810,
"text": "H % H u & ( \" & 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Decomposing the range during training",
"sec_num": "5.2.2"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reducing multiclass to binary: a unifying approach for margin classifiers",
"authors": [
{
"first": "L",
"middle": [
"E"
],
"last": "Allwein",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 17th International Workshop on Machine Learning",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. E. Allwein, R. E. Schapire, and Y. Singer. 2000. Reducing multiclass to binary: a unifying ap- proach for margin classifiers. In Proceedings of the 17th International Workshop on Machine Learning, pages 9-16.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "4",
"pages": "543--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics, 21(4):543-565.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Statistical Language Learning",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak. 1993. Statistical Language Learning. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Similaritybased models of word cooccurrence probabilities",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "43--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Dagan, L. Lee, and F. Pereira. 1999. Similarity- based models of word cooccurrence probabilities. Machine Learning, 34(1-3):43-69.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic stochastic tagging of natural language texts",
"authors": [
{
"first": "E",
"middle": [],
"last": "Dermatas",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kokkinakis",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "2",
"pages": "137--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Dermatas and G. Kokkinakis. 1995. Automatic stochastic tagging of natural language texts. Computational Linguistics, 21(2):137-164.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Solving multiclass learning problems via error-correcting output codes",
"authors": [
{
"first": "T",
"middle": [
"G"
],
"last": "Dietterich",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Bakiri",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Artificial Intelligence Research",
"volume": "2",
"issue": "",
"pages": "263--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. G. Dietterich and G. Bakiri. 1995. Solving multi- class learning problems via error-correcting out- put codes. Journal of Artificial Intelligence Re- search, 2:263-286.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A classification approach to word prediction",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Even-Zohar",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2000,
"venue": "NAALP 2000",
"volume": "",
"issue": "",
"pages": "124--131",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Even-Zohar and D. Roth. 2000. A classification approach to word prediction. In NAALP 2000, pages 124-131.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A method for disambiguating word senses in a large corpus",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1993,
"venue": "Computers and the Humanities",
"volume": "26",
"issue": "",
"pages": "415--439",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. A. Gale, K. W. Church, and D. Yarowsky. 1993. A method for disambiguating word senses in a large corpus. Computers and the Humanities, 26:415-439.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Winnow based approach to context-sensitive spelling correction",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Golding",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "Special Issue on Machine Learning and Natural Language",
"volume": "34",
"issue": "",
"pages": "107--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. R. Golding and D. Roth. 1999. A Winnow based approach to context-sensitive spelling correction. Machine Learning, 34(1-3):107-130. Special Is- sue on Machine Learning and Natural Language.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A Bayesian hybrid method for context-sensitive spelling correction",
"authors": [
{
"first": "A",
"middle": [
"R"
],
"last": "Golding",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 3rd workshop on very large corpora, ACL-95",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. R. Golding. 1995. A Bayesian hybrid method for context-sensitive spelling correction. In Pro- ceedings of the 3rd workshop on very large cor- pora, ACL-95.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Classification by pairwise coupling",
"authors": [
{
"first": "T",
"middle": [],
"last": "Hastie",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1998,
"venue": "Advances in Neural Information Processing Systems",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Hastie and R. Tibshirani. 1998. Classifica- tion by pairwise coupling. In Michael I. Jordan, Michael J. Kearns, and Sara A. Solla, editors, Advances in Neural Information Processing Sys- tems, volume 10. The MIT Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Training products of experts by minimizing contrastive divergence",
"authors": [
{
"first": "G",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Hinton. 2000. Training products of experts by minimizing contrastive divergence. Technical Report GCNU TR 2000-004, University College London.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Use of support vector machines for chunk identification",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudoh",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2000,
"venue": "CoNLL",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kudoh and Y. Matsumoto. 2000. Use of sup- port vector machines for chunk identification. In CoNLL, pages 142-147, Lisbon, Protugal.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributional similarity models: Clustering vs. nearest neighbors",
"authors": [
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL 99",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Lee and F. Pereira. 1999. Distributional similar- ity models: Clustering vs. nearest neighbors. In ACL 99, pages 33-40.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Measure of distributional similarity",
"authors": [
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1999,
"venue": "ACL 99",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Lee. 1999. Measure of distributional similarity. In ACL 99, pages 25-32.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Binary codes capable of correcting deletions, insertions and reversals",
"authors": [
{
"first": "V",
"middle": [
"I"
],
"last": "Levenshtein",
"suffix": ""
}
],
"year": 1966,
"venue": "In Sov. Phys-Dokl",
"volume": "10",
"issue": "",
"pages": "707--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V.I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. In Sov. Phys-Dokl, volume 10, pages 707-710.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Continuous speech recognition from phonetic transcription",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Levinson",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ljolje",
"suffix": ""
},
{
"first": "L",
"middle": [
"G"
],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "190--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S.E. Levinson, A. Ljolje, and L.G. Miller. 1990. Continuous speech recognition from phonetic transcription. In Speech and Natural Language Workshop, pages 190-199.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic rule acquisition for spelling correction",
"authors": [
{
"first": "L",
"middle": [],
"last": "Mangu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of the International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "734--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Mangu and E. Brill. 1997. Automatic rule ac- quisition for spelling correction. In Proc. of the International Conference on Machine Learning, pages 734-741.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Automatic rule induction for unknown word guessing",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mikheev",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistic",
"volume": "23",
"issue": "",
"pages": "405--423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Mikheev. 1997. Automatic rule induction for unknown word guessing. In Computational Lin- guistic, volume 23(3), pages 405-423.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Machine Learning",
"authors": [
{
"first": "T",
"middle": [
"M"
],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. M. Mitchell. 1997. Machine Learning. Mcgraw- Hill.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A system for induction of oblique decision trees",
"authors": [
{
"first": "S",
"middle": [],
"last": "Murthy",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Kasif",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Salzberg",
"suffix": ""
}
],
"year": 1994,
"venue": "Journal of Artificial Intelligence Research",
"volume": "2",
"issue": "1",
"pages": "1--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Murthy, S. Kasif, and S. Salzberg. 1994. A sys- tem for induction of oblique decision trees. Jour- nal of Artificial Intelligence Research, 2:1:1-33.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The use of classifiers in sequential inference",
"authors": [
{
"first": "V",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "NIPS-13; The 2000 Conference on Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Punyakanok and D. Roth. 2001. The use of clas- sifiers in sequential inference. In NIPS-13; The 2000 Conference on Advances in Neural Infor- mation Processing Systems.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "A survey of decision tree classifier methodology",
"authors": [
{
"first": "S",
"middle": [
"S"
],
"last": "Rasoul",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Landgrebe",
"suffix": ""
}
],
"year": 1991,
"venue": "IEEE Transactions on Systems, Man, and Cybernetics",
"volume": "21",
"issue": "3",
"pages": "660--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. S. Rasoul and D. A. Landgrebe. 1991. A sur- vey of decision tree classifier methodology. IEEE Transactions on Systems, Man, and Cybernetics, 21 (3):660-674.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Part of speech tagging using a network of linear separators",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Zelenko",
"suffix": ""
}
],
"year": 1998,
"venue": "The 17th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1136--1142",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Roth and D. Zelenko. 1998. Part of speech tagging using a network of linear separators. In COLING-ACL 98, The 17th International Conference on Computational Linguistics, pages 1136-1142.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning to resolve natural language ambiguities: A unified approach",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "806--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Roth. 1998. Learning to resolve natural lan- guage ambiguities: A unified approach. In Proc. National Conference on Artificial Intelligence, pages 806-813.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning in natural language",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. Int'l Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "898--904",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Roth. 1999. Learning in natural language. In Proc. Int'l Joint Conference on Artificial Intelli- gence, pages 898-904.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Handwritten digit recognition with a novel vision model that extracts linearly separable features",
"authors": [
{
"first": "L-W",
"middle": [],
"last": "Teow",
"suffix": ""
},
{
"first": "K-F",
"middle": [],
"last": "Loe",
"suffix": ""
}
],
"year": 2000,
"venue": "CVPR'00, The IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "76--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L-W. Teow and K-F. Loe. 2000. Handwritten digit recognition with a novel vision model that ex- tracts linearly separable features. In CVPR'00, The IEEE Conference on Computer Vision and Pattern Recognition, pages 76-81.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Decision lists for lexical ambiguity resolution: application to accent restoration in Spanish and French",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "88--95",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky. 1994. Decision lists for lexical ambi- guity resolution: application to accent restoration in Spanish and French. In Proc. of the Annual Meeting of the ACL, pages 88-95.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Resolving pp attachment ambiguities with memory based learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "Zavrel",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Veenstra",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Zavrel, W. Daelemans, and J. Veenstra. 1997. Resolving pp attachment ambiguities with mem- ory based learning. In Computational Natural Language Learning, Madrid, Spain, July.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "M",
"middle": [],
"last": "Zelle",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic pro- gramming. In Proc. National Conference on Ar- tificial Intelligence, pages 1050-1055.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 1: POS Tagging of Unknown Word using Contextual and Lexical features in a Sequential Model. The input for capitalized classifier has 2 values and therefore 2 ways to create confusion sets. There are at most 8 p ( k C 3 different inputs for the suffix classifier (26 character + 10 digits + 5 other symbols), therefore suffix may emit up to 8 ( k C confusion sets."
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table/>",
"num": null
},
"TABREF5": {
"type_str": "table",
"html": null,
"text": "",
"content": "<table><tr><td>based only on contextual features,</td><td>is</td></tr><tr><td colspan=\"2\">based on contextual and lexical features. SM(\u00a8P denotes that\u00a8 \u00a7 follows\u00a8P in the sequential model. 5 \u00a7 )</td></tr></table>",
"num": null
}
}
}
}