Dataset Viewer

| preprocessed_text (string, 43 to 152k chars) | preprocessed_summary (string, 55 to 3.02k chars) |
|---|---|
tnt - a statistical part-of-speech tagger trigrams'n'tags ( tnt ) is an efficient statistical part-of-speech tagger . contrary to claims found elsewhere in the literature , we argue that a tagger based on markov models performs at least as well as other current approaches , including the maximum entropy framework . a recent comparison has even shown that tnt performs significantly better for the tested corpora . we describe the basic model of tnt , the techniques used for smoothing and for handling unknown words . furthermore , we present evaluations on two corpora . a large number of current language processing systems use a part-of-speech tagger for pre-processing . the tagger assigns a ( unique or ambiguous ) part-of-speech tag to each token in the input and passes its output to the next processing level , usually a parser . furthermore , there is a large interest in part-of-speech tagging for corpus annotation projects , which create valuable linguistic resources by a combination of automatic processing and human correction . for both applications , a tagger with the highest possible accuracy is required . the debate about which paradigm solves the part-of-speech tagging problem best is not finished . recent comparisons of approaches that can be trained on corpora ( van halteren et al. , 1998 ; volk and schneider , 1998 ) have shown that in most cases statistical approaches ( cutting et al. , 1992 ; schmid , 1995 ; ratnaparkhi , 1996 ) yield better results than finite-state , rule-based , or memory-based taggers ( brill , 1993 ; daelemans et al. , 1996 ) . they are only surpassed by combinations of different systems , forming a " voting tagger " . among the statistical approaches , the maximum entropy framework has a very strong position . nevertheless , a recent independent comparison of 7 taggers ( zavrel and daelemans , 1999 ) has shown that another approach even works better : markov models combined with a good smoothing technique and with handling of unknown words . this tagger , tnt , not only yielded the highest accuracy , it also was the fastest both in training and tagging . the tagger comparison was organized as a " black-box test " : set the same task to every tagger and compare the outcomes . this paper describes the models and techniques used by tnt together with the implementation . the reader will be surprised how simple the underlying model is . the result of the tagger comparison seems to support the maxim " the simplest is the best " . however , in this paper we clarify a number of details that are omitted in major previous publications concerning tagging with markov models . as two examples , ( rabiner , 1989 ) and ( charniak et al. , 1993 ) give good overviews of the techniques and equations used for markov models and part-of-speech tagging , but they are not very explicit in the details that are needed for their application . we argue that it is not only the choice of the general model that determines the result of the tagger but also the various " small " decisions on alternatives . the aim of this paper is to give a detailed account of the techniques used in tnt . additionally , we present results of the tagger on the negra corpus ( brants et al. , 1999 ) and the penn treebank ( marcus et al. , 1993 ) . the penn treebank results reported here for the markov model approach are at least equivalent to those reported for the maximum entropy approach in ( ratnaparkhi , 1996 ) . 
for a comparison to other taggers , the reader is referred to ( zavrel and daelemans , 1999 ) . tnt uses second order markov models for part-of-speech tagging . the states of the model represent tags , outputs represent the words . transition probabilities depend on the states , thus pairs of tags . output probabilities only depend on the most recent category . to be explicit , we calculate for a given sequence of words $w_1 \ldots w_T$ of length T : $$\operatorname*{argmax}_{t_1 \ldots t_T} \left[ \prod_{i=1}^{T} P(t_i \mid t_{i-1}, t_{i-2}) \, P(w_i \mid t_i) \right] P(t_{T+1} \mid t_T)$$ t1 ... tT are elements of the tagset , the additional tags t−1 , t0 , and tT+1 are beginning-of-sequence and end-of-sequence markers . using these additional tags , even if they stem from rudimentary processing of punctuation marks , slightly improves tagging results . this is different from formulas presented in other publications , which just stop with a " loose end " at the last word . if sentence boundaries are not marked in the input , tnt adds these tags if it encounters one of [ . ! ? ; ] as a token . transition and output probabilities are estimated from a tagged corpus . as a first step , we use the maximum likelihood probabilities $\hat{P}$ which are derived from the relative frequencies : $\hat{P}(t_3) = f(t_3)/N$ for unigrams , $\hat{P}(t_3 \mid t_2) = f(t_2, t_3)/f(t_2)$ for bigrams , $\hat{P}(t_3 \mid t_1, t_2) = f(t_1, t_2, t_3)/f(t_1, t_2)$ for trigrams , and $\hat{P}(w_3 \mid t_3) = f(w_3, t_3)/f(t_3)$ for lexical probabilities , for all t1 , t2 , t3 in the tagset and w3 in the lexicon . n is the total number of tokens in the training corpus . we define a maximum likelihood probability to be zero if the corresponding numerators and denominators are zero . as a second step , contextual frequencies are smoothed and lexical frequencies are completed by handling words that are not in the lexicon ( see below ) . trigram probabilities generated from a corpus usually can not directly be used because of the sparse-data problem . this means that there are not enough instances for each trigram to reliably estimate the probability . furthermore , setting a probability to zero because the corresponding trigram never occurred in the corpus has an undesired effect . it causes the probability of a complete sequence to be set to zero if its use is necessary for a new text sequence , thus making it impossible to rank different sequences containing a zero probability . the smoothing paradigm that delivers the best results in tnt is linear interpolation of unigrams , bigrams , and trigrams . therefore , we estimate a trigram probability as follows : $P(t_3 \mid t_1, t_2) = \lambda_1 \hat{P}(t_3) + \lambda_2 \hat{P}(t_3 \mid t_2) + \lambda_3 \hat{P}(t_3 \mid t_1, t_2)$ . $\hat{P}$ are maximum likelihood estimates of the probabilities , and λ1 + λ2 + λ3 = 1 , so p again represents a probability distribution . we use the context-independent variant of linear interpolation , i.e. , the values of the λs do not depend on the particular trigram . contrary to intuition , this yields better results than the context-dependent variant . due to sparse-data problems , one can not estimate a different set of λs for each trigram . therefore , it is common practice to group trigrams by frequency and estimate tied sets of λs . however , we are not aware of any publication that has investigated frequency groupings for linear interpolation in part-of-speech tagging . all groupings that we have tested yielded at most equivalent results to context-independent linear interpolation . some groupings even yielded worse results . the tested groupings included a ) one set of λs for each frequency value and b ) two classes ( low and high frequency ) on the two ends of the scale , as well as several groupings in between and several settings for partitioning the classes . the values of λ1 , λ2 , and λ3 are estimated by deleted interpolation . 
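to make the smoothing step concrete , the following is a minimal sketch of the context-independent linear interpolation described above , assuming n-gram counts collected from a tagged corpus ; the function and variable names are illustrative , not tnt 's own .

```python
from collections import Counter

def interpolated_trigram_prob(t1, t2, t3, uni, bi, tri, n_tokens, lambdas):
    """Linear interpolation of uni-, bi-, and trigram maximum likelihood
    estimates: P(t3 | t1, t2) = l1*P(t3) + l2*P(t3|t2) + l3*P(t3|t1,t2).
    `uni`, `bi`, `tri` are Counters over tag tuples; l1 + l2 + l3 = 1,
    so the result is again a probability distribution."""
    l1, l2, l3 = lambdas
    p_uni = uni[(t3,)] / n_tokens if n_tokens else 0.0
    p_bi = bi[(t2, t3)] / uni[(t2,)] if uni[(t2,)] else 0.0
    p_tri = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
    return l1 * p_uni + l2 * p_bi + l3 * p_tri
```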
this technique successively removes each trigram from the training corpus and estimates best values for the λs from all other n-grams in the corpus . given the frequency counts for uni- , bi- , and trigrams , the weights can be very efficiently determined with a processing time linear in the number of different trigrams . the algorithm is given in figure 1 : set λ1 = λ2 = λ3 = 0 ; for each trigram t1 , t2 , t3 with f ( t1 , t2 , t3 ) > 0 , depending on the maximum of the three deleted relative frequencies $\frac{f(t_1,t_2,t_3)-1}{f(t_1,t_2)-1}$ , $\frac{f(t_2,t_3)-1}{f(t_2)-1}$ , and $\frac{f(t_3)-1}{N-1}$ , increment λ3 , λ2 , or λ1 , respectively , by f ( t1 , t2 , t3 ) ; finally , normalize the λs . note that subtracting 1 means taking unseen data into account . without this subtraction the model would overfit the training data and would generally yield worse results . currently , the method of handling unknown words that seems to work best for inflected languages is a suffix analysis as proposed in ( samuelsson , 1993 ) . tag probabilities are set according to the word 's ending . the suffix is a strong predictor for word classes , e.g. , words in the wall street journal part of the penn treebank ending in able are adjectives ( jj ) in 98 % of the cases ( e.g . fashionable , variable ) , the remaining 2 % are nouns ( e.g . cable , variable ) . the probability distribution for a particular suffix is generated from all words in the training set that share the same suffix of some predefined maximum length . the term suffix as used here means " final sequence of characters of a word " , which is not necessarily a linguistically meaningful suffix . probabilities are smoothed by successive abstraction . this calculates the probability of a tag t given the last m letters li of an n-letter word : $P(t \mid l_{n-m+1}, \ldots, l_n)$ . the sequence of increasingly more general contexts omits more and more characters of the suffix , such that $P(t \mid l_{n-m+2}, \ldots, l_n)$ , $P(t \mid l_{n-m+3}, \ldots, l_n)$ , ... , $P(t)$ are used for smoothing . the recursion formula is $$P(t \mid l_{n-i+1}, \ldots, l_n) = \frac{\hat{P}(t \mid l_{n-i+1}, \ldots, l_n) + \theta_i \, P(t \mid l_{n-i+2}, \ldots, l_n)}{1 + \theta_i}$$ for i = m ... 0 , using the maximum likelihood estimates $\hat{P}$ from frequencies in the lexicon , weights θi , and the initialization $P(t) = \hat{P}(t)$ . for the markov model , we need the inverse conditional probabilities $P(l_{n-m+1}, \ldots, l_n \mid t)$ which are obtained by bayesian inversion . a theoretically motivated argument uses the standard deviation of the maximum likelihood probabilities for the weights θi ( samuelsson , 1993 ) . this leaves room for interpretation . 1 ) we use the longest suffix that we can find in the training set ( i.e. , for which the frequency is greater than or equal to 1 ) , but at most 10 characters . this is an empirically determined choice . 2 ) we use a context-independent approach for θi , as we did for the contextual weights λ . it turned out to be a good choice to set all θi to the standard deviation of the unconditioned maximum likelihood probabilities of the tags in the training corpus , i.e. , we set $\theta_i = \sqrt{\frac{1}{s-1} \sum_{j=1}^{s} ( \hat{P}(t_j) - \bar{P} )^2}$ for all i = 0 ... m − 1 , using a tagset of s tags and the average $\bar{P} = \frac{1}{s} \sum_{j=1}^{s} \hat{P}(t_j)$ . this usually yields values in the range 0.03 ... 0.10 . 3 ) we use different estimates for uppercase and lowercase words , i.e. , we maintain two different suffix tries depending on the capitalization of the word . this information improves the tagging results . 4 ) another freedom concerns the choice of the words in the lexicon that should be used for suffix handling . should we use all words , or are some of them better suited than others ? accepting that unknown words are most probably infrequent , one can argue that using suffixes of infrequent words in the lexicon is a better approximation for unknown words than using suffixes of frequent words . 
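the figure 1 procedure transcribed above can be sketched directly ; this is our reading of the deleted interpolation algorithm , with illustrative names and the same count conventions as the interpolation sketch earlier .

```python
def deleted_interpolation(uni, bi, tri, n_tokens):
    """Estimate (l1, l2, l3) as in figure 1: each trigram is notionally
    removed before comparing the three relative-frequency candidates;
    subtracting 1 is what takes unseen data into account."""
    l1 = l2 = l3 = 0.0
    for (t1, t2, t3), f in tri.items():
        # deleted relative frequencies; a denominator of zero yields 0
        c3 = (f - 1) / (bi[(t1, t2)] - 1) if bi[(t1, t2)] > 1 else 0.0
        c2 = (bi[(t2, t3)] - 1) / (uni[(t2,)] - 1) if uni[(t2,)] > 1 else 0.0
        c1 = (uni[(t3,)] - 1) / (n_tokens - 1) if n_tokens > 1 else 0.0
        # credit the full trigram count to the winning candidate
        best = max(c1, c2, c3)
        if best == c3:
            l3 += f
        elif best == c2:
            l2 += f
        else:
            l1 += f
    total = l1 + l2 + l3
    return l1 / total, l2 / total, l3 / total
```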
therefore , we restrict the procedure of suffix handling to words with a frequency smaller than or equal to some threshold value . empirically , 10 turned out to be a good choice for this threshold . additional information that turned out to be useful for the disambiguation process for several corpora and tagsets is capitalization information . tags are usually not informative about capitalization , but probability distributions of tags around capitalized words are different from those not capitalized . the effect is larger for english , which only capitalizes proper names , and smaller for german , which capitalizes all nouns . we use flags ci that are true if wi is a capitalized word and false otherwise . these flags are added to the contextual probability distributions : instead of p ( t3 | t1 , t2 ) we use p ( t3 , c3 | t1 , c1 , t2 , c2 ) , and equations ( 3 ) to ( 5 ) are updated accordingly . this is equivalent to doubling the size of the tagset and using different tags depending on capitalization . the processing time of the viterbi algorithm ( rabiner , 1989 ) can be reduced by introducing a beam search . each state that receives a δ value smaller than the largest δ divided by some threshold value θ is excluded from further processing . while the viterbi algorithm is guaranteed to find the sequence of states with the highest probability , this is no longer true when beam search is added . nevertheless , for practical purposes and the right choice of θ , there is virtually no difference between the algorithm with and without a beam . empirically , a value of θ = 1000 turned out to approximately double the speed of the tagger without affecting the accuracy . the tagger currently tags between 30,000 and 60,000 tokens per second ( including file i/o ) on a pentium 500 running linux . the speed mainly depends on the percentage of unknown words and on the average ambiguity rate . we evaluate the tagger 's performance under several aspects . first of all , we determine the tagging accuracy averaged over ten iterations . the overall accuracy , as well as separate accuracies for known and unknown words , are measured . second , learning curves are presented that indicate the performance when using training corpora of different sizes , starting with as few as 1,000 tokens and ranging to the size of the entire corpus ( minus the test set ) . an important characteristic of statistical taggers is that they not only assign tags to words but also probabilities in order to rank different assignments . we distinguish reliable from unreliable assignments by the quotient of the best and second best assignments ( by definition , this quotient is ∞ if there is only one possible tag for a given word ) . all assignments for which this quotient is larger than some threshold are regarded as reliable , the others as unreliable . as we will see below , accuracies for reliable assignments are much higher . the tests are performed on partitions of the corpora that use 90 % as training set and 10 % as test set , so that the test data is guaranteed to be unseen during training . each result is obtained by repeating the experiment 10 times with different partitions and averaging the single outcomes . in all experiments , contiguous test sets are used . the alternative is a round-robin procedure that puts every 10th sentence into the test set . we argue that contiguous test sets yield more realistic results because completely unseen articles are tagged . using the round-robin procedure , parts of an article are already seen , which significantly reduces the percentage of unknown words . 
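a minimal sketch of the beam-pruned viterbi search described above ; log_p_trans and log_p_out are assumed callables returning smoothed log probabilities ( a hypothetical interface , not tnt 's actual api ) , and the end-of-sequence transition is omitted for brevity .

```python
import math

def viterbi_beam(words, tags, log_p_trans, log_p_out, theta=1000.0):
    """Viterbi over second-order states (t_prev, t_cur). After each word,
    states whose score falls more than log(theta) below the current best
    are dropped; theta = 1000 roughly doubled speed in the text."""
    log_theta = math.log(theta)
    # state -> (log score, tag sequence); sequences kept for simplicity
    beam = {("<s>", "<s>"): (0.0, [])}
    for w in words:
        nxt = {}
        for (t1, t2), (score, seq) in beam.items():
            for t3 in tags:
                s = score + log_p_trans(t1, t2, t3) + log_p_out(w, t3)
                if (t2, t3) not in nxt or s > nxt[(t2, t3)][0]:
                    nxt[(t2, t3)] = (s, seq + [t3])
        best = max(s for s, _ in nxt.values())
        beam = {k: v for k, v in nxt.items() if v[0] >= best - log_theta}
    return max(beam.values(), key=lambda v: v[0])[1]
```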
therefore , we expect even higher results when testing on every 10th sentence instead of a contiguous set of 10 % . in the following , accuracy denotes the number of correctly assigned tags divided by the number of tokens in the corpus processed . the tagger is allowed to assign exactly one tag to each token . we distinguish the overall accuracy , taking into account all tokens in the test corpus , and separate accuracies for known and unknown tokens . the latter are interesting , since usually unknown tokens are much more difficult to process than known tokens , for which a list of valid tags can be found in the lexicon . the german negra corpus consists of 20,000 sentences ( 355,000 tokens ) of newspaper texts ( frankfurter rundschau ) that are annotated with parts-of-speech and predicate-argument structures ( skut et al. , 1997 ) . it was developed at the saarland university in saarbrücken . part of it was tagged at the ims stuttgart . this evaluation only uses the part-of-speech annotation and ignores structural annotations . tagging accuracies for the negra corpus are shown in table 2 . figure 3 shows the learning curve of the tagger , i.e. , the accuracy depending on the amount of training data . training length is the number of tokens used for training . each training length was tested ten times , training and test sets were randomly chosen and disjoint , results were averaged . the training length is given on a logarithmic scale . it is remarkable that tagging accuracy for known words is very high even for very small training corpora . this means that we have a good chance of getting the right tag if a word is seen at least once during training . average percentages of unknown tokens are shown in the bottom line of each diagram . we exploit the fact that the tagger not only determines tags , but also assigns probabilities . if there is an alternative that has a probability " close to " that of the best assignment , this alternative can be viewed as almost equally well suited . the notion of " close to " is expressed by the distance of probabilities , and this in turn is expressed by the quotient of probabilities . so , the distance of the probabilities of a best tag tbest and an alternative tag talt is expressed by p ( tbest ) / p ( talt ) , which is some value greater than or equal to 1 , since the best tag assignment has the highest probability . figure 4 shows the accuracy when separating assignments with quotients larger and smaller than the threshold ( hence reliable and unreliable assignments ) . as expected , we find that accuracies for reliable assignments are much higher than for unreliable assignments . [ table 5 : part-of-speech tagging accuracy for the penn treebank . the table shows the percentage of unknown tokens , separate accuracies and standard deviations for known and unknown tokens , as well as the overall accuracy . ] this distinction is , e.g. , useful for annotation projects during the cleaning process , or during pre-processing , so the tagger can emit multiple tags if the best tag is classified as unreliable . we use the wall street journal as contained in the penn treebank for our experiments . 
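the reliability criterion described above is simple to state in code ; a minimal sketch , assuming each assignment carries its tag probabilities sorted in descending order ( an illustrative input format ) :

```python
def split_by_reliability(assignments, threshold):
    """Separate reliable from unreliable assignments by the quotient
    p(t_best) / p(t_alt) of the best and second-best tag probabilities.
    `assignments` is an iterable of (token, sorted_probs) pairs. A word
    with a single possible tag has an infinite quotient and therefore
    always counts as reliable."""
    reliable, unreliable = [], []
    for token, probs in assignments:
        quotient = float("inf") if len(probs) < 2 else probs[0] / probs[1]
        (reliable if quotient > threshold else unreliable).append(token)
    return reliable, unreliable
```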
the annotation consists of four parts : 1 ) a context-free structure augmented with traces to mark movement and discontinuous constituents , 2 ) phrasal categories that are annotated as node labels , 3 ) a small set of grammatical functions that are annotated as extensions to the node labels , and 4 ) part-of-speech tags ( marcus et al. , 1993 ) . this evaluation only uses the part-of-speech annotation . the wall street journal part of the penn treebank consists of approx . 50,000 sentences ( 1.2 million tokens ) . tagging accuracies for the penn treebank are shown in table 5 . figure 6 shows the learning curve of the tagger , i.e. , the accuracy depending on the amount of training data . training length is the number of tokens used for training . each training length was tested ten times . training and test sets were disjoint , results are averaged . the training length is given on a logarithmic scale . as for the negra corpus , tagging accuracy is very high for known tokens even with small amounts of training data . we exploit the fact that the tagger not only determines tags , but also assigns probabilities . figure 7 shows the accuracy when separating assignments with quotients larger and smaller than the threshold ( hence reliable and unreliable assignments ) . again , we find that accuracies for reliable assignments are much higher than for unreliable assignments . average part-of-speech tagging accuracy is between 96 % and 97 % , depending on language and tagset , which is at least on a par with state-of-the-art results found in the literature , possibly better . for the penn treebank , ( ratnaparkhi , 1996 ) reports an accuracy of 96.6 % using the maximum entropy approach ; our much simpler and therefore faster hmm approach delivers 96.7 % . this comparison needs to be re-examined , since we use a ten-fold cross-validation and averaging of results while ratnaparkhi only makes one test run . the accuracy for known tokens is significantly higher than for unknown tokens . for the german newspaper data , results are 8.7 % better when the word was seen before and therefore is in the lexicon than when it was not seen before ( 97.7 % vs. 89.0 % ) . accuracy for known tokens is high even with very small amounts of training data . as few as 1,000 tokens are sufficient to achieve 95 % - 96 % accuracy for them . it is important for the tagger to have seen a word at least once during training . stochastic taggers assign probabilities to tags . we exploit the probabilities to determine the reliability of assignments . for a subset that is determined during processing by the tagger , we achieve accuracy rates of over 99 % . the accuracy of the complement set is much lower . this information can , e.g. , be exploited in an annotation project to give an additional treatment to the unreliable assignments , or to pass selected ambiguities to a subsequent processing step . we have shown that a tagger based on markov models yields state-of-the-art results , despite contrary claims found in the literature . for example , the markov model tagger used in the comparison of ( van halteren et al. , 1998 ) yielded worse results than all other taggers . in our opinion , a reason for the wrong claim is that the basic algorithms leave several decisions to the implementor . 
the rather large amount of freedom was not handled in detail in previous publications : handling of start- and end-of-sequence , the exact smoothing technique , how to determine the weights for context probabilities , details on handling unknown words , and how to determine the weights for unknown words . note that the decisions we made yield good results for both the german and the english corpus . they do so for several other corpora as well . the architecture remains applicable to a large variety of languages . according to current tagger comparisons ( van halteren et al. , 1998 ; zavrel and daelemans , 1999 ) , and according to a comparison of the results presented here with those in ( ratnaparkhi , 1996 ) , the maximum entropy framework seems to be the only other approach yielding comparable results to the one presented here . it is a very interesting future research topic to determine the advantages of either of these approaches , to find the reason for their high accuracies , and to find a good combination of both . tnt is freely available to universities and related organizations for research purposes ( see http://www.coli.uni-sb.de/~thorsten/tnt ) . many thanks go to hans uszkoreit for his support during the development of tnt . most of the work on tnt was carried out while the author received a grant of the deutsche forschungsgemeinschaft in the graduiertenkolleg kognitionswissenschaft saarbrücken . large annotated corpora are the pre-requisite for developing and testing part-of-speech taggers , and they enable the generation of high-quality language models . therefore , i would like to thank all the people who took the effort to annotate the penn treebank , the susanne corpus , the stuttgarter referenzkorpus , the negra corpus , the verbmobil corpora , and several others . and , last but not least , i would like to thank the users of tnt who provided me with bug reports and valuable suggestions for improvements .
|
tnt - a statistical part-of-speech tagger trigrams'n'tags ( tnt ) is an efficient statistical part-of-speech tagger . contrary to claims found elsewhere in the literature , we argue that a tagger based on markov models performs at least as well as other current approaches , including the maximum entropy framework . a recent comparison has even shown that tnt performs significantly better for the tested corpora . we describe the basic model of tnt , the techniques used for smoothing and for handling unknown words . furthermore , we present evaluations on two corpora . we achieve the automated tagging of a syntactic-structure-based set of grammatical function tags including phrase-chunk and syntactic-role modifiers trained in supervised mode from a tree bank of german .
|
mildly non-projective dependency structures syntactic parsing requires a fine balance between expressivity and complexity , so that naturally occurring structures can be accurately parsed without compromising efficiency . in dependency-based parsing , several constraints have been proposed that restrict the class of permissible structures , such as projectivity , planarity , multi-planarity , well-nestedness , gap degree , and edge degree . while projectivity is generally taken to be too restrictive for natural language syntax , it is not clear which of the other proposals strikes the best balance between expressivity and complexity . in this paper , we review and compare the different constraints theoretically , and provide an experimental evaluation using data from two treebanks , investigating how large a proportion of the structures found in the treebanks are permitted under different constraints . the results indicate that a combination of the well-nestedness constraint and a parametric constraint on discontinuity gives a very good fit with the linguistic data . dependency-based representations have become increasingly popular in syntactic parsing , especially for languages that exhibit free or flexible word order , such as czech ( collins et al. , 1999 ) , bulgarian ( marinov and nivre , 2005 ) , and turkish ( eryiğit and oflazer , 2006 ) . many practical implementations of dependency parsing are restricted to projective structures , where the projection of a head word has to form a continuous substring of the sentence . while this constraint guarantees good parsing complexity , it is well-known that certain syntactic constructions can only be adequately represented by non-projective dependency structures , where the projection of a head can be discontinuous . this is especially relevant for languages with free or flexible word order . however , recent results in non-projective dependency parsing , especially using data-driven methods , indicate that most non-projective structures required for the analysis of natural language are very nearly projective , differing only minimally from the best projective approximation ( nivre and nilsson , 2005 ; hall and novák , 2005 ; mcdonald and pereira , 2006 ) . this raises the question of whether it is possible to characterize a class of mildly non-projective dependency structures that is rich enough to account for naturally occurring syntactic constructions , yet restricted enough to enable efficient parsing . in this paper , we review a number of proposals for classes of dependency structures that lie between strictly projective and completely unrestricted non-projective structures . these classes have in common that they can be characterized in terms of properties of the dependency structures themselves , rather than in terms of grammar formalisms that generate the structures . we compare the proposals from a theoretical point of view , and evaluate a subset of them empirically by testing their representational adequacy with respect to two dependency treebanks : the prague dependency treebank ( pdt ) ( hajič et al. , 2001 ) , and the danish dependency treebank ( ddt ) ( kromann , 2003 ) . the rest of the paper is structured as follows . in section 2 , we provide a formal definition of dependency structures as a special kind of directed graphs , and characterize the notion of projectivity . 
in section 3 , we define and compare five different constraints on mildly non-projective dependency structures that can be found in the literature : planarity , multiplanarity , well-nestedness , gap degree , and edge degree . in section 4 , we provide an experimental evaluation of the notions of planarity , well-nestedness , gap degree , and edge degree , by investigating how large a proportion of the dependency structures found in pdt and ddt are allowed under the different constraints . in section 5 , we present our conclusions and suggestions for further research . for the purposes of this paper , a dependency graph is a directed graph on the set of indices corresponding to the tokens of a sentence . we write [ n ] to refer to the set of positive integers up to and including n. throughout this paper , we use standard terminology and notation from graph theory to talk about dependency graphs . in particular , we refer to the elements of the set v as nodes , and to the elements of the set e as edges . we write i → j to mean that there is an edge from the node i to the node j ( i.e. , ( i , j ) ∈ e ) , and i →* j to mean that the node i dominates the node j , i.e. , that there is a ( possibly empty ) path from i to j . for a given node i , the set of nodes dominated by i is the yield of i . we use the notation π ( i ) to refer to the projection of i : the yield of i , arranged in ascending order . most of the literature on dependency grammar and dependency parsing does not allow arbitrary dependency graphs , but imposes certain structural constraints on them . in this paper , we restrict ourselves to dependency graphs that form forests . definition 2 a dependency forest is a dependency graph with two additional properties : it is acyclic , and every node has at most one incoming edge ( the indegree constraint ) . figure 1 shows a dependency forest taken from pdt . it has two roots : node 2 ( corresponding to the complementizer proto ) and node 8 ( corresponding to the final punctuation mark ) . some authors extend dependency forests by a special root node with position 0 , and add an edge ( 0 , i ) for every root node i of the remaining graph ( mcdonald et al. , 2005 ) . this ensures that the extended graph always is a tree . although such a definition can be useful , we do not follow it here , since it obscures the distinction between projectivity and planarity to be discussed in section 3 . in contrast to acyclicity and the indegree constraint , both of which impose restrictions on the dependency relation as such , the projectivity constraint concerns the interaction between the dependency relation and the positions of the nodes in the sentence : it says that the nodes in a subtree of a dependency graph must form an interval , where an interval ( with endpoints i and j ) is the set [ i , j ] := { k ∈ v | i ≤ k and k ≤ j } . definition 3 a dependency graph is projective , if the yields of its nodes are intervals . since projectivity requires each node to dominate a continuous substring of the sentence , it corresponds to a ban on discontinuous constituents in phrase structure representations . projectivity is an interesting constraint on dependency structures both from a theoretical and a practical perspective . dependency grammars that only allow projective structures are closely related to context-free grammars ( gaifman , 1965 ; obrębski and graliński , 2004 ) ; among other things , they have the same ( weak ) expressivity . 
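definition 3 translates directly into a check on yields ; a minimal sketch , assuming the forest is given as a mapping from each node to its head ( None for roots ) :

```python
def yield_of(heads, i):
    """The set of nodes dominated by i, including i itself (a possibly
    empty path from i), in a forest given as node -> head."""
    nodes, frontier = set(), [i]
    while frontier:
        k = frontier.pop()
        if k not in nodes:
            nodes.add(k)
            frontier.extend(j for j in heads if heads[j] == k)
    return nodes

def is_projective(heads):
    """Definition 3: projective iff every yield is an interval."""
    return all(
        len(y) == max(y) - min(y) + 1
        for y in (yield_of(heads, i) for i in heads)
    )
```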
the projectivity constraint also leads to favourable parsing complexities : chart-based parsing of projective dependency grammars can be done in cubic time ( eisner , 1996 ) ; hard-wiring projectivity into a deterministic dependency parser leads to linear-time parsing in the worst case ( nivre , 2003 ) . while the restriction to projective analyses has a number of advantages , there is clear evidence that it can not be maintained for real-world data ( zeman , 2004 ; nivre , 2006 ) . for example , the graph in figure 1 is non-projective : the yield of the node 1 ( marked by the dashed rectangles ) does not form an interval — the node 2 is ' missing ' . in this section , we present several proposals for structural constraints that relax projectivity , and relate them to each other . the notion of planarity appears in work on link grammar ( sleator and temperley , 1993 ) , where it is traced back to mel'čuk ( 1988 ) . informally , a dependency graph is planar , if its edges can be drawn above the sentence without crossing . we emphasize the word above , because planarity as it is understood here does not coincide with the standard graph-theoretic concept of the same name , where one would be allowed to also use the area below the sentence to disentangle the edges . figure 2a shows a dependency graph that is planar but not projective : while there are no crossing edges , the yield of the node 1 ( the set { 1 , 3 } ) does not form an interval . using the notation linked ( i , j ) as an abbreviation for the statement ' there is an edge from i to j , or vice versa ' , we formalize planarity as follows : definition 4 a dependency graph is planar , if it does not contain nodes a , b , c , d such that linked ( a , c ) ∧ linked ( b , d ) ∧ a < b < c < d . yli-jyrä ( 2003 ) proposes multiplanarity as a generalization of planarity suitable for modelling dependency analyses , and evaluates it experimentally using data from ddt . definition 5 a dependency graph g = ( v , e ) is m-planar , if it can be split into m planar graphs gi = ( v , ei ) such that e = e1 ∪ ... ∪ em . the planar graphs gi are called planes . as an example of a dependency forest that is 2-planar but not planar , consider the graph depicted in figure 2b . in this graph , the edges ( 1 , 4 ) and ( 3 , 5 ) are crossing . moving either edge to a separate graph partitions the original graph into two planes . bodirsky et al . ( 2005 ) present two structural constraints on dependency graphs that characterize analyses corresponding to derivations in tree adjoining grammar : the gap degree restriction and the well-nestedness constraint . a gap is a discontinuity in the projection of a node in a dependency graph ( plátek et al. , 2001 ) . more precisely , let πi be the projection of the node i . then a gap is a pair ( jk , jk+1 ) of nodes adjacent in πi such that jk+1 − jk > 1 . definition 6 the gap degree of a node i in a dependency graph , gd ( i ) , is the number of gaps in πi . as an example , consider the node labelled i in the dependency graphs in figure 3 . in graph 3a , the projection of i is an interval ( ⟨ 2 , 3 , 4 ⟩ ) , so i has gap degree 0 . in graph 3b , πi = ⟨ 2 , 3 , 6 ⟩ contains a single gap ( ( 3 , 6 ) ) , so the gap degree of i is 1 . in the rightmost graph , the gap degree of i is 2 , since πi = ⟨ 2 , 4 , 6 ⟩ contains two gaps ( ( 2 , 4 ) and ( 4 , 6 ) ) . definition 7 the gap degree of a dependency graph g , gd ( g ) , is the maximum among the gap degrees of its nodes . 
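definition 4 above also admits a direct check ; a minimal sketch , with edges given as ( head , dependent ) pairs over integer positions and a quadratic pairwise scan , which is enough for illustration :

```python
def is_planar(edges):
    """Definition 4: no nodes a < b < c < d with linked(a, c) and
    linked(b, d), i.e. no two edges cross when drawn above the sentence."""
    spans = [(min(i, j), max(i, j)) for i, j in edges]
    return not any(
        a < b < c < d
        for a, c in spans
        for b, d in spans
    )
```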
thus , the gap degree of the graphs in figure 3 is 0 , 1 and 2 , respectively , since the node i has the maximum gap degree in all three cases . the well-nestedness constraint restricts the positioning of disjoint subtrees in a dependency forest . two subtrees are called disjoint , if neither of their roots dominates the other . definition 8 two subtrees t1 , t2 interleave , if there are nodes l1 , r1 ∈ t1 and l2 , r2 ∈ t2 such that l1 < l2 < r1 < r2 . a dependency graph is well-nested , if no two of its disjoint subtrees interleave . both graph 3a and graph 3b are well-nested . graph 3c is not well-nested . to see this , let t1 be the subtree rooted at the node labelled i , and let t2 be the subtree rooted at j . these subtrees interleave , as t1 contains the nodes 2 and 4 , and t2 contains the nodes 3 and 5 . the notion of edge degree was introduced by nivre ( 2006 ) in order to allow mildly non-projective structures while maintaining good parsing efficiency in data-driven dependency parsing ( we use the term edge degree instead of the original simple term degree from nivre ( 2006 ) to mark the distinction from the notion of gap degree ) . define the span of an edge ( i , j ) as the interval s ( ( i , j ) ) := [ min ( i , j ) , max ( i , j ) ] . definition 9 let g = ( v , e ) be a dependency forest , let e = ( i , j ) be an edge in e , and let ge be the subgraph of g that is induced by the nodes contained in the span of e . • the degree of an edge e ∈ e , ed ( e ) , is the number of connected components c in ge such that the root of c is not dominated by the head of e . • the edge degree of g , ed ( g ) , is the maximum among the degrees of the edges in g . to illustrate the notion of edge degree , we return to figure 3 . graph 3a has edge degree 0 : the only edge that spans more nodes than its head and its dependent is ( 1 , 5 ) , but the root of the connected component { 2 , 3 , 4 } is dominated by 1 . both graph 3b and 3c have edge degree 1 : the edge ( 3 , 6 ) in graph 3b and the edges ( 2 , 4 ) , ( 3 , 5 ) and ( 4 , 6 ) in graph 3c each span a single connected component that is not dominated by the respective head . apart from proposals for structural constraints relaxing projectivity , there are dependency frameworks that in principle allow unrestricted graphs , but provide mechanisms to control the actually permitted forms of non-projectivity in the grammar . the non-projective dependency grammar of kahane et al . ( 1998 ) is based on an operation on dependency trees called lifting : a ' lift ' of a tree t is the new tree that is obtained when one replaces one or more edges ( i , k ) in t by edges ( j , k ) , where j →* i . the exact conditions under which a certain lifting may take place are specified in the rules of the grammar . a dependency tree is acceptable , if it can be lifted to form a projective graph . a similar design is pursued in topological dependency grammar ( duchier and debusmann , 2001 ) , where a dependency analysis consists of two , mutually constraining graphs : the id graph represents information about immediate dominance , the lp graph models the topological structure of a sentence . as a principle of the grammar , the lp graph is required to be a lift of the id graph ; this lifting can be constrained in the lexicon . 
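gap degree and well-nestedness can likewise be checked directly from definitions 6 to 8 ; a brute-force sketch , repeating the yield_of helper from the projectivity sketch above so the block stands alone :

```python
def yield_of(heads, i):
    # transitive dependents of i, including i itself
    nodes, frontier = set(), [i]
    while frontier:
        k = frontier.pop()
        if k not in nodes:
            nodes.add(k)
            frontier.extend(j for j in heads if heads[j] == k)
    return nodes

def gap_degree(heads):
    """Definitions 6 and 7: count the discontinuities in each node's
    projection and take the maximum over all nodes."""
    def gaps(i):
        proj = sorted(yield_of(heads, i))  # projection in ascending order
        return sum(1 for a, b in zip(proj, proj[1:]) if b - a > 1)
    return max(gaps(i) for i in heads)

def is_well_nested(heads):
    """Definition 8: no two disjoint subtrees interleave, i.e. there are
    no l1 < l2 < r1 < r2 with l1, r1 in one subtree and l2, r2 in the
    other. Brute force over subtree pairs; fine for a sketch."""
    for i in heads:
        for j in heads:
            yi, yj = yield_of(heads, i), yield_of(heads, j)
            if i in yj or j in yi:  # one root dominates the other: not disjoint
                continue
            if any(a < b < c < d
                   for a in yi for b in yj for c in yi for d in yj):
                return False
    return True
```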
the structural conditions we have presented here naturally fall into two groups : multiplanarity , gap degree and edge degree are parametric constraints with an infinite scale of possible values ; planarity and well-nestedness come as binary constraints . we discuss these two groups in turn . parametric constraints : with respect to the graded constraints , we find that multiplanarity is different from both gap degree and edge degree in that it involves a notion of optimization : since every dependency graph is m-planar for some sufficiently large m ( put each edge onto a separate plane ) , the interesting question in the context of multiplanarity is about the minimal values for m that occur in real-world data . but then , one not only needs to show that a dependency graph can be decomposed into m planar graphs , but also that this decomposition is the one with the smallest number of planes among all possible decompositions . up to now , no tractable algorithm to find the minimal decomposition has been given , so it is not clear how to evaluate the significance of the concept as such . the evaluation presented by yli-jyrä ( 2003 ) makes use of additional constraints that are sufficient to make the decomposition unique . the fundamental difference between gap degree and edge degree is that the gap degree measures the number of discontinuities within a subtree , while the edge degree measures the number of intervening constituents spanned by a single edge . this difference is illustrated by the graphs displayed in figure 4 . graph 4a has gap degree 2 but edge degree 1 : the subtree rooted at node 2 ( marked by the solid edges ) has two gaps , but each of its edges only spans one connected component not dominated by 2 ( marked by the squares ) . in contrast , graph 4b has gap degree 1 but edge degree 2 : the subtree rooted at node 2 has one gap , but this gap contains two components not dominated by 2 . nivre ( 2006 ) shows experimentally that limiting the permissible edge degree to 1 or 2 can reduce the average parsing time for a deterministic algorithm from quadratic to linear , while omitting less than 1 % of the structures found in ddt and pdt . it can be expected that constraints on the gap degree would have very similar effects . binary constraints : for the two binary constraints , we find that well-nestedness subsumes planarity : a graph that contains interleaving subtrees can not be drawn without crossing edges , so every planar graph must also be well-nested . to see that the converse does not hold , consider graph 3b , which is well-nested , but not planar . since both planarity and well-nestedness are proper extensions of projectivity , we get the following hierarchy for sets of dependency graphs : projective ⊂ planar ⊂ well-nested ⊂ unrestricted . the planarity constraint appears to be a very natural one at first sight , as it expresses the intuition that ' crossing edges are bad ' , but still allows a limited form of non-projectivity . however , many authors use planarity in conjunction with a special representation of the root node : either as an artificial node at the sentence boundary , as we mentioned in section 2 , or as the target of an infinitely long perpendicular edge coming ' from the outside ' , as in earlier versions of word grammar ( hudson , 2003 ) . in these situations , planarity reduces to projectivity , so nothing is gained . even in cases where planarity is used without a special representation of the root node , it remains a peculiar concept . 
when we compare it with the notion of gaps , for example , we find that , in a planar dependency tree , every gap ( i , j ) must contain the root node r , in the sense that i < r < j : if the gap contained only non-root nodes k , then the two paths from r to k and from i to j would cross . this particular property does not seem to be mirrored in any linguistic prediction . in contrast to planarity , well-nestedness is independent of both gap degree and edge degree in the sense that for every d > 0 , there are both well-nested and non-well-nested dependency graphs with gap degree or edge degree d. all projective dependency graphs ( d = 0 ) are trivially well-nested . well-nestedness also brings computational benefits . in particular , chart-based parsers for grammar formalisms in which derivations obey the well-nestedness constraint ( such as tree adjoining grammar ) are not hampered by the ' crossing configurations ' to which satta ( 1992 ) attributes the fact that the universal recognition problem of linear context-free rewriting systems is np-complete . in this section , we present an experimental evaluation of planarity , well-nestedness , gap degree , and edge degree , by examining how large a proportion of the structures found in two dependency treebanks are allowed under different constraints . assuming that the treebank structures are sampled from naturally occurring structures in natural language , this provides an indirect evaluation of the linguistic adequacy of the different proposals . the experiments are based on data from the prague dependency treebank ( pdt ) ( hajič et al. , 2001 ) and the danish dependency treebank ( ddt ) ( kromann , 2003 ) . pdt contains 1.5m words of newspaper text , annotated in three layers according to the theoretical framework of functional generative description ( böhmová et al. , 2003 ) . our experiments concern only the analytical layer , and are based on the dedicated training section of the treebank . ddt comprises 100k words of text selected from the danish parole corpus , with annotation of primary and secondary dependencies based on discontinuous grammar ( kromann , 2003 ) . only primary dependencies are considered in the experiments , which are based on the entire treebank ( a total number of 17 analyses in ddt were excluded because they either had more than one root node , or violated the indegree constraint ; both cases are annotation errors ) . the results of our experiments are given in table 1 . [ table 1 : number and percentage of structures per property : all structures ; gap degree 0 - 4 ; edge degree 0 - 6 ; projective ; planar ; well-nested . ] for the binary constraints ( planarity , well-nestedness ) , we simply report the number and percentage of structures in each data set that satisfy the constraint . for the parametric constraints ( gap degree , edge degree ) , we report the number and percentage of structures having degree d ( d > 0 ) , where degree 0 is equivalent ( for both gap degree and edge degree ) to projectivity . for ddt , we see that about 15 % of all analyses are non-projective . the minimal degree of non-projectivity required to cover all of the data is 2 in the case of gap degree and 4 in the case of edge degree . for both measures , the number of structures drops quickly as the degree increases . ( as an example , only 7 or 0.17 % of the analyses in ddt have gap degree 2 . )
regarding the binary constraints , we find that planarity accounts for slightly more than the projective structures ( 86.41 % of the data is planar ) , while almost all structures in ddt ( 99.89 % ) meet the well-nestedness constraint . the difference between the two constraints becomes clearer when we base the figures on the set of non-projective structures only : out of these , less than 10 % are planar , while more than 99 % are well-nested . for pdt , both the number of non-projective structures ( around 23 % ) and the minimal degrees of non-projectivity required to cover the full data ( gap degree 4 and edge degree 6 ) are higher than in ddt . the proportion of planar analyses is smaller than in ddt if we base it on the set of all structures ( 82.16 % ) , but significantly larger when based on the set of non-projective structures only ( 22.93 % ) . however , this is still very far from the well-nestedness constraint , which has almost perfect coverage on both data sets . as a general result , our experiments confirm previous studies on non-projective dependency parsing ( nivre and nilsson , 2005 ; hall and novák , 2005 ; mcdonald and pereira , 2006 ) : the phenomenon of non-projectivity can not be ignored without also ignoring a significant portion of real-world data ( around 15 % for ddt , and 23 % for pdt ) . at the same time , already a small step beyond projectivity accounts for almost all of the structures occurring in these treebanks . more specifically , we find that already an edge degree restriction of d ≤ 1 covers 98.24 % of ddt and 99.54 % of pdt , while the same restriction on the gap degree scale achieves a coverage of 99.84 % ( ddt ) and 99.57 % ( pdt ) . together with the previous evidence that both measures also have computational advantages , this provides a strong indication for the usefulness of these constraints in the context of non-projective dependency parsing . when we compare the two graded constraints to each other , we find that the gap degree measure partitions the data into fewer and larger clusters than the edge degree , which may be an advantage in the context of using the degree constraints as features in a data-driven approach towards parsing . however , our purely quantitative experiments can not answer the question which of the two measures yields the more informative clusters . the planarity constraint appears to be of little use as a generalization of projectivity : enforcing it excludes more than 75 % of the non-projective data in pdt , and 90 % of the data in ddt . the relatively large difference in coverage between the two treebanks may at least partially be explained by their different annotation schemes for sentence-final punctuation . in ddt , sentence-final punctuation marks are annotated as dependents of the main verb of a dependency nexus . this , as we have discussed above , places severe restrictions on permitted forms of non-projectivity in the remaining sentence , as every discontinuity that includes the main verb must also include the dependent punctuation marks . on the other hand , in pdt , a sentence-final punctuation mark is annotated as a separate root node with no dependents . this scheme does not restrict the remaining discontinuities at all . in contrast to planarity , the well-nestedness constraint appears to constitute a very attractive extension of projectivity . 
for one thing , the almost perfect coverage of well-nestedness on ddt and pdt ( 99.89 % ) could by no means be expected on purely combinatorial grounds — only 7 % of all possible dependency structures for sentences of length 17 ( the average sentence length in pdt ) , and only slightly more than 5 % of all possible dependency structures for sentences of length 18 ( the average sentence length in ddt ) are well-nested . moreover , a cursory inspection of the few problematic cases in ddt indicates that violations of the well-nestedness constraint may , at least in part , be due to properties of the annotation scheme , such as the analysis of punctuation in quotations . however , a more detailed analysis of the data from both treebanks is needed before any stronger conclusions can be drawn concerning well-nestedness . in this paper , we have reviewed a number of proposals for the characterization of mildly non-projective dependency structures , motivated by the need to find a better balance between expressivity and complexity than that offered by either strictly projective or unrestricted non-projective structures . experimental evaluation based on data from two treebanks shows that a combination of the well-nestedness constraint and parametric constraints on discontinuity ( formalized either as gap degree or edge degree ) gives a very good fit with the empirical linguistic data . important goals for future work are to widen the empirical basis by investigating more languages , and to perform a more detailed analysis of linguistic phenomena that violate certain constraints . another important line of research is the integration of these constraints into parsing algorithms for non-projective dependency structures , potentially leading to a better trade-off between accuracy and efficiency than that obtained with existing methods . acknowledgements : we thank three anonymous reviewers of this paper for their comments . the work of marco kuhlmann is funded by the collaborative research centre 378 ' resource-adaptive cognitive processes ' of the deutsche forschungsgemeinschaft . the work of joakim nivre is partially supported by the swedish research council .
|
mildly non-projective dependency structures syntactic parsing requires a fine balance between expressivity and complexity , so that naturally occurring structures can be accurately parsed without compromising efficiency . in dependency-based parsing , several constraints have been proposed that restrict the class of permissible structures , such as projectivity , planarity , multi-planarity , well-nestedness , gap degree , and edge degree . while projectivity is generally taken to be too restrictive for natural language syntax , it is not clear which of the other proposals strikes the best balance between expressivity and complexity . in this paper , we review and compare the different constraints theoretically , and provide an experimental evaluation using data from two treebanks , investigating how large a proportion of the structures found in the treebanks are permitted under different constraints . the results indicate that a combination of the well-nestedness constraint and a parametric constraint on discontinuity gives a very good fit with the linguistic data .
|
using corpus statistics and wordnet relations for sense identification corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck . we show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora . we describe a statistical classifier that combines topical context with local cues to identify a word sense . the classifier is used to disambiguate a noun , a verb , and an adjective . a knowledge base in the form of wordnet 's lexical relations is used to automatically locate training examples in a general text corpus . test results are compared with those from manually tagged training examples . an impressive array of statistical methods has been developed for word sense identification . they range from dictionary-based approaches that rely on definitions ( véronis and ide 1990 ; wilks et al . 1993 ) to corpus-based approaches that use only word co-occurrence frequencies extracted from large textual corpora ( schütze 1995 ; dagan and itai 1994 ) . we have drawn on these two traditions , using corpus-based co-occurrence and the lexical knowledge base that is embodied in the wordnet lexicon . the two traditions complement each other . corpus-based approaches have the advantage of being generally applicable to new texts , domains , and corpora without needing costly and perhaps error-prone parsing or semantic analysis . they require only training corpora in which the sense distinctions have been marked , but therein lies their weakness . obtaining training materials for statistical methods is costly and time-consuming — it is a " knowledge acquisition bottleneck " ( gale , church , and yarowsky 1992a ) . to open this bottleneck , we use wordnet 's lexical relations to locate unsupervised training examples . section 2 describes a statistical classifier , tlc ( topical/local classifier ) , that uses topical context ( the open-class words that co-occur with a particular sense ) , local context ( the open- and closed-class items that occur within a small window around a word ) , or a combination of the two . the results of combining the two types of context to disambiguate a noun ( line ) , a verb ( serve ) , and an adjective ( hard ) are presented . the following questions are discussed : when is topical context superior to local context ( and vice versa ) ? is their combination superior to either type alone ? do the answers to these questions depend on the size of the training ? do they depend on the syntactic category of the target ? manually tagged training materials were used in the development of tlc and the experiments in section 2 . 
the cognitive science laboratory at princeton university , with support from nsf-arpa , is producing textual corpora that can be used in developing and evaluating automatic methods for disambiguation . examples of the different meanings of one thousand common , polysemous , open-class english words are being manually tagged . the results of this effort will be a useful resource for training statistical classifiers , but what about the next thousand polysemous words , and the next ? in order to identify senses of these words , it will be necessary to learn how to harvest training examples automatically . section 3 describes wordnet 's lexical relations and the role that monosemous " relatives " of polysemous words can play in creating unsupervised training materials . tlc is trained with automatically extracted examples , and its performance is compared with that obtained from manually tagged training materials . work on automatic sense identification from the 1950s onward has been well summarized by hirst ( 1987 ) and dagan and itai ( 1994 ) . the discussion below is limited to work that is closely related to our research . hearst ( 1991 ) represents local context with a shallow syntactic parse in which the context is segmented into prepositional phrases , noun phrases , and verb groups . the target noun is coded for the word it modifies , the word that modifies it , and the prepositions that precede and follow it . open-class items within ±3 phrase segments of the target are coded in terms of their relation to the target ( modifier or head ) or their role in a construct that is adjacent to the target . evidence is combined in a manner similar to that used by the local classifier component of tlc . with supervised training of up to 70 sentences per sense , performance on three homographs was quite good ( 88-100 % correct ) ; with fewer training examples and semantically related senses , performance on two additional words was less satisfactory ( 73-77 % correct ) . gale , church , and yarowsky ( 1992a ) developed a topical classifier based on bayesian decision theory . the only information the classifier uses is an unordered list of words that co-occur with the target in training examples . no other cues , such as part-of-speech tags or word order , are used . leacock , towell , and voorhees ( 1993 ) compared this bayesian classifier with a content vector classifier as used in information retrieval and a neural network with backpropagation . the classifiers were compared using different numbers of senses ( two , three , or six manually tagged senses of line ) and different amounts of training material ( 50 , 100 , and 200 examples ) . on the six-sense task , the classifiers averaged 74 % correct answers . leacock , towell , and voorhees ( 1993 ) found that the response patterns of the three classifiers converged , suggesting that each of the classifiers was extracting as much data as is available in purely topical approaches that look only at word counts from training examples . if this is the case , any technique that uses only topical information will not be significantly more accurate than the three classifiers tested . leacock , towell , and voorhees ( 1996 ) showed that performance of the content vector topical classifier could be improved with the addition of local templates ( specific word patterns that were recognized as being indicative of a particular sense ) in an extension of an idea initially suggested by weiss ( 1973 ) . 
although the templates proved to be highly reliable when they occurred , all too often , none were found . yarowsky ( 1993 ) also found that template-like structures are very powerful indicators of sense . he located collocations by looking at adjacent words or at the first word to the left or right in a given part of speech and found that , with binary ambiguity , a word has only one sense in a given collocation with a probability of 90-99 % . however , he had an average of only 29 % recall ( i.e. , the collocations were found in only 29 % of the cases ) . when local information occurred it was highly reliable , but all too often , it did not occur . bruce and wiebe ( 1994a , 1994b ) have developed a classifier that represents local context by morphology ( the inflection on the target word ) , the syntactic category of words within a window of ±2 words from the target , and collocation-specific items found in the sentence . the collocation-specific items are those determined to be the most informative , where an item is considered informative if the model for independence between it and a sense tag provided a poor fit to the training data . the relative probabilities of senses , available from the training corpus , are used in the decision process as prior probabilities . for each test example , the evidence in its local context is combined in a bayesian-type model of the probability of each sense , and the most probable sense is selected . performance ranges from 77-84 % correct on the test words , where a lower bound for performance based on always selecting the most frequent sense for the same words ( i.e. , the sense with the greatest prior probability ) would yield 53-80 % correct . yarowsky ( 1994 ) , building on his earlier work , designed a classifier that looks at words within ±k positions from the target ; lemma forms are obtained through morphological analysis ; and a coarse part-of-speech assignment is performed by dictionary lookup . context is represented by collocations based on words or parts of speech at specific positions within the window or , less specifically , in any position . also coded are some special classes of words , such as weekday , that might serve to distinguish among word senses . for each type of local-context evidence found in the corpus , a log-likelihood ratio is constructed , indicating the strength of the evidence for one form of the homograph versus the other . these ratios are then arranged in a sorted decision list with the largest values ( strongest evidence ) first . a decision is made for a test sentence by scanning down the decision list until a match is found . thus , only the single best piece of evidence is used . the classifier was tested on disambiguating the homographs that result from accent removal in spanish and french ( e.g. , seria , sería ) . in tests with the number of training examples ranging from a few hundred to several thousand , overall accuracy was high , above 90 % . clearly , sense identification is an active area of research , and considerable ingenuity is apparent . but despite the promising results reported in this literature , the reality is that there still are no large-scale , operational systems for tagging the senses of words in text . the statistical classifier , tlc , uses topical context , local context , or a combination of the two , for word sense identification . tlc 's flexibility in using both forms is an important asset for our investigations .
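before moving on , the sorted decision-list scheme reviewed above is compact enough to sketch in code . the following is a minimal illustration only , not yarowsky 's implementation : string-valued cues , the smoothing constant alpha , and the tie-breaking are assumptions of this sketch .

```python
import math
from collections import Counter, defaultdict

def build_decision_list(examples, alpha=0.1):
    """examples: list of (sense, cue_list) pairs for a binary ambiguity.
    Returns rules sorted by the magnitude of the log-likelihood ratio."""
    counts = defaultdict(Counter)              # cue -> Counter over senses
    senses = sorted({sense for sense, _ in examples})
    assert len(senses) == 2, "sketch handles the two-sense case only"
    for sense, cues in examples:
        for cue in cues:
            counts[cue][sense] += 1
    rules = []
    for cue, by_sense in counts.items():
        # alpha smooths cues seen with only one of the two senses
        llr = math.log((by_sense[senses[0]] + alpha) /
                       (by_sense[senses[1]] + alpha))
        rules.append((abs(llr), cue, senses[0] if llr > 0 else senses[1]))
    rules.sort(reverse=True)                   # strongest evidence first
    return rules

def classify(rules, cues):
    """Scan down the list; the single strongest matching cue decides."""
    cue_set = set(cues)
    for _, cue, sense in rules:
        if cue in cue_set:
            return sense
    return None                                # no evidence found
```

scanning for the single best match , rather than combining evidence , is what distinguishes the decision list from the bayesian combination used by tlc below .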
a noun , a verb , and an adjective were tested in this study . table 1 provides a synonym or brief gloss for each of the senses used . training corpora and testing corpora were collected as follows : 1 . examples for line and serve were taken from the wall street journal corpus and from the american printing house for the blind corpus . examples for hard were taken from the ldc san jose mercury news ( sjm ) corpus . each consisted of the sentence containing the target and one sentence preceding it . the resulting strings had an average length of 49 items . 2 . examples where the target was the head of an unambiguous collocation were removed from the files . being unambiguous , they do not need to be disambiguated . these collocations , for example , product line and hard candy , were found using wordnet . in section 3 , we consider how they can be used for unsupervised training . examples where the target was part of a proper noun were also removed ; for example , japan air lines was not taken as an example of line . 3 . the training sets contained the first 25 , 50 , 100 , and 200 examples of the least frequent sense , and examples from the other senses in numbers that reflected their relative frequencies in the corpus . as an illustration , in the smallest training set for hard , there were 25 examples of the least frequent sense , 37 examples of the second most frequent sense , and 256 examples of the most frequent sense . the test sets were of fixed size : each contained 150 of the least frequent sense and examples of the other senses in numbers that reflected their relative frequencies . the operation of tlc consists of preprocessing , training , and testing . during preprocessing , examples are tagged with a part-of-speech tagger ( brill 1994 ) ; special tags are inserted at sentence breaks ; and each open-class word found in wordnet is replaced with its base form . this step normalizes across morphological variants without resorting to the more drastic measure of stemming . morphological information is not lost , since the part-of-speech tag remains unchanged . training consists of counting the frequencies of various contextual cues for each sense . testing consists of taking a new example of the polysemous word and computing the most probable sense , based on the cues present in the context of the new item . a comparison is made to the sense assigned by a human judge , and the classifier 's decision is scored as correct or incorrect . tlc uses a bayesian approach to find the sense s_i that is the most probable given the cues c_j contained in a context window of ±k positions around the polysemous target word . for each s_i , the probability is computed with bayes ' rule :
p ( s_i | c_{-k} , ... , c_k ) = p ( c_{-k} , ... , c_k | s_i ) p ( s_i ) / p ( c_{-k} , ... , c_k )
as golding ( 1995 ) points out , the term p ( c_{-k} , ... , c_k | s_i ) is difficult to estimate because of the sparse data problem , but if we assume , as is often done , that the occurrence of each cue is independent of the others , then this term can be replaced with :
∏_j p ( c_j | s_i )
in tlc , we have made this assumption and have estimated p ( c_j | s_i ) from the training . of course , the sparse data problem affects these probabilities too , and so tlc uses the good-turing formula ( good 1953 ; chiang , lin , and su 1995 ) to smooth the values of p ( c_j | s_i ) , including providing probabilities for cues that did not occur in the training . tlc actually uses the mean of the good-turing value and the training-derived value for p ( c_j | s_i ) . when cues do not appear in training , it uses the mean of the good-turing value and the global probability of the cue p ( c_j ) , obtained from a large text corpus .
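the bayesian decision rule and the blended smoothing just described can be sketched as follows . this is an illustrative reconstruction , not the authors ' code : in particular , a simple absolute-discounting estimate stands in for the good-turing formula , and the 0.5 discount and the probability floor are assumptions of this sketch .

```python
import math
from collections import Counter, defaultdict

class TLCSketch:
    """Minimal naive-Bayes sense classifier in the spirit of TLC."""

    def __init__(self, global_cue_prob):
        # global_cue_prob: p(c) estimated from a large untagged corpus
        self.global_cue_prob = global_cue_prob
        self.sense_counts = Counter()           # n(s)
        self.cue_counts = defaultdict(Counter)  # n(c, s)

    def train(self, examples):
        # examples: iterable of (sense, list_of_cues)
        for sense, cues in examples:
            self.sense_counts[sense] += 1
            for cue in cues:
                self.cue_counts[sense][cue] += 1

    def _cue_logprob(self, cue, sense):
        n_s = self.sense_counts[sense]
        n_cs = self.cue_counts[sense][cue]
        if n_cs > 0:
            mle = n_cs / n_s
            discounted = (n_cs - 0.5) / n_s     # stand-in for Good-Turing
            p = 0.5 * (mle + discounted)        # mean of the two estimates
        else:
            # unseen cue: mean of a small floor and the global p(c)
            floor = 0.5 / n_s
            p = 0.5 * (floor + self.global_cue_prob.get(cue, 1e-7))
        return math.log(p)

    def classify(self, cues):
        total = sum(self.sense_counts.values())
        best, best_score = None, float("-inf")
        for sense, n_s in self.sense_counts.items():
            score = math.log(n_s / total)       # prior p(s)
            score += sum(self._cue_logprob(c, sense) for c in cues)
            if score > best_score:
                best, best_score = sense, score
        return best
```

blending the discounted estimate with either the training estimate or the global cue probability mirrors the averaging scheme described above .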
this approach to smoothing has yielded consistently better performance than relying on the good-turing values alone . four types of cues are used : 1 . topical cues : the open-class words that co-occur with the target anywhere in the training context . 2 . local open-class words : the open-class items that occur within a small window of ±2 positions around the target ; this window does not extend beyond a sentence boundary . 3 . local closed-class items and punctuation . for this cue type , p ( c_j | s_i ) is the probability that item c_j appears precisely at location j for sense s_i . positions j = -2 , -1 , 1 , 2 are used . the global probabilities , for example p ( the_{-1} ) , are based on counts of closed-class items found at these positions relative to the nouns in a large text corpus . the local window width of ±2 was selected after pilot testing on the semantically tagged brown corpus . as in ( 2 ) above , the local window does not extend beyond a sentence boundary . 4 . part-of-speech tags in the positions j = -2 , -1 , 0 , 1 , 2 are also used as cues . the probabilities for these tags are computed for specific positions ( e.g. , p ( dt_{-1} | s_i ) and , globally , p ( dt_{-1} ) ) in a manner similar to that described in ( 3 ) above . when tlc is configured to use only topical information , cue type ( 1 ) is employed . when it is configured for local information , cue types ( 2 ) , ( 3 ) , and ( 4 ) are used . finally , in combined mode , the set of cues contains all four types . 2.3 results figures 1 to 3 show the accuracy of the classifier as a function of the size of the training set when using local context , topical context , and a combination of the two , averaged across three runs for each training set . to the extent that the words used are representative , some clear differences appear as a function of syntactic category . with the verb serve , local context was more reliable than topical context at all levels of training ( 78 % versus 68 % with 200 training examples for the least frequent sense ) . the combination of local and topical context showed improvement ( 83 % ) over either form alone ( see figure 1 ) . with the adjective hard , local context was much more reliable as an indicator of sense than topical context for all training sizes ( 83 % versus 60 % with 200 training examples ) and the combined classifier 's performance ( at 83 % ) was the same as for local ( see figure 2 ) . in the case of the noun line , topical was slightly better than local at all set sizes , but with 200 training examples , their combination yielded 84 % accuracy , greater than either topical ( 78 % ) or local ( 67 % ) alone ( see figure 3 ) . to summarize , local context was more reliable than topical context as an indicator of sense for this verb and this adjective , but slightly less reliable for this noun . the combination of local and topical context showed improved or equal performance for all three words . performance for all of the classifiers improved with increased training size . all classifiers performed best with at least 200 training examples per sense , but the learning curve tended to level off beyond a minimum of 100 training examples . these results are consistent with those of yarowsky ( 1993 ) , based on his experiments with pseudowords , homophones , and homonyms ( discussed below ) . he observed that performance for verbs and adjectives dropped sharply as the window increased , while distant context remained useful for nouns . thus one is tempted to conclude that nouns depend more on topic than do verbs and adjectives . but such a conclusion is probably an overgeneralization , inasmuch as some noun senses are clearly nontopical . thus , leacock , towell , and voorhees ( 1993 ) found that some senses of the noun line are not susceptible to disambiguation with topical context .
for example , the 'textual ' sense of line can appear with any topic , whereas the 'product ' sense of line can not . when it happens that a nontopical sense accounts for a large proportion of occurrences ( in our study , all senses of hard are nontopical ) , then adding topical context to local will have little benefit and may even reduce accuracy . one should not conclude from these results that the topical classifiers and tlc are inferior to the classifiers reviewed in section 2 . in our experiments , monosemous collocations in wordnet that contain the target word were systematically removed from the training and testing materials . this was done on the assumption that these words are not ambiguous . removing them undoubtedly made the task more difficult than it would normally be . how much more difficult ? an estimate is possible .
( figure 1 : classifier performance on four senses of the verb serve . percentage accounted for by most frequent sense = 41 % . )
we searched through 7,000 sentences containing line and found that 1,470 sentences contained line as the head of a monosemous collocation in wordnet , i.e. , line could be correctly disambiguated in some 21 % of those 7,000 sentences simply on the basis of the wordnet entries in which it occurred . in other words , if these sentences had been included in the experiment ( and had been identified by automatic lookup ) , overall accuracy would have increased from 83 % to 87 % . using topical context alone , tlc performs no worse than other topical classifiers . leacock , towell , and voorhees ( 1993 ) report that the three topical classifiers tested averaged 74 % accuracy on six senses of the noun line . with these same training and testing data , tlc performed at 73 % accuracy . similarly , when the content vector and neural network classifiers were run on manually tagged training and testing examples of the verb serve , they averaged 74 % accuracy , as did tlc using only topical context . when local context is combined with topical , tlc is superior to the topical classifiers compared in the leacock , towell , and voorhees ( 1993 ) study . just how useful is a sense classifier whose accuracy is 85 % or less ? probably not very useful if it is part of a fully automated nlp application , but its performance might be adequate in an interactive application ( e.g. , machine-assisted translation , on-line thesaurus functions in word processing , interactive information retrieval ) . in fact , when recall does not have to be 100 % ( as when a human is in the loop ) the precision of the classifier can be improved considerably . the classifier described above always selects the sense that has the highest probability .
( figure 2 : classifier performance on three senses of the adjective hard . percentage accounted for by most frequent sense = 80 % . )
we have observed that when the difference between the probability of this sense and that of the second highest is relatively small , the classifier 's choice is often incorrect . one way to improve the precision of the classifier , though at the price of reduced recall , is to identify these situations and allow it to respond do not know rather than forcing a decision . what is needed is a measure of the difference in the probabilities of the two senses . following the approach of dagan and itai ( 1994 ) , we use the log of the ratio of the probabilities , ln ( p1 / p2 ) , for this purpose . based on this value , a threshold θ can be set to control when the classifier selects the most probable sense .
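the fixed form of this criterion is only a few lines of code . a minimal sketch , assuming normalized posterior probabilities per sense ; the dynamic , confidence-interval version of the threshold is described just below :

```python
import math

def decide_with_threshold(sense_probs, theta):
    """sense_probs: dict mapping sense -> posterior probability.
    Returns the most probable sense, or None ('do not know') when
    ln(p1/p2) falls below the threshold theta."""
    ranked = sorted(sense_probs.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) < 2:
        return ranked[0][0] if ranked else None
    (s1, p1), (_, p2) = ranked[0], ranked[1]
    if p2 > 0 and math.log(p1 / p2) < theta:
        return None        # abstain: the evidence is too close to call
    return s1
```

raising theta trades recall for precision ; a high setting yields the near-100 % precision regime described below .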
for example , if θ = 2 , then ln ( p1 / p2 ) must be 2 or greater for a decision to be made . dagan and itai ( 1994 ) also describe a way to make the threshold dynamic so that it adjusts for the amount of evidence used to estimate p1 and p2 . the basic idea is to create a one-tailed confidence interval so that we can state with probability 1 - α that the true value of the difference measure is greater than θ . when the amount of evidence is small , the value of the measure must be larger in order to ensure that θ is indeed exceeded . table 2 shows precision and recall values for serve , hard , and line at eight different settings of θ using a 60 % confidence interval . tlc was first trained on 100 examples of each sense , and it was then tested on separate 100-example sets . in all cases , precision was positively correlated with the square root of θ ( all r values > .97 ) , and recall was negatively correlated with the square root of θ ( r values < -.96 ) . as cross-validation , the equations of the lines that fit the precision and recall results on the test sample were used to predict the precision and recall at the various values of θ on a second test sample . they provided a good fit to the new data , accounting for an average of 93 % of the variance . the standard errors of estimate for hard , serve , and line were .028 , .030 , and .029 for precision , and .053 , .068 , and .041 for recall . this demonstrates that it is possible to produce accurate predictions of precision and recall as a function of θ for new test sets . when the threshold is set to a large value , precision approaches 100 % . the criterion thus provides a way to locate those cases that can be identified automatically with very high accuracy . when tlc uses a high criterion for assigning senses , it can be used to augment the training examples by automatically collecting new examples from the test corpus . in summary , the results obtained with tlc support the following preliminary conclusions : ( a ) improvement with training levels off after about 100 training examples for the least frequent sense ; ( b ) the high predictive power of local context for the verb and adjective indicates that the local parameters effectively capture syntactically mediated relations , e.g. , the subject and object or complement of verbs , or the noun that an adjective modifies ; ( c ) nouns may be more " topical " than verbs and adjectives , and therefore benefit more from the combination of topical and local context ; ( d ) the precision of tlc can be considerably improved at the price of recall , a trade-off that may be desirable in some interactive nlp applications . a final observation we can make is that when topical and local information is combined , what we have called " nontopical senses " can reduce overall accuracy . for example , the 'textual ' sense of line is relatively topic-independent . the results of the line experiment were not affected too adversely because the nontopical sense of line accounted for only 10 % of the training examples . the effects of nontopical senses will be more serious when most senses are nontopical , as in the case of many adjectives and verbs . the generality of these conclusions must , of course , be tested with additional words , which brings us to the problem of obtaining training and testing corpora .
on one hand , it is surprising that a purely statistical classifier can " learn " how to identify a sense of a polysemous word with as few as 100 example contexts . on the other hand , anyone who has manually built such sets knows that even collecting 100 examples of each sense is a long and tedious process . the next section presents one way in which the lexical knowledge in wordnet can be used to extract training examples automatically . corpus-based word sense identifiers are data hungry : it takes them mere seconds to digest all of the information contained in training materials that take months to prepare manually . so , although statistical classifiers are undeniably effective , they are not feasible until we can obtain reliable unsupervised training data . in the gale , church , and yarowsky ( 1992a ) study , training and testing materials were automatically acquired using an aligned french-english bilingual corpus by searching for english words that have two different french translations . for example , english tokens of sentence were translated as either peine or phrase . they collected contexts of sentence translated as peine to build a corpus for the judicial sense , and collected contexts of sentence translated as phrase to build a corpus for the grammatical sense . one problem with relying on bilingual corpora for data collection is that bilingual corpora are rare , and aligned bilingual corpora are even rarer . another is that since french and english are so closely related , different senses of polysemous english words often translate to the same french word . for example , line is equally polysemous in french and english , and most senses of line translate into french as ligne . several artificial techniques have been used so that classifiers can be developed and tested without having to invest in manually tagging the data : yarowsky ( 1993 ) and schütze ( 1995 ) have acquired training and testing materials by creating pseudowords from existing nonhomographic forms . for example , a pseudoword was created by combining abused/escorted . examples containing the string escorted were collected to train on one sense of the pseudoword and examples containing the string abused were collected to train on the other sense . in addition , yarowsky ( 1993 ) used homophones ( e.g. , cellar/seller ) and yarowsky ( 1994 ) created homographs by stripping accents from french and spanish words . although these latter techniques are useful in their own right ( e.g. , for spoken language systems or corrupted transmissions ) , the resulting materials do not generalize to the acquisition of tagged training for real polysemous or even homographic words . the results of disambiguation strategies reported for pseudowords and the like are consistently above 95 % overall accuracy , far higher than those reported for disambiguating three or more senses of polysemous words ( wilks et al . 1993 ; leacock , towell , and voorhees 1993 ) . yarowsky ( 1992 ) used a thesaurus to collect training materials . he tested the unsupervised training materials on 12 nouns with almost perfect results on homonyms ( 95-99 % ) , 72 % accuracy for four senses of interest , and 77 % on three senses of cone . the training was collected in the following manner : take a roget 's category ( his examples were tool and animal ) and collect sentences from a corpus ( in this case , grolier 's encyclopedia ) using the words in each category .
consider the noun crane , which appears in both the roget 's categories tool and animal . to represent the tool category , yarowsky extracted contexts from grolier 's encyclopedia containing the words adz , shovel , crane , sickle , and so on . similarly , he collected sentences with names of animals from the animal category . in these samples , crane and drill appeared under both categories . yarowsky points out that the resulting noise will be a problem only when one of the spurious senses is salient , dominating the training set , and he uses frequency-based weights to minimize these effects . we propose to minimize spurious training by using monosemous words and collocations , on the assumption that , if a word has only one sense in wordnet , it is monosemous . schütze ( 1995 ) developed a statistical topical approach to word sense identification that provides its own automatically extracted training examples . for each occurrence t of a polysemous word in a corpus , a context vector is constructed by summing all the vectors that represent the co-occurrence patterns of the open-class words in t 's context ( i.e. , topical information is expressed as a kind of second-order co-occurrence ) . these context vectors are clustered , and the centroid of each cluster is used to represent a " sense " . when given a new occurrence of the word , a vector of the words in its context is constructed , and this vector is compared to the sense representations to find the closest match . schütze has used the method to disambiguate pseudowords , homographs , and polysemous words . performance varies depending , in part , on the number of clusters that are created to represent senses , and on the degree to which the distinctions correspond to different topics . this approach performs very well , especially with pseudowords and homographs . however , there is no automatic means to map the sense representations derived from the system onto the more conventional word senses found in dictionaries . consequently , it does not provide disambiguated examples that can be used by other systems . yarowsky ( 1995 ) has proposed automatically augmenting a small set of experimenter-supplied seed collocations ( e.g. , manufacturing plant and plant life for two different senses of the noun plant ) into a much larger set of training materials . he resolved the problem of the sparseness of his collocations by iteratively bootstrapping acquisition of training materials from a few seed collocations for each sense of a homograph . he locates examples containing the seeds in the corpus , analyzes these to find new predictive patterns in these sentences , and retrieves examples containing these patterns . he repeats this step iteratively . results for the 12 pairs of homographs reported are almost perfect . in his paper , yarowsky suggests wordnet as a source for the seed collocations , a suggestion that we pursue in the next section . wordnet is particularly well suited to the task of locating sense-relevant context because each word sense is represented as a node in a rich semantic lexical network with synonymy , hyponymy , and meronymy links to other words , some of them polysemous and others monosemous . these lexical " relatives " provide a key to finding relevant training sentences in a corpus . for example , the noun suit is polysemous , but one sense of it has business suit as a monosemous daughter and another has legal proceeding as a hypernym .
by collecting sentences containing the unambiguous nouns business suit and legal proceeding we can build two corpora of contexts for the respective senses of the polysemous word . all the systems described in section 2.1 could benefit from the additional training materials that monosemous relatives can provide . the wordnet on-line lexical database ( miller 1990 , 1995 ) has been developed at princeton university over the past 10 years . like a standard dictionary , wordnet contains the definitions of words . it differs from a standard dictionary in that , instead of being organized alphabetically , wordnet is organized conceptually . the basic unit in wordnet is a synonym set , or synset , which represents a lexicalized concept . for example , wordnet version 1.5 distinguishes between two senses of the noun shot with the synsets { shot , snapshot } and { shot , injection } . in the context " the photographer took a shot of mary " , the word snapshot can be substituted for one sense of shot . in the context " the nurse gave mary a flu shot " , the word injection can be substituted for another sense of shot . nouns , verbs , adjectives , and adverbs are each organized differently in wordnet . all are organized in synsets , but the semantic relations among the synsets differ depending on the grammatical category , as can be seen in table 3 . nouns are organized in a hierarchical tree structure based on hypernymy/hyponymy . the hyponym of a noun is its subordinate , and the relation between a hyponym and its hypernym is an is-a-kind-of relation . for example , maple is a hyponym of tree , which is to say that a maple is a kind of tree . hypernymy ( supername ) and its inverse , hyponymy ( subname ) , are transitive semantic relations between synsets . meronymy ( part-name ) , and its inverse holonymy ( whole-name ) , are complex semantic relations that distinguish component parts , substantive parts , and member parts . the verbal hierarchy is based on troponymy , the is-a-manner-of relation . for example , stroll is a troponym of walk , which is to say that strolling is a manner of walking . entailment relations between verbs are also coded in wordnet . the organization of attributive adjectives is based on the antonymy relation . where direct antonyms exist , adjective synsets point to antonym synsets . a head adjective is one that has a direct antonym ( e.g. , hot versus cold or long versus short ) . many adjectives , like sultry , have no direct antonyms . when an adjective has no direct antonym , its synset points to a head that is semantically similar to it . thus sultry and torrid are similar in meaning to hot , which has the direct antonym of cold . so , although sultry has no direct antonym , it has cold as its indirect antonym . relational adjectives do not have antonyms ; instead they point to nouns . consider the difference between a nervous disorder and a nervous student . in the former , nervous pertains to a noun , as in nervous system , whereas the latter is defined by its relation to other adjectives : its synonyms ( e.g. , edgy ) and antonyms ( e.g. , relaxed ) . adverbs have synonymy and antonymy relations . when the adverb is morphologically related to an adjective ( when an -ly suffix is added to an adjective ) and semantically related to the adjective as well , the adverb points to the adjective . we have had some success in exploiting wordnet 's semantic relations for word sense identification .
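these relations are easy to inspect with a modern wordnet distribution . the sketch below uses nltk 's wordnet interface ; note that synset inventories have changed since the version 1.5 used in this article , so the exact senses returned may differ .

```python
from nltk.corpus import wordnet as wn   # requires nltk and its wordnet data

# synsets (lexicalized concepts) for the noun "shot"
for synset in wn.synsets("shot", pos=wn.NOUN)[:4]:
    print(synset.name(), "->", [lemma.name() for lemma in synset.lemmas()])

# "a maple is a kind of tree": walk up one hypernym chain
maple = wn.synsets("maple", pos=wn.NOUN)[0]
for path in maple.hypernym_paths()[:1]:
    print(" > ".join(s.name() for s in path))
```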
since the main problem with classifiers that use local context is the sparseness of the training data , leacock and chodorow ( 1998 ) used a proximity measure on the hypernym relation to replace the subject and complement of the verb serve in the testing examples with the subject and complement from training examples that were " closest " to them in the noun hierarchy . for example , one of the test sentences was " sauerbraten is usually served with dumplings " , where neither sauerbraten nor dumpling appeared in any training sentence . the similarity measures on wordnet found that sauerbraten was most similar to dinner in the training , and dumpling to bacon . these nouns were substituted for the novel ones in the test sets . thus the sentence " dinner is usually served with bacon " was substituted for the original sentence . augmentation of the local context classifier with wordnet similarity measures showed a small but consistent improvement in the classifier 's performance . the improvement was greater with the smaller training sets . resnik ( 1992 ) uses an information-based measure , the most informative class , on the wordnet taxonomy . a class consists of the synonyms found at a node and the synonyms at all the nodes that it dominates ( all of its hyponyms ) . based on verb/object pairs collected from a corpus , resnik found , for example , that the objects for the verb open fall into two classes : receptacle and oral communication . conversely , the class of a verb 's object could be used to determine the appropriate sense of that verb . the experiments in the next section depend on a subset of the wordnet lexical relations , those involving monosemous relatives , so we were interested in determining just what proportion of word senses have such relatives . we examined 8,500 polysemous nouns that appeared in a moderate-size , 25-million-word corpus . in all , these 8,500 nouns have more than 24,000 wordnet senses . restricting the relations to synonyms , immediate hyponyms ( i.e. , daughters ) , and immediate hypernyms ( parents ) , we found that about 64 % ( 15,400 ) of these senses have monosemous relatives attested in the corpus . with larger corpora ( e.g. , with text obtained by web crawling ) and more lexical relations ( e.g. , meronymy ) , this percentage can be expected to increase . the approach we have used is related to that of yarowsky ( 1992 ) in that training materials are collected using a knowledge base , but it differs in other respects , notably in the selection of training and testing materials , the choice of a knowledge base , and the use of both topical and local classifiers . yarowsky collects his training and testing materials from a specialized corpus , grolier 's encyclopedia . it remains to be seen whether a statistical classifier trained on a topically organized corpus such as an encyclopedia will perform in the same way when tested on general unrestricted text , such as newspapers , periodicals , and books . one of our goals is to determine whether automatic extraction of training examples is feasible using general corpora . in his experiment , yarowsky uses an updated on-line version of roget 's thesaurus that is not generally available to the research community . the only generally available version of roget 's is the 1912 edition , which contains many lexical gaps . we are using wordnet , which can be obtained via anonymous ftp . yarowsky 's classifier is purely topical , but we also examine local context .
finally , we hope to avoid inclusion of spurious senses by using monosemous relatives . in this experiment we collected monosemous relatives of senses of 14 nouns . training sets are created in the following manner . a program called autotrain retrieves from wordnet all of the monosemous relatives of a polysemous word sense , samples and retrieves example sentences containing these monosemous relatives from a 30-million-word corpus of the san jose mercury news , and formats them for tlc . the sampling process retrieves the " closest " relatives first . for example , suppose that the system is asked to retrieve 100 examples for each sense of the noun court . the system first looks for the strongest or top-level relatives : for monosemous synonyms of the sense ( e.g. , tribunal ) and for daughter collocations that contain the target word as the head ( e.g. , superior court ) , and tallies the number of examples in the corpus for each . if the corpus has 100 or more examples for these top-level relatives , it retrieves a sampling of them and formats them for tlc . if there are not enough top-level examples , the remainder of the target 's monosemous relatives are inspected in the order : all other daughters ; hyponym collocations that contain the target ; all other hyponyms ; hypernyms ; and , finally , sisters . autotrain takes as broad a sampling as possible across the corpus and never takes more than one example from an article . the number of examples for each relative is based on the relative proportion of its occurrences in the corpus . table 4 shows the monosemous relatives that were used to train five senses of the noun line ; the monosemous relatives of the sixth sense in the original study , line as an abstract division , are not attested in the sjm corpus . the purpose of the experiment was to see how well tlc performed using unsupervised training and , when possible , to compare this with its performance when training on the manually tagged materials being produced at princeton 's cognitive science laboratory . when a sufficient number of examples for two or more senses were available , 100 examples of each sense were set aside to use in training . the remainder were used for testing . only the topical and local open-class cues were used , since preliminary tests showed that performance declined when using local closed-class and part-of-speech cues obtained from the monosemous relatives . this is not surprising , as many of the relatives are collocations whose local syntax is quite different from that of the polysemous word in its typical usage . for example , the 'formation ' sense of line is often followed by an of-phrase , as in a line of children , but its relative , picket line , is not . prior probabilities for the senses were taken from the manually tagged materials . table 5 shows the results when tlc was trained on monosemous relatives and on manually tagged training materials . baseline performance is when the classifier always chooses the most frequent sense . eight additional words had a sufficient number of manually tagged examples for testing but not for training tlc . these are shown in table 6 . for four of the examples in table 5 , training with relatives produced results within 1 % or 2 % of manually tagged training . line and work , however , showed a substantial decrease in performance . in the case of line , this might be due to overly specific training contexts .
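a rough sketch of the relative-gathering step described earlier in this section , again using nltk 's wordnet interface . only the top-level tier and a coarse second tier are shown ; autotrain 's full ordering ( other daughters , hyponym collocations , other hyponyms , hypernyms , sisters ) , its corpus sampling , and its per-article limits are omitted , and the helper names are assumptions of this sketch .

```python
from nltk.corpus import wordnet as wn

def is_monosemous(lemma_name, pos):
    """A word is taken to be monosemous if it has exactly one synset."""
    return len(wn.synsets(lemma_name, pos=pos)) == 1

def monosemous_relatives(synset, target):
    """Split monosemous relatives of one sense of `target` into the
    top-level tier (synonyms and daughter collocations headed by the
    target) and a coarse second tier (other daughters and parents)."""
    top, rest = [], []
    for lemma in synset.lemmas():                   # synonyms, e.g. tribunal
        name = lemma.name()
        if name != target and is_monosemous(name, synset.pos()):
            top.append(name)
    for daughter in synset.hyponyms():
        for lemma in daughter.lemmas():
            name = lemma.name()
            if not is_monosemous(name, synset.pos()):
                continue
            if name.endswith("_" + target):         # e.g. superior_court
                top.append(name)
            else:
                rest.append(name)
    for parent in synset.hypernyms():
        rest.extend(l.name() for l in parent.lemmas()
                    if is_monosemous(l.name(), synset.pos()))
    return top, rest

# example (the synset id is illustrative; numbering varies by version):
# top, rest = monosemous_relatives(wn.synset("court.n.02"), "court")
```

one productive relative can dominate the resulting sample , which is exactly the problem taken up next .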
almost half of the training examples for the 'formation ' sense of line come from one relative , picket line . in fact , all of the monosemous relatives , except for rivet line and trap line , are human formations . this may have skewed training so that the classifier performs poorly on other uses of line as formation . in order to compare our results with those reported in yarowsky ( 1992 ) , we trained and tested on the same two senses of the noun duty that yarowsky had tested ( 'obligation ' and 'tax ' ) . he reported that his thesaurus-based approach yielded 96 % precision with 100 % recall . tlc used training examples based on monosemous wordnet relatives and correctly identified the senses with 93.5 % precision at 100 % recall . table 6 shows tlc 's performance on the other eight words after training with monosemous relatives and testing on manually tagged examples . performance is about the same as , or only slightly better than , the highest prior probability . in part , this is due to the rather high probability of the most frequent sense for this set . the values in the table are based on decisions made on all test examples . if a threshold is set for tlc ( see section 2.4 ) , precision of the classifier can be increased substantially , at the expense of recall . table 7 shows recall levels when tlc is trained on monosemous relatives and the value of θ is set for 95 % precision . operating in this mode , the classifier can gather new training materials automatically and with high precision . this is a particularly good way to find clear cases of the most frequent sense . the results also show that not all words are well suited to this kind of operation . little can be gained for a word like work , where the two senses , 'activity ' and 'product ' , are closely related and therefore difficult for the classifier to distinguish , due to a high degree of overlap in the training contexts . problems of this sort can be detected even before testing , by computing correlations between the vectors of open-class words for the different senses . the cosine correlation between the 'activity ' and 'product ' senses of work is r = .49 , indicating a high degree of overlap . the mean correlation between pairs of senses for the other words in table 7 is r = .31 . our evidence indicates that local context is superior to topical context as an indicator of word sense when using a statistical classifier . the benefits of adding topical to local context alone depend on syntactic category as well as on the characteristics of the individual word . the three words studied yielded three different patterns : a substantial benefit for the noun line , slightly less for the verb serve , and none for the adjective hard . some word senses are simply not limited to specific topics , and appear freely in many different domains of discourse . the existence of nontopical senses also limits the applicability of the " one sense per discourse " generalization of gale , church , and yarowsky ( 1992b ) , who observed that , within a document , a repeated word is almost always used in the same sense . future work should be directed toward developing methods for determining when a word has a nontopical sense . one approach to this problem is to look for a word that appears in many more topical domains than its total number of senses .
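the overlap diagnostic just mentioned is a plain cosine between aggregated context vectors . a minimal sketch , assuming each sense 's training contexts are given as lists of open-class base forms :

```python
import math
from collections import Counter

def sense_vector(contexts):
    """Sum open-class word counts over all training contexts of a sense."""
    vec = Counter()
    for words in contexts:
        vec.update(words)
    return vec

def cosine(v1, v2):
    dot = sum(v1[w] * v2[w] for w in v1.keys() & v2.keys())
    norm1 = math.sqrt(sum(c * c for c in v1.values()))
    norm2 = math.sqrt(sum(c * c for c in v2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0
```

a pair of senses whose vectors correlate as highly as the 'activity ' and 'product ' senses of work would be flagged as hard to separate before any testing is done .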
because the supply of manually tagged training data will always be limited , we propose a method to obtain training data automatically using commonly available materials : exploiting wordnet 's lexical relations to harvest training examples from ldc corpora or even the world wide web . we found this method to be effective , although not as effective as using manually tagged training . we have presented the components of a system for acquiring unsupervised training materials that can be used with any statistical classifier . the components can be fit together in the following manner . for a polysemous word , locate the monosemous relatives for each of its senses in wordnet and extract examples containing these relatives from a large corpus . senses whose contexts greatly overlap can be identified with a simple cosine correlation . often , correlations are high between senses of a word that are systematically related , as we saw for the 'activity ' and 'product ' senses of work . in some cases , the contexts for the two closely related senses may be combined . since the frequencies of the monosemous relatives do not correlate with the frequencies of the senses , prior probabilities must be estimated for classifiers that use them . in the experiments of section 3.2 , these were estimated from the testing materials . they can also be estimated from a small manually tagged sample , such as the parts of the brown corpus that have been tagged with senses in wordnet . when the threshold is set to maximize precision , the results are highly reliable and can be used to support an interactive application , such as machine-assisted translation , with the goal of reducing the amount of interaction . although we have looked at only a few examples , it is clear that , given wordnet and a large enough corpus , the methods outlined for training on monosemous relatives can be generalized to build training materials for thousands of polysemous words . we are indebted to the other members of the wordnet group who have provided advice and technical support : christiane fellbaum , shari landes , and randee tengi . we are also grateful to paul bagyenda , ben johnson-laird , and joshua schecter . we thank scott wayland , tim allison , and jill hollifield for tagging the serve and hard corpora . finally , we are grateful to the three anonymous cl reviewers for their comments and advice . this material is based upon work supported in part by the national science foundation under nsf award no . iri-9528983 and by the defense advanced research projects agency , grant no . n00014-91-1634 .
|
using corpus statistics and wordnet relations for sense identification corpus-based approaches to word sense identification have flexibility and generality but suffer from a knowledge acquisition bottleneck . we show how knowledge-based techniques can be used to open the bottleneck by automatically locating training corpora . we describe a statistical classifier that combines topical context with local cues to identify a word sense . the classifier is used to disambiguate a noun , a verb , and an adjective . a knowledge base in the form of wordnet 's lexical relations is used to automatically locate training examples in a general text corpus . test results are compared with those from manually tagged training examples . we present a method to obtain sense-tagged examples using monosemous relatives .
|
"automatic labeling of semantic roles present a system for identifying the semantic relationships , (...TRUNCATED)
| "automatic labeling of semantic roles we present a system for identifying the semantic relationships(...TRUNCATED)
|
"generative models for statistical parsing with combinatory categorial grammar this paper compares a(...TRUNCATED)
| "generative models for statistical parsing with combinatory categorial grammar this paper compares a(...TRUNCATED)
|
"corpus statistics meet the noun compound : some empirical results tagged dependency -.— tagged ad(...TRUNCATED)
| "corpus statistics meet the noun compound : some empirical results a variety of statistical methods (...TRUNCATED)
|
"building a large annotated corpus of english : the penn treebank mitchell p. marcus * university of(...TRUNCATED)
|
building a large annotated corpus of english : the penn treebank
|
"the automated acquisit ion of topic signatures for text summarizat ion chin -yew l in and eduard ho(...TRUNCATED)
| "the automated acquisition of topic signatures for text summarization in order to produce a good sum(...TRUNCATED)
|
"improving machine translation performance by exploiting non-parallel corpora we present a novel met(...TRUNCATED)
| "improving machine translation performance by exploiting non-parallel corpora we present a novel met(...TRUNCATED)
|
"introduction to the special issue on word sense disambiguation : the state of the art the automatic(...TRUNCATED)
| "introduction to the special issue on word sense disambiguation : the state of the art we present a (...TRUNCATED)
|