Columns: Text (string, 45 to 130k characters), Id (string, 8 characters), Summary (string, 55 to 2.67k characters)
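The three columns above pair the body of each ACL Anthology paper (Text) with its eight-character identifier (Id) and a short summary (Summary). A minimal sketch of iterating such records is shown below; the file name `papers.jsonl` is a hypothetical placeholder, not a documented loading recipe for this dump.

```python
import json

# Each record mirrors the columns listed above:
#   Text    - full paper body (45 to ~130k characters)
#   Id      - eight-character ACL Anthology identifier, e.g. "P97-1003"
#   Summary - short summary (55 to ~2.67k characters)
# The file name is an assumption; point it at wherever the dump is stored.
with open("papers.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(f"{record['Id']}: {len(record['Text'])} chars, "
              f"summary starts: {record['Summary'][:60]}...")
```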
three generative lexicalized models for statistical parsing in this paper we first propose a new statistical parsing model which is a generative model of lexicalised contextfree grammar we then extend the model to include a probabilistic treatment of both subcategorisation and whmovement results on wall street journal text show that the parser performs at 881875 constituent precisionrecall an average improvement of 23 over generative models of syntax have been central in linguistics since they were introduced in each sentencetree pair in a language has an associated topdown derivation consisting of a sequence of rule applications of a grammarthese models can be extended to be statistical by defining probability distributions at points of nondeterminism in the derivations thereby assigning a probability p to each pairprobabilistic context free grammar was an early example of a statistical grammara pcfg can be lexicalised by associating a headword with each nonterminal in a parse tree thus far and which both make heavy use of lexical information have reported the best statistical parsing performance on wall street journal textneither of these models is generative instead they both estimate p directlythis paper proposes three new parsing modelsmodel 1 is essentially a generative version of the model described in in model 2 we extend the parser to make the complementadjunct distinction by adding probabilities over subcategorisation frames for headwordsin model 3 we give a probabilistic treatment of whmovement which is derived from the analysis given in generalized phrase structure grammar the work makes two advances over previous models first model 1 performs significantly better than and models 2 and 3 give further improvements our final results are 881875 constituent precisionrecall an average improvement of 23 over second the parsers in and produce trees without information about whmovement or subcategorisationmost nlp applications will need this information to extract predicateargument structure from parse treesin the remainder of this paper we describe the 3 models in section 2 discuss practical issues in section 3 give results in section 4 and give conclusions in section 5in general a statistical parsing model defines the conditional probability p for each candidate parse tree t for a sentence s the parser itself is an algorithm which searches for the tree tbt that maximises pa generative model uses the observation that maximising p is equivalent to maximising p 1 to a topdown derivation of the treein a pcfg for a tree derived by n applications of contextfree rewrite rules lh si rhs 1 i n the rewrite rules are either internal to the tree where lhs is a nonterminal and rhs is a string of one or more nonterminals or lexical where lhs is a part of speech tag and rhs is a worda pcfg can be lexicalised2 by associating a word w and a partofspeech tag t with each nonterminal x in the treethus we write a nonterminal as x where x and x is a constituent labeleach rule now has the form3 h is the headchild of the phrase which inherits the headword h from its parent p l1l7 and are left and right modifiers of h either n or m may be zero and n m 0 for unary rulesfigure 1 shows a tree which will be used as an example throughout this paperthe addition of lexical heads leads to an enormous number of potential rules making direct estimation of p infeasible because of sparse data problemswe decompose the generation of the rhs of a rule such as given the lhs into three steps first generating the head then 
making the independence assumptions that the left and right modifiers are generated by separate 0thorder markov processes 4 for example the probability of the rule s np np vp would be estimated as but in general the probabilities could be conditioned on any of the preceding modifiersin fact if the derivation order is fixed to be depthfirst that is each modifier recursively generates the subtree below it before the next modifier is generated then the model can also condition on any structure below the preceding modifiersfor the moment we exploit this by making the approximations where distancei and distance are functions of the surface string from the head word to the edge of the constituent the distance measure is the same as in a vector with the following 3 elements is the string of zero length does the string contain a verb does the string contain 0 1 2 or 2 commas i p h h distance the distance is a function of the surface string from the word after h to the last word of r2 inclusivein principle the model could condition on any structure dominated by h r1 or r2 distinction and subcategorisation the tree in figure 1 is an example of the importance of the complementadjunct distinctionit would be useful to identify quotmarksquot as a subject and quotlast weekquot as an adjunct but this distinction is not made in the tree as both nps are in the same position from here on we will identify complements by attaching a quotcquot suffix to nonterminals figure 3 gives an example treea postprocessing stage could add this detail to the parser output but we give two reasons for making the distinction while parsing first identifying complements is complex enough to warrant a probabilistic treatmentlexical information is needed for example knowledge that quotweekquot is likely to be a temporal modifierknowledge about subcategorisation preferences for example that a verb takes exactly one subject is also requiredthese problems are not restricted to nps compare quotthe spokeswoman said quot vs quotbonds beat shortterm investments quot where an sbar headed by quotthatquot is a complement but an sbar headed by quotbecausequot is an adjunctthe second reason for making the complementadjunct distinction while parsing is that it may help parsing accuracythe assumption that complements are generated independently of each other often leads to incorrect parses see figure 4 for further explanationadjuncts in the penn treebank we add the quotcquot suffix to all nonterminals in training data which satisfy the following conditions in addition the first child following the head of a prepositional phrase is marked as a complementthe model could be retrained on training data with the enhanced set of nonterminals and it might learn the lexical properties which distinguish complements and adjuncts however it would still suffer from the bad independence assumptions illustrated in figure 4to solve these kinds of problems the generative process is extended to include a probabilistic choice of left and right subcategorisation frames other leads to errorsin the probability of generating both quotdreyfusquot and quotfundquot as subjects p i s vp was p i s vp was is unreasonably high is similar p vpc i vp vb was p i vp vb was p i vp vb was is a bad independence assumptionprceach subcat frame is a multiset6 specifying the complements which the head requires in its left or right modifiers spectivelythus the subcat requirements are added to the conditioning contextas complements are generated they are removed from the appropriate 
subcat multisetmost importantly the probability of generating the stop symbol will be 0 when the subcat frame is nonempty and the probability of generating a complement will be 0 when it is not in the subcat frame thus all and only the required complements will be generatedthe probability of the phrase s np npc vp is now here the head initially decides to take a single npc to its left and no complements to its rightnpc is immediately generated as the required subject and npc is removed from lc leaving it empty when the next modifier np is generatedthe incorrect structures in figure 4 should now have low probability because pic and p are smallanother obstacle to extracting predicateargument structure from parse trees is whmovementthis section describes a probabilistic treatment of extraction from relative clausesnoun phrases are most often extracted from subject position object position or from within pps it might be possible to write rulebased patterns which identify traces in a parse treehowever we argue again that this task is best integrated into the parser the task is complex enough to warrant a probabilistic treatment and integration may help parsing accuracya couple of complexities are that modification by an sbar does not always involve extraction quot and it is not uncommon for extraction to occur through several constituents quotthe second reason for an integrated treatment of traces is to improve the parameterisation of the modelin particular the subcategorisation probabilities are smeared by extractionin examples 1 2 and 3 above bought is a transitive verb but without knowledge of traces example 2 in training data will contribute to the probability of bought being an intransitive verbformalisms similar to gpsg handle np extraction by adding a gap feature to each nonterminal in the tree and propagating gaps through the tree until they are finally discharged as a trace complement in extraction cases the penn treebank annotation coindexes a trace with the whnp head of the sbar so it is straightforward to add this information to trees in training datagiven that the lhs of the rule has a gap there are 3 ways that the gap can be passed down to the rhs head the gap is passed to the head of the phrase as in rule in figure 5left right the gap is passed on recursively to one of the left or right modifiers of the head or is discharged as a trace argument to the leftright of the headin rule it is passed on to a right modifier the s complementin rule a trace is generated to the right of the head vbwe specify a parameter pg where g is either head left or rightthe generative process is extended to choose between these cases after generating the head of the phrasethe rest of the phrase is then generated in different ways depending on how the gap is propagated in the head case the left and right modifiers are generated as normalin the left right cases a gap requirement is added to either the left or right subcat variablethis requirement is fulfilled when a trace or a modifier nonterminal which has the gap feature is generatedfor example rule sbar whnp sc has probability in rule right is chosen so the gap requirement is added to rcgeneration of sc ulfills both the sc and gap requirements in rcin rule right is chosen againnote that generation of trace satisfies both the npc and gap subcat requirementstable 1 shows the various levels of backoff for each type of parameter in the modelnote that we decompose p l i p h w t a lc into the product pli i phwtalc x pl2 and then smooth these two 
probabilities separately in each case the final estimate is where e1 e2 and e3 are maximum likelihood estimates with the context at levels 1 2 and 3 in the table and ai a2 and a3 are smoothing parameters where 0 ai 1all words occurring less than 5 times in training data and words in test data which have never been seen in training are replaced with the quotunknownquot tokenthis allows the model to robustly handle the statistics for rare or new wordspart of speech tags are generated along with the words in this modelwhen parsing the pos tags allowed for each word are limited to those which have been seen in training data for that wordfor unknown words the output from the tagger described in is used as the single possible tag for that worda cky style dynamic programming chart parser is used to find the maximum probability tree for each sentence the parser was trained on sections 02 21 of the wall street journal portion of the penn treebank and tested on section 23 we use the parseval measures to compare performance number of correct constituents in proposed parse number of constituents in proposed parse number of correct constituents in proposed parse number of constituents in treebank parse crossing brackets number of constituents which violate constituent boundaries with a constituent in the treebank parsefor a constituent to be correct it must span the same set of words and have the same label as a constituent in the treebank parsetable 2 shows the results for models 1 2 and 3the precisionrecall of the traces found by model 3 was 933901 where three criteria must be met for a trace to be quotcorrectquot it must be an argument to the correct headword it must be in the correct position in relation to that head word it must be dominated by the correct nonterminal labelfor example in figure 5 the trace is an argument to bought which it follows and it is dominated by a vpof the 436 cases 342 were stringvacuous extraction from subject position recovered with 971982 precisionrecall and 94 were longer distance cases recovered with 76606 precisionrecall 9model 1 is similar in structure to the major differences being that the quotscorequot for each bigram dependency is ps 8 collapses advp and prt to the same label for comparison we also removed this distinction when calculating scores9we exclude infinitival relative clauses from these figures for example quoti called a plumber trace to fix the sinkquot where plumber is coindexed with the trace subject of the infinitivalthe algorithm scored 4118 precisionrecall on the 60 cases in section 23 but infinitival relatives are extremely difficult even for human annotators to distinguish from purpose clauses rather than ps and that there are the additional probabilities of generating the head and the stop symbols for each constituenthowever model 1 has some advantages which may account for the improved performancethe model in is deficient that is for most sentences s et p 1 because probability mass is lost to dependency structures which violate the hard constraint that no links may crossfor reasons we do not have space to describe here model 1 has advantages in its treatment of unary rules and the distance measurethe generative model can condition on any structure that has been previously generated we exploit this in models 2 and 3 whereas is restricted to conditioning on features of the surface string alone also uses a lexicalised generative modelin our notation he decomposes p as p x penn treebank annotation style leads to a very large number of 
contextfree rules so that directly estimating p may lead to sparse data problems or problems with coverage the complementadjunct distinction and traces increase the number of rules compounding this problem proposes 3 dependency models and gives results that show that a generative model similar to model 1 performs best of the threehowever a pure dependency model omits nonterminal information which is importantfor example quothopequot is likely to generate a vp modifier whereas quotrequirequot is likely to generate an s modifier but omitting nonterminals conflates these two cases giving high probability to incorrect structures such as quoti hope jim to sleepquot or quoti require to sleepquot extends a generative dependency model to include an additional state variable which is equivalent to having nonterminals his suggestions may be close to our models 1 and 2 but he does not fully specify the details of his model and does not give results for parsing accuracy describe a model where the rhs of a rule is generated by a markov process although the process is not headcenteredthey increase the set of nonterminals by adding semantic labels rather than by adding lexical headwords describe a historybased approach which uses decision trees to estimate pour models use much less sophisticated ngram estimation methods and might well benefit from methods such as decisiontree estimation which could condition on richer history than just surface distancethere has recently been interest in using dependencybased parsing models in speech recognition for example it is interesting to note that models 1 2 or 3 could be used as language modelsthe probability for any sentence can be estimated as p et p or as p pwe intend to perform experiments to compare the perplexity of the various models and a structurally similar pure pcfg1this paper has proposed a generative lexicalised probabilistic parsing modelwe have shown that linguistically fundamental ideas namely subcategorisation and whmovement can be given a statistical interpretationthis improves parsing performance and more importantly adds useful information to the parser outputi would like to thank mitch marcus jason eisner dan melamed and adwait ratnaparkhi for many useful discussions and comments on earlier versions of this paperthis work has also benefited greatly from suggestions and advice from scott miller
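The extraction above drops the paper's displayed formulas, so the LaTeX below restates the two that the prose describes: the head-centred decomposition of a lexicalised rule into head and modifier probabilities, and the three-level backed-off estimate. The notation (Δ for the distance measure, λ for the smoothing weights, the nested interpolation form) is a reconstruction from the surrounding description, not a verbatim copy of the paper.

```latex
% Lexicalised rule with head child H and modifiers L_i, R_j (STOP-terminated):
%   P(h) -> L_n(l_n) ... L_1(l_1)  H(h)  R_1(r_1) ... R_m(r_m)
P(\mathrm{RHS} \mid \mathrm{LHS}) =
    P_h(H \mid P, h)\;
    \prod_{i=1}^{n+1} P_l\bigl(L_i(l_i) \mid P, H, h, \Delta_l(i-1)\bigr)\;
    \prod_{j=1}^{m+1} P_r\bigl(R_j(r_j) \mid P, H, h, \Delta_r(j-1)\bigr)

% L_{n+1} = R_{m+1} = STOP; \Delta is the surface-string distance measure
% described above (is the string zero-length? does it contain a verb? how many commas?).

% Backed-off estimate for each parameter class, combining maximum-likelihood
% estimates e_1, e_2, e_3 at the three context levels of Table 1:
\hat{e} = \lambda_1 e_1 + (1 - \lambda_1)\bigl(\lambda_2 e_2 + (1 - \lambda_2) e_3\bigr),
\qquad 0 \le \lambda_i \le 1
```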
P97-1003
Three Generative Lexicalized Models for Statistical Parsing. In this paper we first propose a new statistical parsing model, which is a generative model of lexicalised context-free grammar. We then extend the model to include a probabilistic treatment of both subcategorisation and wh-movement. Results on Wall Street Journal text show that the parser performs at 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over the previous best model. We provide a 29-million-word parsed corpus from the Wall Street Journal.
automatic detection of text genre as the text databases available to users become larger and more heterogeneous genre becomes increasingly important for computational linguistics as a complement to topical and structural principles of classification we propose a theory of genres as of correlate with various surface cues and argue that genre detection based on surface cues is as successful as detection based on deeper structural properties computational linguists have been concerned for the most part with two aspects of texts their structure and their contentthat is we consider texts on the one hand as formal objects and on the other as symbols with semantic or referential valuesin this paper we want to consider texts from the point of view of genre that is according to the various functional roles they playgenre is necessarily a heterogeneous classificatory principle which is based among other things on the way a text was created the way it is distributed the register of language it uses and the kind of audience it is addressed tofor all its complexity this attribute can be extremely important for many of the core problems that computational linguists are concerned withparsing accuracy could be increased by taking genre into account similarly for postagging in wordsense disambiguation many senses are largely restricted to texts of a particular style such as colloquial or formal in information retrieval genre classification could enable users to sort search results according to their immediate interestspeople who go into a bookstore or library are not usually looking simply for information about a particular topic but rather have requirements of genre as well they are looking for scholarly articles about hypnotism novels about the french revolution editorials about the supercollider and so forthif genre classification is so useful why has not it figured much in computational linguistics before nowone important reason is that up to now the digitized corpora and collections which are the subject of much cl research have been for the most part generically homogeneous so that the problem of genre identification could be set asideto a large extent the problems of genre classification do not become salient until we are confronted with large and heterogeneous search domains like the worldwide webanother reason for the neglect of genre though is that it can be a difficult notion to get a conceptual handle on particularly in contrast with properties of structure or topicality which for all their complications involve wellexplored territoryin order to do systematic work on automatic genre classification by contrast we require the answers to some basic theoretical and methodological questionsis genre a single property or attribute that can be neatly laid out in some hierarchical structureor are we really talking about a multidimensional space of properties that have little more in common than that they are more or less orthogonal to topicalityand once we have the theoretical prerequisites in place we have to ask whether genre can be reliably identified by means of computationally tractable cuesin a broad sense the word quotgenrequot is merely a literary substitute for quotkind of textquot and discussions of literary classification stretch back to aristotlewe will use the term quotgenre here to refer to any widely recognized class of texts defined by some common communicative purpose or other functional traits provided the function is connected to some formal cues or commonalities and that the class is 
extensiblefor example an editorial is a shortish prose argument expressing an opinion on some matter of immediate public concern typically written in an impersonal and relatively formal style in which the author is denoted by the pronoun webut we would probably not use the term quotgenrequot to describe merely the class of texts that have the objective of persuading someone to do something since that class which would include editorials sermons prayers advertisements and so forth has no distinguishing formal propertiesat the other end of the scale we would probably not use quotgenrequot to describe the class of sermons by john donne since that class while it has distinctive formal characteristics is not extensiblenothing hangs in the balance on this definition but it seems to accord reasonably well with ordinary usagethe traditional literature on genre is rich with classificatory schemes and systems some of which might in retrospect be analyzed as simple attribute systems dubrow fowler frye hernadi hobbes staiger and todorov we will refer here to the attributes used in classifying genres as generic facetsa facet is simply a property which distinguishes a class of texts that answers to certain practical interests and which is moreover associated with a characteristic set of computable structural or linguistic properties whether categorical or statistical which we will describe as quotgeneric cuesquot in principle a given text can be described in terms of an indefinitely large number of facetsfor example a newspaper story about a balkan peace initiative is an example of a broadcast as opposed to directed communication a property that correlates formally with certain uses of the pronoun youit is also an example of a narrative as opposed to a directive suasive or descriptive communication and this facet correlates among other things with a high incidence of preterite verb formsapart from giving us a theoretical framework for understanding genres facets offer two practical advantagesfirst some applications benefit from categorization according to facet not genrefor example in an information retrieval context we will want to consider the opinion feature most highly when we are searching for public reactions to the supercollider where newspaper columns editorials and letters to the editor will be of roughly equal interestfor other purposes we will want to stress narrativity for example in looking for accounts of the storming of the bastille in either novels or historiessecondly we can extend our classification to genres not previously encounteredsuppose that we are presented with the unfamiliar category financial analystsreportby analyzing genres as bundles of facets we can categorize this genre as institutional and as nonsuasive or nonargumentative whereas a system trained on genres as atomic entities would not be able to make sense of an unfamiliar categorythe first linguistic research on genre that uses quantitative methods is that of biber which draws on work on stylistic analysis readability indexing and differences between spoken and written languagebiber ranks genres along several textual quotdimensionsquot which are constructed by applying factor analysis to a set of linguistic syntactic and lexical featuresthose dimensions are then characterized in terms such as quotinformative vs involvedquot or quotnarrative vs nonnarrativequot factors are not used for genre classification rather factors are used to validate hypotheses about the functions of various linguistic featuresan important and 
more relevant set of experiments which deserves careful attention is presented in karlgren and cutting they too begin with a corpus of handclassified texts the brown corpusone difficulty here however is that it is not clear to what extent the brown corpus classification used in this work is relevant for practical or theoretical purposesfor example the category popular lorequot contains an article by the decidedly highbrow harold rosenberg from commentary and articles from model railroader and gourmet surely not a natural class by any reasonable standardin addition many of the text features in karlgren and cutting are structural cues that require taggingwe will replace these cues with two new classes of cues that are easily computable characterlevel cues and deviation cuesthis section discusses generic cues the observableproperties of a text that are associated with facetsexamples of structural cues are passives nominalizations topicalized sentences and counts of the frequency of syntactic categories these cues are not much discussed in the traditional literature on genre but have come to the fore in recent work for purposes of automatic classification they have the limitation that they require tagged or parsed textsmost facets are correlated with lexical cuesexamples of ones that we use are terms of address which predominate in papers like the new york times latinate affixes which signal certain highbrow registers like scientific articles or scholarly works and words used in expressing dates which are common in certain types of narrative such as news storiescharacterlevel cues are mainly punctuation cues and other separators and delimiters used to mark text categories like phrases clauses and sentences such features have not been used in previous work on genre recognition but we believe they have an important role to play being at once significant and very frequentexamples include counts of question marks exclamations marks capitalized and hyphenated words and acronymsderivative cues are ratios and variation measures derived from measures of lexical and characterlevel featuresratios correlate in certain ways with genre and have been widely used in previous workwe represent ratios implicitly as sums of other cues by transforming all counts into natural logarithmsfor example instead of estimating separate weights a 3 and y for the ratios words per sentence characters per word and words per type respectively we express this desired weighting the 55 cues in our experiments can be combined to almost 3000 different ratiosthe log representation ensures that all these ratios are available implicitly while avoiding overfitting and the high computational cost of training on a large set of cuesvariation measures capture the amount of variation of a certain count cue in a text this type of useful metric has not been used in previous work on genrethe experiments in this paper are based on 55 cues from the last three groups lexical characterlevel and derivative cuesthese cues are easily computable in contrast to the structural cues that have figured prominently in previous work on genrethe corpus of texts used for this study was the brown corpusfor the reasons mentioned above we used our own classification system and eliminated texts that did not fall unequivocally into one of our categorieswe ended up using 499 of the 802 texts in the brown corpusfor our experiments we analyzed the texts in terms of three categorical facets brownarrative and genrebrow characterizes a text in terms of the presumptions 
made with respect to the required intellectual background of the target audienceits levels are popular middle uppermiddle and highfor example the mainstream american press is classified as middle and tabloid newspapers as popularthe narrative facet is binary telling whether a text is written in a narrative mode primarily relating a sequence of eventsthe genre facet has the values reportage editorial scitech legalnonfiction fictionthe first two characterize two types of articles from the daily or weekly press reportage and editorialsthe level scitech denominates scientific or technical writings and legal characterizes various types of writings about law and government administrationfinally nonfiction is a fairly diverse category encompassing most other types of expository writing and fiction is used for works of fictionour corpus of 499 texts was divided into a training subcorpus and an evaluation subcorpus the evaluation subcorpus was designed to have approximately equal numbers of all represented combinations of facet levelsmost such combinations have six texts in the evaluation corpus but due to small numbers of some types of texts some extant combinations are underrepresentedwithin this stratified framework texts were chosen by a pseudo randomnumber generatorthis setup results in different quantitative compositions of training and evaluation setfor example the most frequent genre level in the training subcorpus is reportage but in the evaluation subcorpus nonfiction predominateswe chose logistic regression as our basic numerical methodtwo informal pilot studies indicated that it gave better results than linear discrimination and linear regressionlr is a statistical technique for modeling a binary response variable by a linear combination of one or more predictor variables using a logit link function and modeling variance with a binomial random variable ie the dependent variable log is modeled as a linear combination of the independent variablesthe model has the form g xi3 where r is the estimated response probability xi is the feature vector for text i and 3 is the weight vector which is estimated from the matrix of feature vectorsthe optimal value of i3 is derived via maximum likelihood estimation using splus for binary decisions the application of lr was straightforwardfor the polytomous facets genre and brow we computed a predictor function independently for each level of each facet and chose the category with the highest predictionthe most discriminating of the 55 variables were selected using stepwise backward selection based on the aic criterion a separate set of variables was selected for each binary discrimination taskin order to see whether our easilycomputable surface cues are comparable in power to the structural cues used in karlgren and cutting we also ran lr with the cues used in their experimentbecause we use individual texts in our experiments instead of the fixedlength conglomerate samples of karlgren and cutting we averaged all count features over text lengthbecause of the high number of variables in our experiments there is a danger that overfitting occurslr also forces us to simulate polytomous decisions by a series of binary decisions instead of directly modeling a multinomial responsefinally classical lr does not model variable interactionsfor these reasons we ran a second set of experiments with neural networks which generally do well with a high number of variables because they protect against overfittingneural nets also naturally model variable interactionswe 
used two architectures a simple perceptron and a multilayer perceptron with all input units connected to all units of the hidden layer and all units of the hidden layer connected to all output unitsfor binary decisions such as determining whether or not a text is narrative the output layer consists of one sigmoidal output unit for polytomous decisions it consists of four or six softmax units the size of the hidden layer was chosen to be three times as large as the size of the output layer for binary decisions the simple perceptron fits a logistic model just as lr doeshowever it is less prone to overfitting because we train it using threefold crossvalidationvariables are selected by summing the crossentropy error over the three validation sets and eliminating the variable that if eliminated results in the lowest crossentropy errorthe elimination cycle is repeated until this summed crossentropy error starts increasingbecause this selection technique is timeconsuming we only apply it to a subset of the discriminationstable 1 gives the results of the experimentsfor each genre facet it compares our results using surface cues against results using karlgren and cutting structural cues on the one hand and against a baseline on the other each text in the evaluation suite was tested for each facetthus the number 78 for narrative under method quotlr allquot means that when all texts were subjected to the narrative test 78 of them were classified correctlythere are at least two major ways of conceiving what the baseline should be in this experimentif the machine were to guess randomly among k categories the probability of a correct guess would be 1k ie 12 for narrative 16 for genre and 14 for browbut one could get dramatic improvement just by building a machine that always guesses the most populated category nonfict for genremiddle for brow and no for narrativethe first approach would be fair because our machines in fact have no prior knowledge of the distribution of genre facets in the evaluation suite but we decided to be conservative and evaluate our methods against the latter baselineno matter which approach one takes however each of the numbers in the table is significant at p 05 by a binomial distributionthat is there is less than a 5 chance that a machine guessing randomly could have come up with results so much better than the baselineit will be recalled that in the lr models the facets with more than two levels were computed by means of binary decision machines for each level then choosing the level with the most positive scoretherefore some feeling for the internal functioning of our algorithms can be obtained by seeing what the performance is for each of these binary machines and for the sake of comparison this information is also given for some of the neural net modelstable 2 shows how often each of the binary machines correctly determined whether a text did or did not fall in a particular facet levelhere again the appropriate baseline could be determined two waysin a machine that chooses randomly performance would be 50 and all of the numbers in the table would be significantly better than chance but a simple machine that always guesses no would perform much better and it is against this stricter standard that we computed the baseline in table 2here the binomial distribution shows that some numbers are not significantly better than the baselinethe numbers that are significantly better than chance at p 05 by the binomial distribution are starredtables 1 and 2 present aggregate results when 
all texts are classified for each facet or leveltable 3 by contrast shows which classifications are assigned for texts that actually belong to a specific known levelfor example the first row shows that of the 18 texts that really are of the reportage genre level 83 were correctly classified as reportage 6 were misclassified as editorial and 11 as nonfictionbecause of space constraints we present this amount of detail only for the six genre levels with logistic regression on selected surface variablesthe experiments indicate that categorization decisions can be made with reasonable accuracy on the basis of surface cuesall of the facet level assignments are significantly better than a baseline of always choosing the most frequent level and the performance appears even better when one considers that the machines do not actually know what the most frequent level iswhen one takes a closer look at the performance of the component machines it is clear that some facet levels are detected better than otherstable 2 shows that within the facet genre our systems do a particularly good job on reportage and fiction trend correctly but not necessarily significantly for scitech and nonfiction but perform less well for editorial and legal textswe suspect that the indifferent performance in scitech and legal texts may simply reflect the fact that these genre levels are fairly infrequent in the brown corpus and hence in our training settable 3 sheds some light on the other casesthe lower performance on the editorial and nonfiction tests stems mostly from misclassifying many nonfiction texts as editorialsuch confusion suggests that these genre types are closely related to each other as in fact they areeditorials might best be treated in future experiments as a subtype of nonfiction perhaps distinguished by separate facets such as opinion and institutional authorshipalthough table 1 shows that our methods predict brow at abovebaseline levels further analysis indicates that most of this performance comes from accuracy in deciding whether or not a text is high browthe other levels are identified at near baseline performancethis suggests problems with the labeling of the brow feature in the training datain particular we had labeled journalistic texts on the basis of the overall brow of the host publication a simplification that ignores variation among authors and the practice of printing features from other publicationswe plan to improve those labelings in future experiments by classifying brow on an articlebyarticle basisthe experiments suggest that there is only a small difference between surface and structural cuescomparing lr with surface cues and lr with structural cues as input we find that they yield about the same performance averages of 770 vs 775 for all variables and 784 vs 789 for selected variableslooking at the independent binary decisions on a taskbytask basis surface cues are worse in 10 cases notenumbers are the percentage of the evaluation subcorpus which were correctly assigned to the appropriate facet level the baseline column tells what percentage would be correct if the machine always guessed the most frequent levellr is logistic regression over our surface cues or karlgren and cutting structural cues 2lp and 3lp are 2 or 3layer perceptrons using our surface cuesunder each experimentall tells the results when all cues are used and sel tells the results when for each level one selects the most discriminating cuesa dash indicates that an experiment was not runnotenumbers are the percentage of 
the evaluation subcorpus which was correctly classified on a binary discrimination taskthe baseline column tells what percentage would be got correct by guessing no for each levelheaders have the same meaning as in table 1 means significantly better than baseline at p 05 using a binomial distribution notenumbers are the percentage of the texts actually belonging to the genre level indicated in the first column that were classified as belonging to each of the genre levels indicated in the column headersthus the diagonals are correct guesses and each row would sum to 100 but for rounding error and better in 8 casessuch a result is expected if we assume that either cue representation is equally likely to do better than the other 041we conclude that there is at best a marginal advantage to using structural cues an advantage that will not justify the additional computational cost in most casesour goal in this paper has been to prepare the ground for using genre in a wide variety of areas in natural language processingthe main remaining technical challenge is to find an effective strategy for variable selection in order to avoid overfitting during trainingthe fact that the neural networks have a higher performance on average and a much higher performance for some discriminations indicates that overfitting and variable interactions are important problems to tackleon the theoretical side we have developed a taxonomy of genres and facetsgenres are considered to be generally reducible to bundles of facets though sometimes with some irreducible atomic residuethis way of looking at the problem allows us to define the relationships between different genres instead of regarding them as atomic entitieswe also have a framework for accommodating new genres as yet unseen bundles of facetsfinally by decomposing genres into facets we can concentrate on whatever generic aspect is important in a particular application further practical tests of our theory will come in applications of genre classification to tagging summarization and other tasks in computational linguisticswe are particularly interested in applications to information retrieval where users are often looking for texts with particular quite narrow generic properties authoritatively written documents opinion pieces scientific articles and so onsorting search results according to genre will gain importance as the typical data base becomes increasingly heterogeneouswe hope to show that the usefulness of retrieval tools can be dramatically improved if genre is one of the selection criteria that users can exploit
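The modelling recipe above is easier to see in code than in the flattened prose: take count-valued surface cues, log-transform them so that any ratio of two cues is implicitly available as a weighted sum, and fit a logistic regression, handling polytomous facets as one-vs-rest binary decisions. The sketch below uses scikit-learn as a stand-in for the authors' S-PLUS maximum-likelihood fit, and the cue values are made-up placeholders for their 55 cues.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical count cues per text (placeholders for the paper's 55 surface cues):
# [words, sentences, characters, question marks]
raw_counts = np.array([
    [812, 35, 4321, 2],
    [1550, 48, 9002, 11],
    [430, 22, 2210, 0],
], dtype=float)

# Log-transforming counts makes any ratio of two cues (e.g. words per sentence)
# available implicitly as a weighted sum of features, as argued above.
X = np.log(raw_counts + 1.0)

# A binary facet, e.g. NARRATIVE vs. non-NARRATIVE.
y = np.array([1, 0, 1])

clf = LogisticRegression().fit(X, y)

# For a polytomous facet (e.g. the six GENRE levels) the paper fits one binary
# predictor per level and picks the level with the highest score; a one-vs-rest
# multiclass logistic regression mimics that scheme.
print(clf.predict_proba(X))
```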
P97-1005
Automatic Detection of Text Genre. As the text databases available to users become larger and more heterogeneous, genre becomes increasingly important for computational linguistics as a complement to topical and structural principles of classification. We propose a theory of genres as bundles of facets, which correlate with various surface cues, and argue that genre detection based on surface cues is as successful as detection based on deeper structural properties. We believe that parsing and word-sense disambiguation can also benefit from genre classification. We avoid structural markers, since they require tagged or parsed text, and replace them with character-level markers and derivative markers, i.e. ratios and variation measures derived from measures of lexical and character-level markers.
using syntactic dependency as local context to resolve word sense ambiguity most previous corpusbased algorithms disambiguate a word with a classifier trained from previous usages of the same word separate classifiers have to be trained for different words we present an algorithm that uses the same knowledge sources to disambiguate different words the algorithm does not require a sensetagged corpus and exploits the fact that two different words are likely to have similar meanings if they occur in identical local contexts given a word its context and its possible meanings the problem of word sense disambiguation is to determine the meaning of the word in that contextwsd is useful in many natural language tasks such as choosing the correct word in machine translation and coreference resolutionin several recent proposals statistical and machine learning techniques were used to extract classifiers from handtagged corpusyarowsky proposed an unsupervised method that used heuristics to obtain seed classifications and expanded the results to the other parts of the corpus thus avoided the need to handannotate any examplesmost previous corpusbased wsd algorithms determine the meanings of polysemous words by exploiting their local contextsa basic intuition that underlies those algorithms is the following two occurrences of the same word have identical meanings if they have similar local contextsin other words most previous corpusbased wsd algorithms learn to disambiguate a polysemous word from previous usages of the same wordthis has several undesirable consequencesfirstly a word must occur thousands of times before a good classifier can be learnedin yarowsky experiment an average of 3936 examples were used to disambiguate between two sensesin ng and lee experiment 192800 occurrences of 191 words were used as training examplesthere are thousands of polysemous words eg there are 11562 polysemous nouns in wordnetfor every polysemous word to occur thousands of times each the corpus must contain billions of wordssecondly learning to disambiguate a word from the previous usages of the same word means that whatever was learned for one word is not used on other words which obviously missed generality in natural languagesthirdly these algorithms cannot deal with words for which classifiers have not been learnedin this paper we present a wsd algorithm that relies on a different intuition two different words are likely to have similar meanings if they occur in identical local contextsconsider the sentence the new facility will employ 500 of the existing 600 employees the word quotfacilityquot has 5 possible meanings in wordnet 15 installation proficiencytechnique adeptness readiness toiletbathroomto disambiguate the word we consider other words that appeared in an identical local context as quotfacilityquot in table 1 is a list of words that have also been used as the subject of quotemployquot in a 25millionword wall street journal corpusthe quotfreqquot column are the number of times these words were used as the subject of quotemployquotorg includes all proper names recognized as organizations the loga column are their likelihood ratios the meaning of quotfacilityquot in can be determined by choosing one of its 5 senses that is most similar to the meanings of words in table 1this way a polysemous word is disambiguated with past usages of other wordswhether or not it appears in the corpus is irrelevantour approach offers several advantages the required resources of the algorithm include the following an 
untagged text corpus a broadcoverage parser a concept hierarchy such as the wordnet or roget thesaurus and a similarity measure between conceptsin the next section we introduce our definition of local contexts and the database of local contextsa description of the disambiguation algorithm is presented in section 3section 4 discusses the evaluation resultspsychological experiments show that humans are able to resolve word sense ambiguities given a narrow window of surrounding words most wsd algorithms take as input ito be defined in section 31 a polysemous word and its local contextdifferent systems have different definitions of local contextsin the local context of a word is an unordered set of words in the sentence containing the word and the preceding sentencein a local context of a word consists of an ordered sequence of 6 surrounding partofspeech tags its morphological features and a set of collocationsin our approach a local context of a word is defined in terms of the syntactic dependencies between the word and other words in the same sentencea dependency relationship is an asymmetric binary relationship between a word called head and another word called modifier dependency grammars represent sentence structures as a set of dependency relationshipsnormally the dependency relationships form a tree that connects all the words in a sentencean example dependency structure is shown in the local context of a word w is a triple that corresponds to a dependency relationship in which w is the head or the modifier where type is the type of the dependency relationship such as subj adjn comp i etc word is the word related to w via the dependency relationship and posit ion can either be head or modthe position indicates whether word is the head or the modifier in dependency relationsince a word may be involved in several dependency relationships each occurrence of a word may have multiple local contextsthe local contexts of the two nouns quotboyquot and quotdogquot in are as follows boy dog using a broad coverage parser to parse a corpus we construct a local context databasean entry in the database is a pair where lc is a local context and c is a set of tripleseach triple specifies how often word occurred in lc and the likelihood ratio of lc and wordthe likelihood ratio is obtained by treating word and c as a bigram and computed with the formula in the database entry corresponding to table 1 is as followsthe polysemous words in the input text are disambiguated in the following steps step a parse the input text and extract local contexts of each wordlet lc denote the set of local contexts of all occurrences of w in the input textstep bsearch the local context database and find words that appeared in an identical local context as w they are called selectors of w step c select a sense s of w that maximizes the similarity between w and selectorsstep d the sense s is assigned to all occurrences of w in the input textthis implements the quotone sense per discoursequot heuristic advocated in step c needs further explanationin the next subsection we define the similarity between two word senses we then explain how the similarity between a word and its selectors is maximizedthere have been several proposed measures for similarity between two concepts all of those similarity measures are defined directly by a formulawe use instead an informationtheoretic definition of similarity that can be derived from the following assumptions where cornmon is a proposition that states the commonalities between a and b 
is the amount of information contained in the proposition s assumption 2 the differences between a and b is measured by where describe is a proposition that describes what a and b areassumption 3 the similarity between a and b sim is a function of their commonality and differencesthat is sim f i the domain of f is ix 0 y 0 y xassumption 4 similarity is independent of the unit used in the information measureaccording to information theory log bp where p is the probability of s and b is the unitwhen b 2 is the number of bits needed to encode s since log bx 12412 assumption 4 means that the function f must satisfy the following condition vc 0 f f consists of two independent parts then the sim is the sum of the similarities computed when each part of the commonality is consideredin other words f f fa corollary of assumption 5 is that vy f f f 0 which means that when there is no commonality between a and b their similarity is 0 no matter how different they arefor example the similarity between quotdepthfirst searchquot and quotleather sofaquot is neither higher nor lower than the similarity between quotrectanglequot and quotinterest ratequotassumption 6 the similarity between a pair of identical objects is 1when a and b are identical knowning their commonalities means knowing what they are ie i i therefore the function f must have the following property vx f 1assumption 7 the function f is continuoussimilarity theorem the similarity between a and b is measured by the ratio between the amount of information needed to state the commonality of a and b and the information needed to fully describe what a and b are proof to prove the theorem we need to show f s since f f we only need to show that when li is a rational number f the result can be generalized to all real numbers because f is continuous and for any real number there are rational numbers that are infinitely close to itsuppose m and n are positive integersthus f f substituting fi for x in this equation qedfor examplefigure 1 is a fragment of the wordnetthe nodes are concepts the links represent isa relationshipsthe number attached to a node c is the probability p that a randomly selected noun refers to an instance of c the probabilities are estimated by the frequency of concepts in semcor a sensetagged subset of the brown corpusif x is a hill and y is a coast the commonality between x and p is that quotx is a geoform and y is a geoformquotthe information contained in this statement is 2 x logpthe similarity between the concepts hill and coast is where p of both c and cwe now provide the details of step c in our algorithmthe input to this step consists of a polysemous word v110 and its selectors wi w2 wothe word wi has ni senses sii sin step c1 construct a similarity matrix the rows and columns represent word sensesthe matrix is divided into x blocksthe blocks on the diagonal are all osthe elements in block sii are the similarity measures between the senses of wi and the senses of itsimilarity measures lower than a threshold 0 are considered to be noise and are ignoredin our experiments 9 02 was usedstep c5 modify the similarity matrix to remove the similarity values between other senses of wi and senses of other wordsfor all 1 j m such that 1 e 1ni and 1 0 imax and j imax and m e 1 nil let us consider again the word quotfacilityquot in it has two local contexts subject of quotemployquot and modifiee of quotnewquot table 1 lists words that appeared in the first local contexttable 2 lists words that appeared in the second local contextonly 
words with top20 likelihood ratio were used in our experimentsthe two groups of words are merged and used as the selectors of quotfacilityquotthe words quotfacilityquot has 5 senses in the wordnetsenses 1 and 5 are subclasses of artifactsenses 2 and 3 are kinds of statesense 4 is a kind of abstractionmany of the selectors in tables 1 and table 2 have artifact senses such as quotpostquot quotproductquot quotsystemquot quotunitquot quotmemory devicequot quotmachinequot quotplantquot quotmodelquot quotprogramquot etctherefore senses 1 and 5 of quotfacilityquot received much more support 537 and 242 respectively than other sensessense 1 is selectedconsider another example that involves an unknown proper name we treat unknown proper nouns as a polysemous word which could refer to a person an organization or a locationsince quotdreamlandquot is the subject of quotemployedquot its meaning is determined by maximizing the similarity between one of person organization locaton and the words in table 1since table 1 contains many quotorganizationquot words the support for the quotorganizationquot sense is much higher than the otherswe used a subset of the semcor to evaluate our algorithmgeneralpurpose lexical resources such as wordnet longman dictionary of contemporary english and roget thesaurus strive to achieve completenessthey often make subtle distinctions between word sensesas a result when the wsd task is defined as choosing a sense out of a list of senses in a generalpurpose lexical resource even humans may frequently disagree with one another on what the correct sense should bethe subtle distinctions between different word senses are often unnecessarytherefore we relaxed the correctness criteriona selected sense sanswer is correct if it is quotsimilar enoughquot to the sense tag s key in semcorwe experimented with three interpretations of quotsimilar enoughquotthe strictest interpretation is sim1 which is true only when sanswerskeythe most relaxed interpretation is sim 0 which is true if sanswer and skey are the descendents of the same toplevel concepts in wordnet a compromise between these two is sim 027 where 027 is the average similarity of 50000 randomly generated pairs in which w and w belong to the same roget categorywe use three words quotdutyquot quotinterestquot and quotlinequot as examples to provide a rough idea about what sim 027 meansthe word quotdutyquot has three senses in wordnet 15the similarity between the three senses are all below 027 although the similarity between senses 1 and 2 is very close to the thresholdthe word quotinterestquot has 8 sensessenses 1 and 7 are merged2 senses 3 4 and 5 are mergedthe word quotinterestquot is reduced to a 5way ambiguous wordthe other three senses are 2 6 and 8 the word quotlinequot has 27 sensesthe similarity threshold 027 reduces the number of senses to 14the reduced senses are where each group is a reduced sense and the numbers are original wordnet sense numberswe used a 25millionword wall street journal corpus to construct the local context databasethe text was parsed in 126 hours on a sparcultra 1140 with 96mb of memorywe then extracted from the parse trees 8665362 dependency relationships in which the head or the modifier is a nounwe then filtered out pairs with a likelihood ratio lower than 5 the resulting database contains 354670 local contexts with a total of 1067451 words in them since the local context database is constructed from wsj corpus which are mostly business news we only used the quotpress reportagequot part of semcor 
which consists of 7 files with about 2000 words eachfurthermore we only applied our algorithm to nounstable 3 shows the results on 2832 polysemous nouns in semcorthis number also includes proper nouns that do not contain simple markers to indicate its categorysuch a proper noun is treated as a 3way ambiguous word person organization or locationwe also showed as a baseline the performance of the simple strategy of always choosing the first sense of a word in the wordnetsince the wordnet senses are ordered according to their frequency in semcor choosing the first sense is roughly the same as choosing the sense with highest prior probability except that we are not using all the files in semcorit can be seen from table 3 that our algorithm performed slightly worse than the baseline when the strictest correctness criterion is usedhowever when the condition is relaxed its performance gain is much lager than the baselinethis means that when the algorithm makes mistakes the mistakes tend to be close to the correct answerthe step c in section 32 is similar to resnik noun group disambiguation although he did not address the question of the creation of noun groupsthe earlier work on wsd that is most similar to ours is they proposed a set of heuristic rules that are based on the idea that objects of the same or similar verbs are similarour algorithm treats all local contexts equally in its decisionmakinghowever some local contexts hardly provide any constraint on the meaning of a wordfor example the object of quotgetquot can practically be anythingthis type of contexts should be filtered out or discounted in decisionmakingour assumption that similar words appear in identical context does not always holdfor example where per refers to proper names recognized as personsnone of these is similar to the quotbody partquot meaning of quotheartquotin fact quotheartquot is the only body part that beatswe have presented a new algorithm for word sense disambiguationunlike most previous corpusbased wsd algorithm where separate classifiers are trained for different words we use the same local context database and a concept hierarchy as the knowledge sources for disambiguating all wordsthis allows our algorithm to deal with infrequent words or unknown proper nounsunnecessarily subtle distinction between word senses is a wellknown problem for evaluating wsd algorithms with generalpurpose lexical resourcesour use of similarity measure to relax the correctness criterion provides a possible solution to this problem
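Instantiated over an IS-A hierarchy, the similarity theorem above reduces to taking twice the information content of the lowest common superclass over the summed information content of the two concepts. The sketch below recomputes a hill/coast style example; the hierarchy fragment and the probability values are illustrative placeholders, since the concrete numbers from the paper's Figure 1 did not survive extraction, and the real algorithm uses WordNet concepts with SemCor-derived frequencies.

```python
import math

# Toy IS-A fragment with illustrative concept probabilities (the real values
# come from SemCor frequency counts; these numbers are placeholders).
prob = {
    "entity": 0.395,
    "inanimate-object": 0.167,
    "natural-object": 0.0163,
    "geological-formation": 0.00176,
    "hill": 0.0000189,
    "coast": 0.0000216,
}
parent = {
    "hill": "geological-formation",
    "coast": "geological-formation",
    "geological-formation": "natural-object",
    "natural-object": "inanimate-object",
    "inanimate-object": "entity",
}

def ancestors(c):
    """Concept plus all its superclasses, most specific first."""
    out = [c]
    while c in parent:
        c = parent[c]
        out.append(c)
    return out

def lso(c1, c2):
    """Lowest superordinate: most specific concept subsuming both."""
    common = set(ancestors(c2))
    return next(c for c in ancestors(c1) if c in common)

def sim(c1, c2):
    # Information shared (the common superclass) relative to the total
    # information needed to describe both concepts.
    shared = lso(c1, c2)
    return 2.0 * math.log(prob[shared]) / (math.log(prob[c1]) + math.log(prob[c2]))

print(sim("hill", "coast"))  # roughly 0.59 with these placeholder probabilities
```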
P97-1009
Using syntactic dependency as local context to resolve word sense ambiguity. Most previous corpus-based algorithms disambiguate a word with a classifier trained from previous usages of the same word; separate classifiers have to be trained for different words. We present an algorithm that uses the same knowledge sources to disambiguate different words. The algorithm does not require a sense-tagged corpus and exploits the fact that two different words are likely to have similar meanings if they occur in identical local contexts. We define the similarity between two objects to be the amount of information contained in the commonality between the objects divided by the amount of information in the descriptions of the objects.
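The similarity definition stated in this summary can be written compactly. The following is a reconstruction from the wording above, with I(x) = -log P(x); the WordNet instantiation on the right (via the most specific common superclass s0 of two senses s1, s2) is the taxonomy-based form assumed in the examples, not a verbatim quotation from the paper:

$$
\mathrm{sim}(A,B) \;=\; \frac{I(\mathrm{common}(A,B))}{I(\mathrm{description}(A,B))},
\qquad
\mathrm{sim}(s_1,s_2) \;=\; \frac{2\,\log P(s_0)}{\log P(s_1) + \log P(s_2)} .
$$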
the rhetorical parsing of unrestricted natural language texts we derive the rhetorical structures of texts by means of two new surfaceformbased algorithms one that identifies discourse usages of cue phrases and breaks sentences into clauses and one that produces valid rhetorical structure trees for unrestricted natural language texts the algorithms use information that was derived from a corpus analysis of cue phrases researchers of natural language have repeatedly acknowledged that texts are not just a sequence of words nor even a sequence of clauses and sentenceshowever despite the impressive number of discourserelated theories that have been proposed so far there have emerged no algorithms capable of deriving the discourse structure of an unrestricted texton one hand efforts such as those described by asher lascarides asher and oberlander kamp and reyle grover et al and prust scha and van den berg take the position that discourse structures can be built only in conjunction with fully specified clause and sentence structuresand hobbs theory assumes that sophisticated knowledge bases and inference mechanisms are needed for determining the relations between discourse unitsdespite the formal elegance of these approaches they are very domain dependent and therefore unable to handle more than a few restricted exampleson the other hand although the theories described by grosz and sidner polanyi and mann and thompson are successfully applied manually they are too informal to support an automatic approach to discourse analysisin contrast with this previous work the rhetorical parser that we present builds discourse trees for unrestricted textswe first discuss the key concepts on which our approach relies and the corpus analysis that provides the empirical data for our rhetorical parsing algorithmwe discuss then an algorithm that recognizes discourse usages of cue phrases and that determines clause boundaries within sentenceslastly we present the rhetorical parser and an example of its operation the mathematical foundations of the rhetorical parsing algorithm rely on a firstorder formalization of valid text structures the assumptions of the formalization are the following1the elementary units of complex text structures are nonoverlapping spans of text2rhetorical coherence and cohesive relations hold between textual units of various sizes3relations can be partitioned into two classes paratactic and hypotacticparatactic relations are those that hold between spans of equal importancehypotactic relations are those that hold between a span that is essential for the writer purpose ie a nucleus and a span that increases the understanding of the nucleus but is not essential for the writer purpose ie a satellite4the abstract structure of most texts is a binary treelike structure5if a relation holds between two textual spans of the tree structure of a text that relation also holds between the most important units of the constituent subspansthe most important units of a textual span are determined recursively they correspond to the most important units of the immediate subspans when the relation that holds between these subspans is paratactic and to the most important units of the nucleus subspan when the relation that holds between the immediate subspans is hypotacticin our previous work we presented a complete axiomatization of these principles in the context of rhetorical structure theory and we described an algorithm that starting from the set of textual units that make up a text and the set of 
elementary rhetorical relations that hold between these units can derive all the valid discourse trees of that textconsequently if one is to build discourse trees for unrestricted texts the problems that remain to be solved are the automatic determination of the textual units and the rhetorical relations that hold between themin this paper we show how one can find and exploit approximate solutions for both of these problems by capitalizing on the occurrences of certain lexicogrammatical constructssuch constructs can include tense and aspect certain patterns of pronominalization and anaphoric usages itclefts and discourse markers or cue phrases in the work described here we investigate how far we can get by focusing our attention only on discourse markers and lexicogrammatical constructs that can be detected by a shallow analysis of natural language textsthe intuition behind our choice relies on the following facts the discourse segments that they relatein other words we assume that the texts that we process are wellformed from a discourse perspective much as researchers in sentence parsing assume that they are wellformed from a syntactic perspectiveas a consequence we assume that one can bootstrap the full syntactic semantic and pragmatic analysis of the clauses that make up a text and still end up with a reliable discourse structure for that textgiven the above discussion the immediate objection that one can raise is that discourse markers are doubly ambiguous in some cases their use is only sentential ie they make a semantic contribution to the interpretation of a clause and even in the cases where markers have a discourse usage they are ambiguous with respect to the rhetorical relations that they mark and the sizes of the textual spans that they connectwe address now each of these objections in turnsentential and discourse usages of cue phrasesempirical studies on the disambiguation of cue phrases have shown that just by considering the orthographic environment in which a discourse marker occurs one can distinguish between sentential and discourse usages in about 80 of caseswe have taken hirschberg and litman research one step further and designed a comprehensive corpus analysis that enabled us to improve their results and coveragethe method procedure and results of our corpus analysis are discussed in section 3discourse markers are ambiguous with respect to the rhetorical relations that they mark and the sizes of the units that they connectwhen we began this research no empirical data supported the extent to which this ambiguity characterizes natural language textsto better understand this problem the corpus analysis described in section 3 was designed so as to also provide information about the types of rhetorical relations rhetorical statuses and sizes of textual spans that each marker can indicatewe knew from the beginning that it would be impossible to predict exactly the types of relations and the sizes of the spans that a given cue markshowever given that the structure that we are trying to build is highly constrained such a prediction proved to be unnecessary the overall constraints on the structure of discourse that we enumerated in the beginning of this section cancel out most of the configurations of elementary constraints that do not yield correct discourse treesconsider for example the following text although discourse markers are ambiguous one can use them to build discourse trees for unrestricted texts2 this will lead to many new applications in natural language 
processing3 for the sake of the argument assume that we are able to break text into textual units as labelled above and that we are interested now in finding rhetorical relations between these unitsassume now that we can infer that although marks a concessive relation between satellite 1 and nucleus either 2 or 3 and the colon an elaboration between satellite 3 and nucleus either 1 or 2if we use the convention that hypotactic relations are represented as firstorder predicates having the form rhet_rel and that paratactic relations are represented as predicates having the form rhet_rel a correct representation for text is then the set of two disjunctions given in rhet_re rhet _rel v rhet_re despite the ambiguity of the relations the overall rhetorical structure constraints will associate only one discourse tree with text namely the tree given in figure 1 any discourse tree configuration that uses relations rhet_re and rhet_re will be ruled outfor example relation rhet_re will be ruled out because unit 1 is not an important unit for span 12 and as mentioned at the beginning of this section a rhetorical relation that holds between two spans of a valid text structure must also hold between their most important units the important unit of span 12 is unit 2 ie the nucleus of the relation r het felwe used previous work on cue phrases to create an initial set of more than 450 potential discourse markersfor each potential discourse marker we then used an automatic procedure that extracted from the brown corpus a set of text fragmentseach text fragment contained a quotwindowquot of approximately 200 words and an emphasized occurrence of a markeron average we randomly selected approximately 19 text fragments per marker having few texts for the markers that do not occur very often in the corpus and up to 60 text fragments for markers such as and which we considered to be highly ambiguousoverall we randomly selected more than 7900 textsall the text fragments associated with a potential cue phrase were paired with a set of slots in which an analyst described the following1the orthographic environment that characterizes the usage of the potential discourse markerthis included occurrences of periods commas colons semicolons etc2the type of usage sentential discourse or both3the position of the marker in the textual unit to which it belonged beginningmedial or end4the right boundary of the textual unit associated with the marker5the relative position of the textual unit that the unit containing the marker was connected to before or after6the rhetorical relations that the cue phrase signaled7the textual types of the units connected by the discourse marker from clause to multiple_paragraph8the rhetorical status of each textual unit involved in the relation nucleus or satellitethe algorithms described in this paper rely on the results derived from the analysis of 1600 of the 7900 text fragmentsafter the slots for each text fragment were filled the results were automatically exported into a relational databasethe database was then examined semiautomatically with the purpose of deriving procedures that a shallow analyzer could use to identify discourse usages of cue phrases break sentences into clauses and hypothesize rhetorical relations between textual unitsfor each discourse usage of a cue phrase we derived the followingat the time of writing we have identified 1253 occurrences of cue phrases that exhibit discourse usages and associated with each of them procedures that instruct a shallow analyzer how the 
surrounding text should be broken into textual unitsthis information is used by an algorithm that concurrently identifies discourse usages of cue phrases and determines the clauses that a text is made ofthe algorithm examines a text sentence by sentence and determines a set of potential discourse markers that occur in each sentenceit then applies left to right the procedures that are associated with each potential markerthese procedures have the following possible effects they can cause an immediate breaking of the current sentence into clausesfor example when an 11 althoughquot marker is found a new clause whose right boundary is just before the occurrence of the marker is createdthe algorithm is then recursively applied on the text that is found having a discourse usagefor example when the cue phrase quotalthoughquot is identified it is also assigned a discourse usagethe decision of whether a cue phrase is considered to have a discourse usage is sometimes based on the context in which that phrase occurs ie it depends on the occurrence of other cue phrasesfor example an quotandquot will not be assigned a discourse usage in most of the cases however when it occurs in conjunction with quotalthoughquot ie quotand althoughquot it will be assigned such a rolethe most important criterion for using a cue phrase in the marker identification procedure is that the cue phrase is used as a discourse marker in at least 90 of the examples that were extracted from the corpusthe enforcement of this criterion reduces on one hand the recall of the discourse markers that can be detected but on the other hand increases significantly the precisionwe chose this deliberately because during the corpus analysis we noticed that most of the markers that connect large textual units can be identified by a shallow analyzerin fact the discourse marker that is responsible for most of our algorithm recall failures is andsince a shallow analyzer cannot identify with sufficient precision whether an occurrence of and has a discourse or a sentential usage most of its occurrences are therefore ignoredit is true that in this way the discourse structures that we build lose some potential finer granularity but fortunately from a rhetorical analysis perspective the loss has insignificant global repercussions the vast majority of the relations that we miss due to recall failures of and are joint and sequence relations that hold between adjacent clausesevaluationto evaluate our algorithm we randomly selected three texts each belonging to a different genre three independent judges graduate students in computational linguistics broke the texts into clausesthe judges were given no instructions about the criteria that they had to apply in order to determine the clause boundaries rather they were supposed to rely on their intuition and preferred definition of clausethe locations in texts that were labelled as clause boundaries by at least two of the three judges were considered to be quotvalid clause boundariesquotwe used the valid clause boundaries assigned by judges as indicators of discourse usages of cue phrases and we determined manually the cue phrases that signalled a discourse relationfor example if an quotandquot was used in a sentence and if the judges agreed that a clause boundary existed just before the quotandquot we assigned that quotandquot a discourse usageotherwise we assigned it a sentential usagehence we manually determined all discourse usages of cue phrases and all discourse boundaries between elementary unitswe 
then applied our marker and clause identification algorithm on the same textsour algorithm found 808 of the discourse markers with a precision of 895 a result that outperforms hirschberg and litman the same algorithm identified correctly 813 of the clause boundaries with a precision of 903 we are not aware of any surfaceformbased algorithms that achieve similar resultsthe rhetorical parsing algorithm is outlined in figure 2in the first step the marker and clause identification algorithm is appliedonce the textual units are determined the rhetorical parser uses the procedures derived from the corpus analysis to hypothesize rhetorical relations between the textual unitsa constraintsatisfaction procedure similar to that described in then determines all the valid discourse trees for detailsthe rhetorical parsing algorithm has been fully implemented in cdiscourse is ambiguous the same way sentences are more than one discourse structure is usually produced for a textin our experiments we noticed at least for english that the quotbestquot discourse trees are usually those that are skewed to the rightwe believe that the explanation of this observation is that text processing is essentially a lefttoright processusually people write texts so that the most important ideas go first both at the paragraph and at the text levelthe more text writers add the more they elaborate on the text that went before as a consequence incremental discourse building consists mostly of expansion of the right branchesin order to deal with the ambiguity of discourse the rhetorical parser computes a weight for each valid discourse tree and retains only those that are maximalthe weight function reflects how skewed to the right a tree isconsider the following text from the november 1996 issue of scientific american the words in italics denote the discourse markers the square brackets denote in fact journalists are trained to employ this quotpyramidquot approach to writing consciously the boundaries of elementary textual units and the curly brackets denote the boundaries of parenthetical textual units that were determined by the rhetorical parser for details the numbers associated with the square brackets are identification labelswith its distant orbit 50 percent farther from the sun than earth and slim atmospheric blanket mars experiences frigid weather conditions2 surface temperatures typically average about 60 degrees celsius at the equator and can dip to 123 degrees c near the poles3 only the midday sun at tropical latitudes is warm enough to thaw ice on occasion but any liquid water formed in this way would evaporate almost instantly5 because of the low atmospheric pressure6 although the atmosphere holds a small amount of water and waterice clouds sometimes develop7 most martian weather involves blowing dust or carbon dioxide8 each win terfor example a blizzard of frozen carbon dioxide rages over one pole and a few meters of this dryice snow accumulate as previously frozen carbon dioxide evaporates from the opposite polar cap9 yet even on the summer pole in this case the algorithm constructs 8 different treesthe trees are ordered according to their weightsthe quotbestquot tree for text has weight 3 and is fully represented in figure 3the postscript file corresponding to figure 3 was automatically generated by a backend algorithm that uses quotdotquot a preprocessor for drawing directed graphsthe convention that we use is that nuclei are surrounded by solid boxes and satellites by dotted boxes the links between a node and 
the subordinate nucleus or nuclei are represented by solid arrows and the links between a node and the subordinate satellites by dotted linesthe occurrences of parenthetical information are marked in the text by a p and a unique subordinate satellite that contains the parenthetical informationwe believe that there are two ways to evaluate the correctness of the discourse trees that an automatic process buildsone way is to compare the automatically derived trees with trees that have been built manuallyanother way is to evaluate the impact that the discourse trees that we derive automatically have on the accuracy of other natural language processing tasks such as anaphora resolution intention recognition or text summarizationin this paper we describe evaluations that follow both these avenuesunfortunately the linguistic community has not yet built a corpus of discourse trees against which our rhetorical parser can be evaluated with the effectiveness that traditional parsers areto circumvent this problem two analysts manually built the discourse trees for five texts that ranged from 161 to 725 wordsalthough there were some differences with respect to the names of the relations that the analysts used the agreement with respect to the status assigned to various units and the overall shapes of the trees was significantin order to measure this agreement we associated an importance score to each textual unit in a tree and computed the spearman correlation coefficients between the importance scores derived from the discourse trees built by each analyst2 the spearman correlation coefficient between the ranks assigned for each textual unit on the bases of the discourse trees built by the two analysts was very high 0798 at p 00001 level of significancethe differences between the two analysts came mainly from their interpretations of two of the texts the discourse trees of one analyst mirrored the paragraph structure of the texts while the discourse trees of the other mirrored a logical organization of the text which that analyst believed to be importantthe spearman correlation coefficients with respect to the importance of textual units between the discourse trees built by our program and those built by each analyst were 0480 p 00001 and 0449 p 00001these lower correlation values were due to the differences in the overall shape of the trees and to the fact that the granularity of the discourse trees built by the program was not as fine as that of the trees built by the analystsbesides directly comparing the trees built by the program with those built by analysts we also evaluated the impact that our trees could have on the task of summarizing texta summarization program that uses the rhetorical parser described here recalled 66 of the sentences considered important by 13 judges in the same five texts with a precision of 68in contrast a random procedure recalled on average only 384 of the sentences considered important by the judges with a precision of 384and the microsoft office 97 summarizer recalled 41 of the important sentences with a precision of 39we discuss at length the experiments from which the data presented above was derived in the rhetorical parser presented in this paper uses only the structural constraints that were enumerated in section 2corelational constraints focus theme anaphoric links and other syntactic semantic and pragmatic factors do not yet play a role in our system but we nevertheless expect them to reduce the number of valid discourse trees that can be associated with a 
textwe also expect that other robust methods for determining coherence relations between textual units such as those described by harabagiu and moldovan will improve the accuracy of the routines that hypothesize the rhetorical relations that hold between adjacent unitswe are not aware of the existence of any other rhetorical parser for englishhowever sumita et al report on a discourse analyzer for japaneseeven if one ignores some computational quotbonusesquot that can be easily exploited by a japanese discourse analyzer there are still some key differences between sumita work and oursparticularly important is the fact that the theoretical foundations of sumita et al analyzer do not seem to be able to accommodate the ambiguity of discourse markers in their are independent of each other against the alternative hypothesis that the rank of a variable is correlated with the rank of another variablethe value of the statistic ranges from 1 indicating that high ranks of one variable occur with low ranks of the other variable through 0 indicating no correlation between the variables to 1 indicating that high ranks of one variable occur with high ranks of the other variable system discourse markers are considered unambiguous with respect to the relations that they signalin contrast our system uses a mathematical model in which this ambiguity is acknowledged and appropriately treatedalso the discourse trees that we build are very constrained structures as a consequence we do not overgenerate invalid trees as sumita et al dofurthermore we use only surfacebased methods for determining the markers and textual units and use clauses as the minimal units of the discourse treesin contrast sumita et al use deep syntactic and semantic processing techniques for determining the markers and the textual units and use sentences as minimal units in the discourse structures that they builda detailed comparison of our work with sumita et al and others work is given in we introduced the notion of rhetorical parsing ie the process through which natural language texts are automatically mapped into discourse treesin order to make rhetorical parsing work we improved previous algorithms for cue phrase disambiguation and proposed new algorithms for determining the elementary textual units and for computing the valid discourse trees of a textthe solution that we described is both general and robustacknowledgementsthis research would have not been possible without the help of graeme hirst there are no right words to thank him for iti am grateful to melanie baljko phil edmonds and steve green for their help with the corpus analysisthis research was supported by the natural sciences and engineering research council of canada
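The disambiguation step described earlier ranks the valid discourse trees by a weight that reflects how skewed to the right each tree is and keeps only the maximal ones. The text does not give the weight function itself, so the sketch below uses one plausible stand-in (the length of the right spine); the tree encoding and this particular scoring choice are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DTree:
    """Binary discourse-tree node; leaves carry a textual-unit id."""
    left: Optional["DTree"] = None
    right: Optional["DTree"] = None
    unit: Optional[int] = None

def right_skewness(t: DTree) -> int:
    """Length of the right spine: one plausible proxy for how right-skewed a tree is.
    (Assumed scoring function, not the one actually used by the rhetorical parser.)"""
    depth = 0
    while t.right is not None:
        depth += 1
        t = t.right
    return depth

def best_trees(candidates):
    """Keep only the candidate discourse trees of maximal weight."""
    weights = [right_skewness(t) for t in candidates]
    top = max(weights)
    return [t for t, w in zip(candidates, weights) if w == top]

# Two shapes over units 1..3: left-branching vs. right-branching.
leaf = lambda i: DTree(unit=i)
left_heavy = DTree(left=DTree(left=leaf(1), right=leaf(2)), right=leaf(3))
right_heavy = DTree(left=leaf(1), right=DTree(left=leaf(2), right=leaf(3)))
print(len(best_trees([left_heavy, right_heavy])))  # -> 1 (the right-branching tree wins)
```

Under this stand-in the right-branching analysis wins over the left-branching one, which matches the observation that incremental discourse building mostly expands right branches.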
P97-1013
The rhetorical parsing of unrestricted natural language texts. We derive the rhetorical structures of texts by means of two new surface-form-based algorithms: one that identifies discourse usages of cue phrases and breaks sentences into clauses, and one that produces valid rhetorical structure trees for unrestricted natural language texts. The algorithms use information that was derived from a corpus analysis of cue phrases. We describe a method for text summarization based on nuclearity and selective retention of hierarchical fragments.
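A toy sketch of the first stage mentioned in this summary: identifying discourse usages of cue phrases and breaking a sentence into clauses. The marker list and the single break-before rule are an invented, drastically reduced stand-in for the corpus-derived procedures (the real system attaches context-sensitive procedures to hundreds of potential markers and also handles sentential usages such as bare "and").

```python
import re

# Toy subset of marker procedures: each marker maps to a rule for where to place
# a clause boundary.  The real system derives richer, context-sensitive procedures
# from the corpus analysis; these entries are illustrative only.
MARKERS = {
    "although": "break_before",   # new clause ends just before the marker
    "but":      "break_before",
    "because":  "break_before",
}

def segment(sentence: str):
    """Return (clauses, discourse_markers) for one sentence, scanning left to right."""
    tokens = sentence.split()
    clauses, markers, start = [], [], 0
    for i, tok in enumerate(tokens):
        word = re.sub(r"\W+", "", tok).lower()
        if word in MARKERS and MARKERS[word] == "break_before" and i > start:
            clauses.append(" ".join(tokens[start:i]))
            markers.append(word)
            start = i
    clauses.append(" ".join(tokens[start:]))
    return clauses, markers

text = ("Although discourse markers are ambiguous, one can use them to build "
        "discourse trees for unrestricted texts, but the relations they signal "
        "must still be disambiguated.")
clauses, markers = segment(text)
for c in clauses:
    print(c)
print(markers)
```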
machine transliteration it is challenging to translate names and technical terms across languages with different alphabets and sound inventories these items are commonly transliterated ie replaced with approximate phonetic equivalents example english comes out in japanese translating such items from japanese back to english is even more challenging and of practical interest as transliterated items make up the bulk of text phrases not found in bilingual dictionaries we describe and evaluate a method for performing backwards transliterations by machine this method uses a generative model incorporating several distinct stages in the transliteration process translators must deal with many problems and one of the most frequent is translating proper names and technical termsfor language pairs like spanishenglish this presents no great challenge a phrase like antonio gil usually gets translated as antonio gilhowever the situation is more complicated for language pairs that employ very different alphabets and sound systems such as japaneseenglish and arabicenglishphonetic translation across these pairs is called transliterationwe will look at japaneseenglish transliteration in this paperjapanese frequently imports vocabulary from other languages primarily from englishit has a special phonetic alphabet called katakana which is used primarily to write down foreign names and loanwordsto write a word like golfbag in katakana some compromises must be madefor example japanese has no distinct l and ft sounds the two english sounds collapse onto the same japanese sounda similar compromise must be struck for english ii and f also japanese generally uses an alternating consonantvowel structure making it impossible to pronounce lfb without intervening vowelskatakana writing is a syllabary rather than an alphabetthere is one symbol for ga another for gi another for gu etcso the way to write golfbag in katakana is 1 7 7 roughly pronounced goruhubagguhere are a few more examples notice how the transliteration is more phonetic than orthographic the letter h in johnson does not produce any katakanaalso a dotseparator is used to separate words but not consistentlyand transliteration is clearly an informationlosing operation aisukuri imu loses the distinction between ice cream and i screamtransliteration is not trivial to automate but we will be concerned with an even more challenging problemgoing from katakana back to english ie backtransliterationautomating backtransliteration has great practical importance in japaneseenglish machine translationkatakana phrases are the largest source of text phrases that do not appear in bilingual dictionaries or training corpora however very little computational work has been done in this area briefly mentions a patternmatching approach while discuss a hybrid neuralnetexpertsystem approach to transliterationthe informationlosing aspect of transliteration makes it hard to inverthere are some problem instances taken from actual newspaper articles1 english translations appear later in this paperhere are a few observations about backtransliteration like most problems in computational linguistics this one requires full world knowledge for a 100 solutionchoosing between katarina and catalina might even require detailed knowledge of geography and figure skatingat that level human translators find the problem quite difficult as well so we only aim to match or possibly exceed their performancebilingual glossaries contain many entries mapping katakana phrases onto english phrases eg it is 
possible to automatically analyze such pairs to gain enough knowledge to accurately map new katakana phrases that come along and learning approach travels well to other languages pairshowever a naive approach to finding direct correspondences between english letters and katakana symbols suffers from a number of problemsone can easily wind up with a system that proposes iskrym as a backtransliteration of aisukuriimutaking letter frequencies into account improves this to a more plausiblelooking isclimmoving to real words may give is crime the i corresponds to ai the corresponds to su etcunfortunately the correct answer here is ice creamafter initial experiments along these lines we decided to step back and build a generative model of the transliteration process which goes like this this divides our problem into five subproblemsfortunately there are techniques for coordinating solutions to such subproblems and for using generative models in the reverse directionthese techniques rely on probabilities and bayes rulesuppose we build an english phrase generator that produces word sequences according to some probability distribution pand suppose we build an english pronouncer that takes a word sequence and assigns it a set of pronunciations again probabilistically according to some pgiven a pronunciation p we may want to search for the word sequence w that maximizes pbayesrule let us us equivalently maximize p p exactly the two distributions we have modeledextending this notion we settled down to build five probability distributions given a katakana string o observed by ocr we want to find the english word sequence w that maximizes the sum over all e j and k of following we implement p in a weighted finitestate acceptor and we implement the other distributions in weighted finitestate transducers a wfsa is an statetransition diagram with weights and symbols on the transitions making some output sequences more likely than othersa wfst is a wfsa with a pair of symbols on each transition one input and one outputinputs and outputs may include the empty symbol c also following we have implemented a general composition algorithm for constructing an integrated model p from models p and p treating wfsas as wfsts with identical inputs and outputswe use this to combine an observed katakana string with each of the models in turnthe result is a large wfsa containing all possible english translationswe use dijkstra shortestpath algorithm to extract the most probable onethe approach is modularwe can test each engine independently and be confident that their results are combined correctlywe do no pruning so the final wfsa contains every solution however unlikelythe only approximation is the viterbi one which searches for the best path through a wfsa instead of the best sequence this section describes how we designed and built each of our five modelsfor consistency we continue to print written english word sequences in italics english sound sequences in all capitals japanese sound sequences in lower case and katakana sequences naturally the first model generates scored word sequences the idea being that ice cream should score higher than ice creme which should score higher than aice kreemwe adopted a simple unigram scoring method that multiplies the scores of the known words and phrases in a sequenceour 262000entry frequency list draws its words and phrases from the wall street journal corpus an online english name list and an online gazeteer of place names2 a portion of the wfsa looks like this an ideal word 
sequence model would look a bit differentit would prefer exactly those strings which are actually grist for japanese transliteratorsfor example people rarely transliterate auxiliary verbs but surnames are often transliteratedwe have approximated such a model by removing highfrequency words like has an are am were them and does plus unlikely words corresponding to japanese sound bites like coup and ohwe also built a separate word sequence model containing only english first and last namesif we know that the transliterated phrase is a personal name this model is more precisethe next wfst converts english word sequences into english sound sequenceswe use the english phoneme inventory from the online cmu pronunciation dictionary3 minus the stress marksthis gives a total of 40 sounds including 14 vowel sounds 25 consonant sounds plus our special symbol the dictionary has pronunciations for 110000 words and we organized a phonemetree based wfst from it ee note that we insert an optional pause between word pronunciationsdue to memory limitations we only used the 50000 most frequent wordswe originally thought to build a general lettertosound wfst on the theory that while wrong pronunciations might occasionally be generated japanese transliterators also mispronounce wordshowever our lettertosound wfst did not match the performance of japanese translit2 available from the acl data collection initiative httpwwwspeechcscmueducgibincmudict erators and it turns out that mispronunciations are l ow l ow modeled adequately in the next stage of the cascadei a r 0 0 r o o next we map english sound sequences onto japanese sound sequencesthis is an inherently informationlosing process as english r and l sounds collapse onto japanese r the 14 english vowel sounds collapse onto the 5 japanese vowel sounds etcwe face two immediate problems an obvious target inventory is the japanese syllabary itself written down in katakana or a roman equivalent with this approach the english sound k corresponds to one of t are or depending on its contextunfortunately because katakana is a syllabary we would be unable to express an obvious and useful generalization namely that english k usually corresponds to japanese k independent of contextmoreover the correspondence of japanese katakana writing to japanese sound sequences is not perfectly onetoone so an independent sound inventory is wellmotivated in any caseour japanese sound inventory includes 39 symbols 5 vowel sounds 33 consonant sounds and one special symbol an english sound sequence like might map onto a japanese sound sequence like note that long japanese vowel sounds are written with two symbols instead of just one this scheme is attractive because japanese sequences are almost always longer than english sequencesour wfst is learned automatically from 8000 pairs of englishjapanese sound sequences eg we were able to produce these pairs by manipulating a small englishkatakana glossaryfor each glossary entry we converted english words into english sounds using the previous section model and we converted katakana words into japanese sounds using the next section modelwe then applied the estimationmaximization algorithm to generate symbolmapping probabilities shown in figure 1our them training goes like this 1for each englishjapanese sequence pair compute all possible alignments between their elementsin our case an alignment is a drawing that connects each english sound with one or more japanese sounds such that all japanese sounds are covered and no lines crossfor example 
there are two ways to align the pair we then build a wfst directly from the symbolmapping probabilities our wfst has 99 states and 283 arcswe have also built models that allow individual english sounds to be quotswallowedquot however these models are expensive to compute and lead to a vast number of hypotheses during wfst compositionfurthermore in disallowing quotswallowingquot we were able to automatically remove hundreds of potentially harmful pairs from our training set eg because no alignments are possible such pairs are skipped by the learning algorithm cases like these must be solved by dictionary lookup anywayonly two pairs failed to align when we wished they hadboth involved turning english y uw into japanese you as in cu kurerenote also that our model translates each english sound without regard to contextwe have built also contextbased models using decision trees recoded as wfstsfor example at the end of a word english t is likely to come out as rather than however contextbased models proved unnecessary case as learned by estimationmaximizationonly mappings with conditional probabilities greater than 1 are shown so the figures may not sum to 1 for backtransliterationthey are more useful for englishtojapanese forward transliterationto map japanese sound sequences like onto katakana sequences like we manually constructed two wfstscomposed together they yield an integrated wfst with 53 states and 303 arcsthe first wfst simply merges long japanese vowel sounds into new symbols aa uu ee and oothe second wfst maps japanese sounds onto katakana symbolsthe basic idea is to consume a whole syllable worth of sounds before producing any katakana eg o 3 this fragment shows one kind of spelling variation in japanese long vowel sounds are usually written with a long vowel mark but are sometimes written with repeated katakana we combined corpus analysis with guidelines from a japanese textbook to turn up many spelling variations and unusual katakana symbols and so onspelling variation is clearest in cases where an english word like switch shows up transliterated variously in different dictionariestreating these variations as an equivalence class enables us to learn general sound mappings even if our bilingual glossary adheres to a single narrow spelling conventionwe do not however and harmfully restrictive in their unsinoothed incarnations generate all katakana sequences with this model for example we do not output strings that begin with a subscripted vowel katakanaso this model also serves to filter out some illformed katakana sequences possibly proposed by optical character recognitionperhaps uncharitably we can view optical character recognition as a device that garbles perfectly good katakana sequencestypical confusions made by our commercial ocr system include for 1 for 1quot 7 for 7 and 7 for 1to generate preocr text we collected 19500 characters worth of katakana words stored them in a file and printed them outto generate postocr text we ocrd the printoutswe then ran the them algorithm to determine symbolmapping probabilitieshere is part of that table this model outputs a superset of the 81 katakana symbols including spurious quote marks alphabetic symbols and the numeral 7we can now use the models to do a sample backtransliterationwe start with a katakana phrase as observed by ocrwe then serially compose it with the models in reverse ordereach intermediate stage is a wfsa that encodes many possibilitiesthe final stage contains all backtransliterations suggested by the models and we 
finally extract the best onewe start with the masutaazutoonamento problem from section 1our ocr observes q jthis string has two recognition errors for and 1 for t we turn the string into a chained 12state11arc wfsa and compose it with the p modelthis yields a fatter 12state15arc wfsa which accepts the correct spelling at a lower probabilitynext comes the p model which produces a 28state31arc wfsa whose highestscoring sequence is masutaazutoochiment o next comes p yielding a 62state241arc wfsa whose best sequence is next to last comes p which results in a 2982state4601arc wfsa whose best sequence is masters tone am ent awe this english string is closest phonetically to the japanese but we are willing to trade phonetic proximity for more sensical english we rescore this wfsa by composing it with p and extract the best translation we have performed two largescale experiments one using a fulllanguage p model and one using a personal name language modelin the first experiment we extracted 1449 unique katakana phrases from a corpus of 100 short news articlesof these 222 were missing from an online 100000entry bilingual dictionarywe backtransliterated these 222 phrasesmany of the translations are perfect technical program sex scandal omaha beach new york times ramon diazothers are close tanya harding nickel simpson danger washington world capsome miss the mark nancy care again plus occur patriot miss realwhile it is difficult to judge overall accuracysome of the phases are onomatopoetic and others are simply too hard even for good human translatorsit is easier to identify system weaknesses and most of these lie in the p modelfor example nancy kerrigan should be preferred over nancy care againin a second experiment we took katakana versions of the names of 100 yous politicians eg 712 y q 7l and q y7 we backtransliterated these by machine and asked four human subjects to do the samethese subjects were native english speakers and newsaware we gave them brief instructions examples and hintsthe results were as follows human machine 27 64 7 12 66 24 there is room for improvement on both sidesbeing english speakers the human subjects were good at english name spelling and yous politics but not at japanese phoneticsa native japanese speaker might be expert at the latter but not the formerpeople who are expert in all of these areas however are rareon the automatic side many errors can be correcteda firstnamelastname model would rank richard bryan more highly than richard briana bigram model would prefer orren hatch over olin hatchother errors are due to unigram training problems or more rarely incorrect or brittle phonetic modelsfor example quotlongquot occurs much more often than quotronquot in newspaper text and our word selection does not exclude phrases like quotlong islandquot so we get long wyden instead of ron wydenrare errors are due to incorrect or brittle phonetic modelsstill the machine performance is impressivewhen word separators are removed from the katakana phrases rendering the task exceedingly difficult for people the machine performance is unchangedwhen we use ocr7 of katakana tokens are misrecognized affecting 50 of test strings but accuracy only drops from 64 to 52we have presented a method for automatic backtransliteration which while far from perfect is highly competitiveit also achieves the objectives outlined in section 1it ports easily to new language pairs the p and p models are entirely reusable while other models are learned automaticallyit is robust against ocr noise in a rare 
example of highlevel language processing being useful in improving lowlevel ocrwe plan to replace our shortestpath extraction algorithm with one of the recently developed kshortest path algorithms we will then return a ranked list of the k best translations for subsequent contextual disambiguation either by machine or as part of an interactive manmachine systemwe also plan to explore probabilistic models for arabicenglish transliterationsimply identifying which arabic words to transliterate is a difficult task in itself and while japanese tends to insert extra vowel sounds arabic is usually written without any vowelsfinally it should also be possible to embed our phonetic shift model p inside a speech recognizer to help adjust for a heavy japanese accent although we have not experimented in this areawe would like to thank alton earl ingram yolanda gil bonnie gloverstalls richard whitney and kenji yamada for their helpful commentswe would correct phonetically equivalent but misspelled incorrect also like to thank our sponsors at the department of defense
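A minimal sketch of the noisy-channel scoring that the cascade implements: each English candidate w is scored by P(w) * P(j | w), where P(j | w) sums over monotone alignments in which every English sound emits one or two Japanese sounds (a simplification of the learned symbol-mapping model). The candidate list, pronunciations and all probabilities below are invented toy numbers, and a real system composes the WFSTs and extracts the best path with a shortest-path search rather than enumerating candidates.

```python
def channel_prob(eng_sounds, jap_sounds, table):
    """P(jap_sounds | eng_sounds), summing over monotone alignments in which each
    English sound emits one or two Japanese sounds (a simplifying assumption)."""
    n, m = len(eng_sounds), len(jap_sounds)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 1.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in (1, 2):                      # 1 or 2 Japanese sounds per English sound
                if j - k >= 0:
                    emit = tuple(jap_sounds[j - k:j])
                    dp[i][j] += dp[i - 1][j - k] * table.get((eng_sounds[i - 1], emit), 0.0)
    return dp[n][m]

# Toy models (all numbers invented for illustration).
P_WORD = {"ice cream": 2e-5, "ice creme": 1e-7, "i scream": 5e-7}
PRON   = {"ice cream": ["AY", "S", "K", "R", "IY", "M"],
          "ice creme": ["AY", "S", "K", "R", "EH", "M"],
          "i scream":  ["AY", "S", "K", "R", "IY", "M"]}
P_E2J  = {("AY", ("a", "i")): 0.9, ("S", ("s", "u")): 0.5, ("S", ("s",)): 0.4,
          ("K", ("k", "u")): 0.5, ("K", ("k",)): 0.4, ("R", ("r",)): 0.6,
          ("IY", ("i", "i")): 0.6, ("EH", ("e",)): 0.7, ("M", ("m", "u")): 0.7}

observed = ["a", "i", "s", "u", "k", "u", "r", "i", "i", "m", "u"]   # aisukuriimu
scores = {w: P_WORD[w] * channel_prob(PRON[w], observed, P_E2J) for w in P_WORD}
print(max(scores, key=scores.get))   # -> "ice cream"
```

Even though "ice cream" and "i scream" receive the same channel score, the stronger word-sequence prior lets "ice cream" win, which is exactly the role P(w) plays in the cascade.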
P97-1017
Machine transliteration. It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents (for example, computer in English comes out as in Japanese). Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model incorporating several distinct stages in the transliteration process. We propose to compose a set of weighted finite-state transducers to solve the problem of back-transliteration from Japanese katakana to English.
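The generative decomposition summarized above can be written out explicitly. This is a reconstruction from the prose of the paper, with W an English word sequence, E its English sound sequence, J the Japanese sound sequence, K the katakana writing, and O the katakana string observed by OCR:

$$
\hat{W} \;=\; \arg\max_{W} \sum_{E,\,J,\,K} P(W)\, P(E \mid W)\, P(J \mid E)\, P(K \mid J)\, P(O \mid K) .
$$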
predicting the semantic orientation of adjectives we identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives a loglinear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations achieving 82 accuracy in this task when each conjunction is considered independently combining the constraints across many adjectives a clustering algorithm separates the adjectives into groups of different orientations and finally adjectives are labeled positive or negative evaluations on real data and simulation experiments indicate high levels of performance classification precision is more than 90 for adjectives that occur in a modest number of conjunctions in the corpus the semantic orientation or polarity of a word indicates the direction the word deviates from the norm for its semantic group or lexical field it also constrains the word usage in the language due to its evaluative characteristics for example some nearly synonymous words differ in orientation because one implies desirability and the other does not in linguistic constructs such as conjunctions which impose constraints on the semantic orientation of their arguments the choices of arguments and connective are mutually constrained as illustrated by the tax proposal was simple and wellreceived simplistic but wellreceived simplistic and wellreceived by the publicin addition almost all antonyms have different semantic orientationsif we know that two words relate to the same property but have different orientations we can usually infer that they are antonymsgiven that semantically similar words can be identified automatically on the basis of distributional properties and linguistic cues identifying the semantic orientation of words would allow a system to further refine the retrieved semantic similarity relationships extracting antonymsunfortunately dictionaries and similar sources do not include semantic orientation information 2 explicit links between antonyms and synonyms may also be lacking particularly when they depend on the domain of discourse for example the opposition bear bull appears only in stock market reports where the two words take specialized meaningsin this paper we present and evaluate a method that automatically retrieves semantic orientation information using indirect information collected from a large corpusbecause the method relies on the corpus it extracts domaindependent information and automatically adapts to a new domain when the corpus is changedour method achieves high precision and while our focus to date has been on adjectives it can be directly applied to other word classesultimately our goal is to use this method in a larger system to automatically identify antonyms and distinguish near synonymsour approach relies on an analysis of textual corpora that correlates linguistic features or indicators with i exceptions include a small number of terms that are both negative from a pragmatic viewpoint and yet stand in an antonyrnic relationship such terms frequently lexicalize two unwanted extremes eg verboseterse semantic orientationwhile no direct indicators of positive or negative semantic orientation have been proposed3 we demonstrate that conjunctions between adjectives provide indirect information about orientationfor most connectives the conjoined adjectives usually are of the same orientation compare fair and legitimate and corrupt and brutal which actually occur in 
our corpus with fair and brutal and corrupt and legitimate which are semantically anomalousthe situation is reversed for but which usually connects two adjectives of different orientationsthe system identifies and uses this indirect information in the following stages in the following sections we first present the set of adjectives used for training and evaluationwe next validate our hypothesis that conjunctions constrain the orientation of conjoined adjectives and then describe the remaining three steps of the algorithmafter presenting our results and evaluation we discuss simulation experiments that show how our method performs under different conditions of sparseness of datafor our experiments we use the 21 million word 1987 wall street journal corpus automatically annotated with partofspeech tags using the parts tagger in order to verify our hypothesis about the orientations of conjoined adjectives and also to train and evaluate our subsequent algorithms we need a 3certain words inflected with negative affixes tend to be mostly negative but this rule applies only to a fraction of the negative wordsfurthermore there are words so inflected which have positive orientation eg independent and unbiasedpositive adequate central clever famous intelligent remarkable reputed sensitive slender thriving negative contagious drunken ignorant lanky listless primitive strident troublesome unresolved unsuspecting set of adjectives with predetermined orientation labelswe constructed this set by taking all adjectives appearing in our corpus 20 times or more then removing adjectives that have no orientationthese are typically members of groups of complementary qualitative terms eg domestic or medicalwe then assigned an orientation label to each adjective using an evaluative approachthe criterion was whether the use of this adjective ascribes in general a positive or negative quality to the modified item making it better or worse than a similar unmodified itemwe were unable to reach a unique label out of context for several adjectives which we removed from consideration for example cheap is positive if it is used as a synonym of inexpensive but negative if it implies inferior qualitythe operations of selecting adjectives and assigning labels were performed before testing our conjunction hypothesis or implementing any other algorithms to avoid any influence on our labelsthe final set contained 1336 adjectives figure 1 shows randomly selected terms from this setto further validate our set of labeled adjectives we subsequently asked four people to independently label a randomly drawn sample of 500 of these adjectivesthey agreed with us that the positivenegative concept applies to 8915 of these adjectives on averagefor the adjectives where a positive or negative label was assigned by both us and the independent evaluators the average agreement on the label was 9738the average interreviewer agreement on labeled adjectives was 9697these results are extremely significant statistically and compare favorably with validation studies performed for other tasks in the pastthey show that positive and negative orientation are objective properties that can be reliably determined by humansto extract conjunctions between adjectives we used a twolevel finitestate grammar which covers complex modification patterns and nounadjective appositionrunning this parser on the 21 million word corpus we collected 13426 conjunctions of adjectives expanding to a total of 15431 conjoined adjective pairsafter morphological transformations 
the remaining 15048 conjunction tokens involve 9296 distinct pairs of conjoined adjectives each conjunction token is classified by the parser according to three variables the conjunction used the type of modification and the number of the modified noun 4 validation of the conjunction hypothesis using the three attributes extracted by the parser we constructed a crossclassification of the conjunctions in a threeway tablewe counted types and tokens of each conjoined pair that had both members in the set of preselected labeled adjectives discussed above 2748 of all conjoined pairs and 4024 of all conjunction occurrences met this criterionwe augmented this table with marginal totals arriving at 90 categories each of which represents a triplet of attribute values possibly with one or more quotdo not carequot elementswe then measured the percentage of conjunctions in each category with adjectives of same or different orientationsunder the null hypothesis of same proportions of adjective pairs of same and different orientation in a given category the number of same or differentorientation pairs follows a binomial distribution with p 05 we show in table 1 the results for several representative categories and summarize all results below different orientations there are rather surprisingly small differences in the behavior of conjunctions between linguistic environments there are a few exceptions eg appositive and conjunctions modifying plural nouns are evenly split between same and different orientationbut in these exceptional cases the sample is very small and the observed behavior may be due to chancethe analysis in the previous section suggests a baseline method for classifying links between adjectives since 7784 of all links from conjunctions indicate same orientation we can achieve this level of performance by always guessing that a link is of the sameorientation typehowever we can improve performance by noting that conjunctions using but exhibit the opposite pattern usually involving adjectives of different orientationsthus a revised but still simple rule predicts a differentorientation link if the two adjectives have been seen in a but conjunction and a sameorientation link otherwise assuming the two adjectives were seen connected by at least one conjunctionmorphological relationships between adjectives also play a roleadjectives related in form almost always have different semantic orientationswe implemented a morphological analyzer which matches adjectives related in this mannerthis process is highly accurate but unfortunately does not apply to many of the possible pairs in our set of 1336 labeled adjectives 102 pairs are morphologically related among them 99 are of different orientation yielding 9706 accuracy for the morphology methodthis information is orthogonal to that extracted from conjunctions only 12 of the 102 morphologically related pairs have been observed in conjunctions in our corpusthus we add to the predictions made from conjunctions the differentorientation links suggested by morphological relationshipswe improve the accuracy of classifying links derived from conjunctions as same or different orientation with a loglinear regression model exploiting the differences between the various conjunction categoriesthis is a generalized linear model with a linear predictor wtx where x is the vector of the observed counts in the various conjunction categories for the particular adjective pair we try to classify and w is a vector of weights to be learned during trainingthe response y 
is nonlinearly related to 77 through the inverse logit function en note that y e with each of these endpoints associated with one of the possible outcomeswe have 90 possible predictor variables 42 of which are linearly independentsince using all the 42 independent predictors invites overfitting we have investigated subsets of the full loglinear model for our data using the method of iterative stepwise refinement starting with an initial model variables are added or dropped if their contribution to the reduction or increase of the residual deviance compares favorably to the resulting loss or gain of residual degrees of freedomthis process led to the selection of nine predictor variableswe evaluated the three prediction models discussed above with and without the secondary source of morphology relationsfor the loglinear model we repeatedly partitioned our data into equally sized training and testing sets estimated the weights on the training set and scored the model performance on the testing set averaging the resulting scorestable 2 shows the results of these analysesalthough the loglinear model offers only a small improvement on pair classification than the simpler but prediction rule it confers the important advantage 5when morphology is to be used as a supplementary predictor we remove the morphologically related pairs from the training and testing sets of rating each prediction between 0 and 1we make extensive use of this in the next phase of our algorithmthe third phase of our method assigns the adjectives into groups placing adjectives of the same orientation in the same groupeach pair of adjectives has an associated dissimilarity value between 0 and 1 adjectives connected by sameorientation links have low dissimilarities and conversely differentorientation links result in high dissimilaritiesadjective pairs with no connecting links are assigned the neutral dissimilarity 05the baseline and but methods make qualitative distinctions only for them we define dissimilarity for sameorientation links as one minus the probability that such a classification link is correct and dissimilarity for differentorientation links as the probability that such a classification is correctthese probabilities are estimated from separate training datanote that for these prediction models dissimilarities are identical for similarly classified linksthe loglinear model on the other hand offers an estimate of how good each prediction is since it produces a value y between 0 and 1we construct the model so that 1 corresponds to sameorientation and define dissimilarity as one minus the produced valuesame and differentorientation links between adjectives form a graphto partition the graph nodes into subsets of the same orientation we employ an iterative optimization procedure on each connected component based on the exchange method a nonhierarchical clustering algorithm we define an objective function 4 scoring each possible partition p of the adjectives into two subgroups ci and c2 as where ici stands for the cardinality of cluster i and d is the dissimilarity between adjectives x and ywe want to select the partition pmin that minimizes subject to the additional constraint that for each adjective x in a cluster c where c is the complement of cluster c ie the other member of the partitionthis constraint based on rousseeuw silhouettes helps correct wrong cluster assignmentsto find pi we first construct a random partition of the adjectives then locate the adjective that will most reduce the objective function if it is 
moved from its current clusterwe move this adjective and proceed with the next iteration until no movements can improve the objective functionat the final iteration the cluster assignment of any adjective that violates constraint is changedthis is a steepestdescent hillclimbing method and thus is guaranteed to convergehowever it will in general find a local minimum rather than the global one the problem is npcomplete we can arbitrarily increase the probability of finding the globally optimal solution by repeatedly running the algorithm with different starting partitionsthe clustering algorithm separates each component of the graph into two groups of adjectives but does not actually label the adjectives as positive or negativeto accomplish that we use a simple criterion that applies only to pairs or groups of words of opposite orientationwe have previously shown that in oppositions of gradable adjectives where one member is semantically unmarked the unmarked member is the most frequent one about 81 of the timethis is relevant to our task because semantic markedness exhibits a strong correlation with orientation the unmarked member almost always having positive orientation we compute the average frequency of the words in each group expecting the group with higher average frequency to contain the positive termsthis aggregation operation increases the precision of the labeling dramatically since indicators for many pairs of words are combined even when some of the words are incorrectly assigned to their groupsince graph connectivity affects performance we devised a method of selecting test sets that makes this dependence explicitnote that the graph density is largely a function of corpus size and thus can be increased by adding more datanevertheless we report results on sparser test sets to show how our algorithm scales upwe separated our sets of adjectives a and conjunction and morphologybased links l into training and testing groups by selecting for several values of the parameter a the maximal subset of a aa which includes an adjective x if and only if there exist at least a links from l between x and other elements of aathis operation in turn defines a subset of l la which includes all links between members of aawe train our loglinear model on l la compute predictions and dissimilarities for the links in la and use these to classify and label the adjectives in aa a must be at least 2 since we need to leave some links for trainingtable 3 shows the results of these experiments for 2 to 5our method produced the correct classification between 78 of the time on the sparsest test set up to more than 92 of the time when a higher number of links was presentmoreover in all cases the ratio of the two group frequencies correctly identified the positive subgroupthese results are extremely significant statistically when compared with the baseline method of randomly assigning orientations to adjectives or the baseline method of always predicting the most frequent category figure 2 shows some of the adjectives in set a4 and their classificationsclassified as positive bold decisive disturbing generous good honest important large mature patient peaceful positive proud sound stimulating straightforward strange talented vigorous witty classified as negativea strong point of our method is that decisions on individual words are aggregated to provide decisions on how to group words into a class and whether to label the class as positive or negativethus the overall result can be much more accurate than the 
individual indicatorsto verify this we ran a series of simulation experimentseach experiment measures how our algorithm performs for a given level of precision p for identifying links and a given average number of links k for each wordthe goal is to show that even when p is low given enough data we can achieve high performance for the groupingas we noted earlier the corpus data is eventually represented in our system as a graph with the nodes corresponding to adjectives and the links to predictions about whether the two connected adjectives have the same or different orientationthus the parameter p in the simulation experiments measures how well we are able to predict each link independently of the others and the parameter k measures the number of distinct adjectives each adjective appears with in conjunctionsp therefore directly represents the precision of the link classification algorithm while k indirectly represents the corpus sizeto measure the effect of p and k we need to carry out a series of experiments where we systematically vary their valuesfor example as k increases for a given level of precision p for individual links we want to measure how this affects overall accuracy of the resulting groups of nodesthus we need to construct a series of data sets or graphs which represent different scenarios corresponding to a given combination of values of p and k to do this we construct a random graph by randomly assigning 50 nodes to the two possible orientationsbecause we do not have frequency and morphology information on these abstract nodes we cannot predict whether two nodes are of the same or different orientationrather we randomly assign links between nodes so that on average each node participates in k links and 100 x p of all links connect nodes of the same orientationthen we consider these links as identified by the link prediction algorithm as connecting two nodes with the same orientation this is equivalent to the baseline link classification method and provides a lower bound on the performance of the algorithm actually used in our system because of the lack of actual measurements such as frequency on these abstract nodes we also decouple the partitioning and labeling components of our system and score the partition found under the best matching conditions for the actual labelsthus the simulation measures only how well the system separates positive from negative adjectives not how well it determines which is whichhowever in all the experiments performed on real corpus data the system correctly found the labels of the groups any misclassifications came from misplacing an adjective in the wrong groupthe whole procedure of constructing the random graph and finding and scoring the groups is repeated 200 times for any given combination of p and k and the results are averaged thus avoiding accidentally evaluating our system on a graph that is not truly representative of graphs with the given p and k we observe that even for relatively low p our ability to correctly classify the nodes approaches very high levels with a modest number of linksfor p 08 we need only about 7 links per adjective for classification performance over 90 and only 12 links per adjective for performance over 996 the difference between low and high values of p is in the rate at which increasing data increases overall precisionthese results are somewhat more optimistic than those obtained with real data a difference which is probably due to the uniform distributional assumptions in the simulationnevertheless we 
expect the trends to be similar to the ones shown in Figure 3, and the results of Table 3 on real data support this expectation. We have proposed and verified from corpus data constraints on the semantic orientations of conjoined adjectives. We used these constraints to automatically construct a log-linear regression model which, combined with supplementary morphology rules, predicts whether two conjoined adjectives are of same or different orientation with 82% accuracy. We then classified several sets of adjectives according to the links inferred in this way and labeled them as positive or negative, obtaining 92% accuracy on the classification task for reasonably dense graphs and 100% accuracy on the labeling task. Simulation experiments establish that very high levels of performance can be obtained with a modest number of links per word, even when the links themselves are not always correctly classified. As part of our clustering algorithm's output, a "goodness-of-fit" measure for each word is computed, based on Rousseeuw's silhouettes. This measure ranks the words according to how well they fit in their group, and can thus be used as a quantitative measure of orientation, refining the binary positive-negative distinction. By restricting the labeling decisions to words with high values of this measure, we can also increase the precision of our system, at the cost of sacrificing some coverage. We are currently combining the output of this system with a semantic group finding system, so that we can automatically identify antonyms from the corpus without access to any semantic descriptions. The learned semantic categorization of the adjectives can also be used in the reverse direction, to help in interpreting the conjunctions they participate in. We will also extend our analyses to nouns and verbs. This work was supported in part by the Office of Naval Research under grant N00014-95-1-0745, jointly by the Office of Naval Research and the Advanced Research Projects Agency under grant N00014-89-J-1782, by the National Science Foundation under grant GER-9024069, and by the New York State Center for Advanced Technology under contracts NYSSTF-CAT-013 and NYSSTF-CAT-013. We thank Ken Church and the AT&T Bell Laboratories for making the PARTS part-of-speech tagger available to us. We also thank Dragomir Radev, Eric Siegel, and Gregory Sean McKinley, who provided models for the categorization of the adjectives in our training and testing sets as positive and negative.
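The clustering and labeling phases described above can be made concrete with a short sketch. The following Python is an illustrative reconstruction, not the authors' implementation: it assumes a precomputed symmetric dissimilarity table d (same-orientation links mapped to one minus the model's prediction, different-orientation links to the prediction itself, unlinked pairs to the neutral 0.5), implements the exchange-method objective Phi(P) as the sum, over the two clusters, of their total pairwise dissimilarity divided by cluster cardinality, greedily moves the single adjective that most reduces Phi until no move helps, restarts from several random partitions to escape local minima, and finally labels the higher-average-frequency group as positive. The silhouette-based constraint check applied at the final iteration is omitted for brevity, and all names are hypothetical.

```python
import random

def phi(groups, d):
    """Objective Phi(P): for each of the two clusters, the sum of pairwise dissimilarities
    between its members divided by the cluster's cardinality, summed over both clusters."""
    total = 0.0
    for cluster in groups:
        members = list(cluster)
        if len(members) < 2:
            continue
        pair_sum = sum(d[x][y] for i, x in enumerate(members) for y in members[i + 1:])
        total += pair_sum / len(members)
    return total

def exchange_cluster(adjectives, d, restarts=20, seed=0):
    """Partition adjectives into two groups by the exchange method: repeatedly move the
    single adjective whose move most reduces Phi, until no move helps; random restarts
    lower the risk of a poor local minimum (finding the global optimum is NP-complete)."""
    rng = random.Random(seed)
    best_groups, best_score = None, float("inf")
    for _ in range(restarts):
        groups = [set(), set()]
        for a in adjectives:
            groups[rng.randrange(2)].add(a)
        while True:
            current = phi(groups, d)
            best_move, best_gain = None, 1e-12
            for i in (0, 1):
                for a in list(groups[i]):
                    groups[i].discard(a)          # tentatively move a to the other cluster
                    groups[1 - i].add(a)
                    gain = current - phi(groups, d)
                    groups[1 - i].discard(a)      # undo the tentative move
                    groups[i].add(a)
                    if gain > best_gain:
                        best_gain, best_move = gain, (a, i)
            if best_move is None:
                break
            a, i = best_move
            groups[i].discard(a)
            groups[1 - i].add(a)
        score = phi(groups, d)
        if score < best_score:
            best_score, best_groups = score, [set(groups[0]), set(groups[1])]
    return best_groups

def label_groups(groups, frequency):
    """Label the group with the higher average corpus frequency as the positive one."""
    averages = [sum(frequency[a] for a in g) / max(len(g), 1) for g in groups]
    positive = 0 if averages[0] >= averages[1] else 1
    return groups[positive], groups[1 - positive]
```

Calling exchange_cluster on the adjective set and then label_groups with corpus frequencies reproduces the partition-then-label pipeline in outline.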
P97-1023
Predicting the semantic orientation of adjectives. We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus. We cluster adjectives into positive and negative sets based on conjunction constructions, weighted similarity graphs, minimum cuts, supervised learning, and clustering.
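The simulation experiments summarized for this paper are also easy to reproduce in outline. The sketch below is a hypothetical reconstruction under the stated assumptions (50 nodes with random true orientations, a target average degree k, a fraction P of links joining nodes of the same orientation, all links treated as same-orientation predictions, scoring against the better of the two possible label assignments, and averaging over 200 random graphs). It expects any two-way partitioning routine, for example the exchange-method sketch given earlier; function names and the exact dissimilarity values are illustrative.

```python
import random
from itertools import combinations

def make_random_graph(n=50, k=7, link_precision=0.8, rng=None):
    """Build a simulated graph: n nodes with random true orientations, roughly k links per
    node, and a fraction link_precision of links joining nodes of the same orientation."""
    rng = rng or random.Random()
    truth = {i: rng.choice([+1, -1]) for i in range(n)}
    same = [(a, b) for a, b in combinations(range(n), 2) if truth[a] == truth[b]]
    diff = [(a, b) for a, b in combinations(range(n), 2) if truth[a] != truth[b]]
    n_links = n * k // 2
    n_same = round(link_precision * n_links)
    links = (rng.sample(same, min(n_same, len(same)))
             + rng.sample(diff, min(n_links - n_same, len(diff))))
    return truth, links

def best_match_accuracy(truth, groups):
    """Score a two-way partition under the better of the two label assignments, mirroring
    the decoupling of partitioning from labeling in the simulation."""
    n = len(truth)
    hits = (sum(1 for x in groups[0] if truth[x] == +1)
            + sum(1 for x in groups[1] if truth[x] == -1))
    return max(hits, n - hits) / n

def run_simulation(partition_fn, trials=200, n=50, k=7, link_precision=0.8):
    """Average best-match accuracy over repeated random graphs for a given (P, k)."""
    rng = random.Random(0)
    scores = []
    for _ in range(trials):
        truth, links = make_random_graph(n=n, k=k, link_precision=link_precision, rng=rng)
        # Every simulated link is treated as a 'same orientation' prediction (the baseline
        # link classifier), so linked pairs get low dissimilarity and unlinked pairs 0.5.
        d = {i: {j: 0.5 for j in truth} for i in truth}
        for a, b in links:
            d[a][b] = d[b][a] = 1.0 - link_precision
        groups = partition_fn(list(truth), d)
        scores.append(best_match_accuracy(truth, groups))
    return sum(scores) / len(scores)
```

For instance, run_simulation(exchange_cluster, k=7, link_precision=0.8) would estimate one point of the accuracy-versus-k curve (with trials and restarts reduced for speed if needed).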
paradise a framework for evaluating spoken dialogue agents this paper presents paradise a general framework for evaluating spoken rlialogue agents the framework decouples task requirements from an agent dialogue behaviors supports comparisons among dialogue strategies enables the calculation of performance over subdialogues and whole dialogues specifies the relative contribution of various factors to performance and makes it possible to compare agents performing different tasks by normalizing for task complexity recent advances in dialogue modeling speech recognition and natural language processing have made it possible to build spoken dialogue agents for a wide variety of applicationspotential benefits of such agents include remote or handsfree access ease of use naturalness and greater efficiency of interactionhowever a critical obstacle to progress in this area is the lack of a general framework for evaluating and comparing the performance of different dialogue agentsone widely used approach to evaluation is based on the notion of a reference answer an agent responses to a query are compared with a predefined key of minimum and maximum reference answers performance is the proportion of responses that match the keythis approach has many widely acknowledged limitations eg although there may be many potential dialogue strategies for carrying out a task the key is tied to one particular dialogue strategyin contrast agents using different dialogue strategies can be compared with measures such as inappropriate utterance ratio turn correction ratio concept accuracy implicit recovery and transaction success consider a comparison of two train timetable information agents where agent a in dialogue 1 uses an explicit confirmation strategy while agent b in dialogue 2 uses an implicit confirmation strategy danieli and gerbino found that agent a had a higher transaction success rate and produced less inappropriate and repair utterances than agent b and thus concluded that agent a was more robust than agent bhowever one limitation of both this approach and the reference answer approach is the inability to generalize results to other tasks and environments such generalization requires the identification of factors that affect performance for example while danieli and gerbino found that agent a dialogue strategy produced dialogues that were approximately twice as long as agent b they had no way of determining whether agent a higher transaction success or agent b efficiency was more critical to performancein addition to agent factors such as dialogue strategy task factors such as database size and environmental factors such as background noise may also be relevant predictors of performancethese approaches are also limited in that they currently do not calculate performance over subdialogues as well as whole dialogues correlate performance with an external validation criterion or normalize performance for task complexitythis paper describes paradise a general framework for evaluating spoken dialogue agents that addresses these limitationsparadise supports comparisons among dialogue strategies by providing a task representation that decouples what an agent needs to achieve in terms of the task requirements from how the agent carries out the task via dialogueparadise uses a decisiontheoretic framework to specify the relative contribution of various factors to an agent overall performanceperformance is modeled as a weighted function of a taskbased success measure and dialoguebased cost measures where weights 
are computed by correlating user satisfaction with performancealso performance can be calculated for subdialogues as well as whole dialoguessince the goal of this paper is to explain and illustrate the application of the paradise framework for expository purposes the paper uses simplified domains with hypothetical data throughoutsection 2 describes paradise performance model and section 3 discusses its generality before concluding in section 4paradise uses methods from decision theory to combine a disparate set of performance measures into a single performance evaluation functionthe use of decision theory requires a specification of both the objectives of the decision problem and a set of measures for operationalizing the objectivesthe paradise model is based on the structure of objectives shown in figure 1the paradise model posits that performance can be correlated with a meaningful external criterion such as usability and thus that the overall goal of a spoken dialogue agent is to maximize an objective related to usabilityuser satisfaction ratings have been frequently used in the literature as an external indicator of the usability of a dialogue agentthe model further posits that two types of factors are potential relevant contributors to user satisfaction and that two types of factors are potential relevant contributors to costs in addition to the use of decision theory to create this objective structure other novel aspects of paradise include the use of the kappa coefficient to operationalize task success and the use of linear regression to quantify the relative contribution of the success and cost factors to user satisfactionthe remainder of this section explains the measures used to operationalize the set of objectives and the methodology for estimating a quantitative performance function that reflects the objective structuresection 21 describes paradise task representation which is needed to calculate the taskbased success measure described in section 22section 23 describes the cost measures considered in paradise which reflect both the efficiency and the naturalness of an agent dialogue behaviorssection 24 describes the use of linear regression and user satisfaction to estimate the relative contribution of the success and cost measures in a single performance functionfinally section 25 explains how performance can be calculated for subdialogues as well as whole dialogues while section 26 summarizes the methoda general evaluation framework requires a task representation that decouples what an agent and user accomplish from how the task is accomplished using dialogue strategieswe propose that an attribute value matrix can represent many dialogue tasksthis consists of the information that must be exchanged between the agent and the user during the dialogue represented as a set of ordered pairs of attributes and their possible values2 as a first illustrative example consider a simplification of the train timetable domain of dialogues 1 and 2 where the timetable only contains information about rushhour trains between four cities as shown in table 1this avm consists of four attributes 3 in table 1 these attributevalue pairs are annotated with the direction of information flow to represent who acquires the information although this information is not used for evaluationduring the dialogue the agent must acquire from the user the values of dc ac and dr while the user must acquire dtperformance evaluation for an agent requires a corpus of dialogues between users and the agent in which users 
execute a set of scenarioseach scenario execution has a corresponding avm instantiation indicating the task information requirements for the scenario where each attribute is paired with the attribute value obtained via the dialoguefor example assume that a scenario requires the user to find a train from torino to milano that leaves in the evening as in the longer versions of dialogues 1 and 2 in figures 2 and 34 table 2 contains an avm corresponding to a quotkeyquot for this scenarioall dialogues resulting from execution of this scenario in which the agent and the user correctly convey all attribute values would have the same avm as the scenario key in table 2the avms of the remaining dialogues would differ from the key by at least one valuethus even though the dialogue strategies in figures 2 and 3 are radically different the avm task representation for these dialogues is identical and the performance of the system for the same task can thus be assessed on the basis of the avm representationsuccess at the task for a whole dialogue is measured by how well the agent and user achieve the information requirements of the task by the end of the dialogue this section explains how paradise uses the kappa coefficient to operationalize the taskbased success measure in figure 1the kappa coefficient k is calculated from a confusion matrix that summarizes how well an agent achieves the information requirements of a particular task for a set of dialogues instantiating a set of scenarios5 for example tables 3 and 4 show two hypothetical confusion matrices that could have been generated in an evaluation of 100 complete dialogues with each of two train timetable agents a and b 6 the values in the matrix cells are based on comparisons between the dialogue and scenario key avmswhenever an attribute value in a dialogue avm matches the value in its scenario key the number in the appropriate diagonal cell of the matrix is incremented by 1the off diagonal cells represent misunderstandings that are not corrected in the dialoguenote that depending on the strategy that a spoken dialogue agent uses confusions across attributes are possible eg quotmilanoquot could be confused with quotmorningquot the effect of misunderstandings that are corrected during the course of the dialogue are reflected in the costs associated with the dialogue as will be discussed belowthe first matrix summarizes how the 100 avms representing each dialogue with agent a compare with the avms representing the relevant scenario keys while the second matrix summarizes the information exchange with agent b labels vi to v4 in each matrix represent the possible values of departcity shown in table 1 v5 to v8 are for arrivalcity etccolumns represent the key specifying which information values the agent and user were supposed to communicate to one another given a particular scenariorows represent the data collected from the dialogue corpus reflecting what attribute values were actually communicated between the agent and the usergiven a confusion matrix m success at achieving the information requirements of the task is measured with the kappa coefficient by chance7 when there is no agreement other than that which would be expected by chance n 0when there is total agreement k 1 ic is superior to other measures of success such as transaction success concept accuracy and percent agreement because n takes into account the inherent complexity of the task by correcting for chance expected agreementthus rc provides a basis for comparisons across agents that 
are performing different taskswhen the prior distribution of the categories is unknown p the expected chance agreement between the data and the key can be estimated from the distribution of the values in the keysthis can be calculated from confusion matrix m since the columns represent the values in the keysin particular p is the proportion of times that the avms for the actual set of dialogues agree with the avms for the scenario keys and p is the proportion of times that the avms for the dialogues and the keys are expected to agree 7k has been used to measure pairwise agreement among coders making category judgments thus the observed useragent interactions are modeled as a coder and the ideal interactions as an expert coder where ti is the sum of the frequencies in column i of m and t is the sum of the frequencies in m p the actual agreement between the data and the key is always computed from the confusion matrix m given the confusion matrices in tables 3 and 4 p 0079 for both agents8 for agent a p 0795 and frc 0777 while for agent b p 059 and c0555 suggesting that agent a is more successful than b in achieving the task goalsas shown in figure 1 performance is also a function of a combination of cost measuresintuitively cost measures should be calculated on the basis of any user or agent dialogue behaviors that should be minimizeda wide range of cost measures have been used in previous work these include pure efficiency measures such as the number of turns or elapsed time to complete the task as well as measures of qualitative phenomena such as inappropriate or repair utterances paradise represents each cost measure as a function ci that can be applied to any dialoguefirst consider the simplest case of calculating efficiency measures over a whole dialoguefor example let cl be the total number of utterancesfor the whole dialogue d1 in figure 2 c1 is 23 utterancesfor the whole dialogue d2 in figure 3 ci is 10 utterancesto calculate costs over subdialogues and for some of the qualitative measures it is necessary to be able to specify which information goals each utterance contributes toparadise uses its avm representation to link the information goals of the task to any arbitrary dialogue behavior by tagging the dialogue with the attributes for the task9 this makes it possible to evaluate any potential dialogue strategies for achieving the task as well as to evaluate dialogue strategies that operate at the level of dialogue subtasks consider the longer versions of dialogues 1 and 2 in figures 2 and 3each utterance in figures 2 and 3 has been tagged using one or more of the attribute abbreviations in table 1 according to the subtask the utterance contributes toas a convention of this type of tagging using a single confusion matrix for all attributes as in tables 3 and 4 inflates 1g when there are few crossattribute confusions by making p smallerin some cases it might be desirable to calculate ic first for identification of attributes and then for values within attributes or to average ic for each attribute to produce an overall ic for the task9this tagging can be hand generated or system generated and hand correctedpreliminary studies indicate that reliability for human tagging is higher for avm attribute tagging than for other types of discourse segment tagging utterances that contribute to the success of the whole dialogue such as greetings are tagged with all the attributessince the structure of a dialogue reflects the structure of the task the tagging of a dialogue by the avm attributes 
can be used to generate a hierarchical discourse structure such as that shown in figure 4 for dialogue 1 for example segment s2 in figure 4 is about both departcity and arrivalcity it contains segments s3 and s4 within it and consists of utterances ui u6tagging by avm attributes is required to calculate costs over subdialogues since for any subdialogue task attributes define the subdialoguefor subdialogue s4 in figure 4 which is about the attribute arrivalcity and consists of utterances a6 and u6 c is 2tagging by avm attributes is also required to calculate the cost of some of the qualitative measures such as number of repair utterances for example let c2 be the number of repair utterancesthe repair utterances in figure 2 are a3 through u6 thus c2 is 10 utterances and c2 is 2 utterancesthe repair utterance in figure 3 is u2 but note that according to the avm task tagging u2 simultaneously addresses the information goals for departrangein general if an utterance you contributes to the information goals of n different attributes each attribute accounts for un of any costs derivable from youthus c2 is 5given a set of ci it is necessary to combine the difwprevious work has shown that this can be done with high reliability ferent cost measures in order to determine their relative contribution to performancethe next section explains how to combine is with a set of ci to yield an overall performance measuregiven the definition of success and costs above and the model in figure 1 performance for any dialogue d is defined as followsquot here a is a weight on is the cost functions ci are weighted by wi and h is a z score normalization function the normalization function is used to overcome the problem that the values of ci are not on the same scale as k and that the cost measures ci may also be calculated over widely varying scales this problem is easily solved by normalizing each factor x to its z score cr where is the standard deviation for xagents a and b to illustrate the method for estimatinga performance function we will use a subset of the data from tables 3 and 4 shown in table 5table 5 represents the results quot we assume an additive performance function because it appears that k and the various cost factors ci are utility independent and additive independent it is possible however that user satisfaction data collected in future experiments would indicate otherwiseif so continuing use of an additive function might require a transformation of the data a reworking of the model shown in figure 1 or the inclusion of interaction terms in the model from a hypothetical experiment in which eight users were randomly assigned to communicate with agent a and eight users were randomly assigned to communicate with agent btable 5 shows user satisfaction ratings is number of utterances and number of repair utterances for each of these usersusers 5 and 11 correspond to the dialogues in figures 2 and 3 respectivelyto normalize ci for user 5 we determine that ft is 386 and cc is 189thus h is 083similarly 1v for user 11 is 151to estimate the performance function the weights a and wi must be solved forrecall that the claim implicit in figure 1 was that the relative contribution of task success and dialogue costs to performance should be calculated by considering their contribution to user satisfactionuser satisfaction is typically calculated with surveys that ask users to specify the degree to which they agree with one or more statements about the behavior or the performance of the systema single user 
satisfaction measure can be calculated from a single question or as the mean of a set of ratingsthe hypothetical user satisfaction ratings shown in table 5 range from a high of 6 to a low of 1given a set of dialogues for which user satisfaction is and the set of ci have been collected experimentally the weights a and wi can be solved for using multiple linear regressionmultiple linear regression produces a set of coefficients describing the relative contribution of each predictor factor in accounting for the variance in a predicted factorin this case on the basis of the model in figure 1 us is treated as the predicted factornormalization of the predictor factors to their z scores guarantees that the relative magnitude of the coefficients directly indicates the relative contribution of each factorregression on the table 5 data for both sets of users tests which factors k kat rep most strongly predicts usin this illustrative example the results of the regression with all factors included shows that only k and rep are significant in order to develop a performance function estimate that includes only significant factors and eliminates redundancies a second regression including only significant factors must then be donein this case a second regression yields the predictive equation ie a is 40 and w2 is 78the results also show rc is significant at p 0003 rep significant at p 0001 and the combination of is and rep account for 92 of the variance in us the external validation criterionthe factor utt was not a significant predictor of performance in part because utt and rep are highly redundantgiven these predictions about the relative contribution of different factors to performance it is then possible to return to the problem first introduced in section 1 given potentially conflicting performance criteria such as robustness and efficiency how can the performance of agent a and agent b be comparedgiven values for a and wi performance can be calculated for both agents using the equation abovethe mean performance of a is 44 and the mean performance of b is 44 suggesting that agent b may perform better than agent a overallthe evaluator must then however test these performance differences for statistical significancein this case a t test shows that differences are only significant at the p 07 level indicating a trend onlyin this case an evaluation over a larger subset of the user population would probably show significant differencessince both tc and ci can be calculated over subdialogues performance can also be calculated at the subdialogue level by using the values for a and wi as solved for abovethis assumes that the factors that are predictive of global performance based on us generalize as predictors of local performance ie within subdialogues defined by subtasks as defined by the attribute taggingi2 consider calculating the performance of the dialogue strategies used by train timetable agents a and b over the subdialogues that repair the value of departcitysegment s3 is an example of such a subdialogue with agent aas in the initial estimation of a performance function our analysis requires experimental data namely a set of values for and c and the application of the z score normalization function to this datahowever the values for rc and ci are now calculated at the subdialogue rather than the whole dialogue levelin addition only data from comparable strategies can be used to calculate the mean and standard deviation for normalizationinformally a comparable strategy is one which applies in the 
same state and has the same effectsfor example to calculate for agent a over the subdialogues that repair departcity p and p are computed using only the subpart of table 3 concerned with departcityfor agent a p 78 p 265 and frc 70then this value of is is normalized using data from comparable subdialogues with both agent a and agent bbased on the data in tables 3 and 4 the mean is 515 and a is 261 so that h for agent a is 71to calculate c2 for agent a assume that the average number of repair utterances for agent a subdialogues that repair departcity is 6 that the mean over all comparable repair subdialogues is 4 and the standard deviation is 279then h is 72let agent a repair dialogue strategy for subdialogues repairing departcity be ra and agent b repair strategy for departcity be rgthen using the performance equation above predicted performance for ra is for agent b using the appropriate subpart of table 4 to calculate lc assuming that the average number of departcity repair utterances is 138 and using similar i2this assumption has a sound basis in theories of dialogue structure but should be tested empirically calculations yields performance 40 71 78 94 045 thus the results of these experiments predict that when an agent needs to choose between the repair strategy that agent b uses and the repair strategy that agent a uses for repairing departcity it should use agent b strategy rb since the performance is predicted to be greater than the performancenote that the ability to calculate performance over subdialogues allows us to conduct experiments that simultaneously test multiple dialogue strategiesfor example suppose agents a and b had different strategies for presenting the value of departtime without the ability to calculate performance over subdialogues it would be impossible to test the effect of the different presentation strategies independently of the different confirmation strategieswe have presented the paradise framework and have used it to evaluate two hypothetical dialogue agents in a simplified train timetable task domainwe used paradise to derive a performance function for this task by estimating the relative contribution of a set of potential predictors to user satisfactionthe paradise methodology consists of the following steps note that all of these steps are required to develop the performance functionhowever once the weights in the performance function have been solved for user satisfaction ratings no longer need to be collectedinstead predictions about user satisfaction can be made on the basis of the predictor variables as illustrated in the application of paradise to subdialoguesgiven the current state of knowledge it is important to emphasize that researchers should be cautious about generalizing a derived performance function to other agents or tasksperformance function estimation should be done iteratively over many different tasks and dialogue strategies to see which factors generalizein this way the field can make progress on identifying the relationship between various factors and can move towards more predictive models of spoken dialogue agent performancein the previous section we used paradise to evaluate two confirmation strategies using as examples fairly simple information access dialogues in the train timetable domainin this section we demonstrate that paradise is applicable to a range of tasks domains and dialogues by presenting avms for two tasks involving more than information access and showing how additional dialogue phenomena can be tagged using avm 
attributesfirst consider an extension of the train timetable task where an agent can handle requests to reserve a seat or purchase a ticketthis task could be represented using the avm in table 6 where the agent must now acquire the value of the attribute requesttype in order to know what to do with the other information it has acquiredfigure 5 presents a hypothetical dialogue in this extended task domain and illustrates user utterance types and an agent dialogue strategy that are very different from those in figures 2 and 3first agent c in figure 5 uses a quotno confirmationquot dialogue strategy in contrast to the explicit and implicit confirmation strategies used in figures 2 and 3second figure 5 illustrates new types of user utterances that do not directly further the informational goals of the taskin u2 the user asks the agent a whquestion about the dr attribute itself rather than providing information about that attribute valuesince u2 satisfies a knowledge precondition related to answering cl u2 contributes to the dr goal and is tagged as suchin u3 the user similarly asks a yesno question that addresses a subgoal related to answering clfinally u5 illustrates a user request for an agent action and is tagged with the rt attributethe value of rt in the avm instantiation for the dialogue would be quotreservesecond consider the very different domain and task of diagnosing a fault and repairing a circuit figure 6 presents one dialogue from this domainsmith and gordon collected 144 dialogues for this task in which agent initiative was varied by using different dialogue strategies and tagged each dialogue according to the following subtask structure13 our informational analysis of this task results in the avm shown in table 7note that the attributes are almost identical to smith and gordon list of subtaskscircuitid corresponds to introduction correctcircuitbehavior and currentcircuitbehavior correspond to assessment faulttype corresponds to diagnosis faultcorrection corresponds to repair and test corresponds to testthe attribute names emphasize information exchange while the subtask names emphasize functionfigure 6 is tagged with the attributes from table 7smith and gordon tagging of this dialogue according to their subtask representation was as follows turns 14 were i turns 514 were a turns 1516 were d turns 1718 were r and turns 1935 were t note that there are only two differences between the dialogue structures yielded by the two tagging schemesfirst in our scheme the greetings are tagged with all the attributessecond smith and gordon single tag a corresponds to two attribute tags in table 7 which in our scheme defines an extra level of structure within assessment subdialoguesthis paper presented the paradise framework for evaluating spoken dialogue agentsparadise is a general framework for evaluating spoken dialogue agents that integrates and enhances previous workparadise supports comparisons among dialogue strategies with a task representation that decouples what an agent needs to achieve in terms of the task requirements from how the agent carries out the task via dialoguefurthermore this task representation supports the calculation of performance over subdialogues as well as whole dialoguesin addition because paradise success measure normalizes for task complexity it provides a basis for comparing agents performing different tasksthe paradise performance measure is a function of both task success and dialogue costs and has a number of advantagesfirst it allows us to evaluate 
performance at any level of a dialogue since k and ci can be calculated for any dialogue subtasksince performance can be measured over any subtask and since dialogue strategies can range over subdialogues or the whole dialogue we can associate performance with individual dialogue strategiessecond because our success measure k takes into account the complexity of the task comparisons can be made across dialogue tasksthird k allows us to measure partial success at achieving the taskfourth performance can combine both objective and subjective cost measures and specifies how to evaluate the relative contributions of those costs factors to overall performancefinally to our knowledge we are the first to propose using user satisfaction to determine weights on factors related to performancein addition this approach is broadly integrative incorporating aspects of transaction success concept accuracy multiple cost measures and user satisfactionin our framework transaction success is reflected in k corresponding to dialogues with a p of 1our performance measure also captures information similar to concept accuracy where low concept accuracy scores translate into either higher costs for acquiring information from the user or lower k scoresone limitation of the paradise approach is that the taskbased success measure does not reflect that some solutions might be better than othersfor example in the train timetable domain we might like our taskbased success measure to give higher ratings to agents that suggest express over local trains or that provide helpful information that was not explicitly requested especially since the better solutions might occur in dialogues with higher costsit might be possible to address this limitation by using the interval scaled data version of k another possibility is to simply substitute a domainspecific taskbased success measure in the performance model for k the evaluation model presented here has many applications in apoken dialogue processingwe believe that the framework is also applicable to other dialogue modalities and to humanhuman taskoriented dialoguesin addition while there are many proposals in the literature for algorithms for dialogue strategies that are cooperative collaborative or helpful to the user very few of these strategies have been evaluated as to whether they improve any measurable aspect of a dialogue interactionas we have demonstrated here any dialogue strategy can be evaluated so it should be possible to show that a cooperative response or other cooperative strategy actually improves task performance by reducing costs or increasing task successwe hope that this framework will be broadly applied in future dialogue researchwe would like to thank james allen jennifer chucarroll morena danieli wieland eckert giuseppe di fabbrizio don hindle julia hirschberg shri narayanan jay wilpon steve whittaker and three anonymous reviews for helpful discussion and comments on earlier versions of this paper
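To make the task-success computation concrete: once a confusion matrix M has been tabulated from the dialogue and scenario-key AVMs, kappa follows from P(A) on the diagonal and P(E) from the column (key) totals, as described above. The snippet below is a minimal sketch rather than the paper's code, and the toy matrix is invented purely to exercise it; the standard form kappa = (P(A) - P(E)) / (1 - P(E)) is assumed.

```python
def kappa(confusion):
    """kappa = (P(A) - P(E)) / (1 - P(E)), with P(E) estimated from the key (column) totals.
    Assumes a square matrix: rows = values actually communicated, columns = key values."""
    total = sum(sum(row) for row in confusion)
    # P(A): proportion of attribute values on which the dialogue AVMs agree with the keys.
    p_agree = sum(confusion[i][i] for i in range(len(confusion))) / total
    # P(E): chance agreement, the sum over columns of (column total / grand total) squared.
    col_totals = [sum(row[j] for row in confusion) for j in range(len(confusion))]
    p_chance = sum((t / total) ** 2 for t in col_totals)
    return (p_agree - p_chance) / (1 - p_chance)

# Illustrative toy matrix (not the paper's data).
toy = [
    [80, 5, 3],
    [10, 85, 7],
    [10, 10, 90],
]
print(round(kappa(toy), 3))  # 0.775
```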
P97-1035
PARADISE: a framework for evaluating spoken dialogue agents. This paper presents PARADISE, a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity. We identify three factors which influence the performance of SDSs: agent factors, task factors, and environmental factors (e.g., factors related to the acoustic environment and the transmission channel). We aim to evaluate dialogue agent strategies by relating overall user satisfaction to other metrics such as task success, efficiency measures, and qualitative measures.
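A sketch of the performance model the summary refers to. It assumes, consistently with the description in the paper, that Performance = alpha * N(kappa) - sum_i w_i * N(c_i) with N the z-score normalization, and that alpha and the w_i come from a multiple linear regression of user satisfaction on the normalized predictors. NumPy's least-squares solver stands in for whatever regression package was actually used, the cost weights are taken as the negated regression coefficients so the explicit minus sign in the formula is preserved, and the significance-driven re-fitting step described in the paper is omitted; all function names are illustrative.

```python
import numpy as np

def z_normalize(x):
    """N(x) = (x - mean) / standard deviation, putting kappa and the costs on a common scale."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def fit_weights(user_satisfaction, kappas, cost_lists):
    """Regress user satisfaction on the z-normalized predictors; return alpha (the weight
    on kappa) and one positive-convention weight per cost measure (w_i = -coefficient_i)."""
    columns = [z_normalize(kappas)] + [z_normalize(c) for c in cost_lists]
    design = np.column_stack(columns + [np.ones(len(kappas))])  # last column: intercept
    coef, *_ = np.linalg.lstsq(design, np.asarray(user_satisfaction, dtype=float), rcond=None)
    alpha = coef[0]
    cost_weights = -coef[1:-1]
    return alpha, cost_weights

def performance(alpha, cost_weights, kappas, cost_lists):
    """Performance(d) = alpha * N(kappa_d) - sum_i w_i * N(c_i,d), computed per dialogue."""
    score = alpha * z_normalize(kappas)
    for w, c in zip(cost_weights, cost_lists):
        score = score - w * z_normalize(c)
    return score
```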
a trainable rulebased algorithm for word segmentation this paper presents a trainable rulebased algorithm for performing word segmentation the algorithm provides a simple languageindependent alternative to largescale lexicalbased segmenters requiring large amounts of knowledge engineering as a standalone segmenter we show our algorithm to produce high performance chinese segmentation in addition we show the transformationbased algorithm to be effective in improving the output of several existing word segmentation algorithms in three different languages this paper presents a trainable rulebased algorithm for performing word segmentationour algorithm is effective both as a highaccuracy standalone segmenter and as a postprocessor that improves the output of existing word segmentation algorithmsin the writing systems of many languages including chinese japanese and thai words are not delimited by spacesdetermining the word boundaries thus tokenizing the text is usually one of the first necessary processing steps making tasks such as partofspeech tagging and parsing possiblea variety of methods have recently been developed to perform word segmentation and the results have been published widelya major difficulty in evaluating segmentation algorithms is that there are no widelyaccepted guidelines as to what constitutes a word and there is therefore no agreement on how to quotcorrectlyquot segment a text in an unsegmented languageit is 1most published segmentation work has been done for chinesefor a discussion of recent chinese segmentation work see sproat et al frequently mentioned in segmentation papers that native speakers of a language do not always agree about the quotcorrectquot segmentation and that the same text could be segmented into several very different sets of words by different native speakerssproat et al and wu and fung give empirical results showing that an agreement rate between native speakers as low as 75 is commonconsequently an algorithm which scores extremely well compared to one native segmentation may score dismally compared to other equally quotcorrectquot segmentationswe will discuss some other issues in evaluating word segmentation in section 31one solution to the problem of multiple correct segmentations might be to establish specific guidelines for what is and is not a word in unsegmented languagesgiven these guidelines all corpora could theoretically be uniformly segmented according to the same conventions and we could directly compare existing methods on the same corporawhile this approach has been successful in driving progress in nlp tasks such as partofspeech tagging and parsing there are valid arguments against adopting it for word segmentationfor example since word segmentation is merely a preprocessing task for a wide variety of further tasks such as parsing information extraction and information retrieval different segmentations can be useful or even essential for the different tasksin this sense word segmentation is similar to speech recognition in which a system must be robust enough to adapt to and recognize the multiple speakerdependent quotcorrectquot pronunciations of wordsin some cases it may also be necessary to allow multiple quotcorrectquot segmentations of the same text depending on the requirements of further processing stepshowever many algorithms use extensive domainspecific word lists and intricate name recognition routines as well as hardcoded morphological analysis modules to produce a predetermined segmentation outputmodifying or retargeting 
an existing segmentation algorithm to produce a different segmentation can be difficult especially if it is not clear what and where the systematic differences in segmentation areit is widely reported in word segmentation papers2 that the greatest barrier to accurate word segmentation is in recognizing words that are not in the lexicon of the segmentersuch a problem is dependent both on the source of the lexicon as well as the correspondence between the text in question and the lexiconwu and fung demonstrate that segmentation accuracy is significantly higher when the lexicon is constructed using the same type of corpus as the corpus on which it is testedwe argue that rather than attempting to construct a single exhaustive lexicon or even a series of domainspecific lexica it is more practical to develop a robust trainable means of compensating for lexicon inadequaciesfurthermore developing such an algorithm will allow us to perform segmentation in many different languages without requiring extensive morphological resources and domainspecific lexica in any single languagefor these reasons we address the problem of word segmentation from a different directionwe introduce a rulebased algorithm which can produce an accurate segmentation of a text given a rudimentary initial approximation to the segmentationrecognizing the utility of multiple correct segmentations of the same text our algorithm also allows the output of a wide variety of existing segmentation algorithms to be adapted to different segmentation schemesin addition our rulebased algorithm can also be used to supplement the segmentation of an existing algorithm in order to compensate for an incomplete lexiconour algorithm is trainable and language independent so it can be used with any unsegmented hnguagethe key component of our trainable segmentation algorithm is transformationbased errordriven learning the corpusbased language processing method introduced by brill this technique provides a simple algorithm for learning a sequence of rules that can be applied to various nlp tasksit differs from other common corpusbased methods in several waysfor one it is weakly statistical but not probabilistic transformationbased approaches conseoiitly require far less training data than most stical approachesit is rulebased but relies on ee for example sproat et al machine learning to acquire the rules rather than expensive manual knowledge engineeringthe rules produced can be inspected which is useful for gaining insight into the nature of the rule sequence and for manual improvement and debugging of the sequencethe learning algorithm also considers the entire training set at all learning steps rather than decreasing the size of the training data as learning progresses such as is the case in decisiontree induction for a thorough discussion of transformationbased learning see ramshaw and marcus brill work provides a proof of viability of transformationbased techniques in the form of a number of processors including a partofspeech tagger a procedure for prepositional phrase attachment and a bracketing parser all of these provided performance comparable to or better than previous attemptstransformationbased learning has also been successfully applied to text chunking morphological disambiguation and phrase parsing word segmentation can easily be cast as a transformationbased problem which requires an initial model a goal state into which we wish to transform the initial model and a series of transformations to effect this improvementthe 
transformationbased algorithm involves applying and scoring all the possible rules to training data and determining which rule improves the model the mostthis rule is then applied to all applicable sentences and the process is repeated until no rule improves the score of the training datain this manner a sequence of rules is built for iteratively improving the initial modelevaluation of the rule sequence is carried out on a test set of data which is independent of the training dataif we treat the output of an existing segmentation algorithm as the initial state and the desired segmentation as the goal state we can perform a series of transformations on the initial state removing extraneous boundaries and inserting new boundaries to obtain a more accurate approximation of the goal statewe therefore need only define an appropriate rule syntax for transforming this initial approximathe quotexistingquot algorithm does not need to be a large or even accurate system the algorithm can be arbitrarily simple as long as it assigns some form of initial segmentation tion and prepare appropriate training datafor our experiments we obtained corpora which had been manually segmented by native or nearnative speakers of chinese and thaiwe divided the handsegmented data randomly into training and test setsroughly 80 of the data was used to train the segmentation algorithm and 20 was used as a blind test set to score the rules learned from the training datain addition to chinese and thai we also performed segmentation experiments using a large corpus of english in which all the spaces had been removed from the textsmost of our english experiments were performed using training and test sets with roughly the same 8020 ratio but in section 343 we discuss results of english experiments with different amounts of training dataunfortunately we could not repeat these experiments with chinese and thai due to the small amount of handsegmented data availablethere are three main types of transformations which can act on the current state of an imperfect segmentation in our syntax insert and delete transformations can be triggered by any two adjacent characters and one character to the left or right of the bigramslide transformations can be triggered by a sequence of one two or three characters over which the boundary is to be movedfigure 1 enumerates the 22 segmentation transformations we definewith the above algorithm in place we can use the training data to produce a rule sequence to augment an initial segmentation approximation in order to obtain a better approximation of the desired segmentationfurthermore since all the rules are purely characterbased a sequence can be learned for any character set and thus any languagewe used our rulebased algorithm to improve the word segmentation rate for several segmentation algorithms in three languagesdespite the number of papers on the topic the evaluation and comparison of existing segmentation algorithms is virtually impossiblein addition to the problem of multiple correct segmentations of the same texts the comparison of algorithms is difficult because of the lack of a single metric for reporting scorestwo common measures of performance are recall and precision where recall is defined as the percent of words in the handsegmented text identified by the segmentation algorithm and precision is defined as the percentage of words returned by the algorithm that also occurred in the handsegmented text in the same positionthe component recall and precision scores are then used to 
calculate an fmeasure where f priin this paper we will report all scores as a balanced fmeasure with 3 1 such that for our chinese experiments the training set consisted of 2000 sentences from a xinhua news agency corpus the test set was a separate set of 560 sentences from the same corpus5 we ran four experiments using this corpus with four different algorithms providing the starting point for the learning of the segmentation transformationsin each case the rule sequence learned from the training set resulted in a significant improvement in the segmentation of the test seta very simple initial segmentation for chinese is to consider each character a distinct wordsince the average word length is quite short in chinese with most words containing only 1 or 2 characters6 this characterasword segmentation correctly identified many onecharacter words and produced an initial segmentation score of f403while this is a low segmentation score this segmentation algorithm identifies enough words to provide a reasonable initial segmentation approximationin fact the caw algorithm alone has been shown to be adequate to be used successfully in chinese information retrievalour algorithm learned 5903 transformations from the 2000 sentence training setthe 5903 transformations applied to the test set improved the score from f403 to 781 a 633 reduction in the error ratethis is a very surprising and encouraging result in that from a very naive initial approximation using no lexicon except that implicit from the training data our rulebased algorithm is able to produce a series of transformations with a high segmentation accuracya common approach to word segmentation is to use a variation of the maximum matching algorithm frequently referred to as the quotgreedy algorithmquot the greedy algorithm starts at the first character in a text and using a word list for the language being segmented attempts to find the longest word in the list starting with that characterif a word is found the maximummatching algorithm marks a boundary at the end of the longest word then begins the same longest match search starting at the character following the matchif no match is found in the word list the greedy algorithm simply skips that character and begins the search starting at the next characterin this manner an initial segmentation can be obtained that is more informed than a simple characterasword approachwe applied the maximum matching algorithm to the test set using a list of 57472 chinese words from the nmsu chseg segmenter this greedy algorithm produced an initial score of f644a sequence of 2897 transformations was learned from the training set applied to the test set they improved the score from f644 to 849 a 578 error reductionfrom a simple chinese word list the rulebased algorithm was thus able to produce asegmentation score comparable to segmentation algorithms developed with a large amount of domain knowledge this score was improved further when combining the characterasword and the maximum matching algorithmsin the maximum matching algorithm described above when a sequence of characters occurred in the text and no subset of the sequence was present in the word list the entire sequence was treated as a single wordthis often resulted in words containing 10 or more characters which is very unlikely in chinesein this experiment when such a sequence of characters was encountered each of the characters was treated as a separate word as in the caw algorithm abovethis variation of the greedy algorithm using the same list of 
57472 words produced an initial score of f829a sequence of 2450 transformations was learned from the training set applied to the test set they improved the score from f829 to 877 a 281 error reductionthe score produced using this variation of the maximum matching algorithm combined with a rule sequence is nearly equal to the score produced by the nmsu segmenter segmenter discussed in the next sectionthe previous three experiments showed that our rule sequence algorithm can produce excellent segmentation results given very simple initial segmentation algorithmshowever assisting in the adaptation of an existing algorithm to different segmentation schemes as discussed in section 1 would most likely be performed with an already accurate fullydeveloped algorithmin this experiment we demonstrate that our algorithm can also improve the output of such a systemthe chinese segmenter chseg developed at the computing research laboratory at new mexico state university is a complete system for highaccuracy chinese segmentation in addition to an initial segmentation module that finds words in a text based on a list of chinese words chseg additionally contains specific modules for recognizing idiomatic expressions derived words chinese person names and foreign proper namesthe accuracy of chseg on an 86mb corpus has been independently reported as f840 on our test set chseg produced a segmentation score of f879our rulebased algorithm learned a sequence of 1755 transformations from the training set applied to the test set they improved the score from 879 to 896 a 140 reduction in the error rateour rulebased algorithm is thus able to produce an improvement to an existing highperformance systemtable 1 shows a summary of the four chinese experimentswhile thai is also an unsegmented language the thai writing system is alphabetic and the average word length is greater than chinese7 we would therefore expect that our characterbased transformations would not work as well with thai since a context of more than one character is necessary in many cases to make many segmentation decisions in alphabetic languagesthe thai corpus consisted of texts from the thai news agency via nectec in thailandfor our experiment the training set consisted of 3367 sentences the test set was a separate set of 1245 sentences from the same corpusthe initial segmentation was performed using the maximum matching algorithm with a lexicon of 9933 thai words from the word separation filter in cite a thai language latex packagethis greedy algorithm gave an initial segmentation score of f482 on the test setour rulebased algorithm learned a sequence of 731 transformations which improved the score from 482 to 636 a 297 error reductionwhile the alphabetic system is obviously harder to segment we still see a significant reduction in the segmenter error rate using the transformationbased algorithmnevertheless it is doubtful that a segmentation with a score of 636 would be useful in too many applications and this result will need to be significantly improvedalthough english is not an unsegmented language the writing system is alphabetic like thai and the average word length is similarsince english language resources are more readily available it is instructive to experiment with a desegmented english corpus that is english texts in which the spaces have been removed and word boundaries are not explicitly indicatedthe following shows an example of an english sentence and its desegmented version about 20000 years ago the last ice age 
endedabout20000yearsagothelasticeageended the results of such experiments can help us determine which resources need to be compiled in order to develop a highaccuracy segmentation algorithm in unsegmented alphabetic languages such as thaiin addition we are also able to provide a more detailed error analysis of the english segmentation our english experiments were performed using a corpus of texts from the wall street journal the training set consisted of 2675 sentences in which all the spaces had been removed the test set was a separate set of 700 sentences from the same corpus for an initial experiment segmentation was performed using the maximum matching algorithm with a large lexicon of 34272 english words compiled from the wsjquot in contrast to the low initial thai score the greedy algorithm gave an initial english segmentation score of f732our rulebased algorithm learned a sequence of 800 transformations the average length of a word in our english data was 446 characters compared to 501 for thai and 160 for chinese10note that the portion of the wsj corpus used to compile the word list was independent of both the training and test sets used in the segmentation experiments which improved the score from 732 to 790 a 216 error reductionthe difference in the greedy scores for english and thai demonstrates the dependence on the word list in the greedy algorithmfor example an experiment in which we randomly removed half of the words from the english list reduced the performance of the greedy algorithm from 732 to 323 although this reduced english word list was nearly twice the size of the thai word list the longest match segmentation utilizing the list was much lower successive experiments in which we removed different random sets of half the words from the original list resulted in greedy algorithm performance of 392 351 and 355yet despite the disparity in initial segmentation scores the transformation sequences effect a significant error reduction in all cases which indicates that the transformation sequences are effectively able to compensate for weaknesses in the lexicontable 2 provides a summary of the results using the greedy algorithm for each of the three languagesas mentioned above lexical resources are more readily available for english than for thaiwe can use these resources to provide an informed initial segmentation approximation separate from the greedy algorithmusing our native knowledge of english as well as a short list of common english prefixes and suffixes we developed a simple algorithm for initial segmentation of english which placed boundaries after any of the suffixes and before any of the prefixes as well as segmenting punctuation charactersin most cases this simple approach was able to locate only one of the two necessary boundaries for recognizing full words and the initial score was understandably low f298nevertheless even from this flawed initial approximation our rulebased algorithm learned a sequence of 632 transformations which nearly doubled the word recall improving the score from 298 to 533 a 335 error reductionsince we had a large amount of english data we also performed a classic experiment to determine the effect the amount of training data had on the ability of the rule sequences to improve segmentationwe started with a training set only slightly larger than the test set 872 sentences and repeated the maximum matching experiment described in section 341we then incrementally increased the amount of training data and repeated the experimentthe results 
summarized in table 3 clearly indicate that more training sentences produce both a longer rule sequence and a larger error reduction in the test dataupon inspection of the english segmentation errors produced by both the maximum matching algorithm and the learned transformation sequences one major category of errors became clearmost apparent was the fact that the limited context transformations were unable to recover from many errors introduced by the naive maximum matching algorithmfor example because the greedy algorithm always looks for the longest string of characters which can be a word given the character sequence quoteconomicsituationquot the greedy algorithm first recognized quoteconomicsquot and several shorter words segmenting the sequence as quoteconomics it you at io nquotsince our transformations consider only a single character of context the learning algorithm was unable to patch the smaller segments back together to produce the desired output quoteconomic situationquotin some cases the transformations were able to recover some of the word but were rarely able to produce the full desired outputfor example in one case the greedy algorithm segmented quothumanactivityquot as quothumana c ti vi tyquotthe rule sequence was able to transform this into quothumana ctivityquot but was not able to produce the desired quothuman activityquotthis suggests that both the greedy algorithm and the transformation learning algorithm need to have a more global word model with the ability to recognize the impact of placing a boundary on the longer sequences of characters surrounding that pointthe results of these experiments demonstrate that a transformationbased rule sequence supplementing a rudimentary initial approximation can produce accurate segmentationin addition they are able to improve the performance of a wide range of segmentation algorithms without requiring expensive knowledge engineeringlearning the rule sequences can be achieved in a few hours and requires no languagespecific knowledgeas discussed in section 1 this simple algorithm could be used to adapt the output of an existing segmentation algorithm to different segmentation schemes as well as compensating for incomplete segmenter lexica without requiring modifications to segmenters themselvesthe rulebased algorithm we developed to improve word segmentation is very effective for segmenting chinese in fact the rule sequences combined with a very simple initial segmentation such as that from a maximum matching algorithm produce performance comparable to manuallydeveloped segmentersas demonstrated by the experiment with the nmsu segmenter the rule sequence algorithm can also be used to improve the output of an already highlyaccurate segmenter thus producing one of the best segmentation results reported in the literaturein addition to the excellent overall results in chinese segmentation we also showed the rule sequence algorithm to be very effective in improving segmentation in thai an alphabetic languagewhile the scores themselves were not as high as the chinese performance the error reduction was nevertheless very high which is encouraging considering the simple rule syntax usedthe current state of our algorithm in which only three characters are considered at a time will understandably perform better with a language like chinese than with an alphabetic language like thai where average word length is much greaterthe simple syntax described in section 22 can however be easily extended to consider larger contexts to the left and 
the right of boundaries this extension would necessarily come at a corresponding cost in learning speed since the size of the rule space searched during training would grow accordinglyin the future we plan to further investigate the application of our rulebased algorithm to alphabetic languagesacknowledgements this work would not have been possible without the assistance and encouragement of all the members of the mitre natural language groupthis paper benefited greatly from discussions with and comments from marc vilain lynette hirschman sam bayer and the anonymous reviewers
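Both the basic maximum matching ("greedy") segmenter and the variant that treats the characters of an unmatchable span as individual words are simple enough to sketch directly. The sketch below is illustrative only, not the authors' implementation; `lexicon` stands in for whichever word list is used (for example the 57,472-word Chinese list above), and the `unmatched_as_chars` flag switches between the two behaviours described in the experiments.

```python
def maximum_match(text, lexicon, unmatched_as_chars=True):
    """Greedy longest-match ("maximum matching") segmentation.

    text               -- an unsegmented string of characters
    lexicon            -- a set of known words for the language
    unmatched_as_chars -- if True, each character of a span that matches no
                          word in the lexicon is emitted as its own word
                          (the character-as-word fallback variant); if False,
                          the whole unmatched span is kept as one long word,
                          as in the basic greedy algorithm described above.

    Illustrative sketch only, not the authors' code.
    """
    max_len = max((len(w) for w in lexicon), default=1)
    words, i, pending = [], 0, []          # pending holds an unmatched run
    while i < len(text):
        match = None
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon:       # longest word starting at i
                match = text[i:j]
                break
        if match:
            if pending:                    # flush the unmatched run first
                words.extend(pending if unmatched_as_chars else ["".join(pending)])
                pending = []
            words.append(match)
            i += len(match)
        else:
            pending.append(text[i])        # no lexicon word starts here
            i += 1
    if pending:
        words.extend(pending if unmatched_as_chars else ["".join(pending)])
    return words
```

The only language-specific resource this baseline needs is the word list, which is why the transformation sequences learned on top of it carry so much of the final accuracy.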
P97-1041
A Trainable Rule-Based Algorithm for Word Segmentation. This paper presents a trainable rule-based algorithm for performing word segmentation. The algorithm provides a simple, language-independent alternative to large-scale lexical-based segmenters requiring large amounts of knowledge engineering. As a stand-alone segmenter, we show our algorithm to produce high-performance Chinese segmentation. In addition, we show the transformation-based algorithm to be effective in improving the output of several existing word segmentation algorithms in three different languages. Our Chinese segmenter makes use of only a manually segmented corpus, without referring to any lexicon.
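The balanced F-measure used to report every score in these segmentation experiments weights precision and recall equally (beta = 1), i.e. F = 2PR / (P + R). A minimal word-level scorer is sketched below; the convention of counting a proposed word as correct only when both of its boundaries match the gold segmentation is an assumption for illustration, not a detail taken from the paper.

```python
def segmentation_f(proposed, gold):
    """Balanced F-measure (beta = 1) over words.

    proposed, gold -- lists of words whose concatenation is the same text.
    A proposed word counts as correct only if both of its boundaries match
    the gold segmentation; words are compared as (start, end) character spans.
    Sketch of one common scoring convention, not necessarily the exact scorer
    used in the experiments above.
    """
    def spans(words):
        out, pos = set(), 0
        for w in words:
            out.add((pos, pos + len(w)))
            pos += len(w)
        return out

    p_spans, g_spans = spans(proposed), spans(gold)
    correct = len(p_spans & g_spans)
    precision = correct / len(p_spans) if p_spans else 0.0
    recall = correct / len(g_spans) if g_spans else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# segmentation_f(["the", "last", "ice", "age"], ["the", "last", "iceage"])
# -> precision 0.5, recall 2/3, F ~= 0.57
```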
a wordtoword model of translational equivalence many multilingual nlp applications need to translate words between different languages but cannot afford the computational expense of inducing or applying a full translation model for these applications we have designed a fast algorithm for estimating a partial translation model which accounts for translational equivalence only at the word level the model precisionrecall tradeoff can be directly controlled via one threshold parameter this feature makes the model more suitable for applications that are not fully statistical the model hidden parameters can be easily conditioned on information extrinsic to the model providing an easy way to integrate preexisting knowledge such as partofspeech dictionaries word order etc our model can link word tokens in parallel texts as well as other translation models in the literature unlike other translation models it can automatically produce dictionarysized translation lexicons and it can do so with over 99 accuracy over the past decade researchers at ibm have developed a series of increasingly sophisticated statistical models for machine translation however the ibm models which attempt to capture a broad range of translation phenomena are computationally expensive to applytable lookup using an explicit translation lexicon is sufficient and preferable for many multilingual nlp applications including quotcrummyquot mt on the world wide web certain machineassisted translation tools concordancing for bilingual lexicography computerassisted language learning corpus linguistics and crosslingual information retrieval in this paper we present a fast method for inducing accurate translation lexiconsthe method assumes that words are translated onetoonethis assumption reduces the explanatory power of our model in comparison to the ibm models but as shown in section 31 it helps us to avoid what we call indirect associations a major source of errors in other modelssection 31 also shows how the onetoone assumption enables us to use a new greedy competitive linking algorithm for reestimating the model parameters instead of more expensive algorithms that consider a much larger set of word correspondence possibilitiesthe model uses two hidden parameters to estimate the confidence of its own predictionsthe confidence estimates enable direct control of the balance between the model precision and recall via a simple thresholdthe hidden parameters can be conditioned on prior knowledge about the bitext to improve the model accuracywith the exception of previous methods for automatically constructing statistical translation models begin by looking at word cooccurrence frequencies in bitexts a bitext comprises a pair of texts in two languages where each text is a translation of the otherword cooccurrence can be defined in various waysthe most common way is to divide each half of the bitext into an equal number of segments and to align the segments so that each pair of segments si and ti are translations of each other then two word tokens are said to cooccur in theour translation model consists of the hidden parameters a and c and likelihood ratios lthe two hidden parameters are the probabilities of the model generating true and false positives in the datal represents the likelihood that you and v can be mutual translationsfor each cooccurring pair of word types you and v these likelihoods are initially set proportional to their cooccurrence frequency and inversely proportional to their marginal frequencies n and n 1 following 
2when the l are reestimated the model hidden parameters come into playafter initialization the model induction algorithm iterates the competitive linking algorithm and its onetoone assumption are detailed in section 31section 31 explains how to reestimate the model parametersthe competitive linking algorithm is designed to overcome the problem of indirect associations illustrated in figure 1the sequences of you and v represent corresponding regions of a bitextif uk and vk cooccur much more often than expected by chance then any reasonable model will deem them likely to be mutual translationsif uk and vk are indeed mutual translations then their tendency to the cooccurrence frequency of a word type pair is simply the number of times the pair cooccurs in the corpushowever n ev n which is not the same as the frequency of you because each token of you can cooccur with several differentv2we could just as easily use other symmetric quotassociationquot measures such as 02 or the dice coefficient cooccur is called a direct associationnow suppose that uk and uki often cooccur within their languagethen vk and uki will also cooccur more often than expected by chancethe arrow connecting vk and uki in figure 1 represents an indirect association since the association between vk and uki arises only by virtue of the association between each of them and uk models of translational equivalence that are ignorant of indirect associations have quota tendency to be confused by collocatesquot fortunately indirect associations are usually not difficult to identify because they tend to be weaker than the direct associations on which they are based the majority of indirect associations can be filtered out by a simple competition heuristic whenever several word tokens ui in one half of the bitext cooccur with a particular word token v in the other half of the bitext the word that is most likely to be v translation is the one for which the likelihood l of translational equivalence is highestthe competitive linking algorithm implements this heuristic nb a and a need not sum to 1 because they are conditioned on different events would be the winners in any competitions involving you or v the competitive linking algorithm is more greedy than algorithms that try to find a set of link types that are jointly most probable over some segment of the bitextin practice our linking algorithm can be implemented so that its worstcase running time is 0 where 1 and m are the lengths of the aligned segmentsthe simplicity of the competitive linking algorithm depends on the onetoone assumption each word translates to at most one other wordcertainly there are cases where this assumption is falsewe prefer not to model those cases in order to achieve higher accuracy with less effort on the cases where the assumption is truethe purpose of the competitive linking algorithm is to help us reestimate the model parametersthe variables that we use in our estimation are summarized in figure 2the linking algorithm produces a set of links between word tokens in the bitextwe define a link token to be an ordered pair of word tokens one from each half of the bitexta link type is an ordered pair of word typeslet n be the cooccurrence frequency of you and v and k be the number of links between tokens of you and v3an note that k depends on the linking algorithm but n is a constant property of the bitext important property of the competitive linking algorithm is that the ratio kn tends to be very high if you and v are mutual translations and quite low if 
they are notthe bimodality of this ratio for several values of n is illustrated in figure 3this figure was plotted after the model first iteration over 300000 aligned sentence pairs from the canadian hansard bitextnote that the frequencies are plotted on a log scale the bimodality is quite sharpthe linking algorithm creates all the links of a given type independently of each other so the number k of links connecting word types you and v has a binomial distribution with parameters n and pif you and v are mutual translations then p tends to a relatively high probability which we will call aif you and v are not mutual translations then p tends to a very low probability which we will call a a and a correspond to the two peaks in the frequency distribution of kn in figure 2the two parameters can also be interpreted as the percentage of true and false positivesif the translation in the bitext is consistent and the model is accurate then a should be near 1 and ashould be near 0to find the most probable values of the hidden model parameters a and a we adopt the standard method of maximum likelihood estimation and find the values that maximize the probability of the link frequency distributionsthe onetoone assumption implies independence between different link types so that the factors on the righthand side of equation 1 can be written explicitly with the help of a mixture coefficientlet r be the probability that an arbitrary cooccurring pair of word types are mutual translationslet b denote the probability that k links are observed out of n cooccurrences where k has a binomial distribution with parameters n and p then the probability that you and v are linked k times out of n cooccurrences is a mixture of two binomials one more variable allows us to express t in terms of a and a let a be the probability that an arbitrary cooccuring pair of word tokens will be linked regardless of whether they are mutual translationssince t is constant over all word types it also represents the probability that an arbitrary cooccurring pair of word tokens are mutual translationstherefore a can also be estimated empiricallylet k be the total number of links in the bitext and let n be the total number of cooccuring word token pairs k equating the righthand sides of equations and and rearranging the terms we get since r is now a function of a and a only the latter two variables represent degrees of freedom in the modelthe probability function expressed by equations 1 and 2 has many local maximain practice these local maxima are like pebbles on a mountain invisible at low resolutionwe computed equation 1 over various combinations of a and a after the model first iteration over 300000 aligned sentence pairs from the canadian hansard bitextfigure 4 shows that the region of interest in the parameter space where 1 a a a 0 has only one clearly visible global maximumthis global maximum can be found by standard hillclimbing methods as long as the step size is large enough to avoid getting stuck on the pebblesgiven estimates for a and a we can compute b and bthese are probabilities that kuv links were generated by an algorithm that generates correct links and by an algorithm that generates incorrect links respectively out of n cooccurrencesthe ratio of these probabilities is the likelihood ratio in favor of you and v being mutual translations for all you and vin the basic wordtoword model the hidden parameters a and a depend only on the distributions of link frequencies generated by the competitive linking algorithmmore 
accurate models can be induced by taking into account various features of the linked tokensfor example frequent words are translated less consistently than rare words to account for this difference we can estimate separate values of a and a for different ranges of nsimilarly the hidden parameters can be conditioned on the linked parts of speechword order can be taken into account by conditioning the hidden parameters on the relative positions of linked word tokens in their respective sentencesjust as easily we can model links that coincide with entries in a preexisting translation lexicon separately from those that do notthis method of incorporating dictionary information seems simpler than the method proposed by brown et al for their models when the hidden parameters are conditioned on different link classes the estimation method does not change it is just repeated for each link classa wordtoword model of translational equivalence can be evaluated either over types or over tokensit is impossible to replicate the experiments used to evaluate other translation models in the literature because neither the models nor the programs that induce them are generally availablefor each kind of evaluation we have found one case where we can come closewe induced a twoclass wordtoword model of translational equivalence from 13 million words of the canadian hansards aligned using the method in one class represented contentword links and the other represented functionword links4link types with negative loglikelihood were discarded after each iterationboth classes parameters converged after six iterationsthe value of classbased models was demonstrated by the differences between the hidden parameters for the two classesc converged at for contentclass links and at for functionclass linksthe most direct way to evaluate the link types in a wordlevel model of translational equivalence is to treat each link type as a candidate translation lexicon entry and to measure precision and recallthis evaluation criterion carries much practical import because many of the applications mentioned in section 1 depend on accurate broadcoverage translation lexiconsmachine readable bilingual dictionaries even when they are available have only limited coverage and rarely include domainspecific terms we define the recall of a wordtoword translation model as the fraction of the bitext vocabulary represented in the modeltranslation model precision is a more thorny issue because people disagree about the degree to which context should play a role in judgements of translational equivalencewe handevaluated the precision of the link types in our model in the context of the bitext from which the model 4since function words can be identified by table lookup no postagger was involved was induced using a simple bilingual concordancera link type was considered correct if you and v ever cooccurred as direct translations of each otherwhere the onetoone assumption failed but a link type captured part of a correct translation it was judged quotincompletequot whether incomplete links are correct or incorrect depends on the applicationwe evaluated five random samples of 100 link types each at three levels of recallfor our bitext recall of 36 46 and 90 corresponded to translation lexicons containing 32274 43075 and 88633 words respectivelyfigure 5 shows the precision of the model with 95 confidence intervalsthe upper curve represents precision when incomplete links are considered correct and the lower when they are considered incorrecton the former 
metric our model can generate translation lexicons with precision and recall both exceeding 90 as well as dictionarysized translation lexicons that are over 99 correctthough some have tried it is not clear how to extract such accurate lexicons from other published translation modelspart of the difficulty stems from the implicit assumption in other models that each word has only one senseeach word is assigned the same unit of probability mass which the model distributes over all candidate translationsthe correct translations of a word that has several correct translations will be assigned a lower probability than the correct translation of a word that has only one correct translationthis imbalance foils thresholding strategies clever as they might be the likelihoods in the wordtoword model remain unnormalized so they do not competethe wordtoword model maintains high precision even given much less training dataresnik melamed report that the model produced translation lexicons with 94 precision and 30 recall when trained on frenchenglish software manuals totaling about 400000 wordsthe model was also used to induce a translation lexicon from a 6200word corpus of frenchenglish weather reportsnasr reported that the translation lexicon that our model induced from this tiny bitext accounted for 30 of the word types with precision between 84 and 90recall drops when there is less training data because the model refuses to make predictions that it cannot make with confidencefor many applications this is the desired behaviorthe most detailed evaluation of link tokens to date was performed by who trained brown et al model 2 on 74 million words of the canadian hansardsthese authors kindly provided us with the links generated by that model in 51 aligned sentences from a heldout test setwe generated links in the same 51 sentences using our twoclass wordtoword model and manually evaluated the contentword links from both modelsthe ibm models are directional ie they posit the english words that gave rise to each french word but ignore the distribution of the english wordstherefore we ignored english words that were linked to nothingthe errors are classified in table 1the quotwrong linkquot and quotmissing linkquot error categories should be selfexplanatoryquotpartial linksquot are those where one french word resulted from multiple english words but the model only links the french word to one of its english sourcesquotclass conflictquot errors resulted from our model refusal to link content words with function wordsusually this is the desired behavior but words like english auxiliary verbs are sometimes used as content words giving rise to content words in frenchsuch errors could be overcome by a model that classifies each word token for example using a partofspeech tagger instead of assigning the same class to all tokens of a given typethe bitext preprocessor for our wordtoword model split hyphenated words but macklovitch hannan preprocessor did notin some cases hyphenated words were easier to link correctly in other cases they were more difficultboth models made some errors because of this tokenization problem albeit in different placesthe quotparaphrasequot category covers all link errors that resulted from paraphrases in the translationneither ibm model 2 nor our model is capable of linking multiword sequences to multiword sequences and this was the biggest source of error for both modelsthe test sample contained only about 400 content words5 and the links for both models were evaluated posthoc by only 
one evaluatornevertheless it appears that our wordtoword model with only two link classes does not perform any worse than ibm model 2 even though the wordtoword model was trained on less than one fifth the amount of data that was used to train the ibm modelsince it does not store indirect associations our wordtoword model contained an average of 45 french words for every english wordsuch a compact model requires relatively little computational effort to induce and to applyin addition to the quantitative differences between the wordtoword model and the ibm model there is an important qualitative difference illustrated in figure 6as shown in table 1 the most common kind of error for the wordtoword model was a missing link whereas the most common error for ibm model 2 was a wrong linkmissing links are more informative they indicate where the model has failedthe level at which the model trusts its own judgement can be varied directly by changing the likelihood cutoff in step 1 of the competitive linking algorithmeach application of the wordtoword model can choose its own balance between link token precision and recallan application that calls on the wordtoword model to link words in a bitext could treat unlinked words differently from linked words and avoid basing subsequent decisions on uncertain inputsit is not clear how the precisionrecall tradeoff can be controlled in the ibm modelsone advantage that brown et al model 1 has over our wordtoword model is that their objective function has no local maximaby using the them algorithm they can guarantee convergence towards the globally optimum parameter setin contrast the dynamic nature of the competitive linking algorithm changes the pr in a nonmonotonic fashionwe have adopted the simple heuristic that the model quothas convergedquot when this probability stops increasingmany multilingual nlp applications need to translate words between different languages but cannot afford the computational expense of modeling the full range of translation phenomenafor these applications we have designed a fast algorithm for estimating wordtoword models of translational equivalencethe estimation method uses a pair of hidden parameters to measure the model uncertainty and avoids making decisions that it is not likely to make correctlythe hidden parameters can be conditioned on information extrinsic to the model providing an easy way to integrate preexisting knowledgeso far we have only implemented a twoclass model to exploit the differences in translation consistency between content words and function wordsthis relatively simple twoclass model linked word tokens in parallel texts as accurately as other translation models in the literature despite being trained on only one fifth as much dataunlike other translation models the wordtoword model can automatically produce dictionarysized translation lexicons and it can do so with over 99 accuracyeven better accuracy can be achieved with a more finegrained link class structurepromising features for classification include part of speech frequency of cooccurrence relative word position and translational entropy another interesting extension is to broaden the definition of a quotwordquot to include multiword lexical units if such units can be identified a priori their translations can be estimated without modifying the wordtoword modelin this manner the model can account for a wider range of translation phenomenathe frenchenglish software manuals were provided by gary adams of sun microsystems laboratoriesthe weather 
bitext was prepared at the University of Montreal under the direction of Richard Kittredge. Thanks to Alexis Nasr for hand-evaluating the weather translation lexicon. Thanks also to Mike Collins, George Foster, Mitch Marcus, Lyle Ungar, and three anonymous reviewers for helpful comments. This research was supported by an equipment grant from Sun Microsystems and by ARPA contract N66001-94-C-6043.
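The competitive linking algorithm described above reduces, for one pair of aligned segments, to a greedy best-first matching under the one-to-one assumption. The following sketch assumes the likelihood scores L(u, v) have already been computed for candidate token pairs; it is an illustration of the idea, not the original implementation.

```python
def competitive_link(pairs, min_score=0.0):
    """Greedy competitive linking for one pair of aligned segments.

    pairs     -- iterable of (score, u_position, v_position) triples for
                 candidate links, where score is the likelihood L(u, v)
                 that the two word types are mutual translations
    min_score -- likelihood cutoff; raising it trades recall for precision,
                 as discussed above

    Each token position is linked at most once (the one-to-one assumption).
    Sketch under assumed data structures, not the original code.
    """
    links, used_u, used_v = [], set(), set()
    for score, u_pos, v_pos in sorted(pairs, reverse=True):
        if score < min_score:
            break
        if u_pos in used_u or v_pos in used_v:
            continue          # a stronger competitor already won this token
        links.append((u_pos, v_pos))
        used_u.add(u_pos)
        used_v.add(v_pos)
    return links
```

Because each token can win at most one competition, indirect associations, which are typically weaker than the direct associations they derive from, tend to lose out; this is exactly the filtering effect described above.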
P97-1063
A Word-to-Word Model of Translational Equivalence. Many multilingual NLP applications need to translate words between different languages, but cannot afford the computational expense of inducing or applying a full translation model. For these applications, we have designed a fast algorithm for estimating a partial translation model, which accounts for translational equivalence only at the word level. The model's precision/recall trade-off can be directly controlled via one threshold parameter. This feature makes the model more suitable for applications that are not fully statistical. The model's hidden parameters can be easily conditioned on information extrinsic to the model, providing an easy way to integrate pre-existing knowledge such as part-of-speech dictionaries, word order, etc. Our model can link word tokens in parallel texts as well as other translation models in the literature. Unlike other translation models, it can automatically produce dictionary-sized translation lexicons, and it can do so with over 99% accuracy. We propose the competitive linking algorithm for linking word pairs, and a method which calculates the optimized correspondence level between the word pairs by hill climbing. One problem that arises in word-to-word alignment is as follows: if e1 is the translation of f1, and f2 has a strong monolingual association with f1, then e1 and f2 will also have a strong correlation.
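The equations dropped from the word-to-word model description above can be restated explicitly. The reconstruction below follows the surrounding prose (binomial link counts, hidden parameters lambda+ and lambda-, mixing coefficient tau, and empirical link rate lambda = K/N), so it should be read as a best-effort restatement rather than a verbatim copy of the paper's numbered equations.

```latex
% Probability of observing k links out of n co-occurrences of (u, v):
\Pr(k \mid n) \;=\; \tau\, B(k \mid n, \lambda^{+}) \;+\; (1-\tau)\, B(k \mid n, \lambda^{-}),
\qquad B(k \mid n, p) = \binom{n}{k} p^{k} (1-p)^{n-k}

% The mixing coefficient follows from the overall link rate \lambda = K / N
% (total links over total co-occurring token pairs):
\lambda \;=\; \tau\,\lambda^{+} + (1-\tau)\,\lambda^{-}
\quad\Longrightarrow\quad
\tau \;=\; \frac{\lambda - \lambda^{-}}{\lambda^{+} - \lambda^{-}}

% Likelihood ratio in favour of u and v being mutual translations:
L(u, v) \;=\; \frac{B\!\left(k_{uv} \mid n_{uv}, \lambda^{+}\right)}
                   {B\!\left(k_{uv} \mid n_{uv}, \lambda^{-}\right)}
```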
a memorybased approach to learning shallow natural language patterns recognizing shallow linguistic patterns such as basic syntactic relationships between words is a common task in applied natural language and text processing the common practice for approaching this task is by tedious manual definition of possible pattern structures often in the form of regular expressions or finite automata this paper presents a novel memorybased learning method that recognizes shallow patterns in new text based on a bracketed training corpus the training data are stored asis in efficient suffixtree data structures generalization is performed online at recognition time by comparing subsequences of the new text to positive and negative evidence in the corpus this way no information in the training is lost as can happen in other learning systems that construct a single generalized model at the time of training the paper presents experimental results for recognizing noun phrase subjectverb and verbobject patterns in english since the learning approach enables easy porting to new domains we plan to apply it to syntactic patterns in other languages and to sublanguage patterns for information extraction identifying local patterns of syntactic sequences and relationships is a fundamental task in natural language processing such patterns may correspond to syntactic phrases like noun phrases or to pairs of words that participate in a syntactic relationship like the heads of a verbobject relationsuch patterns have been found useful in various application areas including information extraction text summarization and bilingual alignmentsyntactic patterns are useful also for many basic computational linguistic tasks such as statistical word similarity and various disambiguation problemsone approach for detecting syntactic patterns is to obtain a full parse of a sentence and then extract the required patternshowever obtaining a complete parse tree for a sentence is difficult in many cases and may not be necessary at all for identifying most instances of local syntactic patternsan alternative approach is to avoid the complexity of full parsing and instead to rely only on local informationa variety of methods have been developed within this framework known as shallow parsing chunking local parsing etcthese works have shown that it is possible to identify most instances of local syntactic patterns by rules that examine only the pattern itself and its nearby contextoften the rules are applied to sentences that were tagged by partofspeech and are phrased by some form of regular expressions or finite state automatamanual writing of local syntactic rules has become a common practice for many applicationshowever writing rules is often tedious and time consumingfurthermore extending the rules to different languages or sublanguage domains can require substantial resources and expertise that are often not availableas in many areas of nlp a learning approach is appealingsurprisingly though rather little work has been devoted to learning local syntactic patterns mostly noun phrases this paper presents a novel general learning approach for recognizing local sequential patterns that may be perceived as falling within the memorybased learning paradigmthe method utilizes a partofspeech tagged training corpus in which all instances of the target pattern are marked the training data are stored asis in suffixtree data structures which enable linear time searching for subsequences in the corpusthe memorybased nature of the presented 
algorithm stems from its deduction strategy a new instance of the target pattern is recognized by examining the raw training corpus searching for positive and negative evidence with respect to the given test sequenceno model is created for the training corpus and the raw examples are not converted to any other representationconsider the following examplel suppose we want to decide whether the candidate sequence is a noun phrase by comparing it to the training corpusa good match would be if the entire sequence appears asis several times in the corpushowever due to data sparseness an exact match cannot always be expecteda somewhat weaker match may be obtained if we consider subparts of the candidate sequence for example suppose the corpus contains noun phrase instances with the following structures the first structure provides positive evidence that the sequence quotdt adj adj nmquot is a possible np prefix while the second structure provides evidence for quotadj nn nnpquot being an np suffixtogether these two training instances provide positive evidence that covers the entire candidateconsidering evidence for subparts of the pattern enables us to generalize over the exact structures that are present in the corpussimilarly we also consider the negative evidence for such subparts by noting where they occur in the corpus without being a corresponding part of a target instancethe proposed method as described in detail in the next section formalizes this type of reasoningit searches specialized data structures for both positive and negative evidence for subparts of the candidate structure and considers additional factors such as context and evidence overlapsection 3 presents experimental results for three target syntactic patterns in english and section 4 describes related workthe input to the memorybased sequence learning algorithm is a sentence represented as a sequence of pos tags and its output is a bracketed sentence indicating which subsequences of the sentence are to be considered instances of the target pattern mbsl determines the bracketing by first considering each subsequence of the sentence as a candidate to be a target instanceit computes a score for each candidate by comparing it to the training corpus which consists of a set of prebracketed sentencesthe algorithm then finds a consistent bracketing for the input sentence giving preference to high scoring subsequencesin the remainder of this section we describe the scoring and bracketing methods in more detailwe first describe the mechanism for scoring an individual candidatethe input is a candidate subsequence along with its context ie the other tags in the input sentencethe method is presented at two levels a general memorybased learning schema and a particular instantiation of itfurther instantiations of the schema are expected in future workthe mbsl scoring algorithm works by considering situated candidatesa situated candidate is a sentence containing one pair of brackets indicating a candidate to be a target instancethe portion of the sentence between the brackets is the candidate while the portion before and after the candidate is its contextthis subsection describes how to compute the score of a situated candidate from the training corpusthe idea of the mbsl scoring algorithm is to construct a tiling of subsequences of a situated candidate which covers the entire candidatewe consider as tiles subsequences of the situated candidate which contain a bracketeach tile is assigned a score based on its occurrence in the training 
memorysince brackets correspond to the boundaries of potential target instances it is important to consider how the bracket positions in the tile correspond to those in the training memoryfor example consider the training sentence nn vb adj nn nn adv pp nn we may now examine the occurrence in this sentence of several possible tiles vb adj nn occurs positively in the sentence and nn nn adv also occurs positively while nn nn adv occurs negatively in the training sentence since the bracket does not correspondthe positive evidence for a tile is measured by its positive count the number of times the tile occurs in the training memory with corresponding bracketssimilarly the negative evidence for a tile is measured by its negative count the number of times that the pos sequence of the tile occurs in the training memory with noncorresponding brackets the total count of a tile is its positive count plus its negative count that is the total count of the pos sequence of the tile regardless of bracket positionthe score f of a tile t is a function of its positive and negative countsthe overall score of a situated candidate is generally a function of the scores of all the tiles for the candidate as well as the relations between the tiles positionsthese relations include tile adjacency overlap between tiles the amount of context in a tile and so onin our instantiation of the mbsl schema we define the score f of a tile t as the ratio of its positive count pos and its total count total for a predefined threshold 0tiles with a score of 1 and so with sufficient positive evidence are called matching tileseach matching tile gives supporting evidence that a part of the candidate can be a part of a target instancein order to combine this evidence we try to cover the entire candidate by a set of matching tiles with no gapssuch a covering constitutes evidence that the entire candidate is a target instancefor example consider the matching tiles shown for the candidate in figure 1the set of matching tiles 2 4 and 5 covers the candidate as does the set of tiles 1 and 5also note that tile 1 constitutes a cover on its ownto make this precise we first say that a tile t1 connects to a tile 72 if 22 starts after t1 starts there is no gap between the end of t1 and the start of 72 and t2 ends after t1 for example tiles 2 and 4 in the figure connect while tiles 2 and 5 do not and neither do tiles 1 and 4 a cover for a situated candidate c is a sequence of matching tiles which collectively cover the entire candidate including the boundary brackets and possibly some context such that each tile connects to the following onea cover thus provides positive evidence for the entire sequence of tags in the candidatethe set of all the covers for a candidate summarizes all of the evidence for the candidate being a target instancewe therefore compute the score of a candidate as a function of some statistics of the set of all its coversfor example if a candidate has many different covers it is more likely to be a target instance since many different pieces of evidence can be brought to bearwe have empirically found several statistics of the cover set to be usefulthese include for each cover the number of tiles it contains the total number of context tags it contains and the number of positions which more than one tile covers we thus compute for the set of all covers of a candidate c the each of these items gives an indication regarding the overall strength of the coverbased evidence for the candidatethe score of the candidate is a 
linear function of its statistics if candidate c has no covers we set f 0note that minsize is weighted negatively since a cover with fewer tiles provides stronger evidence for the candidatein the current implementation the weights were chosen so as to give a lexicographic ordering preferring first candidates with more covers then those with covers containing fewer tiles then those with larger contexts and finally when all else is equal preferring candidates with more overlap between tileswe plan to investigate in the future a datadriven approach for optimal selection and weighting of statistical features of the scorewe compute a candidate statistics efficiently by performing a depthfirst traversal of the cover graph of the candidatethe cover graph is a directed acyclic graph whose nodes represent matching tiles of the candidate such that an arc exists between nodes n and n if tile n connects to na special start node is added as the root of the dag that connects to all of the nodes that contain an open bracketthere is a cover corresponding to each path from the start node to a node that contains a close bracketthus the statistics of all the covers may be efficiently computed by traversing the cover graphthe mbsl scoring algorithm searches the training corpus for each subsequence of the sentence in order to find matching tilesimplementing this search efficiently is therefore of prime importancewe do so by encoding the training corpus using suffix trees which provide string searching in time which is linear in the length of the searched stringinspired by satta we build two suffix trees for retrieving the positive and total counts for a tilethe first suffix tree holds all pattern instances from the training corpus surrounded by bracket symbols and a fixed amount of contextsearching a given tile in this tree yields the positive count for the tilethe second suffix tree holds an unbracketed version of the entire training corpusthis tree is used for searching the pos sequence of a tile with brackets omitted yielding the total count for the tile after the above procedure each situated candidate is assigned a scorein order to select a bracketing for the input sentence we assume that target instances are nonoverlapping we use a simple constraint propagation algorithm that finds the best choice of nonoverlapping candidates in an input sentence ber of patterns and average length in the training datawe have tested our algorithm in recognizing three syntactic patterns noun phrase sequences verbobject and subjectverb relationsthe np patterns were delimited by p and symbols at the borders of the phrasefor vo patterns we have put the starting delimiter before the main verb and the ending delimiter after the object head thus covering the whole noun phrase comprising the object for example investigators started to view the lower price levels 7 as attractive we used a similar policy for sv patterns defining the start of the pattern at the start of the subject noun phrase and the end at the first verb encountered for example argue that the yous should regulate the class the subject and object nounphrase borders were those specified by the annotators phrases which contain conjunctions or appositives were not further analyzedthe training and testing data were derived from the penn treebankwe used the np data prepared by ramshaw and marcus hereafter rm95the sv and vo data were obtained using t scripts2 table 1 summarizes the sizes of the training and test data sets and the number of examples in eachthe t scripts 
did not attempt to match dependencies over very complex structures since we are concerned with shallow or local patternstable 2 shows the distribution of pattern length in the train datawe also did not attempt to extract passivevoice vo relationsthe test procedure has two parameters maximum context size of a candidate which limits what queries are performed on the memory and the threshold 9 used for establishing a matching tile which determines how to make use of the query resultsrecall and precision figures were obtained for various parameter valuesfo a common measure in information retrieval was used as a singlefigure measure of performance we use 1 which gives no preference to either recall or precisiontable 3 summarizes the optimal parameter settings and results for np vo and sv on the test setin order to find the optimal values of the context size and threshold we tried 01 9 095 and maximum context sizes of 12 and 3our experiments used 5fold crossvalidation on the training data to determine the optimal parameter settingsin experimenting with the maximum context size parameter we found that the difference between the values of fo for context sizes of 2 and 3 is less than 05 for the optimal thresholdscores for a context size of 1 yielded fo values smaller by more than 1 than the values for the larger contextsfigure 2 shows recallprecision curves for the three data sets obtained by varying 9 while keeping the maximum context size at its optimal valuethe difference between fo1 values for different thresholds was always less than 2performance may be measured also on a wordby word basis counting as a success any word which was identified correctly as being part of the target patternthat method was employed along with recallprecision by rm95we preferred to measure performance by recall and precision for complete patternsmost errors involved identifications of slightly shifted shorter or longer sequencesgiven a pattern consisting of five words for example identifying only a fourword portion of this pattern would yield both a recall and precision errorstagassignment scoring on the other hand will give it a score of 80we hold the view that such an identification is an error rather than a partial successwe used the datasets created by rm95 for np learning their results are shown in table 33 the fo difference is small yet they use a richer feature set which incorporates lexical information as wellthe method of ramshaw and marcus makes a decision per word relying on predefined rule templatesthe method presented here makes decisions on sequences and uses sequences as its memory thereby attaining a dynamic perspective of the last line shows the results of ramshaw and marcus with the same traintest datathe optimal parameters were obtained by 5fold crossvalidation pattern structurewe aim to incorporate lexical information as well in the future it is still unclear whether that will improve the resultsfigure 3 shows the learning curves by amount of training examples and number of words in the training data for particular parameter settingstwo previous methods for learning local syntactic patterns follow the transformationbased paradigm introduced by brill vilain and day identify name phrases such as company names locations etcramshaw and marcus detect noun phrases by classifying each word as being inside a phrase outside or on the boundary between phrasesfinite state machines are a natural formalism for learning linear sequencesit was used for learning linguistic structures other than shallow syntaxgold 
showed that learning regular languages from positive examples is undecidable in the limitrecently however several learning methods have been proposed for restricted classes of fsmostia learns a subsequential transducer in the limitthis algorithm was used for naturallanguage tasks by vilar marzal and vidal for learning translation of a limiteddomain language as well as by gildea and jurafsky for learning phonological rulesahonen et al describe an algorithm for learning contextual regular languages which they use for learning the structure of sgml documentsapart from deterministic fsms there are a number of algorithms for learning stochastic models eg these algorithms differ mainly by their statemerging strategies used for generalizing from the training dataa major difference between the abovementioned learning methods and our memorybased approach is that the former employ generalized models that were created at training time while the latter uses the training corpus asis and generalizes only at recognition timemuch work aimed at learning models for full parsing ie learning hierarchical structureswe refer here only to the dop method which like the present work is a memorybased approachthis method constructs parse alternatives for a sentence based on combinations of subtrees in the training corpusthe mbsl approach may be viewed as a linear analogy to dop in that it constructs a cover for a candidate based on subsequences of training instancesother implementations of the memorybased paradigm for nlp tasks include daelemans et al for pos tagging cardie for syntactic and semantic tagging and stanfill and waltz for word pronunciationin all these works examples are represented as sets of features and the deduction is carried out by finding the most similar casesthe method presented here is radically different in that it makes use of the raw sequential form of the data and generalizes by reconstructing test examples from different pieces of the training datawe have presented a novel general schema and a particular instantiation of it for learning sequential patternsapplying the method to three syntactic patterns in english yielded positive results suggesting its applicability for recognizing local linguistic patternsin future work we plan to investigate a datadriven approach for optimal selection and weighting of statistical features of candidate scores as well as to apply the method to syntactic patterns of hebrew and to domainspecific patterns for information extractionthe authors wish to thank yoram singer for his collaboration in an earlier phase of this research project and giorgio satta for helpful discussionswe aiso thank the anonymous reviewers for their instructive commentsthis research was supported in part by grant 498951 from the israel science foundation and by grant 8560296 from the israeli ministry of science
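The tile-scoring and cover-counting machinery at the heart of MBSL can be sketched compactly. In the sketch below, a situated candidate is a list of POS tags with '[' and ']' marking the candidate; the positive and total counts that the real system retrieves from its two suffix trees are passed in as plain dictionaries, and only the number of covers is computed (the full system also records tile counts, context size, and overlap per cover). This is an illustration, not the authors' code.

```python
def matching_tiles(situated, pos_count, total_count, theta):
    """Return the matching tiles of a situated candidate.

    situated    -- list of symbols, e.g. ['NN','VB','[','ADJ','NN',']','ADV'],
                   where '[' and ']' mark the candidate boundaries
    pos_count   -- dict: tile (tuple of symbols, brackets included) -> number
                   of training occurrences with corresponding brackets
    total_count -- dict: the tile's bracket-free tag sequence -> total count
                   in the training memory, regardless of bracket positions
    theta       -- threshold on pos / total for a tile to "match"
    """
    tiles, n = [], len(situated)
    for i in range(n):
        for j in range(i + 1, n + 1):
            tile = tuple(situated[i:j])
            if '[' not in tile and ']' not in tile:
                continue                      # tiles must contain a bracket
            tags = tuple(s for s in tile if s not in ('[', ']'))
            pos, total = pos_count.get(tile, 0), total_count.get(tags, 0)
            if total and pos / total >= theta:
                tiles.append((i, j, tile))    # span over the situated candidate
    return tiles


def count_covers(situated, tiles):
    """Count chains of connecting matching tiles that span the candidate,
    including both boundary brackets (simplified: a chain stops as soon as
    it passes the close bracket)."""
    open_b, close_b = situated.index('['), situated.index(']')

    def connects(t1, t2):
        (s1, e1, _), (s2, e2, _) = t1, t2
        # t2 starts after t1, no gap between them, and t2 ends after t1
        return s2 > s1 and s2 <= e1 and e2 > e1

    def dfs(tile):
        _, e, _ = tile
        if e > close_b:                        # close bracket is covered
            return 1
        return sum(dfs(t2) for t2 in tiles if connects(tile, t2))

    # a cover must start with a tile that already covers the open bracket
    return sum(dfs(t) for t in tiles if t[0] <= open_b)
```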
P98-1010
A Memory-Based Approach to Learning Shallow Natural Language Patterns. Recognizing shallow linguistic patterns, such as basic syntactic relationships between words, is a common task in applied natural language and text processing. The common practice for approaching this task is by tedious manual definition of possible pattern structures, often in the form of regular expressions or finite automata. This paper presents a novel memory-based learning method that recognizes shallow patterns in new text based on a bracketed training corpus. The training data are stored as-is in efficient suffix-tree data structures. Generalization is performed on-line at recognition time by comparing subsequences of the new text to positive and negative evidence in the corpus. This way, no information in the training is lost, as can happen in other learning systems that construct a single generalized model at the time of training. The paper presents experimental results for recognizing noun phrase, subject-verb and verb-object patterns in English. Since the learning approach enables easy porting to new domains, we plan to apply it to syntactic patterns in other languages and to sublanguage patterns for information extraction. We segment the POS sequence of a multiword into small POS tiles, count tile frequency in the new-word and non-new-word portions of the training set respectively, and detect new words using these counts.
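One last piece of the MBSL pipeline above is turning per-candidate scores into a consistent bracketing of non-overlapping target instances. The paper describes this as a simple constraint-propagation step; the greedy interval selection below is only a stand-in approximation of that step, included to make the end-to-end flow concrete.

```python
def select_bracketing(candidates):
    """Choose a set of non-overlapping candidates, preferring higher scores.

    candidates -- iterable of (score, start, end) tuples, where [start, end)
                  is the candidate's span in the input sentence

    The system described above uses a simple constraint-propagation scheme;
    this greedy approximation accepts candidates in decreasing score order,
    skipping any that overlap an already accepted span.
    """
    chosen = []
    for score, start, end in sorted(candidates, reverse=True):
        if score <= 0:              # candidates with no covers score 0
            break
        if all(end <= s or start >= e for _, s, e in chosen):
            chosen.append((score, start, end))
    return sorted((s, e) for _, s, e in chosen)
```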
entitybased crossdocument core f erencing using the vector space model crossdocument coreference occurs when the same person place event or concept is discussed in more than one text source computer recognition of this phenomenon is important because it helps break quotthe document boundaryquot by allowing a user to examine information about a particular entity from multiple text sources at the same time in this paper we describe a crossdocument coreference resolution algorithm which uses the vector space model to resolve ambiguities between people having the same name in addition we also describe a scoring algorithm for evaluating the crossdocument coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the muc 6 coreference task crossdocument coreference occurs when the same person place event or concept is discussed in more than one text sourcecomputer recognition of this phenomenon is important because it helps break quotthe document boundaryquot by allowing a user to examine information about a particular entity from multiple text sources at the same timein particular resolving crossdocument coreferences allows a user to identify trends and dependencies across documentscrossdocument coreference can also be used as the central tool for producing summaries from multiple documents and for information fusion both of which have been identified as advanced areas of research by the tipster phase iii programcrossdocument coreference was also identified as one of the potential tasks for the sixth message understanding conference but was not included as a formal task because it was considered too ambitious in this paper we describe a highly successful crossdocument coreference resolution algorithm which uses the vector space model to resolve ambiguities between people having the same namein addition we also describe a scoring algorithm for evaluating the crossdocument coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the muc6 coreference taskcrossdocument coreference is a distinct technology from named entity recognizers like isoquest netowl and ibm textract because it attempts to determine whether name matches are actually the same individual neither netowl or textract have mechanisms which try to keep samenamed individuals distinct if they are different peoplecrossdocument coreference also differs in substantial ways from withindocument coreferencewithin a document there is a certain amount of consistency which cannot be expected across documentsin addition the problems encountered during within document coreference are compounded when looking for coreferences across documents because the underlying principles of linguistics and discourse context no longer apply across documentsbecause the underlying assumptions in crossdocument coreference are so distinct they require novel approachesfigure 1 shows the architecture of the crossdocument system developedthe system is built upon the university of pennsylvania within document coreference system camp which participated in the seventh message understanding conference within document coreference task our system takes as input the coreference processed documents output by campit then passes these documents through the sentenceextractor module which extracts for each document all the sentences relevant to a particular entity of interestthe vsmdisambiguate module then uses a vector space model algorithm to compute similarities between the sentences 
extracted for each pair of documentsoliver quotbiffquot kelly of weymouth succeeds john perry as president of the massachusetts golf associationquotwe will have continued growth in the futurequot said kelly who will serve for two yearsquotthere is been a lot of changes and there will be continued changes as we head into the year 2000quot details about each of the main steps of the crossdocument coreference algorithm are given below consider the two extracts in figures 2 and 4the coreference chains output by camp for the two extracts are shown in figures 3 and 5 tractor module produces a quotsummaryquot of the article with respect to the entity of interestthese summaries are a special case of the query sensitive techniques being developed at penn using camptherefore for doc36 since at least one of the three noun phrases in the coreference chain of interest appears in each of the three sentences in the extract the summary produced by sentenceextractor is the extract itselfon the other hand the summary produced by sentenceextractor for the coreference chain of interest in doc38 is only the first sentence of the extract because the only element of the coreference chain appears in this sentencethe university of pennsylvania camp system resolves within document coreferences for several different classes including pronouns and proper names it ranked among the top systems in the coreference task during the muc6 and the muc7 evaluationsthe coreference chains output by camp enable us to gather all the information about the entity of interest in an articlethis information about the entity is gathered by the sentenceextractor module and is used by the vsmdisambiguate module for disambiguation purposesconsider the extract for doc36 shown in figure 2we are able to include the fact that the john perry mentioned in this article was the president of the massachusetts golf association only because camp recognized that the quothequot in the second sentence is coreferent with quotjohn perryquot in the firstand it is this fact which actually helps vsmdisambiguate decide that the two john perrys in doc36 and doc38 are the same personthe vector space model used for disambiguating entities across documents is the standard vector space model used widely in information retrieval in this model each summary extracted by the sentenceextractor module is stored as a vector of termsthe terms in the vector are in their morphological root form and are filtered for stopwords if si and s2 are the vectors for the two summaries extracted from documents d1 and d21 then their similarity is computed as where tj is a term present in both si and s2 w1j is the weight of the term t3 in s1 and w23 is the weight of ti in s2the weight of a term tj in the vector st for a summary is given by where t f is the frequency of the term t3 in the summary n is the total number of documents in the collection being examined and df is the number of documents in the collection that the term tj occurs inmi 42 4n is the cosine normalization factor and is equal to the euclidean length of the vector sithe vsmdisambiguate module for each summary si computes the similarity of that summary with each of the other summariesif the similarity computed is above a predefined threshold then the entity of interest in the two summaries are considered to be coreferentthe crossdocument coreference system was tested on a highly ambiguous test set which consisted of 197 articles from 1996 and 1997 editions of the new york timesthe sole criteria for including an article 
in the test set was the presence or the absence of a string in the article which matched the quotjohnsmithr regular expressionin other words all of the articles either contained the name john smith or contained some variation with a middle initialnamethe system did not use any new york times data for training purposesthe answer keys regarding the crossdocument chains were manually created but the scoring was completely automatedthere were 35 different john smiths mentioned in the articlesof these 24 of them only had one article which mentioned themthe other 173 articles were regarding the 11 remaining john smithsthe background of these john smiths and the number of articles pertaining to each varied greatlydescriptions of a few of the john smiths are chairman and ceo of general motors assistant track coach at ucla the legendary explorer and the main character in disney pocahontas former president of the labor party of britainin order to score the crossdocument coreference chains output by the system we had to map the crossdocument coreference scoring problem to a withindocument coreference scoring problemthis was done by creating a meta document consisting of the file names of each of the documents that the system was run onassuming that each of the documents in the data set was about a single john smith the crossdocument coreference chains produced by the system could now be evaluated by scoring the corresponding withindocument coreference chains in the meta documentwe used two different scoring algorithms for scoring the outputthe first was the standard algorithm for withindocument coreference chains which was used for the evaluation of the systems participating in the muc6 and the muc7 coreference tasksthe shortcomings of the muc scoring algorithm when used for the crossdocument coreference task forced us to develop a second algorithmdetails about both these algorithms followthe muc algorithm computes precision and recall statistics by looking at the number of links identified by a system compared to the links in an answer keyin the modeltheoretic description of the algorithm that follows the term quotkeyquot refers to the manually annotated coreference chains while the term quotresponsequot refers to the coreference chains output by a systeman equivalence set is the transitive closure of a coreference chainthe algorithm developed by computes recall in the following wayfirst let s be an equivalence set generated by the key and let r1 r be equivalence classes generated by the responsethen we define the following functions over s fully reunite any components of the p partitionwe note that this is simply one fewer than the number of elements in the partition that is m i 1 looking in isolation at a single equivalence class in the key the recall error for that class is just the number of missing links divided by the number of correct links ie precision is computed by switching the roles of the key and response in the above formulationwhile the provides intuitive results for coreference scoring it however does not work as well in the context of evaluating cross document coreferencethere are two main reasons1the algorithm does not give any credit for separating out singletons from other chains which have been identifiedthis follows from the convention in coreference annotation of not identifying those entities that are markable as possibly coreferent with other entities in the textrather entities are only marked as being coreferent if they actually are coreferent with other entities in the 
textthis shortcoming could be easily enough overcome with different annotation conventions and with minor changes to the algorithm but it is worth noting2all errors are considered to be equalthe muc scoring algorithm penalizes the precision numbers equally for all types of errorsit is our position that for certain tasks some coreference errors do more damage than othersconsider the following examples suppose the truth contains two large coreference chains and one small one and suppose figures 7 and 8 show two different responseswe will explore two different precision errorsthe first error will connect one of the large coreference chains with the small one the second error occurs when the two large coreference chains are related by the errant coreferent link it is our position that the second error is more damaging because compared to the first error the second error makes more entities coreferent that should not bethis distinction is not reflected in the scorer which scores both responses as having a precision score of 90 imagine a scenario where a user recalls a collection of articles about john smith finds a single article about the particular john smith of interest and wants to see all the other articles about that individualin commercial systems with news data precision is typically the desired goal in such settingsas a result we wanted to model the accuracy of the system on a perdocument basis and then build a more global score based on the sum of the user experiencesconsider the case where the user selects document 6 in figure 8this a good outcome with all the relevant documents being found by the system and no extraneous documentsif the user selected document 1 then there are 5 irrelevant documents in the systems output precision is quite low thenthe goal of our scoring algorithm then is to model the precision and recall on average when looking for more documents about the same person based on selecting a single documentinstead of looking at the links produced by a system our algorithm looks at the presenceabsence of entities from the chains producedtherefore we compute the precision and recall numbers for each entity in the documentthe numbers computed with respect to each entity in the document are then combined to produce final precision and recall numbers for the entire outputfor an entity i we define the precision and recall with respect to that entity in figure 10the final precision and recall numbers are computed by the following two formulae final precision e wi precisioni final recall e wi recalls where n is the number of entities in the document and wi is the weight assigned to entity i in the documentfor all the examples and the experiments in this paper we assign equal weights to each entity ie wi 1nwe have also looked at the possibilities of using other weighting schemesfurther details about the bcubed algorithm including a model theoretic version of the algorithm can be found in consider the response shown in figure 7using the bcubed algorithm the precision for entity6 in the document equals 27 because the chain output for the entity contains 7 elements 2 of which are correct namely 67the recall for entity6 however is 22 because the chain output for the entity has 2 correct elements in it and the quottruthquot chain for the entity only contains those 2 elementsfigure 9 shows the final precision and recall numbers computed by the bcubed algorithm for the examples shown in figures 7 and 8the figure also shows the precision and recall numbers for each entity the bcubed 
algorithm does overcome the the two main shortcomings of the muc scoring algorithm discussed earlierit implicitly overcomes the first number of correct elements in the output chain containing entityi number of elements in the output chain containing entity number of correct elements in the output chain containing entity number of elements in the truth chain containing entity figure 10 definitions for precision and recall for an entity i shortcoming of the muc6 algorithm by calculating the precision and recall numbers for each entity in the document consider the responses shown in figures 7 and 8we had mentioned earlier that the error of linking the the two large chains in the second response is more damaging than the error of linking one of the large chains with the smaller chain in the first responseour scoring algorithm takes this into account and computes a final precision of 58 and 76 for the two responses respectivelyin comparison the muc algorithm computes a precision of 90 for both the responses figure 11 shows the precision recall and fmeasure using the bcubed scoring algorithmthe vector space model in this case constructed the space of terms only from the summaries extracted by sentenceextractorin comparison figure 12 shows the results when the vector space model constructed the space of terms from the articles input to the system the importance of using camp to extract summaries is verified by comparing the highest fmeasures achieved by the system for the two casesthe highest fmeasure for the former case is 846 while the highest fmeasure for the latter case is 780in comparison for this task namedentity tools like netowl and textract would mark all the john smiths the sametheir performance using our threshold figure 11 precision recall and fmeasure using the bcubed algorithm with training on the summaries scoring algorithm is 23 precision and 100 recallfigures 13 and 14 show the precision recall and fmeasure calculated using the muc scoring algorithmalso the baseline case when all the john smiths are considered to be the same person achieves 83 precision and 100 recallthe high initial precision is mainly due to the fact that the muc algorithm assumes that all errors are equalwe have also tested our system on other classes of crossdocument coreference like names of companies and eventsdetails about these experiments can be found in as a novel research problem cross document coreference provides an different perspective from related phenomenon like named entity recognition and within document coreferenceour system takes summaries about an entity of interest and uses various information retrieval metrics to rank the similarity of the summarieswe found it quite challenging to arrive at a scoring metric that satisfied our intuitions about what was good system output vs bad but we have developed a scoring algorithm that is an improvement for this class of data over other within document coreference scoring algorithmsour results are quite encouraging with potential performance being as good as 846 the first author was supported in part by a fellowship from ibm corporation and in part by the institute for research in cognitive science at the university of pennsylvania
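The similarity computation at the heart of the VSM-Disambiguate module can be sketched using the standard tf-idf weighting and cosine normalization described above: a term's weight is its frequency in the summary times log(N/df), divided by the Euclidean length of the summary vector, and two summaries are taken to describe the same person when the sum of products of their shared-term weights exceeds a threshold. Stemming, stop-word filtering, and the threshold value used by the actual system are omitted or assumed here.

```python
import math
from collections import Counter

# Sketch of the vector space disambiguation step, under the standard
# tf-idf / cosine formulation the text refers to. The threshold value is an
# assumption; the real system also stems terms and removes stop words.

def weight_vector(summary_terms, df, n_docs):
    """summary_terms: list of terms in one summary; df: document frequencies."""
    tf = Counter(summary_terms)
    raw = {t: tf[t] * math.log(n_docs / df[t]) for t in tf if df[t] > 0}
    norm = math.sqrt(sum(w * w for w in raw.values())) or 1.0
    return {t: w / norm for t, w in raw.items()}

def similarity(v1, v2):
    """Cosine similarity: sum of products of weights for shared terms."""
    return sum(w * v2[t] for t, w in v1.items() if t in v2)

def coreferent(summaries, threshold=0.1):
    """Return pairs of summary indices judged to describe the same entity."""
    n = len(summaries)
    df = Counter(t for s in summaries for t in set(s))
    vectors = [weight_vector(s, df, n) for s in summaries]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if similarity(vectors[i], vectors[j]) > threshold]

# Toy usage with three "John Smith" summaries.
docs = [["president", "golf", "association", "massachusetts"],
        ["president", "massachusetts", "golf", "club"],
        ["track", "coach", "ucla"]]
print(coreferent(docs))   # the two golf-related summaries pair up: [(0, 1)]
```

A full system would presumably take the transitive closure of these pairwise decisions to form the cross-document coreference chains that are then scored.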
P98-1012
entitybased crossdocument coreferencing using the vector space modelcrossdocument coreference occurs when the same person place event or concept is discussed in more than one text sourcecomputer recognition of this phenomenon is important because it helps break the document boundary by allowing a user to examine information about a particular entity from multiple text sources at the same timein this paper we describe a crossdocument coreference resolution algorithm which uses the vector space model to resolve ambiguities between people having the same namein addition we also describe a scoring algorithm for evaluating the crossdocument coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the muc6 coreference taskwe proposed entitybased crossdocument coreferencing which uses coreference chains of each document to generate its summary and then use the summary rather than the whole article to select informative words to be the features of the document
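To make the contrast between the two evaluation schemes concrete, the sketch below implements both the link-based MUC recall (correct links found over links needed, with the response chains partitioning each key chain) and the B-cubed per-entity precision and recall with equal weights w_i = 1/n, following the definitions quoted in the text. Chains are assumed to be given as sets of entity identifiers; MUC precision is obtained by the usual role swap of key and response.

```python
# Sketch, assuming key and response coreference chains are lists of sets of
# entity ids, every entity appears in exactly one chain of each partition, and
# B-cubed uses equal per-entity weights w_i = 1/n.

def muc_recall(key_chains, response_chains):
    num, den = 0, 0
    for s in key_chains:
        # p(s): how the response chains partition the key chain s
        partition = {frozenset(s & r) for r in response_chains if s & r}
        matched = set().union(*response_chains) if response_chains else set()
        parts = len(partition) + len(s - matched)   # unmatched entities are singletons
        num += len(s) - parts                       # correct links found
        den += len(s) - 1                           # links needed
    return num / den if den else 1.0

def b_cubed(key_chains, response_chains):
    chain_of = lambda chains, e: next(c for c in chains if e in c)
    entities = sorted(set().union(*key_chains))
    n = len(entities)
    precision = recall = 0.0
    for e in entities:
        out, truth = chain_of(response_chains, e), chain_of(key_chains, e)
        correct = len(out & truth)
        precision += correct / len(out) / n
        recall += correct / len(truth) / n
    return precision, recall

# Toy usage: the key has two chains, the response wrongly merges them.
key = [{1, 2, 3}, {4, 5}]
response = [{1, 2, 3, 4, 5}]
print(muc_recall(key, response))                 # 1.0: no key links are missing
print(muc_recall(response, key))                 # 0.75: MUC precision via role swap
print(b_cubed(key, response))                    # precision drops, recall stays 1.0
```

The toy example reproduces the point made in the text: the link-based score barely penalizes an errant merge, while the B-cubed precision reflects how many entities were wrongly made coreferent.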
the berkeley framenet project is a nsfsupported project in corpusbased computational lexicography now in its second year the project key features are a commitment to corpus evidence for semantic and syntactic generalizations and the representation of the valences of its target words in which the semantic portion makes use of frame semantics the resulting database will contain descriptions of the semantic frames underlying the meanings of the words described and the valence representation of several thousand words and phrases each accompanied by a representative collection of annotated corpus attestations which jointly exemplify the observed linkings between quotframe elementsquot and their syntactic realizations this report will present the project goals and workflow and information about the computational tools that have been adapted or created inhouse for this work the berkeley framenet project is producing framesemantic descriptions of several thousand english lexical items and backing up these descriptions with semantically annotated attestations from contemporary english corpora2these descriptions are based on handtagged semantic annotations of example sentences extracted from large text corpora and systematic analysis of the semantic patterns they exemplify by lexicographers and linguiststhe primary emphasis of the project therefore is the encoding by humans of semantic knowledge in machinereadable formthe intuition of the lexicographers is guided by and constrained by the results of corpusbased research using highperformance software toolsthe semantic domains to be covered are health care chance perception communication transaction time space body motion life stages social context emotion and cognitionthe results of the project are a lexical resource called the framenet database3 and associated software toolsthe database has three major components links to the frame database and to other machinereadable resources such as wordnet and comlex marked up to exemplify the semantic and morphosyntactic properties of the lexical itemsthese sentences provide empirical support for the lexicographic analysis provided in the frame database and lexicon entriesthese three components form a highly relational and tightly integrated whole elements in each may point to elements in the other twothe database will also contain estimates of the relative frequency of senses and complementation patterns calculated by matching the senses and patterns in the handtagged examples against the entire bnc corpusthe framenet work is in some ways similar to efforts to describe the argument structures of lexical items in terms of caseroles or thetaroles5 but in framenet the role names are local to particular conceptual structures some of these are quite general while others are specific to a small family of lexical itemsfor example the transportation frame within the domain of motion provides movers means of transportation and paths6 6the semantic frames for individual lexical units are typically quotblendsquot of more than one basic frame from our point of view the socalled quotlinkingquot patterns proposed in lfg hpsg and construction grammar operate on higherlevel frames of action motion and location and experience etcin some but not all cases the assignment of syntactic correlates to frame elements could be mediated by mapping them to the roles of one of the more abstract frames6a detailed study of motion predicates would require a finergrained analysis of the path element separating out source and goal and 
perhaps direction and area but for a basic study of the transportation predicates such refined analysis is not necessaryin any case our subframes associated with individual words inherit all of these while possibly adding some of their ownfig1 shows some of the subframes as discussed belowthe driving frame for example specifies a driver a vehicle and potentially cargo or rider as secondary moversin this frame the driver initiates and controls the movement of the vehiclefor most verbs in this frame driver or vehicle can be realized as subjects vehicle rider or cargo can appear as direct objects and path and vehicle can appear as oblique complementssome combinations of frame elements or frame element groups for some real corpus sentences in the driving frame are shown in fig2a riding_1 frame has the primary mover role as rider and allows as vehicle those driven by others7 in grammatical realizations of this frame the rider can be the subject the vehicle can appear as a direct object or an oblique complement and the path is generally realized as an obliquethe framenet entry for each of these verbs will include a concise formula for all semanwork includes the separate analysis of the frame semantics of directional and locational expressions tic and syntactic combinatorial possibilities together with a collection of annotated corpus sentences in which each possibility is exemplifiedthe syntactic positions considered relevant for lexicographic description include those that are internal to the maximal projection of the target word and those that are external to the maximal projection under precise structural conditions the subject in the case of vp and the subject of support verbs in the case of ap and np8 used in nlp the framenet database should make it possible for a system which finds a valencebearing lexical item in a text to know where its individual arguments are likely to be foundfor example once a parser has found the verb drive and its direct object np the link to the driving frame will suggest some semantics for that np eg that a person as direct object probably represents the rider while a nonhuman proper noun is probably the vehiclefor practical lexicography the contribution of the framenet database will be its presentation of the full range of use possibilities for individual words documented with corpus data the model examples for each use and the statistical information on relative frequencythe computational side of the framenet project is directed at efficiently capturing human insights into semantic structurethe majority of the work involved is marking text with semantic tags specifying the structure of the frames to be treated and writing dictionarystyle entries based the results of annotation and a priori descriptionswith the exception of the example sentence extraction component all the software modules are highly interactive and have substantial user interface requirementsmost of this functionality is provided by wwwbased programs written in perlfour processing steps are required produce the framenet database of frame semantic representations generating initial descriptions of semantic and syntactic patterns for use in corpus queries and annotation extracting good example sentences marking the constituents of interest and building a database of lexical semantic representations based on the annotations and other data these are discussed briefly below and shown in fig3as work on the project has progressed we have defined several explicit roles which project participants play 
in the various steps these roles are referred to as vanguard annotators and rearguard these are purely functional designations the same person may play different roles at different times9 pares the initial descriptions of frames including lists of frames and frame elements and adds these to the frame database using the frame description tool the vanguard also selects the major vocabulary items for the frame and the syntactic patterns that need to be checked for each word which are entered in the lexical database by means of the lexical database tool 2subcorpus extractionbased on the vanguard work the subcorpus extraction tools produce a representative collection of sentences containing these wordsthis selection of examples is achieved through a hybrid process partially controlled by the preliminary lexical description of each lemmasentences containing the lemma are extracted from from a corpus and classified into subcorpora by syntactic pattern using a cascade filter representing a partial regularexpression grammar of english over partofspeech tags formatted for annotation and automatically sampled down to an appropriate number and the fegs extracted from them and builds both the entries for the lemmas in the lexical database and the frame descriptions in the frame database using the entry writing tools we are building a quotconstituent type identifierquot which will semiautomatically assign grammatical function and phrase type attributes to these femarked constituents eliminating the need for annotators to mark thesethe data structures described above are implemented in sgml11 each is described by a dtd and these dtds are structured to provide the necessary links between the componentsthe software suite currently supporting database development is an aggregate of existing software tools held together with perlcgibased quotgluequotin order to get the project started we have depended on offtheshelf software which in some cases is not ideal for our purposesnevertheless using these programs allowed us to get the project up and running within just a few monthswe describe below in approximate order of application the programs used and their state of completionquoteventually we plan to migrate to an xml data model which appears to provide more flexibility while reducing complexityalso the framenet software is being developed on unix but we plan to provide crossplatform capabilities by making our tool suite webbased and xmlcompatiblesgml files into html for convenient viewing on the web etc are being written in perlrcs maintains version control over most filesat the time of writing there is something in place for each of the major software components though in some cases these are little more than stubs or quottoyquot implementationsnearly 10000 sentences exemplifying just under 200 lemmas have been annotated there are over 20000 frame element tokens marked in these example sentencesabout a dozen frames have been specified which refer to 47 named frame elementsmost of these annotations have been accomplished in the last few months since the software for corpus extraction frame description and annotation became operationalwe expect the inventory to increase rapidlyif the proportions cited hold constant as the framenet database grows the final database of 5000 lexical units may contain 250000 annotated sentences and over half a million tokens of frame elements
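As a rough illustration of the three database components described above, the sketch below lays out one possible in-memory data model for frames, frame-element-annotated sentences, and lexical entries, using the Transportation/Driving example from the text. The class layout, attribute names, and the example sentence are illustrative assumptions of mine; the project itself stores these components as linked SGML documents governed by DTDs.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Minimal sketch of a data model for frames, FE-annotated sentences, and
# lexical entries. All names below are assumptions for illustration only.

@dataclass
class Frame:
    name: str
    domain: str
    elements: List[str]                      # frame element (FE) names
    parent: Optional["Frame"] = None         # FEs are inherited from parent frames

    def all_elements(self) -> List[str]:
        inherited = self.parent.all_elements() if self.parent else []
        return inherited + self.elements

@dataclass
class FETag:                                  # one FE-marked constituent
    fe: str                                   # frame element name, e.g. "Vehicle"
    text: str
    gf: str = ""                              # grammatical function, e.g. "Subj"
    pt: str = ""                              # phrase type, e.g. "NP"

@dataclass
class AnnotatedSentence:
    sentence: str
    target: str                               # the lexical unit being exemplified
    fes: List[FETag] = field(default_factory=list)

@dataclass
class LexicalEntry:
    lemma: str
    frame: Frame
    sentences: List[AnnotatedSentence] = field(default_factory=list)

    def fe_groups(self):
        """Observed frame element groups (FEGs), one per annotated sentence."""
        return [tuple(t.fe for t in s.fes) for s in self.sentences]

# Toy usage following the Transportation/Driving example in the text;
# the example sentence is invented.
transportation = Frame("Transportation", "Motion", ["Mover", "Means", "Path"])
driving = Frame("Driving", "Motion", ["Driver", "Vehicle", "Rider", "Cargo"],
                parent=transportation)
drive = LexicalEntry("drive", driving)
drive.sentences.append(AnnotatedSentence(
    "Lou drove the jeep to the river.", "drove",
    [FETag("Driver", "Lou", "Subj", "NP"),
     FETag("Vehicle", "the jeep", "Obj", "NP"),
     FETag("Path", "to the river", "Comp", "PP")]))
print(driving.all_elements())   # inherited FEs plus the frame's own FEs
print(drive.fe_groups())        # [('Driver', 'Vehicle', 'Path')]
```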
P98-1013
the berkeley framenet projectframenet is a threeyear nsfsupported project in corpusbased computational lexicography now in its second year the project key features are a commitment to corpus evidence for semantic and syntactic generalizations and the representation of the valences of its target words in which the semantic portion makes use of frame semanticsthe resulting database will contain descriptions of the semantic frames underlying the meanings of the words described and the valence representation of several thousand words and phrases each accompanied by a representative collection of annotated corpus attestations which jointly exemplify the observed linkings between frame elements and their syntactic realizations this report will present the project goals and workflow and information about the computational tools that have been adapted or created inhouse for this workwe present the framenet project in which we have been developing a framesemantic lexicon for the core vocabulary of english
classifier combination for improved lexical disambiguation one of the most exciting recent directions in machine learning is the discovery that the combination of multiple classifiers often results in significantly better performance than what can be achieved with a single classifier in this paper we first show that the errors made from three different state of the art part of speech taggers are strongly complementary next we show how this complementary behavior can be used to our advantage by using contextual cues to guide tagger combination we are able to derive a new tagger that achieves performance significantly greater than any of the individual taggers part of speech tagging has been a central problem in natural language processing for many yearssince the advent of manually tagged corpora such as the brown corpus and the penn treebank marcus the efficacy of machine learning for training a tagger has been demonstrated using a wide array of techniques including markov models decision trees connectionist machines transformations nearestneighbor algorithms and maximum entropy black schmid brilldaelemansratnaparkhiall of these methods seem to achieve roughly comparable accuracythe fact that most machinelearningbased taggers achieve comparable results could be attributed to a number of causesit is possible that the 8020 rule of engineering is applying a certain number of tagging instances are relatively simple to disambiguate and are therefore being successfully tagged by all approaches while another percentage is extremely difficult to disambiguate requiring deep linguistic knowledge thereby causing all taggers to erranother possibility could be that all of the different machine learning techniques are essentially doing the same thingwe know that the features used by the different algorithms are very similar typically the words and tags within a small window from the word being taggedtherefore it could be possible that they all end up learning the same information just in different formsin the field of machine learning there have been many recent results demonstrating the efficacy of combining classifiersin this paper we explore whether classifier combination can result in an overall improvement in lexical disambiguation accuracythe experiments described in this paper are based on four popular tagging algorithms all of which have readily available implementationsthese taggers are described belowthis is by far the simplest of tagging algorithmsevery word is simply assigned its most likely part of speech regardless of the context in which it appearssurprisingly this simple tagging method achieves fairly high accuracyaccuracies of 9094 are typicalin the unigram tagger used in our experiments for words that do not appear in the lexicon we use a i see dietterich for a good summary of these techniques collection of simple manuallyderived heuristics to guess the proper tag for the wordngram part of speech taggers church weischedel are perhaps the most widely used of tagging algorithmsthe basic model is that given a word sequence w we try to find the tag sequence t that maximizes pthis can be done using the viterbi algorithm to find the t that maximizes ppin our experiments we use a standard trigram tagger using deleted interpolation and used suffix information for handling unseen words in transformationbased tagging every word is first assigned an initial tag this tag is the most likely tag for a word if the word is known and is guessed based upon properties of the word if the word is not 
knownthen a sequence of rules are applied that change the tags of words based upon the contexts they appear inthese rules are applied deterministically in the order they appear in the listas a simple example if race appears in the corpus most frequently as a noun it will initially be mistagged as a noun in the sentence we can race all day longthe rule change a tag from noun to verb if the previous tag is a modal would be applied to the sentence resulting in the correct taggingthe environments used for changing a tag are the words and tags within a window of three wordsfor our experiments we used a publicly available implementation of transformationbased tagging2 retrained on our training setthe maximumentropy framework is a probabilistic framework where a model is found that is consistent with the observed data and is maximally agnostic with respect to all parameters for which no data existsit is a nice framework for combining multiple constraintswhereas the transformationbased tagger enforces multiple constraints by having multiple rules fire the maximumentropy tagger can have all of these constraints play a role at setting the probability estimates for the model parametersin ratnaparkhi a maximum entropy tagger is presentedthe tagger uses essentially the same parameters as the transformationbased tagger but employs them in a different modelfor our experiments we used a publicly available implementation of maximumentropy tagging3 retrained on our training setall experiments presented in this paper were run on the penn treebank wall street journal corpus the corpus was divided into approximately 80 training and 20 testing giving us approximately 11 million words of training data and 265000 words of test datathe test set was not used in any way in training so the test set does contain unknown wordsin figure 1 we show the relative accuracies of the four taggersin parentheses we include tagger accuracy when only ambiguous and unknown words are considered4 next we examine just how different the errors of the taggers arewe define the complementary rate of taggers a and b as ambiguity resolution over all words including words that are unambiguouscorrectly tagging words that only can have one label contributes to the accuracywe see in figure 1 that when accuracy is measured on truly ambiguous words the numbers are lowerin this paper we stick to the convention of giving results for all words including unambiguous ones of errors in a only in other words comp measures the percentage of time when tagger a is wrong that tagger b is correctin figure 2 we show the complementary rates between the different taggersfor instance when the maximum entropy tagger is wrong the transformationbased tagger is right 377 of the time and when the transformationbased tagger is wrong the maximum entropy tagger is right 417 of the timethe complementary rates are quite high which is encouraging since this sets the upper bound on how well we can do in combining the different classifiersif all taggers made the same errors or if the errors that loweraccuracy taggers made were merely a superset of higheraccuracy tagger errors then combination would be futilein addition a tagger is much more likely to have misclassified the tag for a word in instances where there is disagreement with at least one of the other classifiers than in the case where all classifiers agreein figure 3 we see for instance that while the overall error rate for the maximum entropy tagger is 317 in cases where there is disagreement between the four 
taggers the maximum entropy tagger error rate jumps to 271and discarding the unigram tagger which is significantly less accurate than the others when there is disagreement between the maximum entropy transformationbased and trigram taggers the maximum entropy tagger error rate jumps up to 437these cases account for 58 of the total errors the maximum entropy tagger makes next we check whether tagger complementarity is additivein figure 4 the first row shows the additive error rate an oracle could achieve on the test set if the oracle could pick between the different outputs of the taggersfor example when the oracle can examine the output of the maximum entropy transformationbased and trigram taggers it could achieve an error rate of 162the second row shows the additive error rate reduction the oracle could achieveif the oracle is allowed to choose between all four taggers a 555 error rate reduction is obtained over the maximum entropy tagger error rateif the unigram output is discarded the oracle improvement drops down to 488 over maximum entropy tagger error ratefrom these results we can conclude that there is at least hope that improvments can be gained by combining the output of different taggerswe can also conclude that the improvements we expect are somewhat additive meaning the more taggers we combine the better results we should expectthe fact that the errors the taggers make are strongly complementary is very encouragingif all taggers made the exact same errors there would obviously be no chance of improving accuracy through classifier combinationhowever note that the high complementary rate between tagger errors in itself does not necessarily imply that there is anything to be gained by classifier combinationwe ran experiments to determine whether the outputs of the different taggers could be effectively combinedwe first explored combination via simple majoritywins votingnext we attempted to automatically acquire contextual cues that learned both which tagger to believe in which contexts and what tags are indicated by different patterns of tagger outputsboth the word environments and the tagger outputs for the word being tagged and its neighbors are used as cues for predicting the proper tagthe simplest combination scheme is to have the classifiers votethe part of speech that appeared as the choice of the largest number of classifiers is picked as the answer with some method being specified for breaking tieswe tried simple voting using the maximum entropy transformationbased and trigram taggersin case of ties the maximum entropy tagger output is chosen since this tagger had the highest overall accuracy the results are shown in figure 5simple voting gives a net reduction in error of 69 over the best of the three taggersthis difference is significant at a 99 confidence levelnext we try to exploit the idiosyncracies of the different taggersalthough the maximum entropy transformationbased and trigram taggers use essentially the same types of contextual information for disambiguation this information is exploited differently in each caseour hope is that there is some regularity to these differences which would then allow us to learn what conditions suggest that we should trust one tagger output over anotherwe used a version of examplebased learning to determine whether these tagger differences could be exploited5 to determine 5 examplebased learning has also been applied succesfully in building a single part of speech tagger the tag of a word we use the previous word current word next 
word and the output of each tagger for the previous current and next wordsee figure 6for each such context in the training set we store the probabilities of what correct tags appeared in that contextwhen the tag distribution for a context has low entropy it is a very good predictor of the correct tag when the identical environment occurs in unseen datathe problem is that these environments are very specific and will have low overall recall in a novel corpusto account for this we must back off to more general contexts when we encounter an environment in the test set that did not occur in the training setthis is done by specifying an order in which fields should be ignored until a match is foundthe backoff ordering is learned automaticallywe ran two variants of this experimentin the first case given an instance in the test set we find the most specific matching example in the training set using the prespecified backoff ordering and see what the most probable tag was in the training set for that environmentthis is then chosen as the tag for the wordnote that this method is capable of learning to assign a tag that none of the taggers assignedfor instance it could be the case that when the unigram tagger thinks the tag should be x and the trigram and maximum entropy taggers think it should be y then the true tag is most frequently zin the second experiment we use contexts to specify which tagger to trust rather than which tag to outputagain the most specific context is found but here we check which tagger has the highest probability of being correct in this particular contextfor instance we may learn that the trigram tagger is most accurate at tagging the word up or that the unigrarn tagger does best at tagging the word race when the word that follows is andthe results are given in figure 7we see that while simple voting achieves an error reduction of 69 using contexts to choose a tag gives an error reduction of 98 and using contexts to choose a tagger gives an error reduction of 104in this paper we showed that the error distributions for three popular state of the art part of speech taggers are highly complementarynext we described experiments that demonstrated that we can exploit this complementarity to build a tagger that attains significantly higher accuracy than any of the individual taggersin the future we plan to expand our repertoire of base taggers to determine whether performance continues to improve as we add additional systemswe also plan to explore different methods for combining classifier outputswe suspect that the features we have chosen to use for combination are not the optimal set of featureswe need to carefully study the different algorithms to find possible cues that can indicate where a particular tagger performs wellwe hope that by following these general directions we can further exploit differences in classifiers to improve accuracy in lexical disambiguation
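The simple voting combination described above is straightforward to sketch: each word receives the tag proposed by the largest number of taggers, with ties broken in favor of the maximum entropy tagger. The sketch below assumes the three taggers' outputs are already available as parallel tag sequences; the context-based variants would instead look up the most specific matching training context to choose a tag or a tagger.

```python
from collections import Counter

# Sketch of majority voting over three taggers, with ties broken in favor of
# the maximum entropy tagger (the most accurate individual tagger). The tagger
# outputs are assumed to be precomputed, parallel tag sequences.

def vote(maxent_tags, tbl_tags, trigram_tags):
    combined = []
    for me, tb, tri in zip(maxent_tags, tbl_tags, trigram_tags):
        tag, n = Counter([me, tb, tri]).most_common(1)[0]
        # a strict majority (2 or 3 votes) wins; otherwise trust maxent
        combined.append(tag if n >= 2 else me)
    return combined

# Toy usage on "We can race all day": the two taggers that read "race" as a
# verb outvote the one that tagged it as a noun.
maxent  = ["PRP", "MD", "VB", "DT", "NN"]
tbl     = ["PRP", "MD", "VB", "DT", "NN"]
trigram = ["PRP", "MD", "NN", "DT", "NN"]
print(vote(maxent, tbl, trigram))   # ['PRP', 'MD', 'VB', 'DT', 'NN']
```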
P98-1029
classifier combination for improved lexical disambiguationone of the most exciting recent directions in machine learning is the discovery that the combination of multiple classifiers often results in significantly better performance than what can be achieved with a single classifierin this paper we first show that the errors made from three different state of the art part of speech taggers are strongly complementarynext we show how this complementary behavior can be used to our advantageby using contextual cues to guide tagger combination we are able to derive a new tagger that achieves performance significantly greater than any of the individual taggerswe define the complementarity between two learners in order to quantify the percentage of time when one system is wrong while another system is correct therefore providing an upper bound on combination accuracy
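The complementarity measure referred to above is easy to compute from parallel gold and predicted tag sequences: Comp(A, B) is the fraction of A's errors on which B is correct, i.e. the percentage of time tagger B is right when tagger A is wrong. The sketch below assumes this input representation.

```python
# Sketch of the complementarity measure:
# Comp(A, B) = (# of A's errors where B is correct) / (# of A's errors).

def complementarity(gold, tags_a, tags_b):
    errors_a = [(g, a, b) for g, a, b in zip(gold, tags_a, tags_b) if a != g]
    if not errors_a:
        return 0.0
    b_correct = sum(1 for g, _, b in errors_a if b == g)
    return b_correct / len(errors_a)

# Toy usage: each tagger corrects half of the other's errors.
gold = ["NN", "VB", "DT", "JJ", "NN", "VBD"]
a    = ["NN", "NN", "DT", "JJ", "VB", "VBD"]   # errors at positions 1 and 4
b    = ["NN", "VB", "DT", "NN", "VB", "VBD"]   # errors at positions 3 and 4
print(complementarity(gold, a, b))   # 0.5  -> Comp(A, B)
print(complementarity(gold, b, a))   # 0.5  -> Comp(B, A)
```

Note that Comp is not symmetric in general; a high value in both directions is what makes combination worthwhile, since it bounds the gain an oracle selector could achieve.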
errordriven pruning of treebank grammars for base noun phrase identification finding simple nonrecursive base noun phrases is an important subtask for many natural language processing applications while previous empirical methods for base np identification have been rather complex this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task in particular we present a corpusbased approach for finding base nps by matching partofspeech tag sequences the training phase of the algorithm is based on two successful techniques first the base np grammar is read from a quottreebankquot corpus then the grammar is improved by selecting rules with high quotbenefitquot scores using this simple algorithm with a naive heuristic for matching rules we achieve surprising accuracy in an evaluation on the finding base noun phrases is a sensible first step for many natural language processing tasks accurate identification of base noun phrases is arguably the most critical component of any partial parser in addition information retrieval systems rely on base noun phrases as the main source of multiword indexing terms furthermore the psycholinguistic studies of gee and grosjean indicate that text chunks like base noun phrases play an important role in human language processingin this work we define base nps to be simple nonrecursive noun phrases noun phrases that do not contain other noun phrase descendantsthe bracketed portions of figure 1 for example show the base nps in one sentence from the penn treebank wall street journal corpus thus the string the sunny confines of resort towns like boca raton and hot springs is too complex to be a base np instead it contains four simpler noun phrases each of which is considered a base np the sunny confines resort towns boca raton and hot springsprevious empirical research has addressed the problem of base np identificationseveral algorithms identify quotterminological phrasesquot certain when it is time for their biannual powwow the nation manufacturing titans typically jet off to the sunny confines of resort towns like boca raton and hot springs base noun phrases with initial determiners and modifiers removed justeson katz look for repeated phrases bourigault uses a handcrafted noun phrase grammar in conjunction with heuristics for finding maximal length noun phrases voutilainen nptool uses a handcrafted lexicon and constraint grammar to find terminological noun phrases that include phrasefinal prepositional phraseschurch parts program on the other hand uses a probabilistic model automatically trained on the brown corpus to locate core noun phrases as well as to assign parts of speechmore recently ramshaw marcus apply transformationbased learning to the problemunfortunately it is difficult to directly compare approacheseach method uses a slightly different definition of base npeach is evaluated on a different corpusmost approaches have been evaluated by hand on a small test set rather than by automatic comparison to a large test corpus annotated by an impartial third partya notable exception is the ramshaw marcus work which evaluates their transformationbased learning approach on a base np corpus derived from the penn treebank wsj and achieves precision and recall levels of approximately 93this paper presents a new algorithm for identifying base nps in an arbitrary textlike some of the earlier work on base np identification ours is a trainable corpusbased algorithmin contrast to other corpusbased approaches however we 
hypothesized that the relatively simple nature of base nps would permit their accurate identification using correspondingly simple methodsassume for example that we use the annotated text of figure 1 as our training corpusto identify base nps in an unseen text we could simply search for all occurrences of the base nps seen during training it time their biannual powwow hot springs and mark them as base nps in the new texthowever this method would certainly suffer from data sparsenessinstead we use a similar approach but back off from lexical items to parts of speech we identify as a base np any string having the same partofspeech tag sequence as a base np from the training corpusthe training phase of the algorithm employs two previously successful techniques like charniak statistical parser our initial base np grammar is read from a quottreebankquot corpus then the grammar is improved by selecting rules with high quotbenefitquot scoresour benefit measure is identical to that used in transformationbased learning to select an ordered set of useful transformations using this simple algorithm with a naive heuristic for matching rules we achieve surprising accuracy in an evaluation on two base np corpora of varying complexity both derived from the penn treebank wsjthe first base np corpus is that used in the ramshaw marcus workthe second espouses a slightly simpler definition of base np that conforms to the base nps used in our empire sentence analyzerthese simpler phrases appear to be a good starting point for partial parsers that purposely delay all complex attachment decisions to later phases of processingoverall results for the approach are promisingfor the empire corpus our base np finder achieves 94 precision and recall for the ramshaw marcus corpus it obtains 91 precision and recall which is 2 less than the best published resultsramshaw marcus however provide the learning algorithm with wordlevel information in addition to the partofspeech information used in our base np finderby controlling for this disparity in available knowledge sources we find that our base np algorithm performs comparably achieving slightly worse precision and slightly better recall than the ramshaw marcus approachmoreover our approach offers many important advantages that make it appropriate for many nlp tasks note also that the treebank approach to base np identification obtains good results in spite of a very simple algorithm for quotparsingquot base npsthis is extremely encouraging and our evaluation suggests at least two areas for immediate improvementfirst by replacing the naive match heuristic with a probabilistic base np parser that incorporates lexical preferences we would expect a nontrivial increase in recall and precisionsecond many of the remaining base np errors tend to follow simple patterns these might be corrected using localized learnable repair rulesthe remainder of the paper describes the specifics of the approach and its evaluationthe next section presents the training and application phases of the treebank approach to base np identification in more detailsection 3 describes our general approach for pruning the base np grammar as well as two instantiations of that approachthe evaluation and a discussion of the results appear in section 4 along with techniques for reducing training time and an initial investigation into the use of local repair heuristicsfigure 2 depicts the treebank approach to base np identificationfor training the algorithm requires a corpus that has been annotated with base 
npsmore specifically we assume that the training corpus is a sequence of words wi w2 along with a set of base np annotations b where b indicates that the np brackets words i through j nr wi wjthe goal of the training phase is to create a base np grammar from this training corpus the resulting quotgrammarquot can then be used to identify base nps in a novel textnot this yearnational association of manufacturers settled on the hoosier capital of indianapolis for its next meetingand the city decided to treat its guests more like royalty or rock stars than factory ownersnot this yearnational association of manufacturers settled on the hoosier capital of indianapolis for i its next meetingand the city decided to treat its guests more like royalty or frock stars than factory ownerstraining corpus when it is time for their biannual powwow the nation manufacturing titans typically jet off to the sunny confines of resort towns like boca raton and hot springs3if there are multiple rules that match beginning at ti use the longest matching rule r add the new base noun phrase t to the set of base npscontinue matching at titriwith the rules stored in an appropriate data structure this greedy quotparsingquot of base nps is very fastin our implementation for example we store the rules in a decision tree which permits base np identification in time linear in the length of the tagged input text when using the longest match heuristicunfortunately there is an obvious problem with the algorithm described abovethere will be many unhelpful rules in the rule set extracted from the training corpusthese quotbadquot rules arise from four sources bracketing errors in the corpus tagging errors unusual or irregular linguistic constructs and inherent ambiguities in the base nps in spite of their simplicityfor example the rule which was extracted from manufacturingvbg titansnns in the example text is ambiguous and will cause erroneous bracketing in sentences such as the execs squeezed in a few meetings before boardingvbg busesnns againin order to have a viable mechanism for identifying base nps using this algorithm the grammar must be improved by removing problematic rulesthe next section presents two such methods for automatically pruning the base np grammaras described above our goal is to use the base np corpus to extract and select a set of noun phrase rules that can be used to accurately identify base nps in novel textour general pruning procedure is shown in figure 3first we divide the base np corpus into two parts a training corpus and a pruning corpusthe initial base np grammar is extracted from the training corpus as described in section 2next the pruning corpus is used to evaluate the set of rules and produce a ranking of the rules in terms of their utility in identifying base npsmore specifically we use the rule set and the longest match heuristic to find all base nps in the pruning corpusperformance of the rule set is measured in terms of labeled precision we then assign to each rule a score that denotes the quotnet benefitquot achieved by using the rule during np parsing of the improvement corpusthe benefit of rule r is given by br cr er where cr is the number of nps correctly identified by r and er is the number of precision errors for which r is responsible1 a rule is considered responsible for an error if it was the first rule to bracket part of a reference np ie an np in the base np training corpusthus rules that form erroneous bracketings are not penalized if another rule previously bracketed part of 
the same reference npfor example suppose the fragment containing base nps boca raton hot springs and palm beach is bracketed as shown below resort towns like np bocannp ratonnnp hotnnp np springsnnp and np palmnnp beachnnp1 rule brackets npi brackets np2 and brackets np3rule incorrectly identifies boca raton hot as a noun phrase so its score is 1rule incorrectly identifies springs but it is not held responsible for the error because of the previous error by on the same original np hot springs so its score is 0finally rule receives a score of 1 for correctly identifying palm beach as a base npthe benefit scores from evaluation on the pruning corpus are used to rank the rules in the grammarwith such a ranking we can improve the rule set by discarding the worst rulesthus far we have investigated two iterative approaches for discarding rules a thresholding approach and an incremental approachwe describe each in turn in the subsections belowthis same benefit measure is also used in the rm study but it is used to rank transformations rather than to rank np rulesgiven a ranking on the rule set the threshold algorithm simply discards rules whose score is less than a predefined threshold r for all of our experiments we set are 1 to select rules that propose more correct bracketings than incorrectthe process of evaluating ranking and discarding rules is repeated until no rules have a score less than r for our evaluation on the wsj corpus this typically requires only four to five iterationsthresholding provides a very coarse mechanism for pruning the np grammarin particular because of interactions between the rules during bracketing thresholding discards rules whose score might increase in the absence of other rules that are also being discardedconsider for example the boca raton fragments given earlierin the absence of the rule would have received a score of three for correctly identifying all three npsas a result we explored a more finegrained method of discarding rules each iteration of incremental pruning discards the n worst rules rather than all rules whose rank is less than some thresholdin all of our experiments we set n 10as with thresholding the process of evaluating ranking and discarding rules is repeated this time until precision of the current rule set on the pruning corpus begins to dropthe rule set that maximized precision becomes the final rule setin the experiments below we compare the thresholding and incremental methods for pruning the np grammar to a rule set that was pruned by handwhen the training corpus is large exhaustive review of the extracted rules is not practicalthis is the case for our initial rule set culled from the wsj corpus which contains approximately 4500 base np rulesrather than identifying and discarding individual problematic rules our reviewer identified problematic classes of rules that could be removed from the grammar automaticallyin particular the goal of the human reviewer was to discard rules that introduced ambiguity or corresponded to overly complex base npswithin our partial parsing framework these nps are better identified by more informed components of the nlp systemour reviewer identified the following classes of rules as possibly troublesome rules that contain a preposition period or colon rules that contain wh tags rules that beginend with a verb or adverb rules that contain pronouns with any other tags rules that contain misplaced commas or quotes rules that end with adjectivesrules covered under any of these classes were omitted from the 
humanpruned rule sets used in the experiments of section 4to evaluate the treebank approach to base np identification we created two base np corporaeach is derived from the penn treebank wsjthe first corpus attempts to duplicate the base nps used the ramshaw marcus studythe second corpus contains slightly less complicated base nps base nps that are better suited for use with our sentence analyzer empireby evaluating on both corpora we can measure the effect of noun phrase complexity on the treebank approach to base np identificationin particular we hypothesize that the treebank approach will be most appropriate when the base nps are sufficiently simplefor all experiments we derived the training pruning and testing sets from the 25 sections of wall street journal distributed with the penn treebank iiall experiments employ 5fold cross validationmore specifically in each of five runs a different fold is used for testing the final pruned rule set three of the remaining folds comprise the training corpus and the final partition is the pruning corpus all results are averages across the five foldsperformance is measured in terms of precision and recallprecision was described earlier it is a standard measure of accuracyrecall on the other hand is an attempt to measure coverage throughout the table we see the effects of base np complexity the base nps of the rm corpus are substantially more difficult for our approach to identify than the simpler nps of the empire corpusfor the rm corpus we lag the best published results by approximately 3this straightforward comparison however is not entirely appropriateramshaw marcus allow their learning algorithm to access wordlevel information in addition to partofspeech tagsthe treebank approach on the other hand makes use only of partofspeech tagstable 2 compares ramshaw marcus results with and without lexical knowledgethe first column reports their performance when using lexical templates the second when lexical templates are not used the third again shows the treebank approach using incremental pruningthe treebank approach and the rm approach without lecial templates are shown to perform comparably lexicalization of our base np finder will be addressed in section 41finally note the relatively small difference between the threshold and incremental pruning methods in table 1for some applications this minor drop in performance may be worth the decrease in training timeanother effective technique to speed up training is motivated by charniak observation that the benefit of using rules that only occurred once in training is marginalby discarding these rules before pruning we reduce the size of the initial grammar and the time for incremental pruning by 60 with a performance drop of only 03p01rit is informative to consider the kinds of errors made by the treebank approach to bracketingin particular the errors may indicate options for incorporating lexical information into the base np findergiven the increases in performance achieved by ramshaw marcus by including wordlevel cues we would hope to see similar improvements by exploiting lexical information in the treebank approachfor each corpus we examined the first 100 or so errors and found that certain linguistic constructs consistently because trouble of nps in the annotated text table 1 summarizes the performance of the treebank approach to base np identification on the rm and empire corpora using the initial and pruned rule setsthe first column of results shows the performance of the initial unpruned base np 
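For concreteness, the precision and recall figures reported here amount to exact-span bracket matching; a minimal sketch, with function names of my own choosing.

```python
def precision_recall(proposed, reference):
    """Exact-match bracket precision and recall.

    proposed, reference : iterables of (start, end) spans over the test set
    precision = correct / proposed,  recall = correct / reference
    """
    proposed, reference = set(proposed), set(reference)
    correct = len(proposed & reference)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(reference) if reference else 0.0
    return precision, recall


# two of three proposed brackets are right, and one gold NP is missed:
print(precision_recall([(0, 3), (4, 6), (7, 8)], [(0, 3), (4, 6), (9, 11)]))
```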
grammarthe next two columns show the performance of the automatically pruned rule setsthe final column indicates the performance of rule sets that had been pruned using the handcrafted pruning heuristicsas expected the initial rule set performs quite poorlyboth automated approaches provide significant increases in both recall and precisionin addition they outperform the rule set pruned using handcrafted pruning heuristicsmany errors appear to stem from four underlying causesfirst close to 20 can be attributed to errors in the treebank and in the base np corpus bringing the effective performance of the algorithm to 942p959r and 915p927r for the empire and rm corpora respectivelyfor example neither corpus includes whphrases as base npswhen the bracketer correctly recognizes these nps they are counted as errorspartofspeech tagging errors are a second becausethird many nps are missed by the bracketer because it lacks the appropriate rulefor example household products business is bracketed as householdnn productsnns businessnmfourth idiomatic and specialized expressions especially time date money and numeric phrases also account for a substantial portion of the errorsthese last two categories of errors can often be detected because they produce either recognizable patterns or unlikely linguistic constructsconsecutive nps for example usually denote bracketing errors as in householdnn productsnns businessnmmerging consecutive nps in the correct contexts would fix many such errorsidiomatic and specialized expressions might be corrected by similarly local repair heuristicstypical examples might include changing effectivejj mondaynnp to effective monday changing thedt balancenn duejj to the balance due and changing werevbp intrb thedt onlyrb losersnns to were nt the only losersgiven these observations we implemented three local repair heuristicsthe first merges consecutive nps unless either might be a time expressionthe second identifies two simple date expressionsthe third looks for quantifiers preceding of npthe first heuristic for example merges household products business to form household products business but leaves increased 15 last friday untouchedthe second heuristic merges june 5 1995 into june 5 1995 and june 1995 into june 1995the third finds examples like some of the companies and produces some of the companiesthese heuristics represent an initial exploration into the effectiveness of employing lexical information in a postprocessing phase rather than during grammar induction and bracketingwhile we are investigating the latter in current work local repair heuristics have the advantage of keeping the training and bracketing algorithms both simple and fastthe effect of these heuristics on recall and precision is shown in table 3we see consistent improvements for both corpora and both pruning methods achieving approximately 94pr for the empire corpus and approximately 91pr for the rm corpusnote that these are the final results reported in the introduction and conclusionalthough these experiments represent only an initial investigation into the usefulness of local repair heuristics we are very encouraged by the resultsthe heuristics uniformly boost precision without harming recall they help the rm corpus even though they were designed in response to errors in the empire corpusin addition these three heuristics alone recover 12 to 13 of the improvements we can expect to obtain from lexicalization based on the rm resultsthis paper presented a new method for identifying base npsour treebank 
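The first repair heuristic (merge consecutive NPs unless either might be a time expression) can be sketched as follows; the small word list standing in for the time-expression test is hypothetical, not the authors' actual filter, and the date and quantifier-plus-of heuristics would be analogous local patterns.

```python
# Hypothetical stand-in for the authors' time-expression test.
TIME_WORDS = {"monday", "tuesday", "wednesday", "thursday", "friday", "saturday",
              "sunday", "yesterday", "today", "tomorrow", "week", "month", "year"}

def looks_like_time(np_words):
    return any(w.lower() in TIME_WORDS for w in np_words)

def merge_consecutive_nps(brackets, words):
    """First repair heuristic: merge adjacent base NPs unless either might be
    a time expression.  brackets are (start, end) spans, end exclusive."""
    merged = []
    for span in sorted(brackets):
        if merged and merged[-1][1] == span[0]:
            prev = merged[-1]
            if not looks_like_time(words[prev[0]:prev[1]]) \
               and not looks_like_time(words[span[0]:span[1]]):
                merged[-1] = (prev[0], span[1])
                continue
        merged.append(span)
    return merged


words = "household products business increased 15 last friday".split()
print(merge_consecutive_nps([(0, 2), (2, 3), (4, 5), (5, 7)], words))
# [(0, 3), (4, 5), (5, 7)] -- merges the first pair, leaves "15" "last friday" alone
```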
approach uses the simple technique of matching partofspeech tag sequences with the intention of capturing the simplicity of the corresponding syntactic structureit employs two existing corpusbased techniques the initial noun phrase grammar is extracted directly from an annotated corpus and a benefit score calculated from errors on an improvement corpus selects the best subset of rules via a coarse or finegrained pruning algorithmthe overall results are surprisingly good especially considering the simplicity of the methodit achieves 94 precision and recall on simple base npsit achieves 91 precision and recall on the more complex nps of the rainshaw marcus corpuswe believe however that the base np finder can be improved furtherfirst the longestmatch heuristic of the noun phrase bracketer could be replaced by more sophisticated parsing methods that account for lexical preferencesrule application for example could be disambiguated statistically using distributions induced during trainingwe are currently investigating such extensionsone approach closely related to ours weighted finitestate transducers might provide a principled way to do thiswe could then consider applying our errordriven pruning strategy to rules encoded as transducerssecond we have only recently begun to explore the use of local repair heuristicswhile initial results are promising the full impact of such heuristics on overall performance can be determined only if they are systematically learned and tested using available training datafuture work will concentrate on the corpusbased acquisition of local repair heuristicsin conclusion the treebank approach to base nps provides an accurate and fast bracketing method running in time linear in the length of the tagged textthe approach is simple to understand implement and trainthe learned grammar is easily modified for use with new corpora as rules can be added or deleted with minimal interaction problemsfinally the approach provides a general framework for developing other treebank grammars in addition to these for base npsacknowledgmentsthis work was supported in part by nsf grants iri9624639 and ger9454149we thank mitre for providing their partofspeech tagger
P98-1034
errordriven pruning of treebank grammars for base noun phrase identificationfinding simple nonrecursive base noun phrases is an important subtask for many natural language processing applicationswhile previous empirical methods for base np identification have been rather complex this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the taskin particular we present a corpusbased approach for finding base nps by matching partofspeech tag sequencesthe training phase of the algorithm is based on two successful techniques first the base np grammar is read from a treebank corpus then the grammar is improved by selecting rules with high benefit scoresusing this simple algorithm with a naive heuristic for matching rules we achieve surprising accuracy in an evaluation on the penn treebank wall street journalwe store pos tag sequences that make up complete chunks and use these sequences as rules for classifying unseen data
exploiting syntactic structure for language modeling the paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history thus enabling the use of long distance dependencies the model assigns probability to every joint sequence of wordsbinaryparsestructure with headword annotation and operates in a lefttoright manner therefore usable for automatic speech recognition the model its probabilistic parameterization and a set of experiments meant to evaluate its predictive power are presented an improvement over standard trigram modeling is achieved the main goal of the present work is to develop a language model that uses syntactic structure to model longdistance dependenciesduring the summer96 dod workshop a similar attempt was made by the dependency modeling groupthe model we present is closely related to the one investigated in however different in a few important aspects our model operates in a lefttoright manner allowing the decoding of word lattices as opposed to the one referred to previously where only whole sentences could be processed thus reducing its applicability to nbest list rescoring the syntactic structure is developed as a model component our model is a factored version of the one in thus enabling the calculation of the joint probability of words and parse structure this was not possible in the previous case due to the huge computational complexity of the modelour model develops syntactic structure incrementally while traversing the sentence from left to rightthis is the main difference between our approach and other approaches to statistical natural language parsingour parsing strategy is similar to the incremental syntax ones proposed relatively recently in the linguistic community the probabilistic model its parameterization and a few experiments that are meant to evaluate its potential for speech recognition are presentedconsider predicting the word after in the sentence the contract ended with a loss of 7 cents after trading as low as 89 centsa 3gram approach would predict after from whereas it is intuitively clear that the strongest predictor would be ended which is outside the reach of even 7gramsour assumption is that what enables humans to make a good prediction of after is the syntactic structure in the pastthe linguistically correct partial parse of the word history when predicting after is shown in figure 1the word ended is called the headword of the constituent and ended is an exposed headword when predicting after topmost headword in the largest constituent that contains itthe syntactic structure in the past filters out irrelevant words and points to the important ones thus enabling the use of long distance information when predicting the next wordour model will attempt to build the syntactic structure incrementally while traversing the sentence lefttorightthe model will assign a probability p to every sentence w with every possible postag assignment binary branching parse nonterminal label and headword annotation for every constituent of t let w be a sentence of length n words to which we have prepended and appended so that wo and w1 let wk be the word kprefix wo wk of the sentence and wk tk the wordparse kprefixto stress this point a wordparse kprefix contains for a given parse only those binary subtrees whose span is completely included in the word kprefix excluding wo single words along with their postag can be regarded as rootonly treesfigure 2 shows a wordparse kprefix h_o h_m are the 
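A minimal sketch of the word-parse k-prefix as a stack of exposed heads: words enter as root-only trees, and completed binary constituents replace their two children with a single (headword, nonterminal) pair. The sentence-boundary sentinel, the method names, and the generic `combine` operation (which the parser operations described below specialize by choosing left or right head inheritance) are my packaging, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Head = Tuple[str, str]        # (headword, POS tag or nonterminal label)

@dataclass
class WordParsePrefix:
    """Exposed heads h_0 (most recent) ... h_m of a word-parse k-prefix."""
    heads: List[Head] = field(default_factory=lambda: [("<s>", "SB")])

    def shift(self, word: str, tag: str) -> None:
        """Append the next word as a root-only tree."""
        self.heads.append((word, tag))

    def combine(self, label: str, head_from_right: bool) -> None:
        """Replace the two most recent exposed heads with one constituent
        whose headword is inherited from the right or the left child."""
        left, right = self.heads[-2], self.heads[-1]
        headword = right[0] if head_from_right else left[0]
        self.heads[-2:] = [(headword, label)]


p = WordParsePrefix()
p.shift("the", "DT")
p.shift("contract", "NN")
p.combine("NP", head_from_right=True)   # "the contract" -> NP headed by "contract"
p.shift("ended", "VBD")
print(p.heads)   # [('<s>', 'SB'), ('contract', 'NP'), ('ended', 'VBD')]
# the exposed heads now offer "ended" and "contract" as predictors for the next
# word -- the long-distance information motivating the "after" example above
```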
exposed heads each head being a pair or in the case of a rootonly treea complete parse figure 3 is any binary parse of the sequence with the restriction that is the only allowed headnote that need not be a constituent but for the parses where it is there is no restriction on which of its words is the headword or what is the nonterminal label that accompanies the headwordthe model will operate by means of three modules or until it passes control to the predictor by taking a null transitionntlabel is the nonterminal label assigned to the newly built constituent and left right specifies where the new headword is inherited fromthe operations performed by the parser are illustrated in figures 46 and they ensure that all possible binary branching parses with all possible headword and nonterminal label assignments for the w1 wk word sequence can be generatedthe following algorithm formalizes the above description of the sequential generation of a sentence with a complete parsethe unary transition is allowed only when the most recent exposed head is a leaf of the tree a regular word along with its postag hence it can be taken at most once at a given position in the input word stringthe second subtree in figure 2 provides an example of a unary transition followed by a null transitionit is easy to see that any given word sequence with a possible parse and headword annotation is generated by a unique sequence of model actionsthis will prove very useful in initializing our model parameters from a treebank see section 35the probability p of a word sequence w and a complete parse t can be broken into where as can be seen is one of the nk wordparse kprefixes wktk at position k in the sentence i i nkto ensure a proper probabilistic model we have to make sure that and are well defined conditional probabilities and that the model halts with probability oneconsequently certain parser and wordpredictor probabilities must be given specific values pwktk 1 if h_o and h_1 word 0 ensure that the parse generated by our model is consistent with the definition of a complete parse the wordpredictor model predicts the next word based on the preceding 2 exposed heads thus making the following equivalence classification after experimenting with several equivalence classifications of the wordparse prefix for the tagger model the conditioning part of model was reduced to using the word to be tagged and the tags of the two most recent exposed heads model assigns probability to different parses of the word kprefix by chaining the elementary operations described abovethe workings of the parser module are similar to those of spatter the equivalence classification of the wktk wordparse we used for the parser model was the same as the one used in it is worth noting that if the binary branching structure developed by the parser were always rightbranching and we mapped the postag and nonterminal label vocabularies to a single type then our model would be equivalent to a trigram language modelall model components wordpredictor tagger parser are conditional probabilistic models of the type p where yx1x2xn belong to a mixed bag of words postags nonterminal labels and parser operations for simplicity the modeling method we chose was deleted interpolation among relative frequency estimates of different orders f using a recursive mixing scheme as can be seen the context mixing scheme discards items in the context in righttoleft orderthe a coefficients are tied based on the range of the count cthe approach is a standard one which does 
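The recursive deleted-interpolation scheme just described can be sketched as follows, with counts held in plain dictionaries. Two simplifications are mine: a single interpolation weight per back-off step stands in for the coefficients tied to count ranges, and the toy example conditions the word predictor on the two exposed headwords only, dropping their tags.

```python
from collections import defaultdict

class DeletedInterpolationLM:
    """Recursive deleted interpolation of relative-frequency estimates.

    P_n(y|x_1..x_n) = lam * f_n(y|x_1..x_n) + (1-lam) * P_{n-1}(y|x_1..x_{n-1}),
    shortening the context by discarding items right to left, and bottoming
    out in a uniform distribution over the vocabulary.
    """

    def __init__(self, vocab, lam=0.7):
        self.vocab = list(vocab)
        self.lam = lam
        self.joint = defaultdict(int)   # C(x_1..x_k, y) for every prefix length k
        self.marg = defaultdict(int)    # C(x_1..x_k)

    def observe(self, context, y):
        context = tuple(context)
        for k in range(len(context) + 1):
            self.joint[(context[:k], y)] += 1
            self.marg[context[:k]] += 1

    def prob(self, y, context):
        context = tuple(context)
        if not context:
            f0 = self.joint[((), y)] / self.marg[()] if self.marg[()] else 0.0
            return self.lam * f0 + (1 - self.lam) / len(self.vocab)
        denom = self.marg[context]
        f = self.joint[(context, y)] / denom if denom else 0.0
        return self.lam * f + (1 - self.lam) * self.prob(y, context[:-1])


lm = DeletedInterpolationLM(vocab={"after", "with", "loss", "cents"})
lm.observe(("ended", "contract"), "after")   # context = exposed headwords h_0, h_-1
lm.observe(("ended", "contract"), "with")
print(round(lm.prob("after", ("ended", "contract")), 3))
```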
not require an extensive description given the literature available on it since the number of parses for a given word prefix wk grows exponentially with kitk1 0 the state space of our model is huge even for relatively short sentences so we had to use a search strategy that prunes itour choice was a synchronous multistack search algorithm which is very similar to a beam searcheach stack contains hypotheses partial parses that have been constructed by the same number of predictor and the same number of parser operationsthe hypotheses in each stack are ranked according to the ln score highest on topthe width of the search is controlled by two parameters above pruning strategy proved to be insufficient so we chose to also discard all hypotheses whose score is more than the logprobability threshold below the score of the topmost hypothesisthis additional pruning step is performed after all hypotheses in stage k have been extended with the null parser transition and thus prepared for scanning a new wordthe conditional perplexity calculated by assigning to a whole sentence the probability where t argmaxtp is not valid because it is not causal when predicting wki we use t which was determined by looking at the entire sentenceto be able to compare the perplexity of our model with that resulting from the standard trigram approach we need to factor in the entropy of guessing the correct parse i before predicting wki based solely on the word prefix wkthe probability assignment for the word at position k 1 in the input sentence is made using which ensures a proper probability over strings w where sk is the set of all parses present in our stacks at the current stage k another possibility for evaluating the word level perplexity of our model is to approximate the probability of a whole sentence where t is one of the quotnbestquot in the sense defined by our search parses for w this is a deficient probability assignment however useful for justifying the model parameter reestimationthe two estimates and are both consistent in the sense that if the sums are carried over all possible parses we get the correct value for the word level perplexity of our modelthe major problem we face when trying to reestimate the model parameters is the huge state space of the model and the fact that dynamic programming techniques similar to those used in hmm parameter reestimation cannot be used with our modelour solution is inspired by an hmm reestimation technique that works on pruned nbest trelliseslet k 1 n be the set of hypotheses that survived our pruning strategy until the end of the parsing process for sentence w each of them was produced by a sequence of model actions chained together as described in section 2 let us call the sequence of model actions that produced a given the derivationlet an elementary event in the derivation the probability associated with each model action is determined as described in section 31 based on counts c x one set for each model componentassuming that the deleted interpolation coefficients and the count ranges used for tying them stay fixed these counts are the only parameters to be reestimated in an eventual reestimation procedure indeed once a set of counts c x is specified for a given model m we can easily calculate in ky xn for all context orders n 0 maximumorder this is all we need for calculating the probability of an elementary event and then the probability of an entire derivationone training iteration of the reestimation procedure we propose is described by the following 
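The left-to-right probability assignment itself is dropped from the text above; from the surrounding description it sums the word-predictor probability over the parses present in the stacks at stage k, each weighted by its renormalized score. A sketch under those assumptions, with hypotheses carried as (state, log-probability) pairs:

```python
import math

def next_word_probability(word, hypotheses, word_predictor):
    """P(w_{k+1} | W_k) as a mixture over the parses surviving in the stacks.

    hypotheses     : list of (parse_state, logprob) pairs currently in S_k
    word_predictor : callable(word, parse_state) -> P(word | exposed heads)
    Each hypothesis contributes with weight rho = its share of the probability
    mass among the surviving parses (renormalized over S_k only).
    """
    max_lp = max(lp for _, lp in hypotheses)
    weights = [math.exp(lp - max_lp) for _, lp in hypotheses]   # stable renorm.
    z = sum(weights)
    return sum((w / z) * word_predictor(word, state)
               for (state, _), w in zip(hypotheses, weights))

def perplexity(word_probs):
    """Word-level perplexity from the per-word conditional probabilities."""
    return math.exp(-sum(math.log(p) for p in word_probs) / len(word_probs))


hyps = [("parse_a", -12.1), ("parse_b", -13.0)]
predictor = lambda w, state: {"parse_a": 0.02, "parse_b": 0.005}[state]
print(round(next_word_probability("after", hyps, predictor), 4))   # ~0.0157
```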
algorithm nbest parse development data countsei prepare countse for each model component c gather_counts development model_c in the parsing stage we retain for each quotnbestquot hypothesis k 1 n only the quantity 0 p nn1 p kw7 t and its derivationwe then scan all the derivations in the quotdevelopment setquot and for each occurrence of the elementary event x in derivation we accumulate the value 0 in the c x counter to be used in the next iterationthe intuition behind this procedure is that 0 is an approximation to the pw probability which places all its mass on the parses that survived the parsing process the above procedure simply accumulates the expected values of the counts om x under the 0 conditional distributionas explained previously the om x counts are the parameters defining our model making our procedure similar to a rigorous them approach a particular and very interesting case is that of events which had count zero but get a nonzero count in the next iteration caused by the quotnbestquot nature of the reestimation processconsider a given sentence in our quotdevelopmentquot setthe quotnbestquot derivations for this sentence are trajectories through the state space of our modelthey will change from one iteration to the other due to the smoothing involved in the probability estimation and the change of the parameters event counts defining our model thus allowing new events to appear and discarding others through purging low probability events from the stacksthe higher the number of trajectories per sentence the more dynamic this change is expected to bethe results we obtained are presented in the experiments sectionall the perplexity evaluations were done using the lefttoright formula for which the perplexity on the quotdevelopment setquot is not guaranteed to decrease from one iteration to anotherhowever we believe that our reestimation method should not increase the approximation to perplexity based on again on the quotdevelopment setquot we rely on the consistency property outlined at the end of section 33 to correlate the desired decrease in l2rpeople with that in sumpeopleno claim can be made about the change in either l2rpeople or sumpeople on test dataeach model component wordpredictor tagger parser is trained initially from a set of parsed sentences after each parse tree undergoes these are the initial parameters used with the reestimation procedure described in the previous sectionin order to get initial statistics for our model components we needed to binarize the upenn treebank parse trees and percolate headwordsthe procedure we used was to first percolate headwords using a contextfree rulebased approach and then binarize the parses by using a rulebased approach againthe headword of a phrase is the word that best represents the phrase all the other words in the phrase being modifiers of the headwordstatistically speaking we were satisfied with the output of an enhanced version of the procedure described in also known under the name quotmagerman sz black headword percolation rulesquotonce the position of the headword within a constituent equivalent with a cf production of the type z 4 y1 y2 where z yr are nonterminal labels or postags is identified to be k we binarize the constituent as follows depending on the z identity a fixed rule is used to decide which of the two binarization schemes in figure 8 to applythe intermediate nodes created by the above binarization schemes receive the nonterminal label zdue to the low speed of the parser 200 wdsmin for stack depth 10 
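One iteration of the count-gathering step can be sketched as below. Derivations are represented as lists of (model component, outcome, context) elementary events and hypothesis scores as log-probabilities, which are packaging choices of mine; the weight accumulated per event is the hypothesis' normalized share of probability mass among the N-best parses of its sentence (the symbol for this weight is garbled in the text above).

```python
import math
from collections import defaultdict

def gather_counts(nbest_per_sentence):
    """One iteration of "N-best" count gathering for the reestimation step.

    nbest_per_sentence : for each development sentence, a list of
        (derivation, logprob) pairs that survived pruning; a derivation is a
        list of elementary events (model_component, y, context).
    """
    counts = defaultdict(float)
    for nbest in nbest_per_sentence:
        max_lp = max(lp for _, lp in nbest)
        weights = [math.exp(lp - max_lp) for _, lp in nbest]
        z = sum(weights)
        for (derivation, _), w in zip(nbest, weights):
            rho = w / z                       # normalized hypothesis weight
            for component, y, context in derivation:
                counts[(component, y, tuple(context))] += rho
    return counts


nbest = [[([("word_predictor", "after", ("ended", "contract"))], -10.0),
          ([("word_predictor", "after", ("cents", "7"))], -11.5)]]
for key, c in gather_counts(nbest).items():
    print(key, round(c, 3))
```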
and logprobability threshold 691 nats we could carry out the reestimation technique described in section 34 on only 1 mwds of training datafor convenience we chose to work on the upenn treebank corpusthe vocabulary sizes were the training data was split into quotdevelopmentquot set 929564wds and quotcheck setquot 73760wds the test set size was 82430wds the quotcheckquot set has been used for estimating the interpolation weights and tuning the search parameters the quotdevelopmentquot set has been used for gatheringestimating counts the test set has been used strictly for evaluating model performancetable 1 shows the results of the reestimation technique presented in section 34we achieved a reduction in testdata perplexity bringing an improvement over a deleted interpolation trigram model whose perplexity was 16714 on the same trainingtest data the reduction is statistically significant according to a sign testsimple linear interpolation between our model and the trigram model yielded a further improvement in people as shown in table 2the interpolation weight was estimated on check data to be a 036an overall relative reduction of 11 over the trigram model has been achievedthe large difference between the perplexity of our model calculated on the quotdevelopmentquot set used for model parameter estimation and quottestquot set unseen data shows that the initial point we choose for the parameter values has already captured a lot of information from the training datathe same problem is encountered in standard ngram language modeling however our approach has more flexibility in dealing with it due to the possibility of reestimating the model parameterswe believe that the above experiments show the potential of our approach for improved language modelsour future plans includethis research has been funded by the nsf iri19618874 grant the authors would like to thank sanjeev khudanpur for his insightful suggestionsalso to harry printz eric ristad andreas stolcke dekai wu and all the other members of the dependency modeling group at the summer96 dod workshop for useful comments on the model programming support and an extremely creative environmentalso thanks to eric brill sanjeev khudanpur david yarowsky radu florian lidia mangu and jun wu for useful input during the meetings of the people working on our stimulate grant
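The simple linear interpolation with the trigram reported above amounts to the following per-word combination, with the weight of roughly 0.36 estimated on check data; in this sketch both component models are passed in as callables.

```python
import math

def interpolated_perplexity(test_stream, p_trigram, p_slm, lam=0.36):
    """Perplexity of lam * P_trigram + (1 - lam) * P_structured over a test stream.

    test_stream : iterable of (word, trigram_context, slm_context) triples
    p_trigram, p_slm : callables returning each model's word probability
    lam : interpolation weight for the trigram (0.36 on check data in the text)
    """
    log_sum, n = 0.0, 0
    for word, tri_ctx, slm_ctx in test_stream:
        p = lam * p_trigram(word, tri_ctx) + (1 - lam) * p_slm(word, slm_ctx)
        log_sum += math.log(p)
        n += 1
    return math.exp(-log_sum / n)
```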
P98-1035
exploiting syntactic structure for language modelingthe paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history thus enabling the use of long distance dependenciesthe model assigns probability to every joint sequence of wordsbinaryparsestructure with headword annotation and operates in a lefttoright manner therefore usable for automatic speech recognitionthe model its probabilistic parameterization and a set of experiments meant to evaluate its predictive power are presented an improvement over standard trigram modeling is achievedwe choose the lexical heads of the two previous constituents as determined by a shiftreduce parser and find that this works better than a trigram modelwe condition on linguistically relevant words by assigning partial phrase structure to the history and percolating headwords
investigating regular sense extensions based on intersective levin classes in this paper we specifically address questions of polysemy with respect to verbs and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases we see verb classes as the key to making generalizations about regular extensions of meaning current approaches to english classification levin classes and wordnet have limitations in their applicability that impede their utility as general classification schemes we present a refinement of levin classes intersective sets which are a more finegrained classification and have more coherent sets of syntactic frames and associated semantic components we have preliminary indications that the membership of our intersective sets will be more compatible with wordnet than the original levin classes we also have begun to examine related classes in portuguese and find that these verbs demonstrate similarly coherent syntactic and semantic properties the difficulty of achieving adequate handcrafted semantic representations has limited the field of natural language processing to applications that can be contained within welldefined subdomainsthe only escape from this limitation will be through the use of automated or semiautomated methods of lexical acquisitionhowever the field has yet to develop a clear consensus on guidelines for a computational lexicon that could provide a springboard for such methods although attempts are being made the authors would like to acknowledge the support of darpa grant n6600194c6043 aro grant daah0494g0426 and capes grant 0914952one of the most controversial areas has to do with polysemywhat constitutes a clear separation into senses for any one verb and how can these senses be computationally characterized and distinguishedthe answer to this question is the key to breaking the bottleneck of semantic representation that is currently the single greatest limitation on the general application of natural language processing techniquesin this paper we specifically address questions of polysemy with respect to verbs and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phraseswe base these regular extensions on a finegrained variation on levin classes intersective levin classes as a source of semantic components associated with specific adjunctswe also examine similar classes in portuguese and the predictive powers of alternations in this language with respect to the same semantic componentsthe difficulty of determining a suitable lexical representation becomes multiplied when more than one language is involved and attempts are made to map between thempreliminary investigations have indicated that a straightforward translation of levin classes into other languages is not feasible however we have found interesting parallels in how portuguese and english treat regular sense extensionstwo current approaches to english verb classifications are wordnet and levin classes wordnet is an online lexical database of english that currently contains approximately 120000 sets of noun verb adjective and adverb synonyms each representing a lexicalized concepta synset contains besides all the word forms that can refer to a given concept a definitional gloss and in most cases an example sentencewords and synsets are interrelated by means of lexical and semanticconceptual links respectivelyantonymy or semantic opposition links individual words while the supersubordinate relation links 
entire synsetswordnet was designed principally as a semantic network and contains little syntactic informationlevin verb classes are based on the ability of a verb to occur or not occur in pairs of syntactic frames that are in some sense meaning preserving the distribution of syntactic frames in which a verb can appear determines its class membershipthe fundamental assumption is that the syntactic frames are a direct reflection of the underlying semanticslevin classes are supposed to provide specific sets of syntactic frames that are associated with the individual classesthe sets of syntactic frames associated with a particular levin class are not intended to be arbitrary and they are supposed to reflect underlying semantic components that constrain allowable argumentsfor example break verbs and cut verbs are similar in that they can all participate in the transitive and in the middle construction john broke the window glass breaks easily john cut the bread this loaf cuts easilyhowever only break verbs can also occur in the simple intransitive the window broke the bread cutin addition cut verbs can occur in the conative john valiantly cuthacked at the frozen loaf but his knife was too dull to make a dent in it whereas break verbs cannot john broke at the windowthe explanation given is that cut describes a series of actions directed at achieving the goal of separating some object into piecesit is possible for these actions to be performed without the end result being achieved but where the cutting manner can still be recognized ie john cut at the loafwhere break is concerned the only thing specified is the resulting change of state where the object becomes separated into piecesif the result is not achieved there are no attempted breaking actions that can still be recognizedit is not clear how much wordnet synsets should be expected to overlap with levin classes and preliminary indications are that there is a wide discrepancy however it would be useful for the wordnet senses to have access to the detailed syntactic information that the levin classes contain and it would be equally useful to have more guidance as to when membership in a levin class does in fact indicate shared semantic componentsof course some levin classes such as braid are clearly not intended to be synonymous which at least partly explains the lack of overlap between levin and wordnetthe association of sets of syntactic frames with individual verbs in each class is not as straightforward as one might supposefor instance carry verbs are described as not taking the conative the mother carried at the baby and yet many of the verbs in the carry class are also listed in the pushpull class which does take the conativethis listing of a verb in more than one class is left open to interpretation in levindoes it indicate that more than one sense of the verb is involved or is one sense primary and the alternations for that class should take precedence over the alternations for the other classes in which the verb is listedthe grounds for deciding that a verb belongs in a particular class because of the alternations that it does not take are elusive at bestwe augmented the existing database of levin semantic classes with a set of intersective classes which were created by grouping together subsets of existing classes with overlapping membersall subsets were included which shared a minimum of three membersif only one or two verbs were shared between two classes we assumed this might be due to homophony an idiosyncrasy involving 
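The idea that the distribution of licensed frames determines class membership can be made concrete with a toy sketch: represent each verb by the set of frames it accepts and group verbs with identical signatures. The frame labels and the yes/no judgments below simply restate the break/cut contrast discussed above and are illustrative only.

```python
from collections import defaultdict

# Illustrative frame signatures: break verbs allow the simple intransitive but
# not the conative; cut verbs allow the conative but not the simple intransitive.
FRAMES = {
    "break":   {"transitive", "middle", "simple_intransitive"},
    "shatter": {"transitive", "middle", "simple_intransitive"},
    "cut":     {"transitive", "middle", "conative"},
    "hack":    {"transitive", "middle", "conative"},
}

def classes_by_frame_signature(frames):
    """Group verbs whose sets of licensed syntactic frames are identical."""
    classes = defaultdict(set)
    for verb, signature in frames.items():
        classes[frozenset(signature)].add(verb)
    return dict(classes)


for sig, verbs in classes_by_frame_signature(FRAMES).items():
    print(sorted(sig), "->", sorted(verbs))
```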
individual verbs rather than a systematic relationship involving coherent sets of verbsthis filter allowed us to reject the potential intersective class that would have resulted from combining the remove verbs with the scribble verbs for examplethe sole member of this intersection is the verb mantic classes such that ci n n cal c where e is a relevance cutoffwe then reclassified the verbs in the database as followsa verb was assigned membership in an intersective class if it was listed in each of the existing classes that were combined to form the new intersective classsimultaneously the verb was removed from the membership lists of those existing classessome of the large levin classes comprise verbs that exhibit a wide range of possible semantic components and could be divided into smaller subclassesthe split verbs do not obviously form a homogeneous semantic classinstead in their use as split verbs each verb manifests an extended sense that can be paraphrased as quotseparate by vingquot where quotvquot is the basic meaning of that verb many of the verbs that do not have an inherent semantic component of quotseparatingquot belong to this class because of the component of force in their meaningthey are interpretable as verbs of splitting or separating only in particular syntactic frames the branch but not i pulled the twig and the branchthe adjunction of the apart adverb adds a change of state semantic component with respect to the object which is not present otherwisethese fringe split verbs appear in several other intersective classes that highlight the force aspect of their meaningfigure 2 depicts the intersection of split carry and pushpullfigure 2 intersective class formed from levin carry pushpull and split verbs verbs in are not listed by levin in all the intersecting classes but participate in all the alternations the intersection between the pushpull verbs of exerting force the carry verbs and the split verbs illustrates how the force semantic component of a verb can also be used to extend its meaning so that one can infer a causation of accompanied motiondepending on the particular syntactic frame in which they appear members of this intersective class can be used to exemplify any one of the the component levin classes for each levin verb class is not always complete so to check if a particular verb belongs to a class it is better to check that the verb exhibits all the alternations that define the classsince intersective classes were built using membership lists rather than the set of defining alternations they were similarly incompletethis is an obvious shortcoming of the current implementation of intersective classes and might affect the choice of 3 as a relevance cutoff in later implementations although the levin classes that make up an intersective class may have conflicting alternations this does not invalidate the semantic regularity of the intersective classas a verb of exerting force push can appear in the conative alternation which emphasizes its force semantic component and ability to express an quotattemptedquot action where any result that might be associated with the verb is not necessarily achieved as a carry verb push cannot take the conative alternation which would conflict with the core meaning of the carry verb class the critical point is that while the verb meaning can be extended to either quotattemptedquot action or directed motion these two extensions cannot cooccur they are mutually exclusivehowever the simultaneous potential of mutually exclusive 
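The construction of intersective classes, as described above, intersects the member lists of existing classes, keeps intersections with at least three shared verbs, and moves those verbs out of the component classes. A sketch under two simplifications of mine: only subsets of up to a fixed number of classes are enumerated (full subset enumeration is exponential), and no attempt is made to resolve nesting between a pairwise intersection and a larger one that contains it. The toy member lists in the demo are illustrative, not Levin's actual classes.

```python
from itertools import combinations

def intersective_classes(classes, min_shared=3, max_arity=3):
    """Build intersective classes from overlapping class member lists.

    classes    : dict mapping class name -> set of member verbs
    min_shared : intersections with fewer members are rejected (homophony filter)
    max_arity  : only subsets of up to this many classes are intersected here
    Returns (intersective, residual), where residual holds the original classes
    with the reassigned verbs removed.
    """
    residual = {name: set(members) for name, members in classes.items()}
    intersective = {}
    for k in range(2, max_arity + 1):
        for names in combinations(sorted(classes), k):
            shared = set.intersection(*(classes[n] for n in names))
            if len(shared) >= min_shared:
                intersective[" & ".join(names)] = shared
                for n in names:            # remove verbs from component classes
                    residual[n] -= shared
    return intersective, residual


levin = {   # toy member lists for illustration only
    "carry":     {"carry", "drag", "haul", "push", "pull", "shove", "tow"},
    "push/pull": {"push", "pull", "shove", "tug", "press", "draw"},
    "split":     {"push", "pull", "shove", "split", "rip", "tear"},
}
inter, rest = intersective_classes(levin)
print(sorted(inter))   # pairwise and triple intersections sharing {push, pull, shove}
```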
extensions is not a problemit is exactly those verbs that are triplelisted in the splitpushcarry intersective class that can take the conativethe carry verbs that are not in the intersective class are more quotpurequot examples of the carry class and always imply the achievement of causation of motionthus they cannot take the conative alternationeven though the levin verb classes are defined by their syntactic behavior many reflect semantic distinctions made by wordnet a classification hierarchy defined in terms of purely semantic word relations when examining in detail the intersective classes just described which emphasize not only the individual classes but also their relation to other classes we see a rich semantic lattice much like wordnetthis is exemplified by the levin cut verbs and the intersective class formed by the cut verbs and split verbsthe original intersective class exhibits alternations of both parent classes and has been augmented with chip clip slash snip since these cut verbs also display the syntactic properties of split verbswordnet distinguishes two subclasses of cut differentiated by the type of result this distinction appears in the secondorder levin classes as membership vs nonmembership in the intersective class with splitlevin verb classes are based on an underlying lattice of partial semantic descriptions which are manifested indirectly in diathesis alternationswhereas high level semantic relations are represented directly in wordnet they can sometimes be inferred from the intersection between levin verb classes as with the cutsplit classhowever other intersective classes such as the splitpushcarry class are no more consistent with wordnet than the original levin classesthe most specific hypernym common to all the verbs in this intersective class is move displace which is also a hypernym for other carry verbs not in the intersectionin addition only one verb has a wordnet sense corresponding to the change of state separation semantic component associated with the split classthe fact that the split sense for these verbs does not appear explicitly in wordnet is not surprising since it is only an extended sense of the verbs and separation is inferred only when the verb occurs with an appropriate adjunct such as aparthowever apart can also be used with other classes of verbs including many verbs of motionto explicitly list separation as a possible sense for all these verbs would be extravagant when this sense can be generated from the combination of the adjunct with the force or motion semantic component of the verbwordnet does not currently provide a consistent treatment of regular sense extension it would be straightforward to augment it with pointers indicating which senses are basic to a class of verbs and which can be generated automatically and include corresponding syntactic informationfigure 3 shows intersective classes involving two classes of verbs of manner of motion and a class of verbs of existence roll and run verbs have semantic components describing a manner of motion that typically though not necessarily involves change of locationin the absence of a goal or path adjunct they do not specify any direction of motion and in some cases require the adjunct to explicitly specify any displacement at allthe two classes differ in that roll verbs relate to manners of motion characteristic of inanimate entities while run verbs describe manners in which animate entities can movesome manner of motion verbs allow a transitive alternation in addition to the basic 
intransitivewhen a roll verb occurs in the transitive the subject physically causes the object to move whereas the subject of a transitive run verb merely induces the object to move some verbs can be used to describe motion of both animate and inanimate objects and thus appear in both roll and run verb classesthe slide class partitions this rollrun intersection into verbs that can take the transitive alternation and verbs that cannot verbs in the sliderollrun intersection are also allowed to appear in the dative alternation in which the sense of change of location is extended to change of possessionwhen used intransitively with a path prepositional phrase some of the manner of motion verbs can take on a sense of pseudomotional existence in which the subject does not actually move but has a shape that could describe a path for the verb these verbs are listed in the intersective classes with meander verbs of existencethe portuguese verbs we examined behaved much more similarly to their english counterparts than we expectedmany of the verbs participate in alternations that are direct translations of the english alternationshowever there are some interesting differences in which sense extensions are allowedwe have made a preliminary study of the portuguese translation of the carry verb classas in english these verbs seem to take different alternations and the ability of each to participate in an alternation is related to its semantic contenttable 1 shows how these portuguese verbs naturally cluster into two different subclasses based on their ability to take the conative and apart alternations as well as path prepositionsthese subclasses correspond very well to the english subclasses created by the intersective classthe conative alternation in portuguese is mainly contra and the apart alternation is mainly separando for example eu puxei o ramo e o galho separandoos and ele empurrou contra a parede we also investigated the portuguese translation of some intersective classes of motion verbswe selected the sliderollrun meanderroll and rollrun intersective classesmost verbs have more than one translation into portuguese so we chose the translation that best described the meaning or that had the same type of arguments as described in levin verb classesthe elements of the sliderollrun class are rebater flutuar rolar and deslizar the resultative in portuguese cannot be expressed in the same way as in englishit takes a gerund plus a reflexive as in a porta deslizou abrindose transitivity is also not always preserved in the translationsfor example flutuar does not take a direct object so some of the alternations that are related to its transitive meaning are not presentfor these verbs we have the induced action alternation by using the light verb fazer before the verb as in maria fez o barco flutuar as can be seen in table 2 the alternations for the portuguese translations of the verbs in this intersective class indicate that they share similar properties with the english verbs including the causativeinchoativethe exception to this as just noted is flutuar the result of this is that flutuar should move out of the slide class which puts it with derivar and planar in the closely related rollrun classas in english derivar and planar are not externally controllable actions and thus do not take the causativeinchoative alternation common to other verbs in the roll classplanar does not take a direct object in portuguese and it shows the induced action alternation the same way as flutuar derivar is usually 
said as quotestar a derivaquot showing its noncontrollable action more explicitlywe have presented a refinement of levin classes intersective classes and discussed the potential for mapping them to wordnet senseswhereas each wordnet synset is hierarchicalized according to only one aspect levin recognizes that verbs in a class may share many different semantic features without designating one as primaryintersective levin sets partition these classes according to more coherent subsets of features in effect highlighting a lattice of semantic features that determine the sense of a verbgiven the incompleteness of the list of members of levin classes each verb must be examined to see whether it exhibits all the alternations of a classthis might be approximated by automatically extracting the syntactic frames in which the verb occurs in corpus data rather than manual analysis of each verb as was done in this studywe have also examined a mapping between the english verbs that we have discussed and their portuguese translations which have several of the same properties as the corresponding verbs in englishmost of these verbs take the same alternations as in english and by virtue of these alternations achieve the same regular sense extensionsthere are still many questions that require further investigationfirst since our experiment was based on a translation from english to portuguese we can expect that other verbs in portuguese would share the same alternations so the classes in portuguese should by no means be considered completewe will be using resources such as dictionaries and online corpora to investigate potential additional members of our classessecond since the translation mappings may often be manytomany the alternarebater flutuar rolar deslizar derivar planar dative yes yes yes conative no no no causinch yes yes yes middle yes yes yes accept coref yes yes yes causinch yes yes yes yes yes yes resultative yes yes yes yee yes yee adject part yes yes yes ind action yes yes yes yes no yes locat invers yes yes yes yes yes yes measure yee yes yes yee yes yes adj perf no no no no no no cogn object no no no no no no zero nom yes yes no yes yes yes tions may depend on which translation is chosen potentially giving us different clusters but it is uncertain to what extent this is a factor and it also requires further investigationin this experiment we have tried to choose the portuguese verb that is most closely related to the description of the english verb in the levin classwe expect these crosslinguistic features to be useful for capturing translation generalizations between languages as discussed in the literature in pursuing this goal we are currently implementing features for motion verbs in the english treeadjoining grammar tag tags have also been applied to portuguese in previous work resulting in a small portuguese grammar we intend to extend this grammar building a more robust tag grammar for portuguese that will allow us to build an englishportuguese transfer lexicon using these features
P98-1046
investigating regular sense extensions based on intersective levin classesin this paper we specifically address questions of polysemy with respect to verbs and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phraseswe see verb classes as the key to making generalizations about regular extensions of meaningcurrent approaches to english classification levin classes and wordnet have limitations in their applicability that impede their utility as general classification schemeswe present a refinement of levin classes intersective sets which are a more finegrained classification and have more coherent sets of syntactic frames and associated semantic componentswe have preliminary indications that the membership of our intersective sets will be more compatible with wordnet than the original levin classeswe also have begun to examine related classes in portuguese and find that these verbs demonstrate similarly coherent syntactic and semantic propertieswe show that multiple listings could in some cases be interpreted as regular sense extensions and defined intersective levin classes which are a more finegrained syntactically and semantically coherent refinement of basic levin classeswe argue that the use of syntactic frames and verb classes can simplify the definition of different verb senses
an ir approach for translating new words from nonparallel comparable texts in recent years there is a phenomenal growth in the amount of online text material available from the greatest information repository known as the world wide webvarious traditional information retrieval techniques combined with natural language processing techniques have been retargeted to enable efficient access of the wwwsearch engines indexing relevance feedback query term and keyword weighting document analysis document classification etcmost of these techniques aim at efficient online search for information already on the webmeanwhile the corpus linguistic community regards the www as a vast potential of corpus resourcesit is now possible to download a large amount of texts with automatic tools when one needs to compute for example a list of synonyms or download domainspecific monolingual texts by specifying a keyword to the search engine and then use this text to extract domainspecific termsit remains to be seen how we can also make use of the multilingual texts as nlp resourcesin the years since the appearance of the first papers on using statistical models for bilingual lexicon compilation and machine translation large amount of human effort and time has been invested in collecting parallel corpora of translated textsour goal is to alleviate this effort and enlarge the scope of corpus resources by looking into monolingual comparable textsthis type of texts are known as nonparallel corporasuch nonparallel monolingual texts should be much more prevalent than parallel textshowever previous attempts at using nonparallel corpora for terminology translation were constrained by the inadequate availability of samedomain comparable texts in electronic formthe type of nonparallel texts obtained from the ldc or university libraries were often restricted and were usually outofdate as soon as they became availablefor new word translation the timeliness of corpus resources is a prerequisite so is the continuous and automatic availability of nonparallel comparable texts in electronic formdata collection effort should not inhibit the actual translation effortfortunately nowadays the world wide web provides us with a daily increase of fresh uptodate multilingual material together with the archived versions all easily downloadable by software tools running in the backgroundit is possible to specify the url of the online site of a newspaper and the start and end dates and automatically download all the daily newspaper materials between those datesin this paper we describe a new method which combines ir and nlp techniques to extract new word translation from automatically downloaded englishchinese nonparallel newspaper textsto improve the performance of a machine translation system it is often necessary to update its bilingual lexicon either by human lexicographers or statistical methods using large corporaup until recently statistical bilingual lexicon compilation relies largely on parallel corporathis is an undesirable constraint at timesin using a broadcoverage englishchinese mt system to translate some text recently we discovered that it is unable to translate mel liougan which occurs very frequently in the textother words which the system cannot find in its 20000entry lexicon include proper names such as the taiwanese president lee tenghui and the hong kong chief executive tung cheehwato our disappointment we cannot locate any parallel texts which include such words since they only start to appear frequently in recent 
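Downloading a date range of archived newspaper issues, as mentioned above, is a matter of iterating over days and fetching one page per day; the URL template in this sketch is purely hypothetical and no particular site's archive layout is implied.

```python
import os
from datetime import date, timedelta
from urllib.request import urlopen

def download_archive(url_template, start, end, out_dir="corpus"):
    """Fetch one archived page per day between start and end (inclusive)."""
    os.makedirs(out_dir, exist_ok=True)
    day = start
    while day <= end:
        url = url_template.format(date=day.strftime("%Y%m%d"))
        try:
            html = urlopen(url, timeout=30).read()
            path = os.path.join(out_dir, day.strftime("%Y%m%d") + ".html")
            with open(path, "wb") as f:
                f.write(html)
        except OSError:
            pass                       # missing day: skip and continue
        day += timedelta(days=1)

# hypothetical archive layout, for illustration only:
# download_archive("https://example.com/archive/{date}/index.html",
#                  date(1997, 12, 12), date(1997, 12, 31))
```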
monthsa quick search on the web turned up archives of multiple local newspapers in english and chineseour challenge is to find the translation of 1133 i liougan and other words from this online nonparallel comparable corpus of newspaper materialswe choose to use issues of the english newspaper hong kong standard and the chinese newspaper mingpao from dec1297 to dec3197 as our corpusthe english text contains about 3 mb of text whereas the chinese text contains 88 mb of 2 byte character textsso both texts are comparable in sizesince they are both local mainstream newspapers it is reasonable to assume that their contents are comparable as wellunlike in parallel texts the position of a word in a text does not give us information about its translation in the other language suggest that a content word is closely associated with some words in its contextas a tutorial example we postulate that the words which appear in the context ofmet iliougan should be similar to the words appearing in the context of its english translation fluwe can form a vector space model of a word in terms of its context word indices similar to the vector space model of a text in terms of its constituent word indices the value of the ith dimension of a word vector w is f if the ith word in the lexicon appears f times in the same sentences as w left columns in table 1 and table 2 show the list of content words which appear most frequently in the context of flu and africa respectivelythe right column shows those which occur most frequently in the context of bvgwe can see that the context of at is more similar to that of flu than to that of africaso the first clue to the similarity between a word and its translation number of common words in their contextsin a bilingual corpus the quotcommon wordquot is actually a bilingual word pairwe use the lexicon of the mt system to quotbridgequot all bilingual word pairs in the corporathese word pairs are used as seed wordswe found that the contexts of flu and mi iliougan share 233 quotcommonquot context words whereas the contexts of africa and wiff iliougan share only 121 common words even though the context of flu has 491 unique words and the context of africa has 328 wordsin the vector space model wflu and wliougan has 233 overlapping dimensions whereas there are 121 overlapping dimensions between wflu and w africathe flu example illustrates that the actual ranking of the context word frequencies provides a second clue to the similarity between a bilingual word pairfor example virus ranks very high for both flu and me iliougan and is a strong quotbridgequot between this bilingual word pairthis leads us to use the term frequency measurethe tf of a context word is defined as the frequency of the word in the context of w however the tf of a word is not independent of its general usage frequencyin an extreme case the function word the appears most frequently in english texts and would have the highest tf in the context of any w in our hkstandardmingpao corpus hong kong is the most frequent content word which appears everywhereso in the flu example we would like to reduce the significance of hong kong tf while keeping that of virusa common way to account for this difference is by using the inverse document frequencyamong the variants of idf we choose the following representation from next a ranking algorithm is needed to match the unknown word vectors to their counterparts in the other languagea ranking algorithm selects the best target language candidate for a source language word 
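Building the context vectors can be sketched as follows: for each unknown word, count how often each seed word occurs in the same sentences (the TF), then weight by IDF. The exact IDF variant is garbled in the text; the form used here, idf_i = log(max_n / n_i) + 1 with max_n the maximum frequency of any word in the corpus and n_i the total occurrences of word i, is an assumption consistent with the quantities named below.

```python
import math
from collections import Counter, defaultdict

def context_vectors(sentences, seed_words, targets):
    """TF-IDF context vectors over seed-word dimensions.

    sentences  : list of tokenized sentences (lists of words) in one language
    seed_words : the words of this language covered by the bilingual lexicon
    targets    : the unknown words we want vectors for
    w_iw = tf_iw * idf_i, where tf_iw counts how often seed word i occurs in
    the same sentences as w.
    """
    corpus_freq = Counter(w for s in sentences for w in s)
    max_n = max(corpus_freq.values())
    idf = {i: math.log(max_n / corpus_freq[i]) + 1
           for i in seed_words if corpus_freq[i] > 0}

    tf = defaultdict(Counter)                 # tf[w][seed] = co-occurrence count
    for s in sentences:
        present_targets = [w for w in s if w in targets]
        if not present_targets:
            continue
        seed_counts = Counter(w for w in s if w in idf)
        for w in present_targets:
            tf[w].update(seed_counts)

    return {w: {i: c * idf[i] for i, c in ctx.items()} for w, ctx in tf.items()}


sents = [["the", "flu", "virus", "spread"], ["hong", "kong", "flu", "cases"]]
print(context_vectors(sents, seed_words={"virus", "spread", "cases", "hong"},
                      targets={"flu"}))
```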
according to direct comparison of some similarity measures we modify the similarity measure proposed by into the following so where maxn ni the maximum frequency of any word in the corpus the total number of occurrences of word i in the corpus so where wic wie veliwic2 x eliwie2 tfic tfie the idf of virus is 181 and that of hong kong is 123 in the english textthe idf of ws is 192 and that of hong kong is 083 in chineseso in both cases virus is a stronger quotbridgequot for biglliougan than hong konghence for every context seed word i we assign a word weighting factor w tfiw x idfi where tfiw is the tf of word i in the context of word w the updated vector space model of word w has wi in its ith dimensionthe ranking of the 20 words in the contexts of wf3 iliougan is rearranged by this weighting factor as shown in table3variants of similarity measures such as the above have been used extensively in the ir community they are mostly based on the cosine measure of two vectorsfor different tasks the weighting factor might varyfor example if we add the idf into the weighting factor we get the following measure si where wic tfic x idf wie tfie x idfi in addition the dice and jaccard coefficients are also suitable similarity measures for document comparison we also implement the dice coefficient into similarity measure s2 where wic tfie x idfi si is often used in comparing a short query with a document text whereas s2 is used in comparing two document textsreasoning that our objective falls somewhere in betweenwe are comparing segments of a document we also multiply the above two measures into a third similarity measure s3in using bilingual seed words such asnet virus as quotbridgesquot for terminology translation the quality of the bilingual seed lexicon naturally affects the system outputin the case of european language pairs such as frenchenglish we can envision using words sharing common cognates as these quotbridgesquotmost importantly we can assume that the word boundaries are similar in french and englishhowever the situation is messier with english and chinesefirst segmentation of the chinese text into words already introduces some ambiguity of the seed word identitiessecondly englishchinese translations are complicated by the fact that the two languages share very little stemming properties or partofspeech set or word orderthis property causes every english word to have many chinese translations and vice versain a sourcetarget language translation scenario the translated text can be quotrearrangedquot and cleaned up by a monolingual language model in the target languagehowever the lexicon is not very reliable in establishing quotbridgesquot between nonparallel englishchinese textsto compensate for this ambiguity in the seed lexicon we introduce a confidence weighting to each bilingual word pair used as seed wordsif a word ie is the kth candidate for word ic then wite wite kithe similarity scores then become s4 and s5 and s6 s4 x s5 we also experiment with other combinations of the similarity scores such as s7 so x s5all similarity measures s3 s7 are used in the experiment for finding a translation for me in order to apply the above algorithm to find the translation for blgt i liougan from the hkstandardmingpao corpus we first use a script to select the 118 english content words which are not in the lexicon as possible candidatesusing similarity measures s3 s7 the highest ranking candidates of ms are shown in table 6s6 and s7 appear to be the best similarity measureswe then test the 
algorithm with s7 on more chinese words which are not found in the lexicon but which occur frequently enough in the mingpao textsa statistical new word extraction tool can be used to find these wordsthe unknown chinese words and their english counterparts as well as the occurrence frequencies of these words in hkstandardmingpao are shown in table 4frequency numbers with a indicates that this word does not occur frequent enough to be foundchinese words with a indicates that it is a word with segmentation and translation ambiguitiesfor example 44 could be a family name or part of another word meaning forestwhen it is used as a family name it could be transliterated into lam in cantonese or lin in mandarindisregarding all entries with a in the above table we apply the algorithm to the rest of the chinese unknown words and the 118 english unknown words from hkstandardthe output is ranked by the similarity scoresthe highest ranking translated pairs are shown in table 5the only chinese unknown words which are not correctly translated in the above list are ik6 fl lunar and mr yeltsin 1 tungcheehwa is a pair of collocates which is actually the full name of the chief executivepoultry in chinese is closely related to flu because the chinese name for bird flu is poultry fluin fact almost all unambiguous chinese new words find their translations in the first 100 of the ranked listsix of the chinese words have correct translation as their first candidateusing vector space model and similarity measures for ranking is a common approach in ir for querytext and texttext comparisons this approach has also been used by for sense disambiguation between multiple usages of the same wordsome of the early statistical terminology translation methods are these algorithms all require parallel translated texts as inputattempts at exploring nonparallel corpora for terminology translation are very few among these proposes that the association between a word and its close collocate is preserved in any language and suggests that the associations between a word and many seed words are also preserved in another languagein this paper we have demonstrated that the associations between a word and its context seed words are wellpreserved in nonparallel comparable texts of different languagesour algorithm is the first to have generated a collocation bilingual lexicon albeit small from a nonparallel comparable corpuswe have shown that the algorithm has good precision but the recall is low due to the difficulty in extracting unambiguous chinese and english wordsbetter results can be obtained when the following changes are made we will test the precision and recall of the algorithm on a larger set of unknown wordswe have devised an algorithm using context seed word tfidf for extracting bilingual lexicon from nonparallel comparable corpus in englishchinesethis algorithm takes into account the reliability of bilingual seed words and is language independentthis algorithm can be applied to other language pairs such as englishfrench or englishgermanin these cases since the languages are more similar linguistically and the seed word lexicon is more reliable the algorithm should yield better resultsthis algorithm can also be applied in an iterative fashion where highranking bilingual word pairs can be added to the seed word list which in turn can yield more new bilingual word pairs
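To make the context-vector construction above concrete, the following is a minimal sketch in Python. It is not the authors' implementation: the tokenised sentence lists, the seed-word set and the names (sentences, seed_words, etc.) are placeholders, and the IDF weighting is read off the partly garbled formula above as idf_i = log(max_n / n_i) + 1, where max_n is the maximum corpus frequency of any word and n_i the corpus frequency of word i.

import math
from collections import Counter

def context_counts(sentences, target):
    # TF of a context word: how often it occurs in the same sentences as `target`.
    counts = Counter()
    for sent in sentences:            # each sentence is a list of tokens
        if target in sent:
            for w in sent:
                if w != target:
                    counts[w] += 1
    return counts

def idf_weights(sentences):
    # idf_i = log(max_n / n_i) + 1, as read from the description above.
    freq = Counter(w for sent in sentences for w in sent)
    max_n = max(freq.values())
    return {w: math.log(max_n / n) + 1.0 for w, n in freq.items()}

def context_vector(sentences, target, seed_words, idf):
    # TF.IDF-weighted context vector of `target`, restricted to seed-word dimensions
    # so that vectors from the two languages share an index space via the seed lexicon.
    tf = context_counts(sentences, target)
    return {s: tf[s] * idf.get(s, 0.0) for s in seed_words if s in tf}

For the bilingual case, the English vector is built over the English side of the seed lexicon and the Chinese vector over the corresponding Chinese entries; the lexicon then maps one index space onto the other before any comparison, which is also where the 1/k confidence weighting for ambiguous seed pairs can be folded into the weights.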
P98-1069
an ir approach for translating new words from nonparallel comparable texts. we demonstrate that the associations between a word and its context seed words are preserved in comparable texts of different languages. we propose to represent the contexts of a word or phrase with a real-valued vector in which each element corresponds to one word in the contexts.
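Given such weighted context vectors, candidate translations can be ranked with the kinds of similarity measures discussed above. The exact definitions of S0 through S7 are hard to recover from the garbled formulas, so the sketch below simply uses a standard cosine measure and a Dice coefficient over the weighted vectors and multiplies them, in the spirit of S3; it assumes the dict-based vectors produced by the previous sketch.

import math

def cosine(v1, v2):
    # Cosine of two weighted context vectors represented as {seed_word: weight} dicts.
    shared = set(v1) & set(v2)
    num = sum(v1[k] * v2[k] for k in shared)
    den = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return num / den if den else 0.0

def dice(v1, v2):
    # Dice coefficient over the same representation.
    shared = set(v1) & set(v2)
    num = 2.0 * sum(v1[k] * v2[k] for k in shared)
    den = sum(x * x for x in v1.values()) + sum(x * x for x in v2.values())
    return num / den if den else 0.0

def rank_candidates(source_vec, candidate_vecs):
    # Rank target-language candidates by the product of the two measures (an S3-like combination).
    scored = [(cosine(source_vec, v) * dice(source_vec, v), cand)
              for cand, v in candidate_vecs.items()]
    return sorted(scored, reverse=True)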
improving data driven wordclass tagging by system combination in this paper we examine how the differences in modelling between different data driven systems performing the same nlp task can be exploited to yield a higher accuracy than the best individual system we do this by means of an experiment involving the task of morphosyntactic wordclass tagging four wellknown tagger generators are trained on the same corpus data after comparison their outputs are combined using several voting strategies and second stage classifiers all combination taggers outperform their best component with the best combination showing a 191 lower error rate than the best individual tagger in all natural language processing systems we find one or more language models which are used to predict classify andor interpret language related observationstraditionally these models were categorized as either rulebasedsymbolic or corpusbasedprobabilisticrecent work has demonstrated clearly that this categorization is in fact a mixup of two distinct categorization systems on the one hand there is the representation used for the language model and on the other hand the manner in which the model is constructed data driven methods appear to be the more popularthis can be explained by the fact that in general hand crafting an explicit model is rather difficult especially since what is being modelled natural language is not wellunderstoodwhen a data driven method is used a model is automatically learned from the implicit structure of an annotated training corpusthis is much easier and can quickly lead to a model which produces results with a reasonably good qualityobviously reasonably good quality is not the ultimate goalunfortunately the quality that can be reached for a given task is limited and not merely by the potential of the learning method usedother limiting factors are the power of the hard and software used to implement the learning method and the availability of training materialbecause of these limitations we find that for most tasks we are faced with a ceiling to the quality that can be reached with any available machine learning systemhowever the fact that any given system cannot go beyond this ceiling does not mean that machine learning as a whole is similarly limiteda potential loophole is that each type of learning method brings its own inductive bias to the task and therefore different methods will tend to produce different errorsin this paper we are concerned with the question whether these differences between models can indeed be exploited to yield a data driven model with superior performancein the machine learning literature this approach is known as ensemble stacked or combined classifiersit has been shown that when the errors are uncorrelated to a sufficient degree the resulting combined classifier will often perform better than all the individual systems the underlying assumption is twofoldfirst the combined votes will make the system more robust to the quirks of each learner particular biasalso the use of information about each individual method behaviour in principle even admits the possibility to fix collective errorswe will execute our investigation by means of an experimentthe nlp task used in the experiment is morphosyntactic wordclass taggingthe reasons for this choice are severalfirst of all tagging is a widely researched and wellunderstood task 1998second current performance levels on this task still leave room for improvement tate of the art performance for data driven automatic wordclass 
taggers is 9697 correctly tagged wordsfinally a number of rather different methods are available that generate a fully functional tagging system from annotated textin 1992 van halteren combined a number of taggers by way of a straightforward majority vote since the component taggers all used ngram statistics to model context probabilities and the knowledge representation was hence fundamentally the same in each component the results were limitednow there are more varied systems available a variety which we hope will lead to better combination effectsfor this experiment we have selected four systems primarily on the basis of availabilityeach of these uses different features of the text to be tagged and each has a completely different representation of the language modelthe first and oldest system uses a traditional trigram model based on context statistics p and lexical statistics p directly estimated from relative corpus frequenciesthe viterbi algorithm is used to determine the most probable tag sequencesince this model has no facilities for handling unknown words a memorybased system is used to propose distributions of potential tags for words not in the lexiconthe second system is the transformation based learning system as described by brill this 1 brill system is available as a collection of c programs and perl scripts at ftp ftp cs jhu edupubbrillprograms rule_based_tagger_v 1 14 tarz system starts with a basic corpus annotation and then searches through a space of transformation rules in order to reduce the discrepancy between its current annotation and the correct one during tagging these rules are applied in sequence to new textof all the four systems this one has access to the most information contextual information as well as lexical information however the actual use of this information is severely limited in that the individual information items can only be combined according to the patterns laid down in the rule templatesthe third system uses memorybased learning as described by daelemans et al during the training phase cases containing information about the word the context and the correct tag are stored in memoryduring tagging the case most similar to that of the focus word is retrieved from the memory which is indexed on the basis of the information gain of each feature and the accompanying tag is selectedthe system used here has access to information about the focus word and the two positions before and after at least for known wordsfor unknown words the single position before and after three suffix letters and information about capitalization and presence of a hyphen or a digit are usedthe fourth and final system is the mxpost system as described by ratnaparkhi it uses a number of word and context features rather similar to system m and trains a maximum entropy model that assigns a weighting parameter to each featurevalue and combination of features that is relevant to the estimation of the probability pa beam search is then used to find the highest probability tag sequenceboth this system and brill system are used with the default settings that are suggested in their documentationthe data we use for our experiment consists of the tagged lob corpus the corpus comprises about one million words divided over 500 samples of 2000 words from 15 text typesits tagging which was manually checked and corrected is generally accepted to be quite accuratehere we use a slight adaptation of the tagsetthe changes are mainly cosmetic eg nonalphabetic characters such as quot8quot in tag 
names have been replacedhowever there has also been some retokenization genitive markers have been split off and the negative marker quotntquot has been reattachedan example sentence tagged with the resulting tagset is the ati singular or plural lord npt article major npt singular titular extended vbd noun an at singular titular invitation nn noun to in past tense of verb all abn singular article the ati singular common parliamentary jj noun candidates nns preposition sper prequantifier singular or plural article adjective plural common noun period the tagset consists of 170 different tags and has an average ambiguity of 269 tags per wordformthe difficulty of the tagging task can be judged by the two baseline measurements in table 2 below representing a completely random choice from the potential tags for each token and selection of the lexically most likely tag for our experiment we divide the corpus into three partsthe first part called train consists of 80 of the data constructed 3ditto tags are used for the components of multitoken units eg if quotas well asquot is taken to be a coordination conjunction it is tagged quotas_cc1 well_cc2 as_cc3quot using three related but different ditto tags by taking the first eight utterances of every tenthis part is used to train the individual taggersthe second part tune consists of 10 of the data and is used to select the best tagger parameters where applicable and to develop the combination methodsthe third and final part test consists of the remaining 10 and is used for the final performance measurements of all taggersboth tune and test contain around 25 new tokens and a further 02 known tokens with new tagsthe data in train and tune is to be the only information used in tagger construction all components of all taggers are to be entirely data driven and no manual adjustments are to be donethe data in test is never to be inspected in detail but only used as a benchmark tagging for quality measurementquotin order to see whether combination of the component taggers is likely to lead to improvements of tagging quality we first examine the results of the individual taggers when applied to tuneas far as we know this is also one of the first rigorous measurements of the relative quality of different tagger generators using a single tagset and dataset and identical circumstancesthe quality of the individual taggers certainly still leaves room for improvement although tagger e surprises us with an accuracy well above any results reported so far and makes us less confident about the gain to be accomplished with combinationhowever that there is room for improvement is not enoughas explained above for combination to lead to improvement the component taggers must differ in the errors that they makethat this is indeed the case can be seen in table 1it shows that for 9922 of tune at least one tagger selects the correct taghowever it is unlikely that we will be able to identify this terns between the brackets give the distribution of correctincorrect tags over the systems tag in each casewe should rather aim for optimal selection in those cases where the correct tag is not outvoted which would ideally lead to correct tagging of 9821 of the words there are many ways in which the results of the component taggers can be combined selecting a single tag from the set proposed by these taggersin this and the following sections we examine a number of themthe accuracy measurements for all of them are listed in table 25 the most straightforward selection method is an 
nway voteeach tagger is allowed to vote for the tag of its choice and the tag with the highest number of votes is selected6 the question is how large a vote we allow each taggerthe most democratic option is to give each tagger one vote however it appears more useful to give more weight to taggers which have proved their qualitythis can be general quality eg each tagger votes its overall precision or quality in relation to the current situation eg each tagger votes its precision on the suggested tag the information about each tagger quality is derived from an inspection of its results on tune5for any tag x precision measures which percentage of the tokens tagged x by the tagger are also tagged x in the benchmark and recall measures which percentage of the tokens tagged x in the benchmark are also tagged x by the taggerwhen abstracting away from individual tags precision and recall are equal and measure how many tokens are tagged correctly in this case we also use the more generic term accuracy6in our experiment a random selection from among the winning tags is made whenever there is a tiebut we have even more information on how well the taggers performwe not only know whether we should believe what they propose but also know how often they fail to recognize the correct tag this information can be used by forcing each tagger also to add to the vote for tags suggested by the opposition by an amount equal to 1 minus the recall on the opposing tag as it turns out all voting systems outperform the best single tagger e7 also the best voting system is the one in which the most specific information is used precisionrecallhowever specific information is not always superior for totprecision scores higher than tagprecisionthis might be explained by the fact that recall information is missing even the worst combinator majority is significantly better than e using mcnemar chisquare p0so far we have only used information on the performance of individual taggersa next step is to examine them in pairswe can investigate all situations where one tagger suggests t1 and the other t2 and estimate the probability that in this situation the tag should actually be tx eg if e suggests dt and t suggests cs the probabilities for the appropriate tag are cs subordinating conjunction 03276 dt determiner 06207 ql quantifier 00172 wpr whpronoun 00345 when combining the taggers every tagger pair is taken in turn and allowed to vote for each possible tag ie not just the ones suggested by the component taggersif a tag pair t1t2 has never been observed in tune we fall back on information on the individual taggers viz the probability of each tag tx given that the tagger suggested tag tinote that with this method a tag suggested by a minority of the taggers still has a chance to winin principle this could remove the restriction of gain only in 22 and 1111 casesin practice the chance to beat a majority is very slight indeed and we should not get our hopes up too high that this should happen very oftenwhen used on test the pairwise voting strategy clearly outperforms the other voting strategies8 but does not yet approach the level where all tying majority votes are handled correctly from the measurements so far it appears that the use of more detailed information leads to a better accuracy improvementit ought therefore to be advantageous to step away from the underlying mechanism of voting and to model the situations observed in tune more closelythe practice of feeding the outputs of a number of classifiers as features for a next 
learner is usually called stacking the second stage can be provided with the first level outputs and with additional information eg about the original input patternthe first choice for this is to use a memorybased second level learnerin the basic version each case consists of the tags suggested by the component taggers and the correct tagin the more advanced versions we also add information about the word in question and the tags suggested by all taggers for the previous and the next position for the first two the similarity metric used during tagging is a straightforward overlap count for the third we need to use an information gain weighting surprisingly none of the memorybased based methods reaches the quality of tagpair9 the explanation for this can be found when we examine the differences within the memorybased general strategy the more feature information is stored the higher the accuracy on tune but the lower the accuracy on testthis is most likely an overtraining effect tune is probably too small to collect case bases which can leverage the stacking effect convincingly especially since only 751 of the second stage material shows disagreement between the featured tagsto examine if the overtraining effects are specific to this particular second level classifier we also used the c50 system a commercial version of the wellknown program c45 for the induction of decision trees on the same training materia119 because c50 prunes the decision tree the overfitting of training material is less than with memorybased learning but the results on test are also worsewe conjecture that pruning is not beneficial when the interesting cases are very rareto realise the benefits of stacking either more data is needed or a second stage classifier that is better suited to this type of problemthe relation between the accuracy of combinations and that of the individual taggers is shown in table 3the most important observation is that every combination outperforms the combination of any strict subset of its componentsalso of note is the improvement yielded by the best combinationthe pairwise voting system using all four individual taggers scores 9792 correct on test a 191 reduction in error rate over the best individual system viz the maximum entropy tagger a major factor in the quality of the combination results is obviously the quality of the best component all combinations with e score higher than those without e after that the decisive factor appears to be the difference in language model t is generally a better combiner than m and r12 even though it has the lowest accuracy when operating alonea possible criticism of the proposed combi11by a margin at the edge of significance p0060812although not significantly better eg the differences within the group meeret are not significant nation scheme is the fact that for the most successful combination schemes one has to reserve a nontrivial portion of the annotated data to set the parameters for the combinationto see whether this is in fact a good way to spend the extra data we also trained the two best individual systems on a concatenation of train and tune so that they had access to every piece of data that the combination had seenit turns out that the increase in the individual taggers is quite limited when compared to combinationthe more extensively trained e scored 9751 correct on test and m 9707 our experiment shows that at least for the task at hand combination of several different systems allows us to raise the performance ceiling for data driven 
systemsobviously there is still room for a closer examination of the differences between the combination methods eg the question whether memorybased combination would have performed better if we had provided more training data than just tune and of the remaining errors eg the effects of inconsistency in the data regardless of such closer investigation we feel that our results are encouraging enough to extend our investigation of combination starting with additional component taggers and selection strategies and going on to shifts to other tagsets andor languagesbut the investigation need not be limited to wordclass tagging for we expect that there are many other nlp tasks where combination could lead to worthwhile improvementsour thanks go to the creators of the tagger generators used here for making their systems available
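As an illustration of the weighted voting schemes described above, here is a minimal sketch in Python of precision-recall voting. It is a sketch under stated assumptions rather than the authors' code: `suggestions` maps each tagger to the tag it proposes for the current token, and `precision` and `recall` hold per-tag scores measured for each tagger on the tuning set; tie-breaking (random in the paper) is left to max().

from collections import defaultdict

def precision_recall_vote(suggestions, precision, recall):
    # Each tagger votes its precision on the tag it suggests, and additionally
    # adds (1 - its recall on t) to every different tag t suggested by the others.
    votes = defaultdict(float)
    for tagger, tag in suggestions.items():
        votes[tag] += precision[tagger].get(tag, 0.0)
        for other_tagger, other_tag in suggestions.items():
            if other_tagger != tagger and other_tag != tag:
                votes[other_tag] += 1.0 - recall[tagger].get(other_tag, 1.0)
    return max(votes, key=votes.get)

Majority, TotPrecision and TagPrecision voting fall out of the same loop by changing what each tagger adds to votes[tag] (a constant 1, its overall accuracy, or its precision on the suggested tag) and dropping the inner loop.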
P98-1081
improving data driven wordclass tagging by system combination. in this paper we examine how the differences in modelling between different data driven systems performing the same nlp task can be exploited to yield a higher accuracy than the best individual system. we do this by means of an experiment involving the task of morphosyntactic wordclass tagging. four well-known tagger generators are trained on the same corpus data. after comparison, their outputs are combined using several voting strategies and second stage classifiers. all combination taggers outperform their best component, with the best combination showing a 19.1% lower error rate than the best individual tagger. we suggest three voting strategies: equal vote, where each classifier's vote is weighted equally; overall accuracy, where the weight depends on the overall accuracy of a classifier; and pairwise voting.
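The pairwise voting strategy can be sketched in the same style. The code below is an approximation, not the original system: it estimates, on tuning data, a distribution over correct tags for every observed pair of tagger suggestions, and at tagging time lets every tagger pair vote that distribution; the paper's back-off to single-tagger statistics for unseen pairs is only crudely stubbed in.

from collections import defaultdict
from itertools import combinations

def train_pairwise(tune_cases):
    # tune_cases: list of (suggestions, correct_tag), with suggestions = {tagger: tag}.
    counts = defaultdict(lambda: defaultdict(float))
    for suggestions, correct in tune_cases:
        for (a, ta), (b, tb) in combinations(sorted(suggestions.items()), 2):
            counts[(a, ta, b, tb)][correct] += 1.0
    dists = {}
    for key, c in counts.items():
        total = sum(c.values())
        dists[key] = {tag: n / total for tag, n in c.items()}
    return dists

def pairwise_vote(suggestions, dists):
    votes = defaultdict(float)
    for (a, ta), (b, tb) in combinations(sorted(suggestions.items()), 2):
        # Fall back on an even split between the two suggested tags if the pair was
        # never seen in tuning (the paper instead backs off to single-tagger statistics).
        for tag, p in dists.get((a, ta, b, tb), {ta: 0.5, tb: 0.5}).items():
            votes[tag] += p
    return max(votes, key=votes.get)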
pseudoprojectivity a polynomially parsable nonprojective dependency grammar dependency grammar has a long tradition in syntactic theory dating back to at least tesniere work from the thirtiesrecently it has gained renewed attention as empirical methods in parsing are discovering the importance of relations between words which is what dependency grammars model explicitly do but contextfree phrasestructure grammars do notone problem that has posed an impediment to more widespread acceptance of dependency grammars is the fact that there is no computationally tractable version of dependency grammar which is not restricted to projective analyseshowever it is well known that there are some syntactic phenomena that require nonprojective analysesin this paper we present a form of projectivity which we call pseudoprojectivity and we present a generative stringrewriting formalism that can generate pseudoprojective analyses and which is polynomially parsablethe paper is structured as followsin section 2 we introduce our notion of pseudoprojectivitywe briefly review a previously proposed formalization of projective dependency grammars in section 3in section 4 we extend this formalism to handle pseudoprojectivitywe informally present a parser in section 5we will use the following terminology and notation in this paperthe hierarchical order between the nodes of a tree t will be represented with the symbol t and whenever they are unambiguous the notations and will be usedwhen x y we will say that x is a descendent of y and y an ancestor of xthe projection of a node x belonging to a tree t is the set of the nodes y of t such that y xan arc between two nodes y and x of a tree t directed from y to x will be noted either or the node x will be referred to as the dependent and y as the governorthe latter will be noted when convenient xt the notations t and x are unambiguous because a node x has at most one governor in a treeas usual an ordered tree is a tree enriched with a linear order over the set of its nodesfinally if 1 is an arc of an ordered tree t then supp represents the support of 1 ie the set of the nodes of t situated between the extremities of 1 extremities includedwe will say that the elements of supp are covered by 1the notion of projectivity was introduced by and has received several different definitions since thenthe definition given here is borrowed from and definition an arc t is projective if and only if for every y covered by y x a tree t is projective if and only if every arc of t is projective a projective tree has been represented in figure 1a projective dependency tree can be associated with a phrase structure tree whose constituents are the projections of the nodes of the dependency treeprojectivity is therefore equivalent in phrase structure markers to continuity of constituentthe strong constraints introduced by the projectivity property on the relationship between hierarchical order and linear order allow us to describe word order of a projective dependency tree at a local level in order to describe the linear position of a node it is sufficient to describe its position towards its governor and sister nodesthe domain of locality of the linear order rules is therefore limited to a subtree of depth equal to oneit can be noted that this domain of locality is equal to the domain of locality of subcategorization rulesboth rules can therefore be represented together as in or separately as will be proposed in 3although most linguistic structures can be represented as projective trees it 
is well known that projectivity is too strong a constraint for dependency trees as shown by the example of figure 2 which includes a nonprojective arc who do you think she invited the non projective structures found in linguistics represent a small subset of the potential non projective structureswe will define a property weaker than projectivity called pseudoprojectivity which describes a subset of the set of ordered dependency trees containing the nonprojective linguistic structuresin order to define pseudoprojectivity we introduce an operation on dependency trees called liftingwhen applied to a tree this operation leads to the creation of a second tree a lift of the first onean ordered tree t is a lift of the ordered tree t if and only if t and t have the same nodes in the same order and for every node x xt txt we will say that the node x has been lifted from xt to xti recall that the linear position of a node in a projective tree can be defined relative to its governor and its sistersin order to define the linear order in a non projective tree we will use a projective lift of the treein this case the position of a node can be defined only with regards to its governor and sisters in the lift ie its linear governor and sistersdefinition an ordered tree t is said pseudoprojective if there exists a lift t of tree t which is projectiveif there is no restriction on the lifting the previous definition is not very interesting since we can in fact take any nonprojective tree and lift all nodes to the root node and obtain a projective treewe will therefore constrain the lifting by a set of rules called lifting rulesconsider a set of categoriesthe following definitions make sense only for trees whose nodes are labeled with categoriesthe lifting rules are of the following form this rule says that a node of category ld can be lifted from its syntactic governor of category sg to its linear governor of category lg through a path consisting of nodes of category c1 cn where the string belongs to levery set of lifting rules defines a particular property of pseudoprojectivity by imposing particular constraints on the liftinga 21t is possible to define pseudoprojectivity purely structurally for example we can impose that each node x is lifted to the highest ancestor of x covered by t the resulting pseudoprojectivity is a fairly weak extension to projectivity which nevertheless covers major nonprojective linguistic structureshowever we do not pursue a purely structural definition of pseudoprojectivity in this paper linguistic example of lifting rule is given in section 4the idea of building a projective tree by means of lifting appears in and is used by and this idea can also be compared to the notion of word order domain to the slash feature of gpsg and hpsg to the functional uncertainty of lfg and to the movea of gb theorywe define a projective dependency grammar as a stringrewriting system3 by giving a set of categories such as n v and adv4 a set of distinguished start categories a mapping from strings to categories and two types of rules dependency rules which state hierarchical order and lp rules which state linear orderthe dependency rules are further subdivided into subcategorization rules and modification rules here are some sample srules lp rules are represented as regular expressions associated with each categorywe use the hash sign to denote the position of the governor for example 3we follow throughout this paper by modeling a dependency grammar with a stringrewriting systemhowever we will 
identify a derivation with its representation as a tree and we will sometimes refer to symbols introduced in a rewrite step as quotdependent nodesquotfor a model of a dg based on treerewriting see we will call this system generative dependency grammar or gdg for shortderivations in gdg are defined as followsin a rewrite step we choose a multiset of dependency rules which contains exactly one srule and zero or more mrulesthe lefthand side nonterminal is the same as that we want to rewritecall this multiset the rewritemultisetin the rewriting operation we introduce a multiset of new nonterminals and exactly one terminal symbol the rewriting operation then must meet the following three conditions as an example consider a grammar containing the three dependency rules di d2 and d3 as well as the lp rule pi in addition we have some lexical mappings and the start symbol is vfinitea sample derivation is shown in figure 3 with the sentential form representation on top and the corresponding tree representation belowusing this kind of representation we can derive a bottomup parser in the following straightforward manner5 since syntactic and linear governors coincide we can derive deterministic finitestate machines which capture both the dependency and the lp rules for a given governor categorywe will refer to these fsms as rulefsms and if the governor is of category c we will refer to a crulefsmin a rulefsm the transitions are labeled by categories and the transition corresponding to the governor labeled by its category and a special mark this transition is called the quothead transitionquotthe entries in the parse matrix m are of the form where in is a rulefsm and q a state of it except for the entries in squares m 1 j n which also contain category labelslet wo wn be the input wordwe initialize the parse matrix as followslet c be a category of word wifirst we add c to mthen we add to m every pair such that m is a rulefsm with a transition labeled c from a start state and q the state reached after that transition6 embedded in the usual three loops on i j k we add an entry to m if is in m is in m q2 is a final state of m2 m2 is a crulefsm and mi transitions from qi to q on c there is a special case for the head transitions in mi if k i 1 c is in m mi is a crulefsm and there is a head transition from qi to q in ml then we add to mthe time complexity of the algorithm is 0 where g is the number of rulefsms derived from the dependency and lp rules in the grammar and qmax is the maximum number of states in any of the rulefsmsrecall that in a pseudoprojective tree we make a distinction between a syntactic governor and a linear governora node can be quotliftedquot along a lifting path from being a dependent of its syntactic governor to being a dependent of its linear this type of parser has been proposed previouslysee for example who also discuss earlystyle parsers for projective dependency grammarswe can use precomputed topdown prediction to limit the number of pairs added governor which must be an ancestor of the governorin defining a formal rewriting system for pseudoprojective trees we will not attempt to model the quotliftingquot as a transformational step in the derivationrather we will directly derive the quotliftedquot version of the tree where a node is dependent of its linear governorthus the derived structure resembles more a unistratal dependency representation like those used by than the multistratal representations of for example however from a formal point of view the distinction is not 
significantin order to capture pseudoprojectivity we will interpret rules of the form and as introducing syntactic dependents which may lift to a higher linear governoran lp rule of the form orders all linear dependents of the linear governor no matter whose syntactic dependents they arein addition we need a third type of rule namely a lifting rule or 1rule the 1rule can be rewrited on the following form this rule resembles normal dependency rules but instead of introducing syntactic dependents of a category it introduces a lifted dependentbesides introducing a linear dependent ld a 1rule should make sure that the syntactic governor of ld will be introduced at a later stage of the derivation and prevent it to introduce ld as its syntactic dependent otherwise non projective nodes would be introduced twice a first time by their linear governor and a second time by their syntactic governorthis condition is represented in the rule by means of a constraint on the categories found along the lifting paththis condition which we call the lifting condition is represented by the regular expression lg w sgthe regular expression representing the lifting condition is enriched with a dot separating on its left the part of the lifting path which has already been introduced during the rewriting and on its right the part which is still to be introduced for the rewriting to be validthe dot is an unperfect way of representing the current state in a finite state automaton equivalent to the regular expressionwe can further notice that the lifting condition ends with a repetition of ld for reasons which will be made clear when discussing the rewriting processa sentential form contains terminal strings and categories paired with a multiset of lifting conditions called the lift multisetthe lift multiset associated to a category c contains transiting lifting conditions introduced by ancestors of c and passing across c three cases must be distinguished when rewriting a category c and its lifting multiset lm lm contains a single lifting condition which dot is situated to its right lg w sg cin such a case c must be rewritten by the empty stringthe situation of the dot at the right of the lifting condition indicates that c has been introduced by its syntactic governor although it has already been introduced by its linear governor earlier in the rewriting processthis is the reason why c has been added at the end of the lifting condition of the 1rules used in the rewriting operation3the lifting conditions contained in the lift multiset of all the newly introduced dependents d should be compatible with d with the dot advanced appropriatelyin addition we require that when we rewrite a category as a terminal the lift multiset is emptylet us consider an examplesuppose we have have a grammar containing the dependency rules di d2 and d3 the lp rule pi and p2 this rule says that an objective whnoun with feature top which depends on a verb with no further restrictions can raise to any verb that dominates its immediate governor as long as the raising paths contains only verb with feature bridge ie bridge verbsa sample derivation is shown in figure 4 with the sentential form representation on top and the corresponding tree representation belowwe start our derivation with the start symbol klause and rewrite it using dependency rules d2 and d3 and the lifting rule 1 which introduces an objective np argumentthe lifting condition of i is passed to the v dependent but the dot remains at the left of vbridge because of the kleene 
starwhen we rewrite the embedded v we choose to rewrite again with klause and the lifting condition is passed on to the next verbthis verb is a vrans which requires a nobjthe lifting condition is passed to nobj and the dot is moved to the right of the regular expression therefore nobj is rewritten as the empty stringin this section we show that pseudoprojective dependency grammars as defined in section 23 are polynomially parsablewe can extend the bottomup parser for gdg to a parser for ppgdg in the following mannerin ppgdg syntactic and linear governors do not necessarily coincide and we must keep track separately of linear precedence and of lifting the entries in the parse matrix m are of the form where m is a rulefsm q a state of m and lm is a multiset of lifting conditions as defined in section 4an entry in a square m of the parse matrix means that the subword wi wj of the entry can be analyzed by in up to state q but that nodes corresponding to the lifting rules in lm are being lifted from the subtrees spanning wz wjput differently in this bottomup view lm represents the set of nodes which have a syntactic governor in the subtree spanning id w3 and a lifting rule but are still looking for a linear governorsuppose we have an entry in the parse matrix m of the form as we traverse the crulefsm m we recognize one by one the linear dependents of a node of category c call this governor n the action of adding a new entry to the parse matrix corresponds to adding a single new linear dependent to n each new dependent 71 brings with it a multiset of nodes being lifted from the subtree it is the root ofcall this multiset lmthe new entry will be therefore when we have reached the final state of a rulefsm we must connect up all nodes and lifting conditions before we can proceed to put an entry in the parse matrixthis involves these steps 1for every lifting condition in lm we ensure that it is compatible with the category of 77this is done by moving the dot leftwards in accordance with the category of the obvious special provisions deal with the kleene star and optional elementsif the category matches a catgeory with kleene start in the lifting condition we do not move the dotif the category matches a category which is to the left of an optional category or to the left of category with kleene star then we can move the dot to the left of that categoryif the dot cannot be placed in accordance with the category of ij then no new entry is made in the parse matrix for n 2we then choose a multiset of s m and 1rules whose lefthand side is the category of n for every dependent of n introduced by an 1rule the dependent must be compatible with an instance of a lifting condition in lm the lifting condition is then removed from l 3if after the above repositioning of the dot and the linking up of all linear dependents to lifting conditions there are still lifting conditions in lm such that the dot is at the beginning of the lifting condition then no new entry is made in the parse matrix for n 4for every syntactic dependent of rh we determine if it is a linear dependent of n which has not yet been identified as liftedfor each syntactic dependents which is not also a linear dependent we check whether there is an applicable lifting ruleif not no entry is made in the parse matrix for 77if yes we add the lifting rule to lmthis procedure determines a new multiset lm so we can add entry in the parse matrixthe parse is complete if there is an entry in square m of the parse matrix where in is a crulefsm for a start 
category and qm is a final state of m if we keep backpointers at each step in the algorithm we have a compact representation of the parse forestthe maximum number of entries in each square of the parse matrix is 0 where g is the number of rulefsms corresponding to lp rules in the grammar q is the maximum number of states in any of the rulefsms and l is the maximum number of states that the lifting rules can be in note that the exponent is a grammar constant but this number can be rather small since the lifting rules are not lexicalized they are constructionspecific not lexemespecificthe time complexity of the algorithm is therefore 0
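The projectivity condition at the heart of this formalism is easy to state operationally; the sketch below checks it for a dependency tree given as a map from each token position to the position of its governor (0 standing for an artificial root). The example tree for the sentence discussed earlier is one possible analysis, chosen only for illustration.

def descendants(heads, node):
    # heads: {token_position: governor_position}, positions 1..n, 0 = artificial root.
    result = set()
    for dep, gov in heads.items():
        if gov == node:
            result.add(dep)
            result |= descendants(heads, dep)
    return result

def is_projective(heads):
    # A tree is projective iff for every arc (gov, dep) each token lying strictly
    # between them in the string is a descendant of gov -- the definition used above.
    for dep, gov in heads.items():
        lo, hi = sorted((dep, gov))
        allowed = descendants(heads, gov) | {gov}
        if any(tok not in allowed for tok in range(lo + 1, hi)):
            return False
    return True

# 1:who 2:do 3:you 4:think 5:she 6:invited  (one possible analysis, for illustration only)
heads = {1: 6, 2: 0, 3: 2, 4: 2, 5: 6, 6: 4}
print(is_projective(heads))   # False: the arc invited -> who covers do/you/think, none of which depend on invited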
P98-1106
pseudoprojectivity: a polynomially parsable nonprojective dependency grammar. the pseudoprojective grammar we propose can be parsed in polynomial time and captures nonlocal dependencies through a form of gap-threading, but the structures generated by the grammar are strictly projective.
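To illustrate lifting itself, here is one purely structural lifting strategy in Python, reusing descendants() and is_projective() (and the example heads) from the sketch above: whenever an arc is non-projective, its dependent is reattached one step up, to its governor's governor, until the whole tree is projective. This ignores the category-constrained lifting rules that the grammar actually uses to license lifts, so it should be read only as a picture of what a projective lift of a non-projective tree looks like.

def lift_to_projective(heads):
    # Reuses descendants() and is_projective() defined in the previous sketch.
    lifted = dict(heads)
    while not is_projective(lifted):
        # pick the shortest offending arc and lift its dependent one governor up
        for dep, gov in sorted(lifted.items(), key=lambda arc: abs(arc[0] - arc[1])):
            if gov == 0:
                continue
            lo, hi = sorted((dep, gov))
            allowed = descendants(lifted, gov) | {gov}
            if any(tok not in allowed for tok in range(lo + 1, hi)):
                lifted[dep] = lifted[gov]
                break
    return lifted

print(lift_to_projective(heads))   # 'who' is lifted from 'invited' via 'think' up to 'do', giving a projective lift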
role of verbs in document analysis representations of context for the detection and corof malapropisms an electronic lexical database and some of its applications we present techniques to characterize document type and event by using semantic classification of verbsthe intuition motivating our research is illustrated by an examination of the role of nouns and verbs in documentsthe listing below shows the ontological categories which express the fundamental conceptual components of propositions using the framework of jackendoff each category permits the formation of a whquestion eg for thing quotwhat did you buyquot can be answered by the noun quota fishquotthe whquestions for action and event can only be answered by verbal constructions eg in the question quotwhat did you doquot where the response must be a verb eg jog write fall etcthing direction action place manner event amount the distinction in the ontological categories of nouns and verbs is reflected in information extraction systemsfor example given the noun phrases fares and us air that occur within a particular article the reader will know what the story is about ie fares and us airhowever the reader will not know the event ie what happened to the fares or to us airdid airfare prices rise fall or stabilizethese are the verbs most typically applicable to prices and which embody the eventmany natural language analysis systems focus on nouns and noun phrases in order to identify information on who what and wherefor example in summarization barzilay and elhadad and lin and hovy focus on multiword noun phrasesfor information extraction tasks such as the darpasponsored message understanding conferences only a few projects use verb phrases egappelt et al lin in contrast the named entity task which identifies nouns and noun phrases has generated numerous projects as evidenced by a host of papers in recent conferences although rich information on nominal participants actors and other entities is provided the named entity task provides no information on what happened in the document ie the event or actionless progress has been made on ways to utilize verbal information efficientlyin earlier systems with stemming many of the verbal and nominal forms were conflated sometimes erroneouslywith the development of more sophisticated tools such as part of speech taggers more accurate verb phrase identification is possiblewe present in this paper an effective way to utilize verbal information for document type discriminationour initial observations suggested that both occurrence and distribution of verbs in news articles provide meaningful insights into both article type and contentexploratory analysis of parsed wall street journal data2 suggested that articles characterized by movement verbs such as drop plunge or fall have a different event profile from articles with a high percentage of communication verbs such as report say comment or complainhowever without associated nominal arguments it is impossible to know whether the thing that drops refers to airfare prices or projected earningsin this paper we assume that the set of verbs in a document when considered as a whole can be viewed as part of the conceptual map of the events and action in a document in the same way that the set of nouns has been used as a concept map for entitiesthis paper reports on two methods using verbs to determine an event profile of the document while also reliably categorizing documents by typeintuitively the event profile refers to the classification of an article by 
the kind of eventfor example the article could be a discussion event a reporting event or an argument eventto illustrate consider a sample article from wsj of average length with a high percentage of communication verbsthe profile of the article shows that there are 19 verbs 11 are communication verbs including add report say and tellother verbs include be skeptical carry produce and closerepresentative nouns include polaroid corp michael ellmann wertheim schroder ey co prudentialbache savings operating results gain revenue cuts profit loss sales analyst and spokesmanin this case the verbs clearly contribute information that this article is a report with more opinions than new factsthe preponderance of communication verbs coupled with proper noun subjects and human nouns suggest a discussion articleif verbs are ignored this fact would be overlookedmatches on frequent nouns like gain and loss do not discriminate this article from one which announces a gain or loss as breaking news indeed according to our results a breaking news article would feature a higher percentage of motion verbs rather than verbs of communicationverbs are an important factor in providing an event profile which in turn might be used in categorizing articles into different genresturning to the literature in genre classification biber outlines five dimensions which can be used to characterize genreproperties for distinguishing dimensions include verbal features such as tense agentless passives and infinitivesbiber also refers to three verb classes private public and suasive verbskarlgren and cutting take a computationally tractable set of these properties and use them to compute a score to recognize text genre using discriminant analysisthe only verbal feature used in their study is presenttense verb countas karlgren and cutting show their techniques are effective in genre categorization but they do not claim to show how genres differkessler et al discuss some of the complexities in automatic detection of genre using a set of computationally efficient cues such as punctuation abbreviations or presence of latinate suffixesthe taxonomy of genres and facets developed in kessler et al is useful for a wide range of types such as found in the brown corpusalthough some of their discriminators could be useful for news articles the indicators do not appear to be directly applicable to a finer classification of news articlesnews articles can be divided into several standard categories typically addressed in journalism textbookswe base our article category ontology shown in lowercase on hill and breen in uppercase the goal of our research is to identify the role of verbs keeping in mind that event profile is but one of many factors in determining text typein our study we explored the contribution of verbs as one factor in document type discrimination we show how article types can be successfully classified within the news domain using verb semantic classeswe initially considered two specific categories of verbs in the corpus communication verbs and support verbsin the wsj corpus the two most common main verbs are say a communication verb and be a support verbin addition to say other high frequency communication verbs include report announce and statein journalistic prose as seen by the statistics in table 1 at least 20 of the sentences contain communication verbs such as say and announce these sentences report point of view or indicate an attributed commentin these cases the subordinated complement represents the main event eg in 
quotadvisors announced that ibm stock rose 36 points over a three year periodquot there are two actions announce and risein sentences with a communication verb as main verb we considered both the main and the subordinate verb this decision augmented our verb count an additional 20 and even more importantly further captured information on the actual event in an article not just the communication eventas shown in table 1 support verbs such as go or get constitute 30 and other content verbs such as fall adapt recognize or vow make up the remaining 50if we exclude all support type verbs 70 of the verbs yield information in answering the question quotwhat happenedquot or quotwhat did x doquotsince our first intuition of the data suggested that articles with a preponderance of verbs of communication say announce support have get go remainder abuse claim offer table 1 approximate frequency of verbs by type from the wall street journal a certain semantic type might reveal aspects of document type we tested the hypothesis that verbs could be used as a predictor in providing an event profilewe developed two algorithms to explore wordnet to cluster related verbs and build a set of verb chains in a document much as morris and hirst used roget thesaurus or like hirst and st onge used wordnet to build noun chains classify verbs according to a semantic classification system in this case using levin english verb classes and alternations as a basisfor source material we used the manuallyparsed linguistic data consortium wall street journal corpus from which we extracted main and complement of communication verbs to test the algorithms onusing wordnetour first technique was to use wordnet to build links between verbs and to provide a semantic profile of the documentwordnet is a general lexical resource in which words are organized into synonym sets each representing one underlying lexical concept these synonym sets or synsets are connected by different semantic relationships such as hypernymy synonymy antonymy and others the determination of relatedness via taxonomic relations has a rich history the premise is that words with similar meanings will be located relatively close to each other in the hierarchyfigure 1 shows the verbs cite and post which are related via a common ancestor inform let knowthe wnverber toolwe used the hypernym relationship in wordnet because of its high coveragewe counted the number of edges needed to find a common ancestor for a pair of verbsgiven the hierarchical structure of wordnet the lower the edge count in principle the closer the verbs are semanticallybecause wordnet common ancestor inform__ let know testify to indicate announce abduce cite attest report post sound allows individual words to be the descendent of possibly more than one ancestor two words can often be related by more than one common ancestor via different paths possibly with the same relationship results from wnverberwe ran all articles longer than 10 sentences in the wsj corpus through wnverberoutput showed that several verbs eg go take and say participate in a very large percentage of the high frequency synsets this is due to the width of the verb forest in wordnet top level verb synsets tend to have a large number of descendants which are arranged in fewer generations resulting in a flat and bushy tree structurefor example a top level verb synset inform give information let know has over 40 children whereas a similar top level noun synset entity only has 15 childrenas a result using fewer than two levels 
resulted in groupings that were too limited to aggregate verbs effectivelythus for our system we allowed up to two edges to intervene between a common ancestor synset and each of the verbs respective synsets as in figure 2in addition to the problem of the flat nature of the verb hierarchy our results from wnverber are degraded by ambiguity similar effects have been reported for nounsverbs with differences in high versus low frequency senses caused certain verbs to be incorrectly related for example have and drop are related by the synset meaning quotto give birthquot although this sense of drop is rare in wsjthe results of wnverber in table 2 reflect the effects of bushiness and ambiguitythe five most frequent synsets are given in column 1 column 2 shows some typical verbs which participate in the clustering column 3 shows the type of article which tends to contain these synsetsmost articles end up in the top five nodesthis illustrates the ineffectiveness of these most frequent wordnet synset to discriminate between article typesevaluation using kendall tauwe sought independent confirmation to assess the correlation between two variables rank for wnverber resultsto evaluate the effects of one synset frequency on another we used kendall tau rank order statistic for example was it the case that verbs under the synset act tend not to occur with verbs under the synset thinkif so do articles with this property fit a particular profilein our results we have information about synset frequency where each of the 1236 articles in the corpus constitutes a sampletable 3 shows the results of calculating kendall t with considerations for ranking ties for all 45 pairing combinations of the top 10 most frequently occurring synsetscorrelations can range from 10 reflecting inverse correlation to 10 showing direct correlation ie the presence of one class increases as the presence of the correlated verb class increasesa r value of 0 would show that the two variables values are independent of each otherresults show a significant positive correlation between the synsetsthe range of correlation is from 850 between the communication verb synset and the act verb synset to 238 between the think verb synset and the change state verb synset these correlations show that frequent synsets do not behave independently of each other and thus confirm that the wordnet results are not an effective way to achieve document discriminationalthough the wordnet results were not discriminatory we were still convinced that our initial hypothesis on the role of verbs in determining event profile was worth pursuingwe believe that these results are a byproduct of lexical ambiguity and of the richness of the wordnet hierarchywe thus decided to pursue a new approach to test our hypothesis one which turned out to provide us with clearer and more robust resultsutilizing evcaa different approach to test the hypothesis was to use another semantic categorization method we chose the semantic classes of levin evca as a basis for our next analysis3 levin seminal work is based on the timehonored observation that verbs which participate in similar syntactic alternations tend to share semantic propertiesthus the behavior of a verb with respect to the expression and interpretation of its arguments can be said to be in large part determined by its meaninglevin has meticulously set out a list of syntactic tests which predict membership in no less than 48 classes each of which is divided into numerous subclassesthe rigor and thoroughness of levin study 
permitted us to encode our algorithm evcaverber on a subset of the evca classes ones which were frequent in our corpusfirst we manually categorized the 100 most frequent verbs as well as 50 additional verbs which covers 56 of the verbs by token in the corpuswe subjected each verb to a set of strict linguistic tests as shown in table 4 and verified primary verb usage against the corpusresults from evcaverberin order to be able to compare article types and emphasize their differences we selected articles that had the highest percentage of a particular verb class from each of the ten verb classes we chose five articles from each evca class yielding a total of 50 articles for analysis from the full set of 1236 articleswe observed that each class discriminated between different article types as shown in table 5in contrast to table 2 the article types are well discriminated by verb classfor example a concentration of communication class verbs indicated that the article type was a general announcement of short or medium length or a longer feature article with many opinions in the textarticles high in motion verbs were also announcements but differed from the communication ones in that they were commonly postings of company earnings reaching a new high or dropping from last quarteragreement and argument verbs appeared in many of the same articles involving issues of some controversyhowever we noted that articles with agreement verbs were a superset of the argument ones in that in our corpus argument verbs did not appear in articles concerning joint ventures and mergersarticles marked by causative class verbs tended to be a bit longer possibly reflecting prose on both the because and effect of a particular actionwe also used evcaverber to investigate articles marked by the absence of members of each verb class such as articles lacking any verbs in the motion verb classhowever we found that absence of a verb class was not discriminatoryevaluation of evca verb classesto strengthen the observations that articles dominated by verbs of one class reflect distinct article types we verified that the verb classes behaved independently of each othercorrelations for evca classes are shown in table 6these show a markedly lower level of correlation between verb classes than the results for wordnet synsets the range being from 265 between motion and aspectual verbs to 026 for motion verbs and agreement verbsthese low values of 7 for pairs of verb classes reflects the independence of the classesfor example the communication and experience verb classes are weakly correlated this we surmise may be due to the different ways opinions can be expressed ie as factual quotes using communication class verbs or as beliefs using experience class verbssonthis paper reports results from two approaches one using wordnet and other based on evca classeshowever the basis for comparison must be made explicitin the case of wordnet all verb tokens were considered in all senses whereas in the case of evca a subset of less ambiguous verbs were manually selectedas reported above we covered 56 of the verbs by tokenindeed when we attempted to add more verbs to evca categories at the 59 mark we reached a point of difficulty in adding new verbs due to ambiguity eg verbs such as getthus although our results using evca are revealing in important ways it must be emphasized that the comparison has some imbalance which puts wordnet in an unnaturally negative lightin order to accurately compare the two approaches we would need to process either 
the same less ambiguous verb subset with wordnet or the full set of all verbs in all senses with evcaalthough the results reported in this paper permitted the validation of our hypothesis unless a fair comparison between resources is performed conclusions about wordnet as a resource versus evca class distinctions should not be inferredverb patternsin addition to considering verb type frequencies in texts we have observed that verb distribution and patterns might also reveal subtle information in textverb class distribution within the document and within particular subsections also carry meaningfor example we have observed that when sentences with movement verbs such as rise or fall are followed by sentences with because and then a telic aspectual verb such as reach this indicates that a value rose to a certain point due to the actions of some entityidentification of such sequences will enable us to assign functions to particular sections of contiguous text in an article in much the same way that text segmentation program seeks identify topics from distributional vocabulary we can also use specific sequences of verbs to help in determining methods for performing semantic aggregation of individual clauses in text generation for summarizationfuture workour plans are to extend the current research in terms of verb coverage and in terms of article coveragefor verbs we plan to increase the verbs that we cover to include phrasal verbs increase coverage of verbs by categorizing additional high frequency verbs into evca classes examine the effects of increased coverage on determining article typefor articles we plan to explore a general parser so we can test our hypothesis on additional texts and examine how our conclusions scale upfinally we would like to combine our techniques with other indicators to form a more robust system such as that envisioned in biber or suggested in kessler et al conclusionwe have outlined a novel approach to document analysis for news articles which permits discrimination of the event profile of news articlesthe goal of this research is to determine the role of verbs in document analysis keeping in mind that event profile is one of many factors in determining text typeour results show that levin evca verb classes provide reliable indicators of article type within the news domainwe have applied the algorithm to wsj data and have discriminated articles with five evca semantic classes into categories such as features opinions and announcementsthis approach to document type classification using verbs has not been explored previously in the literatureour results on verb analysis coupled with what is already known about np identification convinces us that future combinations of information will be even more successful in categorization of documentsresults such as these are useful in applications such as passage retrieval summarization and information extraction
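The evaluation described above ranks WordNet synset frequencies per article and computes Kendall's tau, with ties handled, for all 45 pairings of the ten most frequent synsets. Below is a minimal sketch of that check, assuming hypothetical per-article counts and synset names (the real study used 1,236 WSJ articles); scipy's kendalltau implements the tie-corrected tau-b variant.

```python
from itertools import combinations
from scipy.stats import kendalltau

# Hypothetical per-article frequency counts for a few top synsets;
# the actual study used the 10 most frequent synsets over 1,236 WSJ articles.
synset_freqs = {
    "act":           [12, 3, 7, 9, 0, 5],
    "communication": [10, 2, 8, 7, 1, 4],
    "think":         [1, 0, 2, 3, 0, 1],
}

# Kendall's tau (tau-b handles ranking ties) for every pairing of synsets.
for a, b in combinations(synset_freqs, 2):
    tau, p = kendalltau(synset_freqs[a], synset_freqs[b])
    print(f"{a:>13} vs {b:<13} tau={tau:+.3f}  p={p:.3g}")
```

A tau close to +1 for a pair means that articles heavy in one synset tend to be heavy in the other as well, which is exactly the lack of independence the passage reports for the WordNet clustering.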
P98-1112
role of verbs in document analysiswe present results of two methods for assessing the event profile of news articles as a function of verb typethe unique contribution of this research is the focus on the role of verbs rather than nounstwo algorithms are presented and evaluated one of which is shown to accurately discriminate documents by type and semantic properties ie the event profilethe initial method using wordnet produced multiple crossclassification of articles primarily due to the bushy nature of the verb tree coupled with the sense disambiguation problem our second approach using english verb classes and alternations levin showed that monosemous categorization of the frequent verbs in wsj made it possible to usefully discriminate documents for example our results show that articles in which communication verbs predominate tend to be opinion pieces whereas articles with a high percentage of agreement verbs tend to be about mergers or legal casesan evaluation is performed on the results using kendall tauwe present convincing evidence for using verb semantic classes as a discriminant in document classificationwe demonstrate that document type is correlated with the presence of many verbs of a certain evca class
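The summary above associates a predominant verb class with an article type (communication verbs with opinion pieces, agreement verbs with mergers or legal cases). The sketch below only illustrates that decision rule; the verb-to-class lexicon and the class-to-type mapping are invented stand-ins, not the paper's manual EVCA categorisation of 150 verbs.

```python
# Illustrative (not the paper's) mapping of a few verbs to EVCA-style classes.
VERB_CLASS = {
    "say": "communication", "report": "communication", "announce": "communication",
    "agree": "agreement", "accept": "agreement", "approve": "agreement",
    "rise": "motion", "fall": "motion", "climb": "motion",
}

# Hypothetical association of a dominant class with an article type.
CLASS_TO_TYPE = {
    "communication": "announcement or opinion piece",
    "agreement": "merger or legal case",
    "motion": "posting of earnings",
}

def event_profile(verbs):
    """Return the dominant verb class of an article and a guessed article type."""
    counts = {}
    for v in verbs:
        cls = VERB_CLASS.get(v)
        if cls:
            counts[cls] = counts.get(cls, 0) + 1
    if not counts:
        return None, None
    dominant = max(counts, key=counts.get)
    return dominant, CLASS_TO_TYPE.get(dominant)

print(event_profile(["say", "report", "rise", "announce"]))
# ('communication', 'announcement or opinion piece')
```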
automatic retrieval and clustering of similar words bootstrapping semantics from text is one of the greatest challenges in natural language learning we first define a word similarity measure based on the distributional pattern of words the similarity measure allows us to construct a thesaurus using a parsed corpus we then present a new evaluation methodology for the automatically constructed thesaurus the evaluation results show that the thesaurus is significantly closer to wordnet than roget thesaurus is the meaning of an unknown word can often be inferred from its contextconsider the following example in everyone likes tezgiiinotezgiiino makes you drunkwe make tezgiiino out of cornthe contexts in which the word tezgiiino is used suggest that tezgiiino may be a kind of alcoholic beverage made from corn mashbootstrapping semantics from text is one of the greatest challenges in natural language learningit has been argued that similarity plays an important role in word acquisition identifying similar words is an initial step in learning the definition of a wordthis paper presents a method for making this first stepfor example given a corpus that includes the sentences in our goal is to be able to infer that tezgiiino is similar to quotbeerquot quotwinequot quotvodkaquot etcin addition to the longterm goal of bootstrapping semantics from text automatic identification of similar words has many immediate applicationsthe most obvious one is thesaurus constructionan automatically created thesaurus offers many advantages over manually constructed thesaurifirstly the terms can be corpus or genrespecificmanually constructed generalpurpose dictionaries and thesauri include many usages that are very infrequent in a particular corpus or genre of documentsfor example one of the 8 senses of quotcompanyquot in wordnet 15 is a quotvisitorvisitantquot which is a hyponym of quotpersonquotthis usage of the word is practically never used in newspaper articleshowever its existance may prevent a coreference recognizer to rule out the possiblity for personal pronouns to refer to quotcompanyquotsecondly certain word usages may be particular to a period of time which are unlikely to be captured by manually compiled lexiconsfor example among 274 occurrences of the word quotwesternerquot in a 45 million word san jose mercury corpus 55 of them refer to hostagesif one needs to search hostagerelated articles quotwesternerquot may well be a good search termanother application of automatically extracted similar words is to help solve the problem of data sparseness in statistical natural language processing when the frequency of a word does not warrant reliable maximum likelihood estimation its probability can be computed as a weighted sum of the probabilities of words that are similar to itit was shown in that a similaritybased smoothing method achieved much better results than backoff smoothing methods in word sense disambiguationthe remainder of the paper is organized as followsthe next section is concerned with similarities between words based on their distributional patternsthe similarity measure can then be used to create a thesaurusin section 3 we evaluate the constructed thesauri by computing the similarity between their entries and entries in manually created thesaurisection 4 briefly discuss future work in clustering similar wordsfinally section 5 reviews related work and summarize our contributionsour similarity measure is based on a proposal in where the similarity between two objects is defined to be the 
amount of information contained in the commonality between the objects divided by the amount of information in the descriptions of the objectswe use a broadcoverage parser to extract dependency triples from the text corpusa dependency triple consists of two words and the grammatical relationship between them in the input sentencefor example the triples extracted from the sentence quoti have a brown dogquot are we use the notation 11w r w ii to denote the frequency count of the dependency triple in the parsed corpuswhen w r or w is the wild card the frequency counts of all the dependency triples that matches the rest of the pattern are summed upfor example kook obj 11 is the total occurrences of cookobject relationships in the parsed corpus and 1111 is the total number of dependency triples extracted from the parsed corpusthe description of a word w consists of the frequency counts of all the dependency triples that matches the pattern the commonality between two words consists of the dependency triples that appear in the descriptions of both wordsfor example is the the description of the word quotcellquotii cell objof contain114 objof decorate112 icell nmod bacteriall3 i cell nmod blood vessell11 i cell nmod body112 icell nmod bone marrow 112 icell nmod burial 111 j cell nmod chameleon111 assuming that the frequency counts of the dependency triples are independent of each other the information contained in the description of a word is the sum of the information contained in each individual frequency countto measure the information contained in the statement ilw r ilc we first measure the amount of information in the statement that a randomly selected dependency triple is when we do not know the value of 11w r willwe then measure the amount of information in the same statement when we do know the value of11w r w iithe difference between these two amounts is taken to be the information contained in 11w r w ii c an occurrence of a dependency triple can be regarded as the cooccurrence of three events a a randomly selected word is w b a randomly selected dependency type is r c a randomly selected word is w1when the value of ii w r w ii is unknown we assume that a and c are conditionally independent given bthe probability of a b and c cooccurring is estimated by where pale is the maximum likelihood estimation of a probability distribution and ii cell obj of bludgeon111 licell objof cal11111 cell objof come from113 when the value of ii w r w ii is known we can obtain pale directly let denote the amount information contained in 11w r w1 iicits value can be cornit is worth noting that 1 is equal to the mutual information between w and w let t be the set of pairs such that log x 1ris positivewe define the similarity sim between two words w1 and w2 as follows we parsed a 64millionword corpus consisting of the wall street journal san jose mercury and ap newswire from the parsed corpus we extracted 565 million dependency triples in the parsed corpus there are 5469 nouns 2173 verbs and 2632 adjectivesadverbs that occurred at least 100 timeswe computed the pairwise similarity between all the nouns all the verbs and all the adjectivesadverbs using the above similarity measurefor each word we created a thesaurus entry which contains the topn1 words that are most similar to it2 the thesaurus entry for word w has the following format where pos is a part of speech we is a word sesim and se are ordered in descending orderfor example the top10 words in the noun verb and adjective entries for the word 
quotbriefquot are shown below brief affidavit 013 petition 005 memorandum 005 motion 005 lawsuit 005 deposition 005 slight 005 prospectus 004 document 004 paper 004 brief tell 009 urge 007 ask 007 meet 006 appoint 006 elect 005 name 005 empower 005 summon 005 overrule 004 brief lengthy 013 short 012 recent 009 prolonged 009 long 009 extended 009 daylong 008 scheduled 008 stormy 007 planned 006 two words are a pair of respective nearest neighbors if each is the other most similar wordour program found 543 pairs of rnn nouns 212 pairs of rnn verbs and 382 pairs of rnn adjectivesadverbs in the automatically created thesaurusappendix a lists every 10th of the rnnsthe result looks very strongfew pairs of rnns in appendix a have clearly better alternativeswe also constructed several other thesauri using the same corpus but with the similarity measures in figure 1the measure sirrihindie is the same as the similarity measure proposed in except that it does not use dependency triples with negative mutual informationthe measure simhindi is the same as sim h indle except that all types of dependency relationships are used instead of just subject and object relationshipsthe measures simeosine simdice and simjacard are versions of similarity measures commonly used in information retrieval unlike sim sim h indle and si mhindler they only where s is the set of senses of w in the wordnet super is the set of superclasses of concept c in the wordnet r is the set of words that belong to a same roget category as w make use of the unique dependency triples and ignore their frequency countsin this section we present an evaluation of automatically constructed thesauri with two manually compiled thesauri namely wordnet15 and roget thesauruswe first define two word similarity measures that are based on the structures of wordnet and roget the similarity measure simwn is based on the proposal in the similarity measure sim roget treats all the words in roget as featuresa word w possesses the feature f if f and w belong to a same roget categorythe similarity between two words is then defined as the cosine coefficient of the two feature vectorswith simwn and simroget we transform wordnet and roget into the same format as the automatically constructed thesauri in the previous sectionwe now discuss how to measure the similarity between two thesaurus entriessuppose two thesaurus entries for the same word are as follows for example is the entry for quotbrief quot in our automatically generated thesaurus and and are corresponding entries in wordnet thesaurus and roget thesaurus brief affidavit 013 petition 005 memorandum 005 motion 005 lawsuit 005 deposition 005 slight 005 prospectus 004 document 004 paper 004our evaluation was conducted with 4294 nouns that occurred at least 100 times in the parsed corpus and are found in both wordnet15 and the roget thesaurustable 1 shows the average similarity between corresponding entries in different thesauri and the standard deviation of the average which is the standard deviation of the data items divided by the square root of the number of data itemssince the differences among sim simjacard cosine simdzce and are very small we only included the results for sitncosi in table 1 for the sake of brevityit can be seen that sim hindle and cosine are significantly more similar to wordnet than roget is but are significantly less similar to roget than wordnet isthe differences between hindle and hindle clearly demonstrate that the use of other types of dependencies in addition to subject and 
object relationships is very beneficialthe performance of sim hindle and cosine are quite closeto determine whether or not the differences are statistically significant we computed their differences in similarities to wordnet and roget thesaurus for each individual entrytable 2 shows the average and standard deviation of the average differencesince the 95 confidence intervals of all the differences in table 2 are on the positive side one can draw the statistical conclusion that simis better than simhzndle which is better than sinacosinewordnet average gra vp sim hindle 0008021 0000428 simcosine 0012798 0000386 hindle cosine 0004777 0000561 roget average cravg simhindle 0002415 0000401 sim cosine 0013349 0000375 hindle cosine 0010933 0000509reliable extraction of similar words from text corpus opens up many possibilities for future workfor example one can go a step further by constructing a tree structure among the most similar words so that different senses of a given word can be identified with different subtreeslet w1 wr be a list of words in descending order of their similarity to a given word w the similarity tree for w is created as follows among lw w1 wz11for example figure 3 shows the similarity tree for the top40 most similar words to dutythe first number behind a word is the similarity of the word to its parentthe second number is the similarity of the word to the root node of the treeinspection of sample outputs shows that this algorithm works wellhowever formal evaluation of its accuracy remains to be future workthere have been many approaches to automatic detection of similar words from text corporaours is similar to in the use of dependency relationship as the word features based on which word similarities are computedevaluation of automatically generated lexical resources is a difficult problemin a small set of sample results are presentedin automatically extracted collocations are judged by a lexicographerin and clusters of similar words are evaluated by how well they are able to recover data items that are removed from the input corpus one at a timein the collocations and their associated scores were evaluated indirectly by their use in parse tree selectionthe merits of different measures for association strength are judged by the differences they make in the precision and the recall of the parser outputsthe main contribution of this paper is a new evaluation methodology for automatically constructed thesauruswhile previous methods rely on indirect tasks or subjective judgments our method allows direct and objective comparison between automatically and manually constructed thesaurithe results show that our automatically created thesaurus is significantly closer to wordnet than roget thesaurus isour experiments also surpasses previous experiments on automatic thesaurus construction in scale and accuracythis research has also been partially supported by nserc research grant 0gp121338 and by the institute for robotics and intelligent systems
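The similarity measure in the passage weights each dependency feature (r, w') by the information I(w, r, w'), which the text notes equals the mutual information between w and w', and scores two words by the information in their shared features relative to the information in all their features. The sketch below reconstructs that computation from triple counts under the standard reading of the measure; the counts are invented (the paper extracted 56.5 million triples from a parsed 64-million-word corpus), and the formulas in the comments are a reconstruction rather than a quotation.

```python
import math
from collections import Counter, defaultdict

# Invented dependency-triple counts ||w, r, w'||.
triples = Counter({
    ("wine", "obj-of", "drink"): 20, ("beer", "obj-of", "drink"): 30,
    ("soup", "obj-of", "eat"): 25,   ("bread", "obj-of", "eat"): 25,
    ("wine", "nmod", "red"): 5,      ("beer", "nmod", "cold"): 7,
})

w_r, r_w2, r_tot = defaultdict(int), defaultdict(int), defaultdict(int)
for (w, r, w2), c in triples.items():
    w_r[w, r] += c        # ||w, r, *||
    r_w2[r, w2] += c      # ||*, r, w'||
    r_tot[r] += c         # ||*, r, *||

def info(w, r, w2):
    """I(w,r,w') = log( ||w,r,w'|| * ||*,r,*|| / (||w,r,*|| * ||*,r,w'||) )."""
    c = triples.get((w, r, w2), 0)
    return math.log(c * r_tot[r] / (w_r[w, r] * r_w2[r, w2])) if c else 0.0

def features(w):
    """T(w): the (r, w') pairs whose information with w is positive."""
    return {(r, w2): info(w, r, w2)
            for (x, r, w2) in triples if x == w and info(w, r, w2) > 0}

def sim(w1, w2):
    """Information in the shared features divided by the information in both descriptions."""
    f1, f2 = features(w1), features(w2)
    shared = sum(f1[k] + f2[k] for k in f1.keys() & f2.keys())
    total = sum(f1.values()) + sum(f2.values())
    return shared / total if total else 0.0

print(round(sim("wine", "beer"), 3), round(sim("wine", "soup"), 3))  # ~0.495 0.0
```

With these toy counts "wine" and "beer" share the informative obj-of/drink feature and come out similar, while "wine" and "soup" share no positive-information feature and score zero, mirroring the contrast the thesaurus entries above are meant to capture.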
P98-2127
automatic retrieval and clustering of similar wordsbootstrapping semantics from text is one of the greatest challenges in natural language learningwe first define a word similarity measure based on the distributional pattern of wordsthe similarity measure allows us to construct a thesaurus using a parsed corpuswe then present a new evaluation methodology for the automatically constructed thesaurusthe evaluation results show that the thesaurus is significantly closer to wordnet than roget thesaurus iswe use dependency relation as word features to compute word similarities from large corpora
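The passage above also outlines how the top-N neighbours of a word can be organised into a similarity tree. The insertion rule is only partly legible in the extracted text, so the sketch below follows the natural reading that each word, taken in order of decreasing similarity to the root, is attached to whichever already-placed word it is most similar to; the word list and pairwise similarities are hypothetical.

```python
# Sketch of the similarity-tree construction for the top-N neighbours of a word.
# sim() is assumed to be any word-similarity function (e.g. the distributional
# measure sketched earlier); here it is a toy symmetric lookup table.
PAIR_SIM = {
    ("duty", "responsibility"): 0.21, ("duty", "obligation"): 0.12,
    ("responsibility", "obligation"): 0.19, ("duty", "role"): 0.11,
    ("responsibility", "role"): 0.16, ("obligation", "role"): 0.08,
}

def sim(a, b):
    return PAIR_SIM.get((a, b)) or PAIR_SIM.get((b, a), 0.0)

def similarity_tree(root, neighbours):
    """neighbours must already be sorted by descending similarity to root."""
    parent = {}
    placed = [root]
    for w in neighbours:
        # Attach w to the most similar word already in the tree.
        parent[w] = max(placed, key=lambda p: sim(w, p))
        placed.append(w)
    return parent

tree = similarity_tree("duty", ["responsibility", "obligation", "role"])
for child, par in tree.items():
    print(f"{child} -> {par}  (similarity to parent {sim(child, par):.2f})")
```

The two numbers reported after each word in the paper's example tree (similarity to parent and similarity to root) correspond to sim(child, parent) and sim(child, root) in this sketch.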
robust pronoun resolution with limited knowledge most traditional approaches to anaphora resolution rely heavily on linguistic and domain knowledge one of the disadvantages of developing a knowledgebased system however is that it is a very labourintensive and timeconsuming task this paper presents a robust knowledgepoor approach to resolving pronouns in technical manuals which operates on texts preprocessed by a partofspeech tagger input is checked against agreement and for a number of antecedent indicators candidates are assigned scores by each indicator and the candidate with the highest score is returned as the antecedent evaluation reports a success rate of 897 which is better than the success rates of the approaches selected for comparison and tested on the same data in addition preliminary experiments show that the approach can be successfully adapted for other languages with minimum modifications for the most part anaphora resolution has focused on traditional linguistic methods however to represent and manipulate the various types of linguistic and domain knowledge involved requires considerable human input and computational expensewhile various alternatives have been proposed making use of eg neural networks a situation semantics framework or the principles of reasoning with uncertainty there is still a strong need for the development of robust and effective strategies to meet the demands of practical nlp systems and to enhance further the automatic processing of growing language resourcesseveral proposals have already addressed the anaphora resolution problem by deliberately limiting the extent to which they rely on domain andor linguistic knowledge our work is a continuation of these latest trends in the search for inexpensive fast and reliable procedures for anaphora resolutionit is also an example of how anaphors in a specific genre can be resolved quite successfully without any sophisticated linguistic knowledge or even without parsingfinally our evaluation shows that the basic set of antecedent tracking indicators can work well not only for english but also for other languages with a view to avoiding complex syntactic semantic and discourse analysis we developed a robust knowledgepoor approach to pronoun resolution which does not parse and analyse the input in order to identify antecedents of anaphorsit makes use of only a partofspeech tagger plus simple noun phrase rules and operates on the basis of antecedenttracking preferences the approach works as follows it takes as an input the output of a text processed by a partofspeech tagger identifies the noun phrases which precede the anaphor within a distance of 2 sentences checks them for gender and number agreement with the anaphor and then applies the genrespecific antecedent indicators to the remaining candidates the noun phrase with the highest aggregate score is proposed as antecedent in the rare event of a tie priority is given to the candidate with the higher score for immediate referenceif immediate reference has not been identified then priority is given to the candidate with the best collocation pattern scoreif this does not help the candidate with the higher score for indicating verbs is preferredif still no choice is possible the most recent from the remaining candidates is selected as the antecedentantecedent indicators play a decisive role in tracking down the antecedent from a set of possible candidatescandidates are assigned a score for each indicator the candidate with the highest aggregate score is proposed 
as the antecedentthe antecedent indicators have been identified empirically and are related to salience to structural matches to referential distance or to preference of termswhilst some of the indicators are more genrespecific and others are less genrespecific the majority appear to be genreindependentin the following we shall outline some the indicators used and shall illustrate them by examplesdefinite noun phrases in previous sentences are more likely antecedents of pronominal anaphors than indefinite ones we regard a noun phrase as definite if the head noun is modified by a definite article or by demonstrative or possessive pronounsthis rule is ignored if there are no definite articles possessive or demonstrative pronouns in the paragraph noun phrases in previous sentences representing the quotgiven informationquot 1 are deemed good candidates for antecedents and score 1 in a coherent text the given or known information or theme usually appears first and thus forms a coreferential link with the preceding textthe new information or rheme provides some information about the themewe use the simple heuristics that the given information is the first noun phrase in a nonimperative sentenceindicating verbs if a verb is a member of the verb_set discuss present illustrate identify summarise examine describe define show check develop review report outline consider investigate explore assess analyse synthesise study survey deal cover we consider the first np following it as the preferred antecedent empirical evidence suggests that because of the salience of the noun phrases which follow them the verbs listed above are particularly good indicatorslexically reiterated items are likely candidates for antecedent lexically reiterated items include repeated synonymous noun phrases which may often be preceded by definite articles or demonstrativesalso a sequence of noun phrases with the same head counts as lexical reiteration section heading preference if a noun phrase occurs in the heading of the section part of which is the current sentence then we consider it as the preferred candidate quotnonprepositionalquot noun phrases a quotpurequot quotnonprepositionalquot noun phrase is given a higher preference than a noun phrase which is part of a prepositional phrase example insert the cassettei into the vcr making sure iti is suitable for the length of recordinghere quotthe vcrquot is penalised for being part of the prepositional phrase quotinto the vcrquotthis preference can be explained in terms of salience from the point of view of the centering theorythe latter proposes the ranking quotsubject direct object indirect objectquot and noun phrases which are parts of prepositional phrases are usually indirect objectsthis preference is given to candidates which have an identical collocation pattern with a pronoun the collocation preference here is restricted to the patterns quotnoun phrase verbquot and quotverb noun phrase quotowing to lack of syntactic information this preference is somewhat weaker than the collocation preference described in example press the keyi down and turn the volume up press iti againimmediate reference in technical manuals the quotimmediate referencequot clue can often be useful in identifying the antecedentthe heuristics used is that in constructions of the form quot v1 np con v2 it v3 itquot where con e andorbeforeafter the noun phrase immediately after v1 is a very likely candidate for antecedent of the pronoun quotitquot immediately following v2 and is therefore given 
preference this preference can be viewed as a modification of the collocation preferenceit is also quite frequent with imperative constructionsexample to print the paper you can stand the printeri up or lay iti flatto turn on the printer press the power button i and hold iti down for a momentunwrap the paperi form iti and align iti then load iti into the drawerreferential distance in complex sentences noun phrases in the previous clause2 are the best candidate for the antecedent of an anaphor in the subsequent clause followed by noun phrases in the previous sentence then by nouns situated 2 sentences further back and finally nouns 3 sentences further back for anaphors in simple sentences noun phrases in the previous sentence are the best candidate for antecedent followed by noun phrases situated 2 sentences further back and finally nouns 3 sentences further back term preference nps representing terms in the field are more likely to be the antecedent than nps which are not terms as already mentioned each of the antecedent indicators assigns a score with a value these scores have been determined experimentally on an empirical basis and are constantly being updatedtop symptoms like quotlexical reiterationquot assign score quot2quot whereas quotnonprepositionalquot noun phrases are given a negative score of quot1quotwe should point out that the antecedent indicators are preferences and not absolute factorsthere might be cases where one or more of the antecedent indicators do not quotpointquot to the correct antecedentfor instance in the sentence quotinsert the cassette into the vcr i making sure iti is turned onquot the indicator quotnonprepositional noun phrasesquot would penalise the correct antecedentwhen all preferences are taken into account however the right antecedent is still very likely to be tracked down in the above example the quotnonprepositional noun phrasesquot heuristics would be overturned by the quotcollocational preferencequot heuristicsthe algorithm for pronoun resolution can be described informally as follows 3a sentence splitter would already have segmented the text into sentences a pos tagger would already have determined the parts of speech and a simple phrasal grammar would already have detected the noun phrases 4in this project we do not treat cataphora nonanaphoric quotitquot occurring in constructions such as quotit is importantquot quotit is necessaryquot is eliminated by a quotreferential filterquot 5note that this restriction may not always apply in languages other than english on the other hand there are certain collective nouns in english which do not agree in number with their antecedents can be referred to by quotitquot and are exempted from the agreement testfor this purpose we have drawn up a comprehensive list of all such cases to our knowledge no other computational treatment of pronominal anaphora resolution has addressed the problem of quotagreement exceptionsquot antecedentif two candidates have an equal in order to evaluate the effectiveness of the apscore the candidate with the higher score for proach and to explore if how far it is superior over immediate reference is proposed as antecedent the baseline models for anaphora resolution we also if immediate reference does not hold propose tested the sample text on a baseline model which the candidate with higher score for collocational checks agreement in number and gender and where patternif collocational pattern suggests a tie or more than one candidate remains picks as antecedoes not hold select the 
candidate with higher dent the most recent subject matching the gender score for indicating verbsif this indicator does and number of the anaphor a baseline model not hold again go for the most recent candidate which picks as antecedent the most recent noun 3evaluation phrase that matches the gender and number of the for practical reasons the approach presented does anaphor not incorporate syntactic and semantic information the success rate of the quotbaseline subjectquot was and it is not real 292 whereas the success rate of quotbaseline most istic to expect its performance to be as good as an recent npquot was 625given that our knowledgeapproach which makes use of syntactic and semantic poor approach is basically an enhancement of a knowledge in terms of constraints and preferences baseline model through a set of antecedent indicathe lack of syntactic information for instance tors we see a dramatic improvement in performance means giving up ccommand constraints and subject when these preferences are called upon preference which could be used in center superior to both baseline models when the antecetrackingsyntactic parallelism useful in discrimi dent was neither the most recent subject nor the nating between identical pronouns on the basis of most recent noun phrase matching the anaphor in their syntactic function also has to be forgonelack gender and numberexample of semantic knowledge rules out the use of verb se identify the draweri by the lit paper port led and mantics and semantic parallelismour evaluation add paper to iti however suggests that much less is lost than might the aggregate score for quotthe drawerquot is 7 be fearedin fact our evaluation shows that the re we believe that the good heading 0 collocation 0 referential distance 2 success rate is due to the fact that a number of ante nonprepositional noun phrase 0 immediate refercedent indicators are taken into account and no fac ence 2 7 whereas aggregate score for the most tor is given absolute preferencein particular this recent matching noun phrase out of 223 pronouns in the text 167 were nonanaphoric the evaluation carried out was manual to ensure that no added error was generated another reason for doing it by hand is to ensure a fair comparison with breck baldwin method which not being available to us had to be handsimulated the evaluation indicated 836 success ratethe quotbaseline subjectquot model tested on the same data scored 339 recall and 679 precision whereas quotbaseline most recentquot scored 667note that quotbaseline subjectquot can be assessed both in terms of recall and precision because this quotversionquot is not robust in the event of no subject being available it is not able to propose an antecedent in the second experiment we evaluated the approach from the point of view also of its quotcritical success ratequotthis measure applies only to anaphors quotambiguousquot from the point of view of number and gender and is indicative of the performance of the antecedent indicatorsour evaluation established the critical success rate as 82a case where the system failed was when the anaphor and the antecedent were in the same sentence and where preference was given to a candidate in the preceding sentencethis case and other cases suggest that it might be worthwhile reconsideringrefining the weights for the indicator quotreferential distancequotsimilarly to the first evaluation we found that the robust approach was not very successful on sentences with too complicated syntax a price we have to pay for the 
quotconveniencequot of developing a knowledgepoor systemthe results from experiment 1 and experiment 2 can be summarised in the following slightly more representative figuresthe lower figure in quotbaseline subjectquot corresponds to quotrecallquot and the higher figure to quotprecisionquotif we regard as quotdiscriminative powerquot of each antecedent indicator the ratio quotnumber of successful antecedent identifications when this indicator was appliedquotquotnumber of applications of this indicatorquot the immediate reference emerges as the most discriminative indicator followed by nonprepositional noun phrase collocation section heading lexical reiteration givenness term preference and referential distance the relatively low figures for the majority of indicators should not be regarded as a surprise firstly we should bear in mind that in most cases a candidate was picked as an antecedent on the basis of applying a number of different indicators and secondly that most anaphors had a relatively high number of candidates for antecedentin terms of frequency of use the most frequently used indicator proved to be referential distance used in 989 of the cases followed by term preference givenness lexical reiteration definiteness section heading immediate reference and collocation as expected the most frequent indicators were not the most discriminative oneswe felt appropriate to extend the evaluation of our approach by comparing it to breck baldwin cogmac approach which features quothigh precision coreference with limited knowledge and linguistics resourcesquotthe reason is that both our approach and breck baldwin approach share common principles and therefore a comparison would be appropriategiven that our approach is robust and returns antecedent for each pronoun in order to make the comparison as fair as possible we used cogniac quotresolve allquot version by simulating it manually on the same training data used in evaluation b abovecogniac successfully resolved the pronouns in 75 of the casesthis result is comparable with the results described in for the training data from the genre of technical manuals it was rule 5 which was most frequently used followed by rule 8 rule 7 rule 1 and rule 3 it would be fair to say that even though the results show superiority of our approach on the training data used they cannot be generalised automatically for other genres or unrestricted texts and for a more accurate picture further extensive tests are necessaryan attractive feature of any nlp approach would be its language quotuniversalityquotwhile we acknowledge that most of the monolingual nlp approaches are not automatically transferable to other languages it would be highly desirable if this could be done with minimal adaptationwe used the robust approach as a basis for developing a genrespecific reference resolution approach in polishas expected some of the preferences had to be modified in order to fit with specific features of polish for the time being we are using the same scores for polishthe evaluation for polish was based technical manuals available on the internet the sample texts contained 180 pronouns among which were 120 instances of exophoric reference the robust approach adapted for polish demonstrated a high success rate of 933 in resolving anaphors similarly to the evaluation for english we compared the approach for polish with a baseline model which discounts candidates on the basis of agreement in number and gender and if there were still competing candidates selects as the 
antecedent the most recent subject matching the anaphor in gender and number a baseline model which checks agreement in number and gender and if there were still more than one candidate left picks up as the antecedent the most recent noun phrase that agrees with the anaphorour preferencebased approach showed clear superiority over both baseline modelsthe first baseline model was successful in only 237 of the cases whereas the second had a success rate of 684therefore the 933 success rate demonstrates a dramatic increase in precision which is due to the use of antecedent tracking preferenceswe have recently adapted the approach for arabic as well our evaluation based on 63 examples from a technical manual indicates a success rate of 952 we have described a robust knowledgepoor approach to pronoun resolution which operates on texts preprocessed by a partofspeech taggerevaluation shows a success rate of 897 for the genre of technical manuals and at least in this genre the approach appears to be more successful than other similar methodswe have also adapted and evaluated the approach for polish and for arabic
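The resolution step described above sums indicator scores over the gender- and number-compatible candidates and proposes the highest scorer, with ties broken by immediate reference, then collocation, then indicating verbs, and finally recency. The sketch below shows that aggregation with a simplified tie-break (indicating verbs omitted); only the +2 for lexical reiteration and the -1 for an NP inside a prepositional phrase are weights quoted in the text, the rest are illustrative placeholders, and the candidate features are assumed to come from the POS-tagger-plus-NP-grammar pipeline the passage describes.

```python
# Schematic scoring of candidate antecedents.  Each candidate is a dict of
# boolean indicator features; only two weights below are quoted in the text,
# the others are illustrative.
INDICATOR_WEIGHTS = {
    "definite": 1, "given": 1, "indicating_verb": 1, "lexical_reiteration": 2,
    "section_heading": 1, "in_prepositional_phrase": -1,
    "collocation": 2, "immediate_reference": 2, "term": 1,
}

def aggregate_score(candidate):
    return sum(w for feat, w in INDICATOR_WEIGHTS.items() if candidate.get(feat))

def resolve(candidates):
    """Highest aggregate score wins; ties broken by immediate reference,
    then collocation, then recency (candidates are listed left to right,
    so a higher index means a more recent mention)."""
    ranked = sorted(
        enumerate(candidates),
        key=lambda ic: (aggregate_score(ic[1]),
                        ic[1].get("immediate_reference", False),
                        ic[1].get("collocation", False),
                        ic[0]),
        reverse=True,
    )
    return ranked[0][1]["np"]

candidates = [
    {"np": "the cassette", "definite": True, "given": True},
    {"np": "the VCR", "definite": True, "in_prepositional_phrase": True},
]
print(resolve(candidates))   # -> 'the cassette'
```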
P98-2143
robust pronoun resolution with limited knowledgemost traditional approaches to anaphora resolution rely heavily on linguistic and domain knowledgeone of the disadvantages of developing a knowledgebased system however is that it is a very labourintensive and timeconsuming taskthis paper presents a robust knowledgepoor approach to resolving pronouns in technical manuals which operates on texts preprocessed by a partofspeech taggerinput is checked against agreement and for a number of antecedent indicatorscandidates are assigned scores by each indicator and the candidate with the highest score is returned as the antecedentevaluation reports a success rate of 897 which is better than the success rates of the approaches selected for comparison and tested on the same datain addition preliminary experiments show that the approach can be successfully adapted for other languages with minimum modificationswe first apply a set of constraints to filter grammatically incompatible candidate antecedents and then rank the remaining ones using salience factorswe find that the current evaluation of anaphora resolution algorithms and systems is bereft of any common ground for comparison due to the difference in evaluation data as well as the diversity of preprocessing tools employed by each anaphora resolution system
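The summary restates the pipeline as agreement constraints followed by salience ranking. Below is a small sketch of the agreement filter, including an exemption of the kind the text mentions for English collective nouns that can be referred to by "it"; the exception list and the feature encoding are illustrative, not the comprehensive list the authors describe.

```python
# Gender/number agreement filter applied before the salience indicators.
# Collective nouns are exempted from the number test, as discussed for
# English; the lists here are token examples only.
COLLECTIVE_EXCEPTIONS = {"team", "government", "parliament", "company"}

def agrees(anaphor, candidate):
    """anaphor and candidate are dicts with 'head', 'number' and 'gender' fields."""
    if candidate["head"].lower() in COLLECTIVE_EXCEPTIONS:
        number_ok = True
    else:
        number_ok = anaphor["number"] == candidate["number"]
    gender_ok = anaphor["gender"] in (candidate["gender"], "any")
    return number_ok and gender_ok

it = {"head": "it", "number": "sg", "gender": "neut"}
candidates = [
    {"head": "printers", "number": "pl", "gender": "neut"},
    {"head": "paper", "number": "sg", "gender": "neut"},
]
print([c["head"] for c in candidates if agrees(it, c)])   # -> ['paper']
```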
multilingual authoring using feedback texts there are obvious reasons for trying to automate the production of multilingual documentation especially for routine subjectmatter in restricted domains two approaches have been adopted machine translation of a source text and multilingual natural language generation from a knowledge base for mt information extraction is a major difficulty since the meaning must be derived by analysis of the source text mnlg avoids this difficulty but seems at first sight to require an expensive phase of knowledge engineering in order to encode the meaning we introduce here a new technique which employs mnlg during the phase of knowledge editing a feedback text generated from a possibly incomplete knowledge base describes in natural language the knowledge encoded so far and the options for extending it this method allows anyone speaking one of the supported languages to produce texts in all of them requiring from the author only expertise in the subjectmatter not expertise in knowledge engineering the production of multilingual documentation has an obvious practical importancecompanies seeking global markets for their products must provide instructions or other reference materials in a variety of languageslarge political organizations like the european union are under pressure to provide multilingual versions of official documents especially when communicating with the publicthis need is met mostly by human translation an author produces a source document which is passed to a number of other people for translation into other languageshuman translation has several wellknown disadvantagesit is not only costly but timeconsuming often delaying the release of the product in some markets also the quality is uneven and hard to control for all these reasons the production of multilingual documentation is an obvious candidate for automation at least for some classes of documentnobody expects that automation will be applied in the foreseeable future for literary texts ranging over wide domains however there is a mass of nonliterary material in restricted domains for which automation is already a realistic aim instructions for using equipment are a good examplethe most direct attempt to automize multilingual document production is to replace the human translator by a machinethe source is still a natural language document written by a human author a program takes this source as input and produces an equivalent text in another language as outputmachine translation has proved useful as a way of conveying roughly the information expressed by the source but the output texts are typically poor and overliteralthe basic problem lies in the analysis phase the program cannot extract from the source all the information that it needs in order to produce a good output textthis may happen either because the source is itself poor or because the source uses constructions and concepts that lie outside the program rangesuch problems can be alleviated to some extent by constraining the source document eg through use of a controlled language such as aecma an alternative approach to translation is that of generating the multilingual documents from a nonlinguistic sourcein the case of automatic multilingual natural language generation the source will be a knowledge base expressed in a formal languageby eliminating the analysis phase of mt mnlg can yield highquality output texts free from the literal quality that so often arises from structural imitation of an input textunfortunately this benefit 
is gained at the cost of a huge increase in the difficulty of obtaining the sourceno longer can the domain expert author the document directly by writing a text in natural languagedefining the source becomes a task akin to building an expert system requiring collaboration between a domain expert and a knowledge engineer owing to this cost mnlg has been applied mainly in contexts where the knowledge base is already available having been created for another purpose for discussion see reiter and mellish is there any way in which a domain expert might author a knowledge base without going through this timeconsuming and costly collaboration with a knowledge engineerassuming that some kind of mediation is needed between domain expert and knowledge formalism the only alternative is to provide easier tools for editing knowledge basessome knowledge management projects have experimented with graphical presentations which allow editing by direct manipulation so that there is no need to learn the syntax of a programming language see for example skuce and lethbridge this approach has also been adopted in two mnlg systems gist which generates social security forms in english italian and german and drafter which generates instructions for software applications in english and frenchthese projects were the first attempts to produce symbolic authoring systems that is systems allowing a domain expert with no training in knowledge engineering to author a knowledge base from which texts in many languages can be generatedalthough helpful graphical tools for managing knowledge bases remain at best a compromise solutiondiagrams may be easier to understand than logical formalisms but they still lack the flexibility and familiarity of natural ianguage text as empirical studies on editing diagrammatic representations have shown for discussion see power et al this observation has led us to explore a new possibility at first sight paradoxical that of a symbolic authoring system in which the current knowledge base is presented through a natural language text generated by the systemthis kills two birds with one stone the source is still a knowledge base not a text so no problem of analysis arises but this source is presented to the author in natural language through what we will call a feedback textas we shall see the feedback text has some special features which allow the author to edit the knowledge base as well as viewing its contentswe have called this editing method wysiwym or what you see is what you meant a natural language text presents a knowledge base that the author has built by purely semantic decisions feedback texts to the authorthe feedback texts will include mousesensitive anchors allowing the author to make semantic decisions eg by selecting options from popup menusthe wysiwym system allows a domain expert speaking any one of the supported languages to produce good output texts in all of thema more detailed description of the architecture is given in scott et al 2 example of a wysiwym system the first application of wysiwym was drafterii a system which generates instuctions for using word processors and diary managersat present three languages are supported english french and italianas an example we will follow a session in which the author encodes instructions for scheduling an appointment with the openwindows calendar managerthe desired content is shown by the following output text which the system will generate when the knowledge base is complete to schedule the appointment before starting open the 
appointment editor window by choosing the appointment option from the edit menuthen proceed as follows in outline the knowledge base underlying this text is as followsthe whole instruction is represented by a procedure instance with two attributes a goal and a methodthe method instance also has two attributes a precondition and a sequence of steps preconditions and steps are procedures in their turn so they may have methods as well as goalseventually we arrive at subprocedures for which no method is specified it is assumed that the reader of the manual will be able to click on the insert button without being told howsince in drafterh every output text is based on a procedure a newly initialised knowledge base is seeded with a single procedure instance for which the goal and method are undefinedin prolog notation we can represent such a knowledge base by the following assertions procedure goal method here prod l is an identifier for the procedure instance the assertion procedure means that this is an instance of type procedure and the assertion goal means that prod has a goal attribute for which the value is currently undefined when a new knowledge base is created drafterii presents it to the author by generating a feedback text in the currently selected languageassuming that this language is english the instruction to the generator will be generate and the feedback text displayed to the author will be achieve this goal by applying this methodthis text has several special features ing on an anchor the author obtains a popup menu listing the permissible values of the attribute by selecting one of these options the author updates the knowledge basealthough the anchors may be tackled in any order we will assume that the author proceeds from left to rightclicking on this goal yields the popup menu choose click close create save schedule start from which the author selects cheduleeach option in the menu is associated with an updater a prolog term that specifies how the knowledge base should be updated if the option is selectedin this case the updater is insert meaning that an instance of type schedule should become the value of the goal attribute on prodrunning the updater yields an extended knowledge base including a new instance schedl with an undefined attribute actee procedure goal schedule actee method from the updated knowledge base the generator produces a new feedback textschedule this event by applying this methodnote that this text has been completely regeneratedit was not produced from the previous text merely by replacing the anchor this goal by a longer stringcontinuing to specify the goal the author now clicks on this event appointment meeting this time the intended selection is appointment but let us assume that by mistake the author drags the mouse too far and selects meetingthe feedback text schedule the meeting by applying this method immediately shows that an error has been made but how can it be correctedthis problem is solved in wysiwym by allowing the author to select any span of the feedback text that represents an attribute with a specified value and to cut it so that the attribute becomes undefined while its previous value is held in a buffereven large spans representing complex attribute values can be treated in this way so that complex chunks of knowledge can be copied across from one knowledge base to anotherwhen the author selects the phrase the meeting the system displays a popup menu with two options cut copy by selecting cut the author activates the updater cut 
which updates the knowledge base by removing the instance meetl currently the value of the actee attribute on schedl and holding it in a bufferwith this attribute now undefined the feedback text reverts to schedule this event by applying this method whereupon the author can once again expand this eventthis time however the popup menu that opens on this anchor will include an extra option that of pasting back the material that has just been cutof course this option is only provided if the instance currently held in the buffer is a suitable value for the attribute represented by the anchorpaste appointment meeting the paste option here will be associated with the updater paste which would assign the instance currently in the buffer in this case meet1 as the value of the actee attribute on sched1fortunately the author avoids reinstating this error and selects appointment yielding the following reassuring feedback text schedule the appointment by applying this methodnote incidentally that this text presents a knowledge base that is potentially complete since all obligatory attributes have been specifiedthis can be immediately seen from the absence of any red anchorsintending to add a method the author now clicks on this methodin this case the popup menu shows only one option method running the associated updater yields the following knowledge base procedure goal schedule actee appointment method method precondition steps steps first procedure goal method rest meeting a considerable expansion has taken place here because the system has been configured to automatically instantiate obligatory attributes that have only one permissible type of valuesince the steps attribute on method1 is obligatory and must have a value of type steps the instance steps1 is immediately createdin its turn this instance has the attributes first and rest where first is obligatory and must be filled by a procedurea second procedure instance proc2 is therefore created with its own goal and methodto incorporate all this new material the feedback text is recast in a new pattern the main goal being expressed by an infinitive construction instead of an imperative to schedule the appointment first achieve this preconditionthen follow these stepsnote that at any stage the author can switch to one of the other supported languages egfrenchthis will result in a new call to the generator and hence in a new feedback text expressing the procedure prodinsertion du rendezvous avant de commencer accomplir cette tcicheexecuter les actions suivantesclicking for example on cette action will now yield the usual options for instanciating a goal attribute but expressed in frenchthe associated updaters are identical to those for the corresponding menu in english choix cliquer fermer enregistrement insertion lancement the now be clear so let us advance to a later stage in which the scheduling procedure has been fully encodedto schedule the appointment first open the appointment editor windowthen follow these stepsto open the appointment editor window first achieve this preconditionthen follow these steps basic mechanism should two points about this feedback text are worth generate notingfirst to avoid overcrowding the main paragraph the text planner has deferred the subprocedure for opening the appointment editor window which is presented in a separate paragraphto maintain a connection the action of opening the appointment editor window is mentioned twice secondly no red anchors are left so the knowledge base is potentially completethis means 
that the author may now generate an output text by switching the modality from feedback to outputthe resulting instruction to the generator will be generate yielding the output text shown at the beginning of the sectionfurther output texts can be obtained by switching to another language egfrench insertion du rendezvous avant de commencer ouvrir la fenetre appointment editor en choisissant loption appointment dans le menu editexecuter les actions suivantes 1 choisir lheure de fin du rendezvous2 inserer la description du rendezvous dans la zone de texte what3 cliquer sur le bouton insertnote that in output modality the generator ignores optional undefined attributes the method for opening the appointment editor window thus reduces to a single action which can be reunited with its goal in the main paragraphwysiwym editing is a new idea that requires practical testingwe have not yet carried out formal usability trials nor investigated the design of feedback texts nor confirmed that adequate response times could be obtained for fullscale applicationshowever if satisfactory largescale implementations prove feasible the method brings many potential benefits a document in natural language is the most flexible existing medium for presenting informationwe cannot be sure that all meanings can be expressed clearly in network diagrams or other specialized presentations we can be sure they can be expressed in a document it seems intuitively obvious that authors will understand feedback texts much better than they understand alternative methods of presenting knowledge bases such as network diagramsour experience has been that people can learn to use the drafterii system in a few minutes authors require no training in a controlled language or any other presentational conventionthis avoids the expense of initial training it also means that presentational conventions need not be relearned when a knowledge base is reexamined after a delay of months or years since the knowledge base is presented through a document in natural language it becomes immediately accessible to anyone peripherally concerned with the project documentation of the knowledge base often a tedious and timeconsuming task becomes automatic the model can be viewed and edited in any natural language that is supported by the generator further languages can be added as neededwhen supported by a multilingual natural language generation system as in drafterii wysiwym editing obviates the need for traditional language localisation of the humancomputer interfacenew linguistic styles can also be added as a result wysiwym editing is ideal for facilitating knowledge sharing and transfer within a multilingual projectspeakers of several different languages could collectively edit the same knowledge base each user viewing and modifying the knowledge in hisher own language navigated by the methods familiar from books and from complex electronic documents obviating any need for special training in navigationthe crucial advantage of wysiwym editing compared with alternative natural language interfaces is that it eliminates all the usual problems associated with parsing and semantic interpretationfeedback texts with menus have been used before in the nlmenu system but only as a means of presenting syntactic optionsnlmenu guides the author by listing the extensions of the current sentence that are covered by its grammar in this way it makes parsing more reliable by enforcing adherence to a sublanguage but parsing and interpretation are still requiredso far 
wysiwym editing has been implemented in two domains software instructions and patient information leafletswe are currently evaluating the usability of these systems partly to confirm that authors do indeed find them easy to use and partly to investigate issues in the design of feedback texts
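the editing cycle described above (typed instances, obligatory and optional attributes, cut and paste via a buffer, automatic instantiation of obligatory attributes that admit only one type of filler, and red anchors for missing obligatory values) can be made concrete with a small sketch. this is not the drafter-ii implementation: the schema, the class names (Instance, Editor) and the attribute types are illustrative assumptions, and text generation itself is omitted.

```python
# Minimal sketch of the editing cycle described above (not the DRAFTER-II
# code): a knowledge base of typed instances, obligatory/optional attributes,
# a cut buffer, and automatic instantiation of obligatory attributes that
# admit exactly one type of filler.  All names here are illustrative.

SCHEMA = {
    # type: {attribute: (permissible filler types, obligatory?)}
    "procedure": {"goal": (("schedule", "open"), True),
                  "method": (("method",), False)},
    "method":    {"precondition": (("procedure",), False),
                  "steps": (("steps",), True)},
    "steps":     {"first": (("procedure",), True),
                  "rest": (("procedure",), False)},
    "schedule":  {"actee": (("appointment", "meeting"), False)},
}

class Instance:
    counter = {}
    def __init__(self, itype):
        self.itype = itype
        n = Instance.counter.get(itype, 0) + 1
        Instance.counter[itype] = n
        self.name = f"{itype}{n}"
        self.attrs = {a: None for a in SCHEMA.get(itype, {})}
        # automatic instantiation: an obligatory attribute with exactly one
        # permissible type is filled immediately (cf. steps1, proc2 above)
        for attr, (types, obligatory) in SCHEMA.get(itype, {}).items():
            if obligatory and len(types) == 1:
                self.attrs[attr] = Instance(types[0])

class Editor:
    def __init__(self, root_type):
        self.root = Instance(root_type)
        self.buffer = None                      # holds a cut instance

    def cut(self, inst, attr):
        """Updater for 'cut': remove the filler and hold it in the buffer."""
        self.buffer = inst.attrs[attr]
        inst.attrs[attr] = None

    def paste(self, inst, attr):
        """Offered only if the buffered instance fits the attribute's type."""
        types, _ = SCHEMA[inst.itype][attr]
        if self.buffer is not None and self.buffer.itype in types:
            inst.attrs[attr], self.buffer = self.buffer, None

    def red_anchors(self, inst=None):
        """Obligatory attributes still undefined -> the text is incomplete."""
        inst = inst or self.root
        missing = []
        for attr, (types, obligatory) in SCHEMA.get(inst.itype, {}).items():
            value = inst.attrs[attr]
            if value is None and obligatory:
                missing.append((inst.name, attr))
            elif isinstance(value, Instance):
                missing += self.red_anchors(value)
        return missing

ed = Editor("procedure")
print(ed.red_anchors())            # [('procedure1', 'goal')]: one red anchor
ed.root.attrs["goal"] = sched = Instance("schedule")
sched.attrs["actee"] = Instance("appointment")
ed.cut(sched, "actee")             # the cut updater: filler moves to the buffer
print(ed.buffer.name)              # 'appointment1' can now be pasted back
```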
P98-2173
multilingual authoring using feedback textsthere are obvious reasons for trying to automate the production of multilingual documentation especially for routine subjectmatter in restricted domains two approaches have been adopted machine translation of a source text and multilingual natural language generation from a knowledge basefor mt information extraction is a major difficulty since the meaning must be derived by analysis of the source text mnlg avoids this difficulty but seems at first sight to require an expensive phase of knowledge engineering in order to encode the meaningwe introduce here a new technique which employs mnlg during the phase of knowledge editinga feedback text generated from a possibly incomplete knowledge base describes in natural language the knowledge encoded so far and the options for extending itthis method allows anyone speaking one of the supported languages to produce texts in all of them requiring from the author only expertise in the subjectmatter not expertise in knowledge engineeringwe propose wysiwym as a method for the authoring of semantic information through direct manipulation of structures rendered in natural language textin this system logical forms are entered interactively and the corresponding linguistic realization of the expressions is generated in several languages
statistical models for unsupervised prepositional phrase attachment several unsupervised statistical models for the prepositional phrase attachment task that approach the accuracy of the best supervised methods for this task our unsupervised approach uses a heuristic based on attachment proximity and trains from raw text that is annotated with only partofspeech tags and morphological base forms as opposed to attachment information it is therefore less resourceintensive and more portable than previous corpusbased algorithm proposed for this task we present results for prepositional phrase attachment in both english and spanish prepositional phrase attachment is the task of deciding for a given preposition in a sentence the attachment site that corresponds to the interpretation of the sentencefor example the task in the following examples is to decide whether the preposition with modifies the preceding noun phrase or the preceding verb phrase in sentence 1 with modifies the noun shirt since with pockets describes the shirthowever in sentence 2 with modifies the verb washed since with soap describes how the shirt is washedwhile this form of attachment ambiguity is usually easy for people to resolve a computer requires detailed knowledge about words in order to successfully resolve such ambiguities and predict the correct interpretationmost of the previous successful approaches to this problem have been statistical or corpusbased and they consider only prepositions whose attachment is ambiguous between a preceding noun phrase and verb phraseprevious work has framed the problem as a classification task in which the goal is to predict n or v corresponding to noun or verb attachment given the head verb v the head noun n the preposition p and optionally the object of the preposition n2for example the tuples corresponding to the example sentences are the correct classifications of tuples 1 and 2 are n and v respectively describes a partially supervised approach in which the fidditch partial parser was used to extract tuples from raw text where p is a preposition whose attachment is ambiguous between the head verb v and the head noun n the extracted tuples are then used to construct a classifier which resolves unseen ambiguities at around 80 accuracylater work such as trains and tests on quintuples of the form extracted from the penn treebank and has gradually improved on this accuracy with other kinds of statistical learning methods yielding up to 845 accuracyrecently have reported 88 accuracy by using a corpusbased model in conjunction with a semantic dictionarywhile previous corpusbased methods are highly accurate for this task they are difficult to port to other languages because they require resources that are expensive to construct or simply nonexistent in other languageswe present an unsupervised algorithm for prepositional phrase attachment in english that requires only an partofspeech tagger and a morphology database and is therefore less resourceintensive and more portable than previous approaches which have all required either treebanks or partial parsersthe exact task of our algorithm will be to construct a classifier c which maps an instance of an ambiguous prepositional phrase to either n or v corresponding to noun attachment or verb attachment respectivelyin the full natural language parsing task there are more than just two potential attachment sites but we limit our task to choosing between a verb v and a noun n so that we may compare with previous supervised attempts on this 
problemwhile we will be given the candidate attachment sites during testing the training procedure assumes no a priori information about potential attachment siteswe generate training data from raw text by using a partofspeech tagger a simple chunker an extraction heuristic and a morphology databasethe order in which these tools are applied to raw text is shown in table 1the tagger from first annotates sentences of raw text with a sequence of partofspeech tagsthe chunker implemented with two small regular expressions then replaces simple noun phrases and quantifier phrases with their head wordsthe extraction heuristic then finds head word tuples and their likely attachments from the tagged and chunked textthe heuristic relies on the observed fact that in english and in languages with similar word order the attachment site of a preposition is usually located only a few words to the left of the prepositionfinally numbers are replaced by a single token the text is converted to lower case and the morphology database is used to find the base forms of the verbs and nounsthe extracted head word tuples differ from the training data used in previous supervised attempts in an important wayin the supervised case both of the potential sites namely the verb v and the noun n are known before the attachment is resolvedin the unsupervised case discussed here the extraction heuristic only finds what it thinks are unambiguous cases of prepositional phrase attachmenttherefore there is only one possible attachment site for the preposition and either the verb v or the noun n does not exist in the case of nounattached preposition or a verbattached preposition respectivelythis extraction heuristic loosely resembles a step in the bootstrapping procedure used to get training data for the classifier of in that step unambiguous attachments from the fidditch parser output are initially used to resolve some of the ambiguous attachments and the resolved cases are iteratively used to disambiguate the remaining unresolved casesour procedure differs critically from in that we do not iterate we extract unambiguous attachments from unparsed input sentences and we totally ignore the ambiguous casesit is the hypothesis of this approach that the information in just the unambiguous attachment events can resolve the ambiguous attachment events of the test datagiven a tagged and chunked sentence the extraction heuristic returns head word tuples of the form or where v is the verb n is the noun p is the preposition n2 is the object of the prepositionthe main idea of the extraction heuristic is that an attachment site of a preposition is usually within a few words to the left of the prepositionwe extract if table 1 also shows the result of the applying the extraction heuristic to a sample sentencethe heuristic ignores cases where p of since such cases are rarely ambiguous and we opt to model them deterministically as noun attachmentswe will report accuracies on both cases where p of and where p 0 ofalso the heuristic excludes examples with the verb to be from the training set since we found them to be unreliable sources of evidenceapplying the extraction heuristic to 970k unannotated sentences from the 1988 wall st journal data yields approximately 910k unique head word tuples of the form or the extraction heuristic is far from perfect when applied to and compared with the annotated wall st journal data of the penn treebank only 69 of the extracted head word tuples represent correct attachments2 the extracted tuples are meant to be 
a noisy but abundant substitute for the information that one might get from a treebanktables 2 and 3 list the most frequent extracted head word tuples for unambiguous verb and noun attachments respectivelymany of the frequent nounattached tuples such as num to num3 are incorrectthe prepositional phrase to num is usually attached to a verb such as rise or fall in the wall st journal domain eg profits rose 46 to 52 millionwhile the extracted tuples of the form and represent unambiguous noun and verb attachments in which either the verb or noun is known our eventual goal is to resolve ambiguous attachments in the test data of the form in which both the noun n and verb v are always knownwe therefore must use any information in the unambiguous cases to resolve the ambiguous casesa natural way is to use a classifier that compares the probability of each outcome we do not currently use n2 in the probability model and we omit it from further discussionwe can factor pr as follows the terms pr and pr are independent of the attachment a and need not be computed in c but the estimation of pr and pr is problematic since our training data ie the head words extracted from raw text occur with either n or v but never both n v this leads to make some heuristically motivated approximationslet the random variable 0 range over true false and let it denote the presence or absence of any preposition that is unambiguously attached to the noun or verb in questionthen p is the conditional probability that a particular noun n in free text has an unambiguous prepositional phrase attachmentwe approximate pr as follows the rationale behind this approximation is that the tendency of a v n pair towards a noun attachment is related to the tendency of the noun alone to occur with an unambiguous prepositional phrasethe z term exists only to make the approximation a well formed probability over a e in 171we approximate pr as follows the rationale behind these approximations is that when generating p given a noun attachment only the counts involving the noun are relevant assuming also that the noun has an attached prepositional phrase ie 0 truewe use word statistics from both the tagged corpus and the set of extracted head word tuples to estimate the probability of generating 0 true p and n2counts from the extracted set of tuples assume that 0 true while counts from the corpus itself may correspond to either 0 true or 0 false depending on if the noun if p of otherwise or verb in question is or is not respectively unambiguously attached to a prepositionthe quantities pr and pr denote the conditional probability that n or v will occur with some unambiguously attached preposition and are estimated as follows p and define cn ep cap as the number of noun attached tuplesanalogously define cv e c and cv ep cv the counts c and c are from the extracted head word tuplesusing the above notation we can interpolate as follows where c and c are counts from the tagged corpus and where c and c are counts from the extracted head word tuplesthe terms pr and pr denote the conditional probability that a particular preposition p will occur as an unambiguous attachment to n or v we present two techniques to estimate this probability one based on bigram counts and another based on an interpolation methodthis technique uses the bigram counts of the extracted head word tuples and backs off to the uniform distribution when the denominator is zero where 7 is the set of possible prepositions where all the counts c are from the extracted head word 
tuplesthis technique is similar to the one in and interpolates between the tendencies of the and bigrams and the tendency of the type of attachment towards a particular preposition p first define cn en c as the number of noun attached tuples with the prepositionapproximately 970k unannotated sentences from the 1988 wall st journal were processed in a manner identical to the example sentence in table 1the result was approximately 910000 head word tuples of the form or note that while the head word tuples represent correct attachments only 69 of the time their quantity is about 45 times greater than the quantity of data used in previous supervised approachesthe extracted data was used as training material for the three classifiers clbasel chnterp and cibigram each classifier is constructed as follows cl base this is the quotbaselinequot classifier that predicts n of p of and v otherwise cinterp this classifier has the form of equation uses the method in section 41 to generate 0 and the method in section 422 to generate p clbigram this classifier has the form of equation uses the method in section 41 to generate 0 and the method in section 421 to generate p table 4 shows accuracies of the classifiers on the test set of which is derived from the manually annotated attachments in the penn treebank wall st journal datathe penn treebank is drawn from the 1989 wall st journal data so there is no possibility of overlap with our training datafurthermore the extraction heuristic was developed and tuned on a quotdevelopment setquot ie a set of annotated examples that did not overlap with either the test set or the training settable 5 shows the two probabilities pr and pr using the same approximations as cbigram for the ambiguous example rise num to num and pr are not neededwhile the tuple is more frequent than the conditional probabilities prefer a v which is the choice that maximizes pr both classifiers cinterp and cbigram clearly outperform the baseline but the classifier does not outperform dbigraml even though it interpolates between the less specific evidence and more specific evidence this may be due to the errors in our extracted training data supervised classifiers that train from clean data typically benefit greatly by combining less specific evidence with more specific evidencedespite the errors in the training data the performance of the unsupervised classifiers begins to approach the best performance of the comparable supervised classifiers furthermore we do not use the second noun n2 whereas the best supervised methods use this informationour result shows that the information in imperfect but abundant data from unambiguous attachments as shown in tables 2 and 3 is sufficient to resolve ambiguous prepositional phrase attachments at accuracies just under the supervised stateoftheart accuracywe claim that our approach is portable to languages with similar word order and we support this claim by demonstrating our approach on the spanish languagewe used the spanish tagger and morphological analyzer developed at the xerox research centre europe4 and we modified the extraction heuristic to account for the new tagset and to account for the spanish equivalents of the words of and to be chunking was not performed on the spanish datawe used 450k sentences of raw text from the linguistic data consortium spanish news text collection to extract a training set and we used a nonoverlapping set of 50k sentences from the collection to create test setsthree native spanish speakers were asked to extract and 
annotate ambiguous instances of spanish prepositional phrase attachmentsthey annotated two sets one set consisted of all ambiguous prepositional phrase attachments of the form and the other set consisted of cases where p confor testing our classifier we used only those judgments on which all three annotators agreedthe performance of the classifiers clbigram dinterp and cbase when trained and tested on spanish language data are shown in table 6the spanish test set has fewer ambiguous prepositions than the english test set as shown by the accuracy of cibahowever the accuracy improvements of clbygrara over cl ba are statistically significant for both test sets5the unsupervised algorithm for prepositional phrase attachment presented here is the only algorithm in the published literature that can significantly outperform the baseline without using data derived from a treebank or parserthe accuracy of our technique approaches the accuracy of the best supervised methods and does so with only a tiny fraction of the supervisionsince only a small part of the extraction heuristic is specific to english and since partofspeech taggers and morphology databases are widely available in other languages our approach is far more portable than previous approaches for this problemwe successfully demonstrated the portability of our approach by applying it to the prepositional phrase attachment task in the spanish languagewe thank dr lauri kartunnen for lending us the spanish natural language tools and mike collins for helpful discussions on this work
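the extraction heuristic is only summarised above and its four conditions are not reproduced, so the following is a hedged reconstruction rather than the published procedure: within a small window to the left of a preposition other than of, a lone noun with no intervening verb is taken as an unambiguous noun attachment, and a lone verb with no intervening noun as an unambiguous verb attachment, with forms of to be excluded. penn-style tags, the window size W and the function names are assumptions.

```python
# Hedged reconstruction of the extraction heuristic: the attachment site of
# an unambiguously attached preposition is assumed to lie within W tokens to
# its left.  Input is a chunked sentence as (word, tag) pairs.

W = 3  # assumed window size

def is_noun(tag): return tag.startswith("NN")
def is_verb(tag): return tag.startswith("VB")

BE = {"is", "are", "was", "were", "be", "been", "being"}

def extract(tagged):
    """Yield ('N', n, p, n2) or ('V', v, p, n2) for unambiguous cases."""
    for i, (word, tag) in enumerate(tagged):
        if tag != "IN" or word.lower() == "of":
            continue                      # 'of' is modelled deterministically
        left = tagged[max(0, i - W):i]
        nouns = [w for w, t in left if is_noun(t)]
        verbs = [w for w, t in left if is_verb(t) and w.lower() not in BE]
        # object of the preposition: first noun a few words to the right
        n2 = next((w for w, t in tagged[i + 1:i + 1 + W] if is_noun(t)), None)
        if n2 is None:
            continue
        if nouns and not verbs:
            yield ("N", nouns[-1].lower(), word.lower(), n2.lower())
        elif verbs and not nouns:
            yield ("V", verbs[-1].lower(), word.lower(), n2.lower())
        # anything else is ambiguous and is ignored during training

sent = [("they", "PRP"), ("washed", "VBD"), ("it", "PRP"),
        ("with", "IN"), ("soap", "NN")]
print(list(extract(sent)))   # [('V', 'washed', 'with', 'soap')]
```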
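the disambiguation model itself compares a noun-attachment score with a verb-attachment score built from the unambiguous tuples. the sketch below corresponds roughly to the bigram variant of the classifier (the names cl_base, cl_interp and cl_bigram appear garbled in the text above): the attachment-tendency term here is the simple ratio of unambiguously attached occurrences to corpus occurrences rather than the interpolated estimate, whose formula is elided above, the preposition model is the bigram estimate with a uniform back-off, and of is handled deterministically as noun attachment. the class name and the toy counts are mine.

```python
# Hedged sketch of the unsupervised PP-attachment classifier: compare
# Pr(N-tendency of n) * Pr(p | n, attached) with the analogous verb score.

from collections import Counter

class UnsupervisedPPAttacher:
    def __init__(self, tuples, corpus_counts, prepositions):
        # tuples: iterable of ('N'|'V', head, p, n2) from the heuristic
        # corpus_counts: raw word frequencies from the tagged corpus
        self.attached = {"N": Counter(), "V": Counter()}   # c(head with a PP)
        self.bigram = {"N": Counter(), "V": Counter()}     # c(head, p)
        for a, head, p, _n2 in tuples:
            self.attached[a][head] += 1
            self.bigram[a][(head, p)] += 1
        self.corpus = corpus_counts
        self.preps = set(prepositions)

    def tendency(self, a, head):
        """Pr(some unambiguous PP | head), estimated as a simple ratio."""
        total = self.corpus.get(head, 0)
        return self.attached[a][head] / total if total else 0.0

    def p_given_head(self, a, head, p):
        """Bigram estimate of Pr(p | head, attached), uniform back-off."""
        denom = self.attached[a][head]
        if denom == 0:
            return 1.0 / len(self.preps)
        return self.bigram[a][(head, p)] / denom

    def classify(self, v, n, p):
        if p == "of":                      # 'of' defaults to noun attachment
            return "N"
        score_n = self.tendency("N", n) * self.p_given_head("N", n, p)
        score_v = self.tendency("V", v) * self.p_given_head("V", v, p)
        return "N" if score_n >= score_v else "V"

tuples = [("V", "washed", "with", "soap"), ("N", "shirt", "with", "pockets")]
corpus = Counter({"washed": 3, "shirt": 4})
clf = UnsupervisedPPAttacher(tuples, corpus, {"with", "in", "on", "for"})
print(clf.classify("washed", "shirt", "with"))   # compares the two scores
```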
P98-2177
statistical models for unsupervised prepositional phrase attachmentwe present several unsupervised statistical models for the prepositional phrase attachment task that approach the accuracy of the best supervised methods for this taskour unsupervised approach uses a heuristic based on attachment proximity and trains from raw text that is annotated with only partofspeech tags and morphological base forms as opposed to attachment informationit is therefore less resourceintensive and more portable than previous corpusbased algorithm proposed for this taskwe present results for prepositional phrase attachment in both english and spanishwe first assume noun attachment for all ofpps and then apply our disambiguation methods to all remaining pps
mindnet acquiring and structuring semantic information from text as a lexical knowledge base constructed automatically from the definitions and example sentences in two machinereadable dictionaries mindnet embodies several features that distinguish it from prior work with mrds it is however more than this static resource alone mindnet represents a general methodology for acquiring structuring accessing and exploiting semantic information from natural language text this paper provides an overview of the distinguishing characteristics of mindnet the steps involved in its creation and its extension beyond dictionary text in this paper we provide a description of the salient characteristics and functionality of mindnet as it exists today together with comparisons to related workwe conclude with a discussion on extending the mindnet methodology to the processing of other corpora and on future plans for mindnetfor additional details and background on the creation and use of mindnet readers are referred to richardson vanderwende and dolan et al mindnet is produced by a fully automatic process based on the use of a broadcoverage nl parsera fresh version of mindnet is built regularly as part of a normal regression processproblems introduced by daily changes to the underlying system or parsing grammar are quickly identified and fixedalthough there has been much research on the use of automatic methods for extracting information from dictionary definitions handcoded knowledge bases egwordnet continue to be the focus of ongoing researchthe euro wordnet project although continuing in the wordnet tradition includes a focus on semiautomated procedures for acquiring lexical contentoutside the realm of nlp we believe that automatic procedures such as mindnet provide the only credible prospect for acquiring world knowledge on the scale needed to support commonsense reasoningat the same time we acknowledge the potential need for the hand vetting of such information to insure accuracy and consistency in production level systemsthe extraction of the semantic information contained in mindnet exploits the very same broadcoverage parser used in the microsoft word 97 grammar checkerthis parser produces syntactic parse trees and deeper logical forms to which rules are applied that generate corresponding structures of semantic relationsthe parser has not been specially tuned to process dictionary definitionsall enhancements to the parser are geared to handle the immense variety of general text of which dictionary definitions are simply a modest subsetthere have been many other attempts to process dictionary definitions using heuristic pattern matching specially constructed definition parsers and even general coverage syntactic parsers however none of these has succeeded in producing the breadth of semantic relations across entire dictionaries that has been produced for mindnetvanderwende describes in detail the methodology used in the extraction of the semantic relations comprising mindneta truly broadcoverage parser is an essential component of this process and it is the basis for extending it to other sources of information such as encyclopedias and text corporathe different types of labeled semantic relations extracted by parsing for inclusion in mindnet are given in the table below these relation types may be contrasted with simple cooccurrence statistics used to create network structures from dictionaries by researchers including veronis and ide kozima and furugori and wilks et al labeled relations while more 
difficult to obtain provide greater power for resolving both structural attachment and word sense ambiguitieswhile many researchers have acknowledged the utility of labeled relations they have been at times either unable or unwilling to make the effort to obtain themthis deficiency limits the characterization of word pairs such as riverbank and writepen to simple relatedness whereas the labeled relations of mindnet specify precisely the relations riverpartbank and writemeanspenthe automatic extraction of semantic relations from a definition or example sentence for mindnet produces a hierarchical structure of these relations representing the entire definition or sentence from which they camesuch structures are stored in their entirety in mindnet and provide crucial context for some of the procedures described in later sections of this paperthe semrel structure for a definition of car is given in the figure below car quota vehicle with 3 or usu4 wheels and driven by a motor esp one one for carrying peoplequot early dictionarybased work focused on the extraction of paradigmatic relations in particular hypernym relations almost exclusively these relations as well as other syntagmatic ones have continued to take the form of relational triples the larger contexts from which these relations have been taken have generally not been retainedfor labeled relations only a few researchers have appeared to be interested in entire semantic structures extracted from dictionary definitions though they have not reported extracting a significant number of themafter semrel structures are created they are fully inverted and propagated throughout the entire mindnet database being linked to every word that appears in themsuch an inverted structure produced from a definition for motorist and linked to the entry for car is shown in the figure below researchers who produced spreading activation networks from mrds including veronis and ide and kozima and furugori typically only implemented forward links in those networkswords were not related backward to any of the headwords whose definitions mentioned them and words cooccurring in the same definition were not related directlyin the fully inverted structures stored in mindnet however all words are crosslinked no matter where they appearthe massive network of inverted semrel structures contained in mindnet invalidates the criticism leveled against dictionarybased methods by yarowsky and ide and veronis that lkbs created from mrds provide spotty coverage of a language at bestexperiments described elsewhere demonstrate the comprehensive coverage of the information contained in mindnetsome statistics indicating the size of the current version of mindnet and the processing time required to create it are provided in the table belowthe definitions and example sentences are from the longman dictionary of contemporary english and the american heritage dictionary ri edition inverted semrel structures facilitate the access to direct and indirect relationships between the root word of each structure which is the headword for the mindnet entry containing it and every other word contained in the structuresthese relationships consisting of one or more semantic relations connected together constitute semrel paths between two wordsfor example the semrel path between car and person in figure 2 above is carweighting schemes with similar goals are found in work by bradenharder and bookman many researchers both in the dictionary and corpusbased camps have worked extensively on developing 
methods to identify similarity between words since similarity determination is crucial to many word sense disambiguation and parametersmoothinginference procedureshowever some researchers have failed to distinguish between substitutional similarity and general relatednessthe similarity procedure of mindnet focuses on measuring substitutional similarity but a function is also provided for producing clusters of generally related wordstwo general strategies have been described in the literature for identifying substitutional similarityone is based on identifying direct paradigmatic relations between the words such as hypernym or synonymfor example paradigmatic relations in wordnet have been used by many to determine similarity including li et al and agirre and rigau the other strategy is based on identifying syntagmatic relations with other words that similar words have in commonsyntagmatic strategies for determining similarity have often been based on statistical analyses of large corpora that yield clusters of words occurring in similar bigram and trigram contexts as well as in similar predicateargument structure contexts there have been a number of attempts to combine paradigmatic and syntagmatic similarity strategies however none of these has completely integrated both syntagmatic and paradigmatic information into a single repository as is the case with mindnetthe mindnet similarity procedure is based on the topranked semrel paths between wordsfor example some of the top semrel paths in mindnet between pen and pencil are shown below penfmeansdrawmeans4pencil penfmeanswritemeanspencil penhyp4instrumenthyppencil penhypwritemeanspencil penmeanswrite many patterns being symmetrical but others notseveral experiments were performed in which word pairs from a thesaurus and an antithesaurus were used in a training phase to identify semrel path patterns that indicate similaritythese path patterns were then used in a testing phase to determine the substitutional similarity or dissimilarity of unseen word pairs the results summarized in the table below demonstrate the strength of this integrated approach which uniquely exploits both the paradigmatic and the syntagmatic relations in mindnettraining over 100000 word pairs from a thesaurus and antithesaurus produced 285000 semrel paths containing approx13500 unique path patternstesting over 100000 word pairs from a thesaurus and antithesaurus were evaluated using the path patternssimilar correct dissimilar correct 84 82 human benchmark random sample of 200 similar and dissimilar word pairs were evaluated by 5 humans and by mindnet similar correct dissimilar correct this powerful similarity procedure may also be used to extend the coverage of the relations in mindnetequivalent to the use of similarity determination in corpusbased approaches to infer absent ngrams or triples an inference procedure has been developed which allows semantic relations not presently in mindnet to be inferred from those that areit also exploits the topranked paths between the words in the relation to be inferredfor example if the relation watchmeansgelescope were not in mindnet it could be inferred by first finding the semrel paths between watch and telescope examining those paths to see if another word appears in a means relation with telescope and then checking the similarity between that word and watchas it turns out the word observe satisfies these conditions in the path watchhypobservemeanstelescope and therefore it may be inferred that one can watch by means of a 
telescopethe seamless integration of the inference and similarity procedures both utilizing the weighted extended paths derived from inverted semrel structures in mindnet is a unique strength of this approachan additional level of processing during the creation of mindnet seeks to provide sense identifiers on the words of semrel structurestypically word sense disambiguation occurs during the parsing of definitions and example sentences following the construction of logical forms detailed information from the parse both morphological and syntactic sharply reduces the range of senses that can be plausibly assigned to each wordother aspects of dictionary structure are also exploited including domain information associated with particular senses in processing normal input text outside of the context of mindnet creation wsd relies crucially on information from mindnet about how word senses are linked to one anotherto help mitigate this bootstrapping problem during the initial construction of mindnet we have experimented with a twopass approach to wsdduring a first pass a version of mindnet that does not include wsd is constructedthe result is a semantic network that nonetheless contains a great deal of quotambientquot information about sense assignmentsfor instance processing the definition spin 101 to produce thread yields a semrel structure in which the sense node spin101 is linked by a deep_subject relation to the undisambiguated form spideron the subsequent pass this information can be exploited by wsd in assigning sense 101 to the word spin in unrelated definitions wolf spider 100 any of various spidersthatdo not spin websthis kind of bootstrapping reflects the broader nature of our approach as discussed in the next section a fully and accurately disambiguated mindnet allows us to bootstrap senses onto words encountered in free text outside the dictionary domainthe creation of mindnet was never intended to be an end unto itselfinstead our emphasis has been on building a broadcoverage nlp understanding systemwe consider the methodology for creating mindnet to consist of a set of general tools for acquiring structuring accessing and exploiting semantic information from nl textour techniques for building mindnet are largely rulebasedhowever we arrive at these representations though the overall structure of mindnet can be regarded as crucially dependent on statisticswe have much more in common with traditional corpusbased approaches than a first glance might suggestan advantage we have over these approaches however is the rich structure imposed by the parse logical form and word sense disambiguation components of our systemthe statistics we use in the context of mindnet allow richer metrics because the data themselves are richerour first foray into the realm of processing free text with our methods has already been accomplished table 2 showed that some 58000 example sentences from ldoce and ahd3 were processed in the creation of our current mindnetto put our hypothesis to a much more rigorous test we have recently embarked on the assimilation of the entire text of the microsoft encarta 98 encyclopediawhile this has presented several new challenges in terms of volume alone we have nevertheless successfully completed a first pass and have produced and added semrel structures from the encarta 98 text to mindnetstatistics on that pass are given below besides our venture into additional english data we fully intend to apply the same methodologies to text in other languages as wellwe are currently 
developing nlp systems for 3 european and 3 asian languages french german and spanish chinese japanese and koreanthe syntactic parsers for some of these languages are already quite advanced and have been demonstrated publiclyas the systems for these languages mature we will create corresponding mindnets beginning as we did in english with the processing of machinereadable reference materials and then adding information gleaned from corpora
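the inverted semrel structures and semrel paths can be pictured as a labeled graph in which every relation is stored together with its inverse and the entry it was extracted from, so that a path such as pen–means–write–means–pencil (quoted above in garbled form) is simply a short walk between two words. the sketch below is not the mindnet code: the class name, the relation triples and the lack of any weighting are illustrative assumptions.

```python
# Sketch of the data structures implied above: labeled relations plus their
# inverses, tagged with the definition or example sentence they came from,
# and a bounded search for semrel paths between two words.

from collections import defaultdict, deque

class MiniMindNet:
    def __init__(self):
        self.edges = defaultdict(list)          # word -> [(label, word, src)]

    def add(self, head, label, dep, source):
        """Store a labeled relation and its inverse (full inversion)."""
        self.edges[head].append((label, dep, source))
        self.edges[dep].append((label + "~", head, source))

    def paths(self, start, goal, max_len=3):
        """All semrel paths of up to max_len relations between two words."""
        queue, found = deque([(start, [])]), []
        while queue:
            word, path = queue.popleft()
            if word == goal and path:
                found.append(path)
                continue
            if len(path) == max_len:
                continue
            for label, nxt, _src in self.edges[word]:
                if nxt != start and nxt not in [w for _, w in path]:
                    queue.append((nxt, path + [(label, nxt)]))
        return found

net = MiniMindNet()
net.add("write", "means", "pen", source="pen(def)")
net.add("write", "means", "pencil", source="pencil(def)")
net.add("draw", "means", "pencil", source="pencil(def)")
net.add("pen", "hyp", "instrument", source="pen(def)")
net.add("pencil", "hyp", "instrument", source="pencil(def)")

for path in net.paths("pen", "pencil"):
    print("pen", " -> ".join(f"{lab} {w}" for lab, w in path))
```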
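the inference procedure described above (inferring watch–means–telescope from observe–means–telescope plus the similarity of watch and observe) can be sketched as follows. the similarity test here is a toy stand-in for the path-pattern procedure trained on thesaurus and antithesaurus pairs, and the relation sets and function names are illustrative.

```python
# Hedged sketch of the inference step: a relation absent from the network is
# inferred if another word stands in the same relation with the target and
# is sufficiently similar to the query word.

MEANS = {("observe", "telescope"), ("write", "pen")}
HYPERNYM = {("watch", "observe"), ("pen", "instrument")}

def similar(x, y):
    """Toy stand-in for the path-pattern similarity procedure: two words
    count as substitutable here if one is a direct hypernym of the other."""
    return (x, y) in HYPERNYM or (y, x) in HYPERNYM or x == y

def infer_means(verb, instrument):
    """Can 'verb' plausibly be done by means of 'instrument'?"""
    if (verb, instrument) in MEANS:
        return True, verb                       # relation is already stored
    for other, inst in MEANS:
        if inst == instrument and similar(verb, other):
            return True, other                  # inferred via a similar word
    return False, None

print(infer_means("watch", "telescope"))   # (True, 'observe')
print(infer_means("watch", "pen"))         # (False, None)
```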
P98-2180
mindnet acquiring and structuring semantic information from textas a lexical knowledge base constructed automatically from the definitions and example sentences in two machinereadable dictionaries mindnet embodies several features that distinguish it from prior work with mrdsit is however more than this static resource alonemindnet represents a general methodology for acquiring structuring accessing and exploiting semantic information from natural language textmindnet is both an extraction methodology and a lexical ontology different from a word net since it was created automatically from a dictionary and its structure is based on such resources
nounphrase cooccurrence statistics for semiautomatic semantic lexicon construction semantic lexicons semiautomatically could be a great time saver relative to creating them by hand in this paper we present an algorithm for extracting potential entries for a category from an online corpus based upon a small set of exemplars our algorithm finds more correct terms and fewer incorrect ones than previous work in this area additionally the entries that are generated potentially provide broader coverage of the category than would occur to an individual coding them by hand our algorithm finds many terms not included within wordnet and could be viewed as an quotenhancerquot of existing broadcoverage resources semantic lexicons play an important role in many natural language processing taskseffective lexicons must often include many domainspecific terms so that available broad coverage resources such as wordnet are inadequatefor example both escort and chinook are types of vehicles but neither are cited as so in wordnetmanually building domainspecific lexicons can be a costly timeconsuming affairutilizing existing resources such as online corpora to aid in this task could improve performance both by decreasing the time to construct the lexicon and by improving its qualityextracting semantic information from word cooccurrence statistics has been effective particularly for sense disambiguation in riloff and shepherd noun cooccurrence statistics were used to indicate nominal category membership for the purpose of aiding in the construction of semantic lexiconsgenerically their algorithm can be outlined as follows our algorithm uses roughly this same generic structure but achieves notably superior results by changing the specifics of what counts as cooccurrence which figures of merit to use for new seed word selection and final ranking the method of initial seed word selection and how to manage compound nounsin sections 25 we will cover each of these topics in turnwe will also present some experimental results from two corpora and discuss criteria for judging the quality of the outputthe first question that must be answered in investigating this task is why one would expect it to work at allwhy would one expect that members of the same semantic category would cooccur in discoursein the word sense disambiguation task no such claim is made words can serve their disambiguating purpose regardless of partofspeech or semantic characteristicsin motivating their investigations riloff and shepherd cited several very specific noun constructions in which cooccurrence between nouns of the same semantic class would be expected including conjunctions lists appositives and noun compounds our algorithm focuses exclusively on these constructionsbecause the relationship between nouns in a compound is quite different than that between nouns in the other constructions the algorithm consists of two separate components one to deal with conjunctions lists and appositives and the other to deal with noun compoundsall compound nouns in the former constructions are represented by the head of the compoundwe made the simplifying assumptions that a compound noun is a string of consecutive nouns and that the head of the compound is the rightmost nounto identify conjunctions lists and appositives we first parsed the corpus using an efficient statistical parser trained on the penn wall street journal treebank we defined cooccurrence in these constructions using the standard definitions of dominance and precedencethe relation is 
stipulated to be transitive so that all head nouns in a list cooccur with each other two head nouns cooccur in this algorithm if they meet the following four conditions in contrast rs counted the closest noun to the left and the closest noun to the right of a head noun as cooccuring with itconsider the following sentence from the muc4 corpus quota cargo aircraft may drop bombs and a truck may be equipped with artillery for warquot in their algorithm both cargo and bombs would be counted as cooccuring with aircraftin our algorithm cooccurrence is only counted within a noun phrase between head nouns that are separated by a comma or conjunctionif the sentence had read quota cargo aircraft fighter plane or combat helicopter quot then aircraft plane and helicopter would all have counted as cooccuring with each other in our algorithmrs used the same figure of merit both for selecting new seed words and for ranking words in the final outputtheir figure of merit was simply the ratio of the times the noun coocurs with a noun in the seed list to the total frequency of the noun in the corpusthis statistic favors low frequency nouns and thus necessitates the inclusion of a minimum occurrence cutoffthey stipulated that no word occuring fewer than six times in the corpus would be considered by the algorithmthis cutoff has two effects it reduces the noise associated with the multitude of low frequency words and it removes from consideration a fairly large number of certainly valid category membersideally one would like to reduce the noise without reducing the number of valid nounsour statistics allow for the inclusion of rare occcurancesnote that this is particularly important given our algorithm since we have restricted the relevant occurrences to a specific type of structure even relatively common nouns may not occur in the corpus more than a handful of times in such a contextthe two figures of merit that we employ one to select and one to produce a final rank use the following two counts for each noun to select new seed words we take the ratio of count 1 to count 2 for the noun in questionthis is similar to the figure of merit used in rs and also tends to promote low frequency nounsfor the final ranking we chose the log likelihood statistic outlined in dunning which is based upon the cooccurrence counts of all nouns this statistic essentially measures how surprising the given pattern of cooccurrence would be if the distributions were completely randomfor instance suppose that two words occur forty times each and they cooccur twenty times in a millionword corpusthis would be more surprising for two completely random distributions than if they had each occurred twice and had always cooccurreda simple probability does not capture this factthe rationale for using two different statistics for this task is that each is well suited for its particular role and not particularly well suited to the otherwe have already mentioned that the simple ratio is ill suited to dealing with infrequent occurrencesit is thus a poor candidate for ranking the final output if that list includes words of as few as one occurrence in the corpusthe log likelihood statistic we found is poorly suited to selecting new seed words in an iterative algorithm of this sort because it promotes high frequency nouns which can then overly influence selections in future iterations if they are selected as seed wordswe termed this phenomenon infection and found that it can be so strong as to kill the further progress of a categoryfor example if we 
are processing the category vehicle and the word artillery is selected as a seed word a whole set of weapons that cooccur with artillery can now be selected in future iterationsif one of those weapons occurs frequently enough the scores for the words that it cooccurs with may exceed those of any vehicles and this effect may be strong enough that no vehicles are selected in any future iterationin addition because it promotes high frequency terms such a statistic tends to have the same effect as a minimum occurrence cutoff ie few if any low frequency words get addeda simple probability is a much more conservative statistic insofar as it selects far fewer words with the potential for infection it limits the extent of any infection that does occur and it includes rare wordsour motto in using this statistic for selection is quotfirst do no harmquotthe simple ratio used to select new seed words will tend not to select higher frequency words in the categorythe solution to this problem is to make the initial seed word selection from among the most frequent head nouns in the corpusthis is a sensible approach in any case since it provides the broadest coverage of category occurrences from which to select additional likely category membersin a task that can suffer from sparse data this is quite importantwe printed a list of the most common nouns in the corpus and selected category members by scanning through this listanother option would be to use head nouns identified in wordnet which as a set should include the most common members of the category in questionin general however the strength of an algorithm of this sort is in identifying infrequent or specialized termstable 1 shows the seed words that were used for some of the categories testedthe relationship between the nouns in a compound noun is very different from that in the other constructions we are consideringthe nonhead nouns in a compound noun may or may not be legitimate members of the categoryfor instance either pickup truck or pickup is a legitimate vehicle whereas cargo plane is legitimate but cargo is notfor this reason cooccurrence within noun compounds is not considered in the iterative portions of our algorithminstead all noun compounds with a head that is included in our final ranked list are evaluated for inclusion in a second listthe method for evaluating whether or not to include a noun compound in the second list is intended to exclude constructions such as government plane and include constructions such as fighter planesimply put the former does not correspond to a type of vehicle in the same way that the latter doeswe made the simplifying assumption that the higher the probability of the head given the nonhead noun the better the construction for our purposesfor instance if the noun government is found in a noun compound how likely is the head of that compound to be planehow does this compare to the noun fighterfor this purpose we take two counts for each noun in the compound for each nonhead noun in the compound we evaluate whether or not to omit it in the outputif all of them are omitted or if the resulting compound has already been output the entry is skippedeach noun is evaluated as follows first the head of that noun is determinedto get a sense of what is meant here consider the following compound nuclearpowered aircraft carrierin evaluating the word nuclearpowered it is unclear if this word is attached to aircraft or to carrierwhile we know that the head of the entire compound is carrier in order to properly evaluate 
the word in question we must determine which of the words following it is its headthis is done in the spirit of the dependency model of lauer by selecting the noun to its right in the compound with the highest probability of occuring with the word in question when occurring in a noun compoundonce the head of the word is determined the ratio of count 1 to count 2 is compared to an empirically set cutoffif it falls below that cutoff it is omittedif it does not fall below the cutoff then it is kept the input to the algorithm is a parsed corpus and a set of initial seed words for the desired categorynouns are matched with their plurals in the corpus and a single representation is settled upon for both eg carcooccurrence bigrams are collected for head nouns according to the notion of cooccurrence outlined abovethe algorithm then proceeds as followswe ran our algorithm against both the muc4 corpus and the wall street journal corpus for a variety of categories beginning with the categories of vehicle and weapon both included in the five categories that rks investigated in their paperother categories that we investigated were crimes people commercial sites states and machinesthis last category was run because of the sparse data for the category weapon in the wall street journalit represents roughly the same kind of category as weapon namely technological artifactsit in turn produced sparse results with the muc4 corpustables 3 and 4 show the top results on both the head noun and the compound noun lists generated for the categories we testedrs evaluated terms for the degree to which they are related to the categoryin contrast we counted valid only those entries that are clear members of the categoryrelated words did not counta valid instance was novel unique and a proper class within the category as an illustration of this last condition neither galileo probe nor gray plane is a valid entry the former because it denotes an individual and the latter because it is a class of planes based upon an incidental feature in the interests of generating as many valid entries as possible we allowed for the inclusion in noun compounds of words tagged as adjectives or cardinality wordsin certain occasions this is necessary to avoid losing key parts of the compoundmost common adjectives are dropped in our compound noun analysis since they occur with a wide variety of headswe determined three ways to evaluate the output of the algorithm for usefulnessthe first is the ratio of valid entries to total entries producedrs reported a ratio of 17 valid to total entries for both the vehicle and weapon categories on the same corpus our algorithm yielded a ratio of 329 valid to total entries for the category vehicle and 36 for the category weaponthis can be seen in the slope of the graphs in figure 1tables 2 and 5 give the relevant data for the categories that we investigatedin general the ratio of valid to total entries fell between 2 and 4 even in the cases that the output was relatively smalla second way to evaluate the algorithm is by the total number of valid entries producedas can be seen from the numbers reported in table 2 our algorithm generated from 24 to nearly 3 times as many valid terms for the two contrasting categories from the muc corpus than the algorithm of rseven more valid terms were generated for appropriate categories using the wall street journalanother way to evaluate the algorithm is with the number of valid entries produced that are not in wordnettable 2 presents these numbers for the categories 
vehicle and weaponwhereas the rs algorithm produced just 11 terms not already present in wordnet for the two categories combined our algorithm produced 106 or over 3 for every 5 valid terms producedit is for this reason that we are billing our algorithm as something that could enhance existing broadcoverage resources with domainspecific lexical informationwe have outlined an algorithm in this paper that as it stands could significantly speed up the task of building a semantic lexiconwe have also examined in detail the reasons why it works and have shown it to work well for multiple corpora and multiple categoriesthe algorithm generates many words not included in broad coverage resources such as wordnet and could be thought of as a wordnet quotenhancerquot for domainspecific applicationsmore generally the relative success of the algorithm demonstrates the potential benefit of narrowing corpus input to specific kinds of constructions despite the danger of compounding sparse data problemsto this end parsing is invaluablethanks to mark johnson for insightful discussion and to julie sedivy for helpful comments
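the two figures of merit can be written down directly: the conservative ratio used to select new seed words and dunning's log-likelihood statistic used for the final ranking. the sketch below assumes the co-occurring head-noun groups (conjunctions, lists, appositives) have already been extracted from parses, since the dominance and precedence conditions are not reproduced above; the function names and the toy data are mine.

```python
import math
from collections import Counter

def cooccurrence_counts(noun_groups):
    """noun_groups: lists of head nouns that co-occur in one conjunction,
    list or appositive (co-occurrence is transitive within a group)."""
    pair = Counter()
    for group in noun_groups:
        for i, a in enumerate(group):
            for b in group[i + 1:]:
                pair[(a, b)] += 1
                pair[(b, a)] += 1
    return pair

def ratio_merit(word, seeds, pair):
    """Selection statistic: co-occurrences with seeds / all co-occurrences."""
    with_seeds = sum(pair[(word, s)] for s in seeds)
    total = sum(c for (a, _b), c in pair.items() if a == word)
    return with_seeds / total if total else 0.0

def log_likelihood(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 contingency table (used for final ranking)."""
    n = k11 + k12 + k21 + k22
    row = (k11 + k12, k21 + k22)
    col = (k11 + k21, k12 + k22)
    cells = ((k11, row[0], col[0]), (k12, row[0], col[1]),
             (k21, row[1], col[0]), (k22, row[1], col[1]))
    return 2 * sum(k * math.log(k * n / (r * c)) for k, r, c in cells if k)

groups = [["car", "truck", "helicopter"], ["truck", "artillery"],
          ["car", "jeep"], ["soap", "water"]]
pair = cooccurrence_counts(groups)
seeds = {"car", "truck"}
print(ratio_merit("jeep", seeds, pair))      # 1.0: jeep only co-occurs with seeds
print(ratio_merit("soap", seeds, pair))      # 0.0: soap never does
print(log_likelihood(20, 20, 20, 999940))    # ranking score for a frequent noun
```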
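the iterative portion of the algorithm is only outlined above, so the loop below is a hedged reconstruction: score candidate nouns with the ratio statistic, promote the best-scoring one to the seed set, and stop after a fixed number of iterations (the output list would then be ranked with the log-likelihood statistic from the previous sketch). the iteration count and helper names are assumptions; the toy run also happens to illustrate the infection problem discussed above.

```python
from collections import Counter

def grow_category(seeds, pair, iterations=50):
    """pair: Counter of (noun, noun) co-occurrence counts."""
    category = set(seeds)
    vocabulary = {a for a, _b in pair}

    def ratio(word):
        with_cat = sum(pair[(word, c)] for c in category)
        total = sum(cnt for (a, _b), cnt in pair.items() if a == word)
        return with_cat / total if total else 0.0

    for _ in range(iterations):
        candidates = vocabulary - category
        if not candidates:
            break
        best = max(candidates, key=ratio)
        if ratio(best) == 0.0:              # nothing left that co-occurs
            break
        category.add(best)                  # new seed word for the next pass
    return category - set(seeds)            # newly acquired entries

pair = Counter({("car", "truck"): 3, ("truck", "car"): 3,
                ("jeep", "car"): 1, ("car", "jeep"): 1,
                ("jeep", "artillery"): 1, ("artillery", "jeep"): 1,
                ("soap", "water"): 2, ("water", "soap"): 2})
# note that 'artillery' sneaks in via 'jeep' -- the infection risk noted above
print(grow_category({"car", "truck"}, pair, iterations=3))
```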
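the compound-noun step compares the ratio of two counts against an empirically set cutoff, but the counts themselves are elided above; the sketch below makes one plausible choice, estimating p(head | modifier) from modifier–head co-occurrence in compounds, with the head of each modifier chosen as the following noun it most often combines with, in the spirit of the dependency model mentioned above. the cutoff value, class name and toy corpus are mine.

```python
from collections import Counter

class CompoundFilter:
    def __init__(self, compounds, cutoff=0.5):
        # compounds: lists of nouns, head rightmost, observed in a corpus
        self.pair = Counter()          # (modifier, head) counts
        self.mod = Counter()           # modifier counts
        self.cutoff = cutoff
        for comp in compounds:
            for i, m in enumerate(comp[:-1]):
                for h in comp[i + 1:]:
                    self.pair[(m, h)] += 1
                    self.mod[m] += 1

    def head_of(self, modifier, rest):
        """Pick the following noun this modifier most often combines with."""
        return max(rest, key=lambda h: self.pair[(modifier, h)])

    def trim(self, compound):
        """Drop modifiers too loosely associated with their head (e.g. keep
        'fighter plane' but reduce 'government plane' to 'plane')."""
        head, kept = compound[-1], []
        for i, m in enumerate(compound[:-1]):
            h = self.head_of(m, compound[i + 1:])
            score = self.pair[(m, h)] / self.mod[m] if self.mod[m] else 0.0
            if score >= self.cutoff:
                kept.append(m)
        return kept + [head]

corpus = [["fighter", "plane"], ["fighter", "plane"], ["cargo", "plane"],
          ["government", "plane"], ["government", "official"],
          ["government", "budget"], ["government", "report"]]
f = CompoundFilter(corpus)
print(f.trim(["fighter", "plane"]))      # ['fighter', 'plane'] is kept whole
print(f.trim(["government", "plane"]))   # 'government' is dropped -> ['plane']
```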
P98-2182
nounphrase cooccurrence statistics for semiautomatic semantic lexicon constructiongenerating semantic lexicons semiautomatically could be a great time saver relative to creating them by handin this paper we present an algorithm for extracting potential entries for a category from an online corpus based upon a small set of exemplarsour algorithm finds more correct terms and fewer incorrect ones than previous work in this areaadditionally the entries that are generated potentially provide broader coverage of the category than would occur to an individual coding them by handour algorithm finds many terms not included within wordnet and could be viewed as an enhancer of existing broadcoverage resourceswe use cooccurrence statistics in local context to discover sibling relationsour experiments were performed using the muc4 and wall street journal corpuses to select seed words we rank all of the head nouns in the training corpus by frequency and manually select the first 10 nouns that unambiguously belong to each categorywe find that 3 of every 5 words learned by our system are not present in wordnet
never look back an alternative to centering i propose a model for determining the hearer attentional state which depends solely on a list of salient discourse entities the ordering among the elements of the slist covers also the of the center the centering model the ranking criteria for the slist based on the distinction between entities and incorporate preferences for interand intrasentential anaphora the model is the basis for an algorithm which operates incrementally word by word i propose a model for determining the hearer attentional state in understanding discoursemy proposal is inspired by the centering model and draws on the conclusions of strube hahn approach for the ranking of the forwardlooking center list for germantheir approach has been proven as the point of departure for a new model which is valid for english as wellthe use of the centering transitions in brennan et al algorithm prevents it from being applied incrementally in my approach i propose to replace the functions of the backwardlooking center and the centering transitions by the order among the elements of the list of salient discourse entities the slist ranking criteria define a preference for hearerold over hearernew discourse entities generalizing strube hahn approachbecause of these ranking criteria i can account for the difference in salience between definite nps and indefinite nps the slist is not a local data structure associated with individual utterancesthe slist rather describes the attentional state of the hearer at any given point in processing a discoursethe slist is generated incrementally word by word and used immediatelytherefore the slist integrates in the simplest manner preferences for inter and intrasentential anaphora making further specifications for processing complex sentences unnecessarysection 2 describes the centering model as the relevant background for my proposalin section 3 i introduce my model its only data structure the slist and the accompanying algorithmin section 4 i compare the results of my algorithm with the results of the centering algorithm with and without specifications for complex sentences the centering model describes the relation between the focus of attention the choices of referring expressions and the perceived coherence of discoursethe model has been motivated with evidence from preferences for the antecedents of pronouns and has been applied to pronoun resolution inter alia whose interpretation differs from the original modelthe centering model itself consists of two constructs the backwardlooking center and the list of forwardlooking centers and a few rules and constraintseach utterance ui is assigned a list of forwardlooking centers cf and a unique backwardlooking center cba ranking imposed on the elements of the cf reflects the assumption that the most highly ranked element of c f is most likely to be the cbthe most highly ranked element of c f that is realized in u2f1 is the cbtherefore the ranking on the cf plays a crucial role in the modelgrosz et al and brennan et al use grammatical relations to rank the cf but state that other factors might also play a rolefor their centering algorithm brennan et al extend the notion of centering transition relations which hold across adjacent utterances to differentiate types of shift preferred to retain is preferred to smoothshift is preferred to roughshiftthe bfpalgorithm consists of three basic steps to illustrate this algorithm we consider example which has two different final utterances and utterance contains one 
pronoun utterance two pronounswe look at the interpretation of and after step 2 the algorithm has produced two readings for each variant which are rated by the corresponding transitions in step 3in the pronoun quotshequot is resolved to quotherquot because the continue transition is ranked higher than smoothshift in the second readingin the pronoun quotshequot is resolved to quotfriedmanquot because smoothshift is preferred over roughshiftthe realization and the structure of my model departs significantly from the centering model in contrast to the centering model my model does not need a construct which looks back it does not need transitions and transition ranking criteriainstead of using the cb to account for local coherence in my model this is achieved by comparing the first element of the slist with the preceding statestrube hahn rank the cf according to the information status of discourse entitiesi here generalize these ranking criteria by redefining them in prince termsi distinguish between three different sets of expressions hearerold discourse entities mediated discourse entities and hearernew discourse entities these sets consist of the elements of prince familiarity scale old consists of evoked and unused discourse entities while new consists of brandnew discourse entitiesmed consists of inferrables containing inferrables and anchored brandnew discourse entitiesthese discourse entities are discoursenew but mediated by some hearerold discourse entity i do not assume any difference between the elements of each set with respect to their information statuseg evoked and unused discourse entities have the same information status because both belong to oldfor an operationalization of prince terms i stipulate that evoked discourse entitites are coreferring expressions unused discourse entities are proper names and titlesin texts brandnew proper names are usually accompanied by a relative clause or an appositive which relates them to the hearer knowledgethe corresponding discourse entity is evoked only after this elaborationwhenever these linguistic devices are missing proper names are treated as unusedl i restrict inferrables to the particular subset defined by hahn et al anchored brandnew discourse entities require that the anchor is either evoked or unusedi assume the following conventions for the ranking constraints on the elements of the slistthe 3tuple denotes a discourse entity x which is evoked in utterance uttx at the text position posxwith respect to any two discourse entities and uttx and utty specifying the current utterance ui or the preceding utterance u2_1 i set up the following ordering constraints on elements in the slist 2for any state of the processorhearer the ordering of discourse entities in the slist that can be derived from the ordering constraints to is denoted by the precedence relation if x e old and y e med then x yif x e old and y e new then x yif x e med and y e new then x y if x y e old or x y e med or x y e new then if utt utty then x y if utt utty and pos and indicate that the utterance containing x follows the utterance containing y or that x and y are elements of the same utterance between discourse entities in ui and discourse entities in u2_1 i am able to deal with intrasentential anaphorathere is no need for further specifications for complex sentencesa finer grained ordering is achieved by ranking discourse entities within each of the sets according to their text positionanaphora resolution is performed with a simple lookup in the slist3the elements 
of the slist are tested in the given order until one test succeedsjust after an anaphoric expression is resolved the slist is updatedthe algorithm processes a text from left to right 2if the analysis of utterance u5 is finished remove all discourse entities from the slist which are not realized in youthe analysis for example is given in table 36i show only these steps which are of interest for the computation of the slist and the pronoun resolutionthe preferences for pronouns are given by the slist immediately above themthe pronoun quotshequot in is resolved to the first element of the slistwhen the pronoun quotherquot in is encountered friedman is the first element of the slist since friedman is unused and in the current utterancebecause of binding restrictions quotherquot cannot be resolved to friedman but to the second element brennanin both and the pronoun quotshequot is resolved to friedman3the slist consists of referring expressions which are specified for text position agreement sortal information and information statuscoordinated nps are collected in a setthe slist does not contain predicative nps pleonastic quotitquot and any elements of direct speech enclosed in double quotesthe difference between my algorithm and the bfpalgorithm becomes clearer when the unused discourse entity quotfriedmanquot is replaced by a brandnew discourse entity eg quota professional driverquot7 in the bfpalgorithm the ranking of the cflist depends on grammatical roleshence driver is ranked higher than brennan in the cft2cin the pronoun quotshequot is resolved to brennan because of the preference for continue over retainin quotshequot is resolved to driver because smoothshift is preferred over roughshiftin my algorithm at the end of the evoked phrase quotherquot is ranked higher than the brandnew phrase quota professional driverquot in both and the pronoun quotshequot is resolved to brennanexample 8 illustrates how the preferences for intra and intersentential anaphora interact with the information status of discourse entitites sentence starts a new discourse segmentthe phrase quota judgequot is brandnewquotmr curtisquot is mentioned several times before in the text hence 71 owe this variant andrew kehlerthis example can misdirect readers because the phrase quota professional driverquot is assigned the quotdefaultquot gender masculineanyway this example like the original example seems not to be felicitous english and has only illustrative character81n the new york tunesdecember 7 1997 pa48 the discourse entity curtis is evoked and ranked higher than the discourse entity judgein the next step the ellipsis refers to judge which is evoked thenthe nouns quotrequestquot and quotprosecutorsquot are brandnew9the pronoun quothequot and the possessive pronoun quothisquot are resolved to curtisquotconditionquot is brandnew but anchored by the possessive pronounfor and i show only the steps immediately before the pronouns are resolvedin both quotmr curtisquot and quotthe judgequot are evokedhowever quotmr curtisquot is the leftmost evoked phrase in this sentence and therefore the most preferred antecedent for the pronoun quothimquotfor my experiments i restricted the length of the slist to five elementstherefore quotprosecutorsquot in is not contained in the slistthe discourse entity smirga is introduced in it becomes evoked after the appositivehence sm1rga is the most preferred antecedent for the pronoun quothequotin the first experiment i compare my algorithm with the bfpalgorithm which was in a second experiment 
extended by the constraints for complex sentences as described by kameyama methodi use the following guidelines for the handsimulated analysis i do not assume any world knowledge as part of the anaphora resolution processonly agreement criteria binding and sortal constraints are appliedi do not account for false positives and error chainsfollowing walker a segment is defined as a paragraph unless its first sentence has a pronoun in subject position or a pronoun where none of the preceding sentenceinternal noun phrases matches its syntactic featuresat the beginning of a segment anaphora resolution is preferentially performed within the same utterancemy algorithm starts with an empty slist at the beginning of a segmentthe basic unit for which the centering data structures are generated is the utterance youfor the bfpalgorithm i define you as a simple sentence a complex sentence or each full clause of a compound sentencekameyama intrasentential centering operates at the clause levelwhile tensed clauses are defined as utterances on their own untensed clauses are processed with the main clause so that the cflist of the main clause contains the elements of the untensed embedded clausekameyama distinguishes for tensed clauses further between sequential and hierarchical centeringexcept for reported speech nonreport complements and relative clauses all other types of tensed clauses build a chain of utterances on the same levelaccording to the preference for intersentential candidates in the centering model i define the following anaphora resolution strategy for the bepalgorithm test elements of uj_1 test elements of ui lefttoright test elements of cf cf in my algorithm steps and fall together is performed using previous states of the systemresultsthe test set consisted of the beginnings of three short stories by hemingway and three articles from the new york times the results of my experiments are given in table 6the first row gives the number of personal and possessive pronounsthe remainder of the table shows the results for the bfpalgorithm for the bfpalgorithm extended by kameyama intrasentential specifications and for my algorithmthe overall error rate of each approach is given in the rows marked with wrongthe rows marked with wrong give the numbers of errors directly produced by the algorithms strategy the rows marked with wrong the number of analyses with ambiguities generated by the bfpalgorithm the rows marked with wrong give the number of errors caused by specifications for intrasentential anaphorasince my algorithm integrates the specifications for intrasentential anaphora i count these errors as strategic errorsthe rows marked with wrong give the numbers of errors contained in error chainsthe rows marked with wrong give the numbers of the remaining errors interpretationthe results of my experiments showed not only that my algorithm performed better than the centering approaches but also revealed insight in the interaction between inter and intrasentential preferences for anaphoric antecedentskameyama specifications reduce the complexity in that the cflists in general are shorter after splitting up a sentence into clausestherefore the bfpalgorithm combined with her specifications has almost no strategic errors while the number of ambiguities remains constantbut this benefit is achieved at the expense of more errors caused by the intrasentential specificationsthese errors occur in cases like example in which kameyama intrasentential strategy makes the correct antecedent less salient 
indicating that a clausebased approach is too finegrained and that the hierarchical syntactical structure as assumed by kameyama does not have a great impact on anaphora resolutioni noted too that the bfpalgorithm can generate ambiguous readings for ui when the pronoun in ui does not cospecify the cbin cases where the c1 contains more than one possible antecedent for the pronoun several ambiguous readings with the same transitions are generatedan examplem there is no cb because no element of the preceding utterance is realized in the pronoun quotthemquot in cospecifies quotdeerquot but the bfpalgorithm generates two readings both of which are marked by a retain transition a jim pulled the burlap sacks off the deer b and liz looked at themin general the strength of the centering model is that it is possible to use the cb as the most preferred antecedent for a pronoun in youin my model this effect is achieved by the preference for hearerold discourse entitieswhenever this preference is misleading both approaches give wrong resultssince the cb is defined strictly local while hearerold discourse entities are defined global my model produces less errorsin my model the preference is available immediately while the bfpalgorithm can use its preference not before the second utterance has been processedthe more global definition of hearerold discourse entities leads also to shorter error chains however the test set is too small to draw final conclusions but at least for the texts analyzed the preference for hearerold discourse entities is more appropriate than the preference given by the bfp algorithmkameyama version of centering also omits the centering transitionsbut she uses the cb and a ranking over simplified transitions preventing the incremental application of her modelthe focus model accounts for evoked discourse entities explicitly because it uses the discourse focus which is determined by a successful anaphora resolutionincremental processing is not a topic of these paperseven models which use salience measures for determining the antecedents of pronoun use the concept of evoked discourse entitieshajieova et al assign the highest value to an evoked discourse entityalso lappin leass who give the subject of the current sentence the highest weight have an implicit notion of evokednessthe salience weight degrades from one sentence to another by a factor of two which implies that a repeatedly mentioned discourse entity gets a higher weight than a brandnew subjectin this paper i proposed a model for determining the hearer attentional state which is based on the distinction between hearerold and hearernew discourse entitiesi showed that my model though it omits the backwardlooking center and the centering transitions does not lose any of the predictive power of the centering model with respect to anaphora resolutionin contrast to the centering model my model includes a treatment for intrasentential anaphora and is sufficiently well specified to be applied to real textsits incremental character seems to be an answer to the question kehler recently raisedfurthermore it neither has the problem of inconsistency kehler mentioned with respect to the bfpalgorithm nor does it generate unnecessary ambiguitiesfuture work will address whether the text position which is the weakest grammatical concept is sufficient for the order of the elements of the slist at the second layer of my ranking constraintsi will also try to extend my model for the analysis of definite noun phrases for which it is necessary to 
integrate it into a more global model of discourse processing. acknowledgments: this work has been funded by a postdoctoral grant from dfg and is supported by a postdoctoral fellowship award from ircs. i would like to thank nobo komagata, rashmi prasad and matthew stone, who commented on earlier drafts of this paper. i am grateful for valuable comments by barbara grosz, udo hahn, aravind joshi, lauri karttunen, andrew kehler, ellen prince and bonnie webber
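The ranking constraints on the S-list described earlier in this text (hearer-old before mediated before hearer-new; within a set, entities of the current utterance before those of the preceding one; text position as the final tie-breaker) reduce to a simple sort key. The sketch below is a minimal illustration under my own data layout, not the author's implementation; the entity attributes and helper names are hypothetical.

```python
from dataclasses import dataclass

# Information-status classes in order of preference (Prince-style familiarity).
OLD, MED, NEW = 0, 1, 2   # hearer-old, mediated, hearer-new

@dataclass
class DiscourseEntity:
    """Hypothetical stand-in for the 3-tuple (x, utt_x, pos_x) used above."""
    name: str
    status: int   # OLD, MED or NEW
    utt: int      # index of the utterance in which the expression occurs
    pos: int      # text position of the expression

def slist_key(entity: DiscourseEntity, current_utt: int):
    """Sort key encoding the precedence relation: OLD before MED before NEW;
    within a set, current-utterance entities before previous-utterance ones;
    within that, earlier text position first."""
    in_previous = 0 if entity.utt == current_utt else 1
    return (entity.status, in_previous, entity.pos)

def insert_entity(slist, entity, current_utt):
    """Add an entity and keep the S-list ordered by the precedence relation."""
    slist.append(entity)
    slist.sort(key=lambda e: slist_key(e, current_utt))
    return slist
```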
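Anaphora resolution is then a left-to-right walk over the ordered S-list, followed by the clean-up step that removes entities not realized in the utterance just completed. Again only a rough sketch under assumed data structures; `compatible` stands in for the agreement, binding and sortal constraints mentioned above.

```python
def compatible(pronoun, entity):
    """Placeholder for agreement, binding and sortal checks; only
    gender/number agreement is shown here."""
    return (pronoun["gender"] == entity["gender"]
            and pronoun["number"] == entity["number"])

def resolve(pronoun, slist):
    """Test the elements of the S-list in the given order and return the
    first antecedent for which the test succeeds (or None)."""
    for entity in slist:
        if compatible(pronoun, entity):
            return entity
    return None

def end_of_utterance(slist, realized_names):
    """Remove all discourse entities that were not realized in the
    utterance just analysed."""
    return [e for e in slist if e["name"] in realized_names]
```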
P98-2204
never look back: an alternative to centering. i propose a model for determining the hearer's attentional state which depends solely on a list of salient discourse entities. the ordering among the elements of the slist covers also the function of the backward-looking center in the centering model. the ranking criteria for the slist are based on the distinction between hearer-old and hearer-new discourse entities and incorporate preferences for inter- and intra-sentential anaphora. the model is the basis for an algorithm which operates incrementally, word by word. we argue that the information status of an antecedent is more important than the grammatical role in which it occurs. we evaluate on hand-annotated data. we restrict our algorithm to the current and last sentence
measures of distributional similarity distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences our contributions are threefold an empirical comparison of a broad range of measures a classification an inherent problem for statistical methods in natural language processing is that of sparse data the inaccurate representation in any training corpus of the probability of low frequency eventsin particular reasonable events that happen to not occur in the training set may mistakenly be assigned a probability of zerothese unseen events generally make up a substantial portion of novel data for example essen and steinbiss report that 12 of the testset bigrams in a 7525 split of one million words did not occur in the training partitionwe consider here the question of how to estimate the conditional cooccurrence probability p of an unseen word pair drawn from some finite set n x v two stateoftheart technologies are katz backoff method and jelinek and mercer interpolation methodboth use p to estimate p when is unseen essentially ignoring the identity of n an alternative approach is distanceweighted averaging which arrives at an estimate for unseen cooccurrences by combining estimates for where s is a set of candidate similar words and sim is a function of the similarity between n and m we focus on distributional rather than semantic similarity because the goal of distanceweighted averaging is to smooth probability distributions although the words quotchancequot and quotprobabilityquot are synonyms the former may not be a good model for predicting what cooccurrences the latter is likely to participate inthere are many plausible measures of distributional similarityin previous work we compared the performance of three different functions the jensenshannon divergence the l1 norm and the confusion probabilityour experiments on a frequencycontrolled pseudoword disambiguation task showed that using any of the three in a distanceweighted averaging scheme yielded large improvements over katz backoff smoothing method in predicting unseen coocurrencesfurthermore by using a restricted version of model that stripped incomparable parameters we were able to empirically demonstrate that the confusion probability is fundamentally worse at selecting useful similar wordsd lin also found that the choice of similarity function can affect the quality of automaticallyconstructed thesauri to a statistically significant degree and the ability to determine common morphological roots by as much as 49 in precision 3the term quotsimilaritybasedquot which we have used previously has been applied to describe other models as well these empirical results indicate that investigating different similarity measures can lead to improved natural language processingon the other hand while there have been many similarity measures proposed and analyzed in the information retrieval literature there has been some doubt expressed in that community that the choice of similarity metric has any practical impact several authors have pointed out that the difference in retrieval performance achieved by different measures of association is insignificant providing that these are appropriately normalised but no contradiction arises because as van rijsbergen continues quotone would expect this since most measures incorporate the same informationquotin the languagemodeling domain there is currently no agreedupon best similarity metric because there is no agreement on what the quotsame informationquot 
the key data that a similarity function should incorporate isthe overall goal of the work described here was to discover these key characteristicsto this end we first compared a number of common similarity measures evaluating them in a parameterfree way on a decision taskwhen grouped by average performance they fell into several coherent classes which corresponded to the extent to which the functions focused on the intersection of the supports of the distributionsusing this insight we developed an informationtheoretic metric the skew divergence which incorporates the supportintersection data in an asymmetric fashionthis function yielded the best performance overall an average error rate reduction of 4 with respect to the jensenshannon divergence the best predictor of unseen events in our earlier experiments our contributions are thus threefold an empirical comparison of a broad range of similarity metrics using an evaluation methodology that factors out inessential degrees of freedom a proposal building on this comparison of a characteristic for classifying similarity functions and the introduction of a new similarity metric incorporating this characteristic that is superior at evaluating potential proxy distributionsin this section we describe the seven distributional similarity functions we initally evaluated2 for concreteness we choose n and v to be the set of nouns and the set of transitive verbs respectively a cooccurrence pair results when n appears as the head noun of the direct object of v we use p to denote probabilities assigned by a base language model let n and m be two nouns whose distributional similarity is to be determined for notational simplicity we write q for p and are for p their respective conditional verb cooccurrence probabilitiesfigure 1 lists several familiar functionsthe cosine metric and jaccard coefficient are commonly used in information retrieval as measures of association note that jaccard coefficient differs from all the other measures we consider in that it is essentially combinatorial being based only on the sizes of the supports of q r and q r rather than the actual values of the distributionspreviously we found the jensenshannon divergence to be a useful measure of the distance between distributions the function d is the kl divergence which measures the average inefficiency in using one distribution to code for another the function avgq denotes the average distribution avgqr r2 observe that its use ensures that the jensenshannon divergence is always definedin contrast d is undefined if q is not absolutely continuous with respect to are 2strictly speaking some of these functions are dissimilarity measures but each such function f can be recast as a similarity function via the simple transformation c f where c is an appropriate constantwhether we mean f or c f should be clear from context1 the confusion probability has been used by several authors to smooth word cooccurrence probabilities it measures the degree to which word m can be substituted into the contexts in which n appearsif the base language model probabilities obey certain bayesian consistency conditions as is the case for relative frequencies then we may write the confusion probability as follows note that it incorporates unigram probabilities as well as the two distributions q and r finally kendall t which appears in work on clustering similar adjectives is a nonparametric measure of the association between random variables in our context it looks for correlation between the behavior of q and 
r on pairs of verbsthree versions exist we use the simplest ta here sign q r 2 where sign is 1 for positive arguments 1 for negative arguments and 0 at 0the intuition behind kendall t is as followsassume all verbs have distinct conditional probabilitiesif sorting the verbs by the likelihoods assigned by q yields exactly the same ordering as that which results from ranking them according to r then t 1 if it yields exactly the opposite ordering then t 1we treat a value of 1 as indicating extreme dissimilarity3 it is worth noting at this point that there are several wellknown measures from the nlp literature that we have omitted from our experimentsarguably the most widely used is the mutual information it does not apply in the present setting because it does not measure the similarity between two arbitrary probability distributions and p but rather the similarity between a joint distribution p and the corresponding product distribution pp hammingtype metrics are intended for data with symbolic features since they count feature label mismatches whereas we are dealing feature values that are probabilitiesvariations of the value difference metric have been employed for supervised disambiguation but it is not reasonable in language modeling to expect training data tagged with correct probabilitiesthe dice coefficient is monotonic in jaccard coefficient so its inclusion in our experiments would be redundantfinally we did not use the kl divergence because it requires a smoothed base language modelwe evaluated the similarity functions introduced in the previous section on a binary decision task using the same experimental framework as in our previous preliminary comparison that is the data consisted of the verbobject cooccurrence pairs in the 1988 associated press newswire involving the 1000 most frequent nouns extracted via church and yarowsky processing tools587833 of the pairs served as a training set from which to calculate base probabilitiesfrom the other 20 we prepared test sets as follows after discarding pairs occurring in the training data we split the remaining pairs into five partitions and replaced each nounverb pair with a nounverbverb triple such that p pthe task for the language model under evaluation was to reconstruct which of and was the original cooccurrencenote that by construction was always the correct answer and furthermore methods relying solely on unigram frequencies would perform no better than chancetestset performance was measured by the error rate defined as 12 where t is the number of test triple tokens in the set and a tie results when both alternatives are deemed equally likely by the language model in questionto perform the evaluation we incorporated each similarity function into a decision rule as followsfor a given similarity measure f and neighborhood size k let sfk denote the k most similar words to n according to f we define the evidence according to f for the cooccurrence as then the decision rule was to choose the alternative with the greatest evidencethe reason we used a restricted version of the distanceweighted averaging model was that we sought to discover fundamental differences in behaviorbecause we have a binary decision task efk simply counts the number of k nearest neighbors to n that make the right decisionif we have two functions f and g such that efk egk then the k most similar words according to f are on the whole better predictors than the k most similar words according to g hence f induces an inherently better similarity ranking for 
distanceweighted averagingthe difficulty with using the full model for comparison purposes is that fundamental differences can be obscured by issues of weightingfor example suppose the probability estimate e are performed poorlywe would not be able to tell whether the because was an inherent deficiency in the l1 norm or just a poor choice of weight function perhaps 2 would have yielded better estimatesfigure 2 shows how the average error rate varies with k for the seven similarity metrics introduced aboveas previously mentioned a steeper slope indicates a better similarity rankingall the curves have a generally upward trend but always lie far below backoff they meet at k 1000 because sf jdx1 is always the set of all nounswe see that the functions fall into four groups the l2 norm kendall t the confusion probability and the cosine metric and the l1 norm jensenshannon divergence and jaccard coefficientwe can account for the similar performance of various metrics by analyzing how they incorporate information from the intersection of the supports of q and are consider the following supports we can rewrite the similarity functions from section 2 in terms of these sets making use of the identities e vevqwqr q evevqr q evevrwqr are evevqr are 1table 1 lists these alternative forms in order of performancewe see that for the noncombinatorial functions the groups correspond to the degree to which the measures rely on the verbs in vqrthe jensenshannon divergence and the l1 norm can be computed simply by knowing the values of q and r on vqrfor the cosine and the confusion probability the distribution values on vqr are key but other information is also incorporatedthe statistic ta takes into account all verbs including those that occur neither with the superior performance of jac seems to underscore the importance of the set vqrjaccard coefficient ignores the values of q and r on vqr but we see that simply knowing the size of vqr relative to the supports of q and r leads to good rankings4 the skew divergence based on the results just described it appears that it is desirable to have a similarity function that focuses on the verbs that cooccur with both of the nouns being comparedhowever we can make a further observation with the exception of the confusion probability all the functions we compared are symmetric that is f f but the substitutability of one word for another need not symmetricfor instance quotfruitquot may be the best possible approximation to quotapplequot but the distribution of quotapplequot may not be a suitable proxy for the distribution of quotfruitquot 4 in accordance with this insight we developed a novel asymmetric generalization of the kl divergence the askew divergence scr d r for 0 a 1it can easily be shown that sc depends only on the verbs in vqrnote that at a 1 the skew divergence is exactly the kl divergence and s12 is twice one of the summands of js 40n a related note an anonymous reviewer cited the 30 following example from the psychology literature we can say smith lecture is like a sleeping pill but quotnot the other way roundquot average error rate error rates definition of commonality is left to the user we view the empirical approach taken in this paper as complementary to linthat is we are working in the context of a particular application and while we have no mathematical certainty of the importance of the quotcommon supportquot information we did not assume it a priori rather we let the performance data guide our thinkingfinally we observe that the skew metric seems 
quite promising. we conjecture that appropriate values for alpha may inversely correspond to the degree of sparseness in the data, and intend in the future to test this conjecture on larger-scale prediction tasks. we also plan to evaluate skewed versions of the jensen-shannon divergence proposed by rao and j lin. thanks to claire cardie, jon kleinberg, fernando pereira and stuart shieber for helpful discussions, the anonymous reviewers for their insightful comments, fernando pereira for access to computational resources at at&t, and stuart shieber for the opportunity to pursue this work at harvard university under nsf grant no. iri-9712068
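To make the measures compared above concrete, here is a small self-contained sketch of several of them over conditional verb distributions represented as dicts mapping verbs to probabilities. This is an illustration under my own naming, not the evaluation code used in the experiments.

```python
import math
from itertools import combinations

def cosine(q, r):
    """cos(q, r) over two conditional verb distributions (dicts verb -> prob)."""
    dot = sum(p * r.get(v, 0.0) for v, p in q.items())
    norm = (math.sqrt(sum(p * p for p in q.values()))
            * math.sqrt(sum(p * p for p in r.values())))
    return dot / norm if norm else 0.0

def jaccard(q, r):
    """Purely combinatorial: only the supports of q and r matter."""
    sq = {v for v, p in q.items() if p > 0}
    sr = {v for v, p in r.items() if p > 0}
    return len(sq & sr) / len(sq | sr) if (sq | sr) else 0.0

def kl(p, q):
    """D(p || q); assumes q(v) > 0 wherever p(v) > 0, otherwise undefined."""
    return sum(pv * math.log(pv / q[v]) for v, pv in p.items() if pv > 0)

def jensen_shannon(q, r):
    """JS(q, r) = 0.5 * D(q || avg) + 0.5 * D(r || avg), avg = (q + r) / 2."""
    avg = {v: 0.5 * (q.get(v, 0.0) + r.get(v, 0.0)) for v in set(q) | set(r)}
    return 0.5 * kl(q, avg) + 0.5 * kl(r, avg)

def kendall_tau_a(q, r):
    """Simplest (tau_a) variant: average sign agreement over verb pairs."""
    sign = lambda x: (x > 0) - (x < 0)
    verbs = sorted(set(q) | set(r))
    pairs = list(combinations(verbs, 2))
    agree = sum(sign(q.get(a, 0.0) - q.get(b, 0.0))
                * sign(r.get(a, 0.0) - r.get(b, 0.0)) for a, b in pairs)
    return agree / len(pairs) if pairs else 0.0
```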
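The alpha-skew divergence introduced above has an equally compact form: at alpha = 1 it reduces to the KL divergence D(r || q), and any alpha below 1 keeps it defined because the mixture is non-zero wherever r is. The default value of alpha shown here is only a placeholder, not a setting taken from the paper.

```python
import math

def skew_divergence(q, r, alpha=0.99):
    """s_alpha(q, r) = D(r || alpha * q + (1 - alpha) * r)."""
    mix = {v: alpha * q.get(v, 0.0) + (1.0 - alpha) * rv for v, rv in r.items()}
    return sum(rv * math.log(rv / mix[v]) for v, rv in r.items() if rv > 0)
```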
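For the estimation and decision task described earlier, a minimal sketch of distance-weighted averaging and of the evidence count used in the pseudo-disambiguation test might look as follows; `cond_prob`, `similar` and `sim` are assumed to be supplied by a base language model and a chosen similarity function.

```python
def weighted_average_estimate(verb, noun, cond_prob, similar, sim):
    """P(verb | noun) estimated from the neighbours S(noun):
    sum_m sim(noun, m) * P(verb | m), normalised by the total similarity."""
    candidates = similar(noun)
    total = sum(sim(noun, m) for m in candidates)
    if total == 0.0:
        return 0.0
    return sum(sim(noun, m) * cond_prob(verb, m) for m in candidates) / total

def evidence(noun, v1, v2, neighbours, cond_prob):
    """E_{f,k}(noun, v1, v2): how many of the k nearest neighbours of noun
    prefer v1 over v2; the decision rule picks the verb with more evidence."""
    return sum(1 for m in neighbours if cond_prob(v1, m) > cond_prob(v2, m))
```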
P99-1004
measures of distributional similarity. we study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences. we use verb-object relations in both active and passive voice constructions. we find that our asymmetric skew divergence, a generalisation of kullback-leibler divergence, performs best for improving probability estimates for unseen word cooccurrences
finding parts in very large corpora we present a method for extracting parts of objects from wholes given a very large corpus our method finds part words with 55 accuracy for the top 50 words as ranked by the system the part list could be scanned by an enduser and added to an existing ontology or used as a part of a rough semantic lexicon we present a method of extracting parts of objects from wholes to be more precise given a single word denoting some entity that has recognizable parts the system finds and rankorders other words that may denote parts of the entity in questionthus the relation found is strictly speaking between words a relation miller 1 calls quotmeronymyquot in this paper we use the more colloquial quotpartofquot terminologywe produce words with 55 accuracy for the top 50 words ranked by the system given a very large corpuslacking an objective definition of the partof relation we use the majority judgment of five human subjects to decide which proposed parts are correctthe program output could be scanned by an enduser and added to an existing ontology or used as a part of a rough semantic lexiconto the best of our knowledge there is no published work on automatically finding parts from unlabeled corporacasting our nets wider the work most similar to what we present here is that by hearst 2 on acquisition of hyponyms in that paper hearst finds lexical correlates to the hyponym relations by looking in text for cases where known hyponyms appear in proximity as in quotboats cars and other vehiclesquot tests the proposed patterns for validity and uses them to extract relations from a corpusin this paper we apply much the same methodology to the partof relationindeed in 2 hearst states that she tried to apply this strategy to the partof relation but failedwe comment later on the differences in our approach that we believe were most important to our comparative successlooking more widely still there is an evergrowing literature on the use of statisticalcorpusbased techniques in the automatic acquisition of lexicalsemantic knowledge we take it as axiomatic that such knowledge is tremendously useful in a wide variety of tasks from lowerlevel tasks like nounphrase reference and parsing to userlevel tasks such as web searches question answering and digestingcertainly the large number of projects that use wordnet 1 would support this contentionand although wordnet is handbuilt there is general agreement that corpusbased methods have an advantage in the relative completeness of their coverage particularly when used as supplements to the more laborintensive methodswebster dictionary defines quotpartquot as quotone of the often indefinite or unequal subdivisions into which something is or is regarded as divided and which together constitute the wholequot the vagueness of this definition translates into a lack of guidance on exactly what constitutes a part which in turn translates into some doubts about evaluating the results of any procedure that claims to find themmore specifically note that the definition does not claim that parts must be physical objectsthus say quotnovelquot might have quotplotquot as a partin this study we handle this problem by asking informants which words in a list are parts of some target word and then declaring majority opinion to be correctwe give more details on this aspect of the study laterhere we simply note that while our subjects often disagreed there was fair consensus that what might count as a part depends on the nature of the word a physical object 
yields physical parts an institution yields its members and a concept yields its characteristics and processesin other words quotfloorquot is part of quotbuildingquot and quotplotquot is part of quotbookquot our first goal is to find lexical patterns that tend to indicate partwhole relationsfollowing hearst 2 we find possible patterns by taking two words that are in a partwhole relation and finding sentences in our corpus from ldc that have these words within close proximitythe first few such sentences are the basement of the building the basement in question is in a fourstory apartment building the basement of the apartment buildingfrom the building basement the basement of a building the basements of buildings from these examples we construct the five patterns shown in table 1we assume here that parts and wholes are represented by individual lexical items as opposed to complete noun phrases or as a sequence of quotimportantquot noun modifiers together with the headthis occasionally causes problems eg quotconditionerquot was marked by our informants as not part of quotcarquot whereas quotair conditionerquot probably would have made it into a part listnevertheless in most cases head nouns have worked quite well on their ownwe evaluated these patterns by observing how they performed in an experiment on a single exampletable 2 shows the 20 highest ranked part words for each of the patterns ae table 2 shows patterns a and b clearly outperform patterns c d and e although parts occur in all five patterns the lists for a and b are predominately partsorientedthe relatively poor performance of patterns c and e was anticipated as many things occur quotinquot cars other than their partspattern d is not so obviously bad as it differs from the plural case of pattern b only in the lack of the determiner quotthequot or quotaquothowever this difference proves critical in that pattern d tends to pick up quotcountingquot nouns such as quottruckloadquot on the basis of this experiment we decided to proceed using only patterns a and b from table 1we use the ldc north american news corpus which is a compilation of the wire output of several us newspapersthe total corpus is about 100000000 wordswe ran our program on the whole data set which takes roughly four hours on our networkthe bulk of that time is spent tagging the corpusas is typical in this sort of work we assume that our evidence is independently and identically distributed we have found this assumption reasonable but its breakdown has led to a few errorsin particular a drawback of the nanc is the occurrence of repeated articles since the corpus consists of all of the articles that come over the wire some days include multiple updated versions of the same story containing identical paragraphs or sentenceswe wrote programs to weed out such cases but ultimately found them of little usefirst quotupdatequot articles still have substantial variation so there is a continuum between these and articles that are simply on the same topicsecond our data is so sparse that any such repeats are very unlikely to manifest themselves as repeated examples of parttype patternsnevertheless since two or three occurrences of a word can make it rank highly our results have a few anomalies that stem from failure of the lid assumption our seeds are one word and its pluralwe do not claim that all single words would fare as well as our seeds as we picked highly probable words for our corpus that we thought would have parts that might also be mentioned thereinwith enough text 
one could probably get reasonable results with any noun that met these criteriathe program has three phasesthe first identifies and records all occurrences of patterns a and b in our corpusthe second filters out all words ending with quotingquot quotnessquot or quotityquot since these suffixes typically occur in words that denote a quality rather than a physical objectfinally we order the possible parts by the likelihood that they are true parts according to some appropriate metricwe took some care in the selection of this metricat an intuitive level the metric should be something like p states that w appears in the patterns ab as a whole while p states that p appears as a partmetrics of the form p have the desirable property that they are invariant over p with radically different base frequencies and for this reason have been widely used in corpusbased lexical semantic research 369however in making this intuitive idea someone more precise we found two closely related versions we call metrics based on the first of these quotloosely conditionedquot and those based on the second quotstrongly conditionedquotwhile invariance with respect to frequency is generally a good property such invariant metrics can lead to bad results when used with sparse datain particular if a part word p has occurred only once in the data in the ab patterns then perforce p 1 for the entity w with which it is pairedthus this metric must be tempered to take into account the quantity of data that supports its conclusionto put this another way we want to pick pairs that have two properties p is high and i to p is largewe need a metric that combines these two desiderata in a natural waywe tried two such metricsthe first is dunning 10 loglikelihood metric which measures how quotsurprisedquot one would be to observe the data counts wp wp i i to p i and i w19 i if one assumes that pintuitively this will be high when the observed p p and when the counts supporting this calculation are largethe second metric is proposed by johnson he suggests asking the question how far apart can we be sure the distributions pand p are if we require a particular significance level say 05 or 01we call this new test the quotsignificantdifferencequot test or sigdiffjohnson observes that compared to sigdiff loglikelihood tends to overestimate the importance of data frequency at the expense of the distance between p and ptable 3 shows the 20 highest ranked words for each statistical method using the seed word quotcarquot the first group contains the words found for the method we perceive as the most accurate sigdiff and strong conditioningthe other groups show the differences between them and the first groupthe category means that this method adds the word to its list means the oppositefor example quotbackquot is on the sigdiffloose list but not the sigdiffstrong listin general sigdiff worked better than surprise and strong conditioning worked better than loose conditioningin both cases the less favored methods tend to promote words that are less specific furthermore the combination of sigdiff and strong conditioning worked better than either by itselfthus all results in this paper unless explicitly noted otherwise were gathered using sigdiff and strong conditioning combinedwe tested five subjects for their concept of a quotpartquot we asked them to rate sets of 100 words of which 50 were in our final results settables 6 11 show the top 50 words for each of our six seed words along with the number of subjects who marked the word as a part of the 
seed conceptthe score of individual words vary greatly but there was relative consensus on most wordswe put an asterisk next to words that the majority subjects marked as correctlacking a formal definition of part we can only define those words as correct and the rest as wrongwhile the scoring is admittedly not perfect it provides an adequate reference resulttable 4 summarizes these resultsthere we show the number of correct part words in the top 10 20 30 40 and 50 parts for each seed overall about 55 of the top 50 words for each seed are parts and about 70 of the top 20 for each seedthe reader should also note that we tried one ambiguous word quotplantquot to see what would happenour program finds parts corresponding to both senses though given the nature of our text the industrial use is more commonour subjects marked both kinds of parts as correct but even so this produced the weakest part list of the six words we triedas a baseline we also tried using as our quotpatternquot the head nouns that immediately surround our target wordwe then applied the same quotstrong conditioning sigdiffquot statistical test to rank the candidatesthis performed quite poorlyof the top 50 candidates for each target only 8 were parts as opposed to the 55 for our programwe also compared out parts list to those of wordnettable 5 shows the parts of quotcarquot in wordnet that are not in our top 20 and the words in our top 20 that are not in wordnet there are definite tradeoffs although we would argue that our top20 set is both more specific and more comprehensivetwo notable words our top 20 lack are quotenginequot and quotdoorquot both of which occur before 100more generally all wordnet parts occur somewhere before 500 with the exception of quottailfinquot which never occurs with carit would seem that our program would be a good tool for expanding wordnet as a person can to the entire statistical nlp group at brown and scan and mark the list of part words in a few minutes particularly to mark johnson brian roark gideon mann and anamaria popescu who provided invaluable help on the projectthe program presented here can find parts of objects given a word denoting the whole object and a large corpus of unmarked textthe program is about 55 accurate for the top 50 proposed parts for each of six examples upon which we tested itthere does not seem to be a single because for the 45 of the cases that are mistakeswe present here a few problems that have caught our attentionidiomatic phrases like quota jalopy of a carquot or quotthe son of a gunquot provide problems that are not easily weeded outdepending on the data these phrases can be as prevalent as the legitimate partsin some cases problems arose because of tagger mistakesfor example quotreenactmentquot would be found as part of a quotcarquot using pattern b in the phrase quotthe reenactment of the car crashquot if quotcrashquot is tagged as a verbthe program had some tendency to find qualities of objectsfor example quotdriveabilityquot is strongly correlated with carwe try to weed out most of the qualities by removing words with the suffixes quotnessquot quotingquot and quotityquot the most persistent problem is sparse data which is the source of most of the noisemore data would almost certainly allow us to produce better lists both because the statistics we are currently collecting would be more accurate but also because larger numbers would allow us to find other reliable indicatorsfor example idiomatic phrases might be recognized as suchso we see quotjalopy of a 
carquot but not of course quotthe car jalopyquotwords that appear in only one of the two patterns are suspect but to use this rule we need sufficient counts on the good words to be sure we have a representative sampleat 100 million words the nanc is not exactly small but we were able to process it in about four hours with the machines at our disposal so still larger corpora would not be out of the questionfinally as noted above hearst 2 tried to find parts in corpora but did not achieve good resultsshe does not say what procedures were used but assuming that the work closely paralleled her work on hyponyms we suspect that our relative success was due to our very large corpus and the use of more refined statistical measures for ranking the outputthis research was funded in part by nsf grant iri9319516 and onr grant n00149610549thanks
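As a rough illustration of how patterns A and B above can be harvested, the sketch below matches simple surface approximations of "whole's part" and "part of the/a ... whole" for a given whole word. The real system works over parsed, tagged text and stems head nouns; the regular expressions here are only loose approximations of my own.

```python
import re
from collections import Counter

def part_candidates(text, whole):
    """Count candidate part words for `whole` using surface versions of
    pattern A ("<whole>'s <part>") and pattern B ("<part> of the/a ... <whole>")."""
    patterns = [
        rf"\b{whole}s?'s\s+(\w+)",                                   # pattern A
        rf"\b(\w+)\s+of\s+(?:the|a|an)\s+(?:\w+\s+)*?{whole}s?\b",   # pattern B
    ]
    counts = Counter()
    for pattern in patterns:
        for match in re.finditer(pattern, text.lower()):
            counts[match.group(1)] += 1
    return counts

# e.g. part_candidates("the basement of the apartment building ...", "building")
```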
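For ranking candidate parts, the text compares Dunning's log-likelihood statistic with Johnson's significant-difference test under loose and strong conditioning. Only the former is sketched here, since its 2x2 contingency-table form is standard; the sigdiff test is omitted.

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 table of pattern counts:
    k11 = #(whole, part), k12 = #(whole, other part),
    k21 = #(other whole, part), k22 = #(other whole, other part)."""
    def h(*counts):
        total = sum(counts)
        return sum(c * math.log(c / total) for c in counts if c > 0)
    return 2.0 * (h(k11, k12, k21, k22)
                  - h(k11 + k12, k21 + k22)
                  - h(k11 + k21, k12 + k22))
```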
P99-1008
finding parts in very large corpora. we present a method for extracting parts of objects from wholes: given a very large corpus, our method finds part words with 55% accuracy for the top 50 words as ranked by the system. the part list could be scanned by an end-user and added to an existing ontology, or used as part of a rough semantic lexicon. to filter out attributes that are regarded as qualities rather than parts, we remove words ending with the suffixes -ness, -ing and -ity
inducing a semantically annotated lexicon via thembased clustering we present a technique for automatic induction of slot annotations for subcategorization frames based on induction of hidden classes in the them framework of statistical estimation the models are empirically evalutated by a general decision test induction of slot labeling for subcategorization frames is accomplished by a further application of them and applied experimentally on frame observations derived from parsing large corpora we outline an interpretation of the learned representations as theoreticallinguistic decompositional lexical entries an important challenge in computational linguistics concerns the construction of largescale computational lexicons for the numerous natural languages where very large samples of language use are now availableresnik initiated research into the automatic acquisition of semantic selectional restrictionsribas presented an approach which takes into account the syntactic position of the elements whose semantic relation is to be acquiredhowever those and most of the following approaches require as a prerequisite a fixed taxonomy of semantic relationsthis is a problem because entailment hierarchies are presently available for few languages and we regard it as an open question whether and to what degree existing designs for lexical hierarchies are appropriate for representing lexical meaningboth of these considerations suggest the relevance of inductive and experimental approaches to the construction of lexicons with semantic informationthis paper presents a method for automatic induction of semantically annotated subcategorization frames from unannotated corporawe use a statistical subcatinduction system which estimates probability distributions and corpus frequencies for pairs of a head and a subcat frame the statistical parser can also collect frequencies for the nominal fillers of slots in a subcat framethe induction of labels for slots in a frame is based upon estimation of a probability distribution over tuples consisting of a class label a selecting head a grammatical relation and a filler headthe class label is treated as hidden data in the emframework for statistical estimationin our clustering approach classes are derived directly from distributional dataa sample of pairs of verbs and nouns gathered by parsing an unannotated corpus and extracting the fillers of grammatical relationssemantic classes corresponding to such pairs are viewed as hidden variables or unobserved data in the context of maximum likelihood estimation from incomplete data via the them algorithmthis approach allows us to work in a mathematically welldefined framework of statistical inference ie standard monotonicity and convergence results for the them algorithm extend to our methodthe two main tasks of thembased clustering are i the induction of a smooth probability model on the data and ii the automatic discovery of classstructure in the databoth of these aspects are respected in our application of lexicon inductionthe basic ideas of our thembased clustering approach were presented in rooth our approach constrasts with the merely heuristic and empirical justification of similaritybased approaches to clustering for which so far no clear probabilistic interpretation has been giventhe probability model we use can be found earlier in pereira et al however in contrast to this approach our statistical inference method for clustering is formalized clearly as an themalgorithmapproaches to probabilistic clustering 
similar to ours were presented recently in saul and pereira and hofmann and puzicha there also themalgorithms for similar probability models have been derived but applied only to simpler tasks not involving a combination of embased clustering models as in our lexicon induction experimentfor further applications of our clustering model see rooth et al we seek to derive a joint distribution of verbnoun pairs from a large sample of pairs of verbs v e v and nouns n e n the key idea is to view v and n as conditioned on a hidden class c e c where the classes are given no prior interpretationthe semantically smoothed probability of a pair is defined to be the joint distribution p is defined by p pppnote that by construction conditioning of v and n on each other is solely made through the classes c in the framework of the them algorithm we can formalize clustering as an estimation problem for a latent class model as followswe are given a sample space of observed incomplete data corresponding to pairs from v x n a sample space x of unobserved complete data corresponding to triples from cx v x n a set x x e x i x c e ci of complete data related to the observation y a completedata specification po corresponding to the joint probability p over cx v x n with parametervector 0 an incomplete data specification p0 which is related to the completedata specification as the marginal probability p0 expo the them algorithm is directed at finding a value a of 0 that maximizes the incompletedata loglikelihood function l as a function of 0 for a given sample ie a arg max l where l ln fly po 0 as prescribed by the them algorithm the parameters of l are estimated indirectly by proceeding iteratively in terms of completedata estimation for the auxiliary function q which is the conditional expectation of the completedata loglikelihood lnpo given the observed data y and the current fit of the parameter values 0this auxiliary function is iteratively maximized as a function of 0 where each iteration is defined by note that our application is an instance of the themalgorithm for contextfree models from which the following particularily simple reestimation formulae can be derivedlet x for fixed c and ythen probabilistic contextfree grammar of gave for the british national corpus intuitively the conditional expectation of the number of times a particular v n or c choice is made during the derivation is prorated by the conditionally expected total number of times a choice of the same kind is madeas shown by baum et al these expectations can be calculated efficiently using dynamic programming techniquesevery such maximization step increases the loglikelihood function l and a sequence of reestimates eventually converges to a maximum of l in the following we will present some examples of induced clustersinput to the clustering algorithm was a training corpus of 1280715 tokens of verbnoun pairs participating in the grammatical relations of intransitive and transitive verbs and their subject and objectfillersthe data were gathered from the maximalprobability parses the headlexicalized fig2 shows an induced semantic class out of a model with 35 classesat the top are listed the 20 most probable nouns in the p distribution and their probabilities and at left are the 30 most probable verbs in the p distribution5 is the class indexthose verbnoun pairs which were seen in the training data appear with a dot in the class matrixverbs with suffix as s indicate the subject slot of an active intransitivesimilarily aso s denotes the subject 
slot of an active transitive and aso o denotes the object slot of an active transitivethus v in the above discussion actually consists of a combination of a verb with a subcat frame slot as s aso s or aso oinduced classes often have a basis in lexical semantics class 5 can be interpreted as clustering agents denoted by proper names quotmanquot and quotwomanquot together with verbs denoting communicative actionfig1 shows a cluster involving verbs of scalar change and things which can move along scalesfig5 can be interpreted as involving different dispositions and modes of their execution60 100 160 200 260 300 nix el cleeeewe evaluated our clustering models on a pseudodisambiguation task similar to that performed in pereira et al but differing in detailthe task is to judge which of two verbs v and v is more likely to take a given noun n as its argument where the pair has been cut out of the original corpus and the pair is constructed by pairing 71 with a randomly chosen verb v such that the combination is completely unseenthus this test evaluates how well the models generalize over unseen verbsthe data for this test were built as followswe constructed an evaluation corpus of triples by randomly cutting a test corpus of 3000 pairs out of the original corpus of 1280712 tokens leaving a training corpus of 1178698 tokenseach noun n in the test corpus was combined with a verb v which was randomly chosen according to its frequency such that the pair did appear neither in the training nor in the test corpushowever the elements v v and n were required to be part of the training corpusfurthermore we restricted the verbs and nouns in the evalutation corpus to the ones which occured at least 30 times and at most 3000 times with some verbfunctor v in the training corpusthe resulting 1337 evaluation triples were used to evaluate a sequence of clustering models trained from the training corpusthe clustering models we evaluated were parametrized in starting values of the training algorithm in the number of classes of the model and in the number of iteration steps resulting in a sequence of 3 x 10 x 6 modelsstarting from a lower bound of 50 random choice accuracy was calculated as the number of times the model decided for p p out of all choices madefig3 shows the evaluation results for models trained with 50 iterations averaged over starting values and plotted against class cardinalitydifferent starting values had an effect of 2 on the performance of the testwe obtained a value of about 80 accuracy for models between 25 and 100 classesmodels with more than 100 classes show a small but stable overfitting effecta second experiment addressed the smoothing power of the model by counting the number of pairs in the set v xn of all possible combinations of verbs and nouns which received a positive joint probability by the modelthe v x nspace for the above clustering models included about 425 million combinations we approximated the smoothing size of a model by randomly sampling 1000 pairs from v x n and returning the percentage of positively assigned pairs in the random samplefig4 plots the smoothing results for the above models against the number of classesstarting values had an influence of 1 on performancegiven the proportion of the number of types in the training corpus to the v x nspace without clustering we have a smoothing power of 014 whereas for example a model with 50 classes and 50 iterations has a smoothing power of about 93 corresponding to the maximum likelihood paradigm the number of training 
iterations had a decreasing effect on the smoothing performance whereas the accuracy of the pseudodisambiguation was increasing in the number of iterationswe found a number of 50 iterations to be a good compromise in this tradeoffthe goal of the following experiment was to derive a lexicon of several hundred intransitive and transitive verbs with subcat slots labeled with latent classesto induce latent classes for the subject slot of a fixed intransitive verb the following statistical inference step was performedgiven a latent class model plc for verbnoun pairs and a sample n1 nm of subjects for a fixed intransitive verb we calculate the probability of an arbitrary subject n e n by the estimation of the parametervector 0 can be formalized in the them framework by viewing p or p as a function of 0 for fixed plcthe reestimation formulae resulting from the incomplete data estimation for these probability functions have the following form is the frequency of n in the sample of subjects of the fixed verb a similar them induction process can be applied also to pairs of nouns thus enabling induction of latent semantic annotations for transitive verb framesgiven a lc model na for verbnoun pairs and a sample 1 of noun arguments for a fixed transitive verb we calculate the probability of its noun argument pairs by in an them framework by viewing p or p as a function of 0 for fixed plcthe reestimation formulae resulting from this incomplete data estimation problem have the following simple form is the frequency of in the sample of noun argument pairs of the fixed verb note that the class distributions p and p for intransitive and transitive models can be computed also for verbs unseen in the lc model blush 5 0982975 snarl 5 0962094 constance 3 mandeyille 2 christina 3 jinkwa 2 willie 299737 man 199859 ronni 2 scott 199761 claudia 2 omalley 199755 gabriel 2 shamlou 1 maggie 2 angalo 1 bathsheba 2 corbett 1 sarah 2 southgate 1 girl 19977 ace 1 experiments used a model with 35 classesfrom maximal probability parses for the british national corpus derived with a statistical parser we extracted frequency tables for intransitve verbsubject pairs and transitive verbsubjectobject triplesthe 500 most frequent verbs were selected for slot labelingfig6 shows two verbs v for which the most probable class label is 5 a class which we earlier described as communicative action together with the estimated frequencies of f pe for those ten nouns n for which this estimated frequency is highestfig7 shows corresponding data for an intransitive scalar motion sense of increasefig8 shows the intransitive verbs which take 17 as the most probable labelintuitively the verbs are semantically coherentwhen compared to levin 48 toplevel verb classes we found an agreement of our classification with her class of quotverbs of changes of statequot except for the last three verbs in the list in fig8 which is sorted by probability of the class labelsimilar results for german intransitive scalar motion verbs are shown in fig9the data for these experiments were extracted from the maximalprobability parses of a 41 million word corpus of german subordinate clauses yielding 418290 tokens of pairs of verbs or adjectives and nounsthe lexicalized probabilistic grammar for german used is described in beil et al we compared the german example of scalar motion verbs to the linguistic classification of verbs given by schuhmacher and found an agreement of our classification with the class of quoteinfache anderungsverbenquot except for the verbs 
anwachsen and stagnieren which were not classified there at allfig10 shows the most probable pair of classes for increase as a transitive verb together with estimated frequencies for the head filler pairnote that the object label 17 is the class found with intransitive scalar motion verbs this correspondence is exploited in the next sectionin some linguistic accounts multiplace verbs are decomposed into representations involving one predicate or relation per argumentfor instance the transitive causativeinchoative verb increase is composed of an actorcausative verb combining with a oneplace predicate in the structure on the left in fig11linguistically such representations are motivated by argument alternations case linking and deep word order language acquistion scope ambiguity by the desire to represent aspects of lexical meaning and by the fact that in some languages the postulated decomposed representations are overt with each primitive predicate corresponding to a morphemefor references and recent discussion of this kind of theory see hale and keyser and kural we will sketch an understanding of the lexical representations induced by latentclass labeling in terms of the linguistic theories mentioned above aiming at an interpretation which combines computational learnability linguistic motivation and denotationalsemantic adequacythe basic idea is that latent classes are computational models of the atomic relation symbols occurring in lexicalsemantic representationsas a first implementation consider replacing the relation symbols in the first tree in fig11 with relation symbols derived from the latent class labelingin the second tree in fig 11 r17 and r8 are relation symbols with indices derived from the labeling procedure of sect4such representations can be semantically interpreted in standard ways for instance by interpreting relation symbols as denoting relations between events and individualssuch representations are semantically inadequate for reasons given in philosophical critiques of decomposed linguistic representations see fodor for recent discussiona lexicon estimated in the above way has as many primitive relations as there are latent classeswe guess there should be a few hundred classes in an approximately complete lexicon fodor arguments which are based on the very limited degree of genuine interdefinability of lexical items and on putnam arguments for contextual determination of lexical meaning indicate that the number of basic concepts has the order of magnitude of the lexicon itselfmore concretely a lexicon constructed along the above principles would identify verbs which are labelled with the same latent classes for instance it might identify the representations of grab and touchfor these reasons a semantically adequate lexicon must include additional relational constantswe meet this requirement in a simple way by including as a conjunct a unique constant derived from the openclass root as in the third tree in fig11we introduce indexing of the open class root in order that homophony of open class roots not result in common conjuncts in semantic representationsfor instance we do not want the two senses of decline exemplified in decline the proposal and decline five percent to have an common entailment represented by a common conjunctthis indexing method works as long as the labeling process produces different latent class labels for the different sensesthe last tree in fig11 is the learned representation for the scalar motion sense of the intransitive verb increasein our 
approach learning the argument alternation relating the transitive increase to the intransitive increase amounts to learning representations with a common component ri7 a increaserin this case this is achievedwe have proposed a procedure which maps observations of subcategorization frames with their complement fillers to structured lexical entrieswe believe the method is scientifically interesting practically useful and flexible because 1the algorithms and implementation are efficient enough to map a corpus of a hundred million words to a lexicon2the model and induction algorithm have foundations in the theory of parameterized families of probability distributions and statistical estimationas exemplified in the paper learning disambiguation and evaluation can be given simple motivated formulations3the derived lexical representations are linguistically interpretablethis suggests the possibility of largescale modeling and observational experiments bearing on questions arising in linguistic theories of the lexicon4because a simple probabilistic model is used the induced lexical entries could be incorporated in lexicalized syntaxbased probabilistic language models in particular in headlexicalized modelsthis provides for potential application in many areas5the method is applicable to any natural language where text samples of sufficient size computational morphology and a robust parser capable of extracting subcategorization frames with their fillers are available
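The exact re-estimation formulae referred to in the lexicon-induction step above did not survive in this text, so the following is only a minimal sketch of one plausible reading of that step: for a fixed verb, EM fits verb-specific mixture weights over the latent classes while the class-conditional noun distributions p_LC(n|c) stay fixed. The function name fit_class_weights, the p_lc callable and the 50-iteration default are illustrative assumptions, not the authors' code.

import math
from collections import Counter

def fit_class_weights(subjects, p_lc, num_classes, iterations=50):
    """EM for verb-specific class weights p(c), holding the latent-class
    model p_lc(noun, class) -> P(noun | class) fixed.

    subjects: list of subject nouns observed with one fixed intransitive verb.
    Returns a list of class probabilities of length num_classes.
    """
    freq = Counter(subjects)                      # f(n): frequency of noun n in the sample
    total = sum(freq.values())
    p_c = [1.0 / num_classes] * num_classes       # uniform initialisation

    for _ in range(iterations):
        expected = [0.0] * num_classes
        for noun, f_n in freq.items():
            # E-step: posterior over classes for this noun under current weights
            joint = [p_c[c] * p_lc(noun, c) for c in range(num_classes)]
            z = sum(joint)
            if z == 0.0:
                continue
            for c in range(num_classes):
                expected[c] += f_n * joint[c] / z
        # M-step: re-estimate class weights from the expected (soft) counts
        p_c = [e / total for e in expected]
    return p_c

The transitive case described above would replace single nouns with noun pairs and p_lc with a class-conditional distribution over pairs, but the EM loop itself is unchanged.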
P99-1014
inducing a semantically annotated lexicon via EM-based clustering. we present a technique for automatic induction of slot annotations for subcategorization frames based on induction of hidden classes in the EM framework of statistical estimation. the models are empirically evaluated by a general decision test. induction of slot labeling for subcategorization frames is accomplished by a further application of EM and applied experimentally on frame observations derived from parsing large corpora. we outline an interpretation of the learned representations as theoretical-linguistic decompositional lexical entries. we test 3000 random verb-noun pairs requiring the verbs and nouns to appear between 30 and 3000 times in training. we use soft clustering to form classes for generalization and do not take recourse to any handcrafted resources in our approach to selectional preference induction
automatic construction of a hypernymlabeled noun hierarchy from text previous work has shown that automatic methods can be used in building semantic lexicons this work goes a step further by automatically creating not just clusters of related words but a hierarchy of nouns and their hypernyms akin to the handbuilt hierarchy in wordnet the purpose of this work is to build something like the hypernymlabeled noun hierarchy of wordnet automatically from text using no other lexical resourceswordnet has been an important research tool but it is insufficient for domainspecific text such as that encountered in the mucs our work develops a labeled hierarchy based on a text corpusin this project nouns are clustered into a hierarchy using data on conjunctions and appositives appearing in the wall street journalthe internal nodes of the resulting tree are then labeled with hypernyms for the nouns clustered underneath them also based on data extracted from the wall street journalthe resulting hierarchy is evaluated by human judges and future research directions are discussedthe first stage in constructing our hierarchy is to build an unlabeled hierarchy of nouns using bottomup clustering methods nouns are clustered based on conjunction and appositive data collected from the wall street journal corpussome of the data comes from the parsed files 221 of the wall street journal penn treebank corpus and additional parsed text was obtained by parsing the 1987 wall street journal text using the parser described in charniak et al from this parsed text we identified all conjunctions of noun phrases and all appositives the idea here is that nouns in conjunctions or appositives tend to be semantically related as discussed in riloff and shepherd and roark and charniak taking the head words of each np and stemming them results in data for about 50000 distinct nounsa vector is created for each noun containing counts for how many times each other noun appears in a conjunction or appositive with itwe can then measure the similarity of the vectors for two nouns by computing the cosine of the angle between these vectors as 111iwr to compare the similarity of two groups of nouns we define similarity as the average of the cosines between each pair of nouns made up of one noun from each of the two groupsev cos sizesize where v ranges over all vectors for nouns in group a w ranges over the vectors for group b and size represents the number of nouns which are descendants of node xwe want to create a tree of all of the nouns in this data using standard bottomup clustering techniques as follows put each noun into its own nodecompute the similarity between each pair of nodes using the cosine methodfind the two most similar nouns and combine them by giving them a common parent we can then compute the new node similarity to each other node by computing a weighted average of the similarities between each of its children and the other nodein other words assuming nodes a and b have been combined under a new parent c the similarity between c and any other node i can be computed as once again we combine the two most similar nodes under a common parentrepeat until all nouns have been placed under a common ancestornouns which have a cosine of 0 with every other noun are not included in the final treein practice we cannot follow exactly that algorithm because maintaining a list of the cosines between every pair of nodes requires a tremendous amount of memorywith 50000 nouns we would initially require a 50000 x 50000 array of values with 
our current hardware the largest array we the way we handled this limitation is to process the nouns in batchesinitially 5000 nouns are read inwe cluster these until we have 2500 nodesthen 2500 more nouns are read in to bring the total to 5000 again and once again we cluster until 2500 nodes remainthis process is repeated until all nouns have been processedsince the lowestfrequency nouns are clustered based on very little information and have a greater tendency to be clustered badly we chose to filter some of these outby reducing the number of nouns to be read a much nicer structure is obtainedwe now only consider nouns with a vector of length at least 2there are approximately 20000 nouns as the leaves in our final binary tree structureour next step is to try to label each of the internal nodes with a hypernym describing its descendant nounsfollowing wordnet a word a is said to be a hypernym of a word b if native speakers of english accept the sentence quotb is a aquot to determine possible hypernyms for a particular noun we use the same parsed text described in the previous sectionas suggested in hearst we can find some hypernym data in the text by looking for conjunctions involving the word quototherquot as in quotx y and other zsquot from this phrase we can extract that z is likely a hypernym for both x and ythis data is extracted from the parsed text and for each noun we construct a vector of hypernyms with a value of 1 if a word has been seen as a hypernym for this noun and 0 otherwisethese vectors are associated with the leaves of the binary tree constructed in the previous sectionfor each internal node of the tree we construct a vector of hypernyms by adding together the vectors of its childrenwe then assign a hypernym to this node by simply choosing the hypernym with the largest value in this vector that is the hypernym which appeared with the largest number of the node descendant nounswe also list the second and thirdbest hypernyms to account for cases where a single word does not describe the cluster adequately or cases where there are a few good hypernyms which tend to alternate such as quotcountryquot and quotnationquotif a hypernym has occurred with only one of the descendant nouns it is not listed as one of the best hypernyms since we have insufficient evidence that the word could describe this class of nounsnot every node has sufficient data to be assigned a hypernymthe labeled tree constructed in the previous section tends to be extremely redundantrecall that the tree is binaryin many cases a group of nouns really do not have an inherent tree structure for example a cluster of countriesalthough it is possible that a reasonable tree structure could be created with subtrees of say european countries asian countries etc recall that we are using singleword hypernymsa large binary tree of countries would ideally have quotcountryquot as the best hypernym at every levelwe would like to combine these subtrees into a single parent labeled quotcountryquot or quotnationquot with each country appearing as a leaf directly beneath this parentanother type of redundancy can occur when an internal node is unlabeled meaning a hypernym could not be found to describe its descendant nounssince the tree root is labeled somewhere above this node there is necessarily a node labeled with a hypernym which applies to its descendant nouns including those which are a descendant of this nodewe want to move this node children directly under the nearest labeled ancestorwe compress the tree using the 
following very simple algorithm in depthfirst order examine the children of each internal nodeif the child is itself an internal node and it either has no best hypernym or the same three best hypernyms as its parent delete this child and make its children into children of the parent insteadthere are 20014 leaves and 654 internal nodes in the final tree the toplevel node in our learned tree is labeled quotproductanalystofficialquot since these hypernyms are learned from the wall street journal they are domainspecific labels rather than the more general quotthingpersonquothowever if the hierarchy were to be used for text from the financial domain these labels may be preferredthe next level of the hierarchy the children of the root is as shown in table 1these numbers do not add up to 20014 because 1288 nouns are attached directly to the root meaning that they could not be clustered to any greater level of detailthese tend to be nouns for which little data was available generally proper nouns to evaluate the hierarchy 10 internal nodes dominating at least 20 nouns were selected at randomfor each of these nodes we randomly selected 20 of the nouns from the cluster under that nodethree human judges were asked to evaluate for each noun and each of the three hypernyms listed as quotbestquot for that cluster whether they were actually in a hyponymhypernym relationthe judges were students working in natural language processing or computational linguistics at our institution who were not directly involved in the research for this project5 quotnoisequot nouns randomly selected from elsewhere in the tree were also added to each cluster without the judges knowledge to verify that the judges were not overly generoussome nouns especially proper nouns were not recognized by the judgesfor any noun that was not evaluated by at least two judges we evaluated the nounhypernym pair by examining the appearances of that noun in the source text and verifying that the hypernym was correct for the predominant sense of the nountable 2 presents the results of this evaluationthe table lists only results for the actual candidate hyponym nouns not the noise wordsthe quothypernym 1quot column indicates whether the quotbestquot hypernym was considered correct while the quotany hypernymquot column indicates whether any of the listed hypernyms were acceptedwithin those columns quotmajorityquot lists the opinion of the majority of judges and quotanyquot indicates the hypernyms that were accepted by even one of the judgesthe quothypernym 1anyquot column can be used to compare results to riloff and shepherd for five handselected categories each with a single hypernym and the 20 nouns their algorithm scored as the best members of each category at least one judge marked on average about 31 of the nouns as correctusing randomlyselected categories and randomlyselected category members we achieved 39by the strictest criteria our algorithm produces correct hyponyms for a randomlyselected hypernym 33 of the timeroark and charniak report that for a handselected category their algorithm generally produces 20 to 40 correct entriesfurthermore if we loosen our criteria to consider also the second and thirdbest hypernyms 60 of the nouns evaluated were assigned to at least one correct hypernym according to at least one judgethe quotbankfirmstationquot cluster consists largely of investment firms which were marked as incorrect for quotbankquot resulting in the poor performance on the hypernym 1 measures for this clusterthe last cluster in the 
list labeled quotcompanyquot is actually a very good cluster of cities that because of sparse data was assigned a poor hypernymsome of the suggestions in the following section might correct this problemof the 50 noise words a few of them were actually rated as correct as well as shown in table 3this is largely because the noise words were selected truly at random so that a noise word for the quotcompanyquot cluster may not have been in that particular cluster but may still have appeared under a quotcompanyquot hypernym elsewhere in the hierarchyfuture work should benefit greatly by using data on the hypernyms of hypernymsin our current tree the best hypernym for the entire tree is quotproductquot however many times nodes deeper in the tree are given this label alsofor example we have a cluster including many forms of currency but because there is little data for these particular words the only hypernym found was quotproductquothowever the parent of this node has the best hypernym of quotcurrencyquotif we knew that quotproductquot was a hypernym of quotcurrencyquot we could detect that the parent node label is more specific and simply absorb the child node into the parentfurthermore we may be able to use data on the hypernyms of hypernyms to give better labels to some nodes that are currently labeled simply with the best hypernyms of their subtrees such as a node labeled quotproductanalystquot which has two subtrees one labeled quotproductquot and containing words for things the other labeled quotanalystquot and containing names of peoplewe would like to instead label this node something like quotentityquotit is not yet clear whether corpus data will provide sufficient data for hypernyms at such a high level of the tree but depending on the intended application for the hierarchy this level of generality might not be requiredas noted in the previous section one major spurious result is a cluster of 51 nouns mainly people which is given the hypernym quotconductorquotthe reason for this is that few of the nouns appear with hypernyms and two of them appear in the same phrase listing conductors thus giving quotconductorquot a count of two sufficient to be listed as the only hypernym for the clusterit might be useful to have some stricter criterion for hypernyms say that they occur with a certain percentage of the nouns below them in the treeadditional hypernym data would also be helpful in this case and should be easily obtainable by looking for other patterns in the text as suggested by hearst because the tree is built in a binary fashion when eg three clusters should all be distinct children of a common parent two of them must merge first giving an artificial intermediate level in the treefor example in the current tree a cluster with best hypernym quotagencyquot and one with best hypernym quotexchangequot have a parent with two best hypernyms quotagencyexchangequot rather than both of these nodes simply being attached to the next level up with best hypernym quotgroupquotit might be possible to correct for this situation by comparing the hypernyms for the two clusters and if there is little overlap deleting their parent node and attaching them to their grandparent insteadit would be useful to try to identify terms made up of multiple words rather than just using the head nouns of the noun phrasesnot only would this provide a more useful hierarchy or at least perhaps one that is more useful for certain applications but it would also help to prevent some errorshearst gives an example of a 
potential hyponymhypernym pair quotbroken boneinjuryquot using our algorithm we would learn that quotinjuryquot is a hypernym of quotbonequotideally this would not appear in our hierarchy since a more common hypernym would be chosen instead but it is possible that in some cases a bad hypernym would be found based on multiple word phrasesa discussion of the difficulties in deciding how much of a noun phrase to use can be found in hearstideally a useful hierarchy should allow for multiple senses of a word and this is an area which can be explored in future workhowever domainspecific text tends to greatly constrain which senses of a word will appear and if the learned hierarchy is intended for use with the same type of text from which it was learned it is possible that this would be of limited benefitwe used parsed text for these experiments because we believed we would get better results and the parsed data was readily availablehowever it would be interesting to see if parsing is necessary or if we can get equivalent or nearlyequivalent results doing some simpler text processing as suggested in ahlswede and evens both hearst and riloff and shepherd use unparsed textpereira et al used clustering to build an unlabeled hierarchy of nounstheir hierarchy is constructed topdown rather than bottomup with nouns being allowed membership in multiple clusterstheir clustering is based on verbobject relations rather than on the nounnoun relations that we usefuture work on our project will include an attempt to incorporate verbobject data as well in the clustering processthe tree they construct is also binary with some internal nodes which seem to be quotartificialquot but for evaluation purposes they disregard the tree structure and consider only the leaf nodesunfortunately it is difficult to compare their results to ours since their evaluation is based on the verbobject relationsriloff and shepherd suggested using conjunction and appositive data to cluster nouns however they approximated this data by just looking at the nearest np on each side of a particular nproark and charniak built on that work by actually using conjunction and appositive data for noun clustering as we do hereboth of these projects have the goal of building a single cluster of eg vehicles and both use seed words to initialize a cluster with nouns belonging to ithearst introduced the idea of learning hypernymhyponym relationships from text and gives several examples of patterns that can be used to detect these relationships including those used here along with an algorithm for identifying new patternsthis work shares with ours the feature that it does not need large amounts of data to learn a hypernym unlike in much statistical work a single occurrence is sufficientthe hyponymhypernym pairs found by hearst algorithm include some that hearst describes as quotcontext and pointofview dependentquot such as quotwashingtonnationalistquot and quotaircrafttargetquot our work is somewhat less sensitive to this kind of problem since only the most common hypernym of an entire cluster of nouns is reported so much of the noise is filteredwe have shown that hypernym hierarchies of nouns can be constructed automatically from text with similar performance to semantic lexicons built automatically for handselected hypernymswith the addition of some improvements we have identified we believe that these automatic methods can be used to construct truly useful hierarchiessince the hierarchy is learned from sample text it could be trained on domainspecific 
text to create a hierarchy that is more applicable to a particular domain than a generalpurpose resource such as wordnetthanks to eugene charniak for helpful discussions and for the data used in this projectthanks also to brian roark heidi jfox and keith hall for acting as judges in the project evaluationthis research is supported in part by nsf grant iri9319516 and by onr grant n00149610549
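A minimal sketch of the bottom-up clustering described above: cosine similarity over sparse conjunction/appositive count vectors, greedy pairwise merging, and a size-weighted average when propagating similarities to a merged node. The batching over 5000-noun chunks, the minimum-vector-length filter and the exclusion of zero-similarity nouns are omitted; the names build_tree and vectors are illustrative assumptions, not the authors' implementation.

import math

def cosine(u, v):
    """Cosine between sparse co-occurrence vectors (dicts: other noun -> count)."""
    dot = sum(c * v[k] for k, c in u.items() if k in v)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def build_tree(vectors):
    """Bottom-up clustering of nouns into a binary tree of nested 2-tuples.
    vectors: dict mapping each noun to its conjunction/appositive count vector."""
    trees = {i: noun for i, noun in enumerate(vectors)}       # node id -> subtree
    size = {i: 1 for i in trees}
    sim = {(i, j): cosine(vectors[trees[i]], vectors[trees[j]])
           for i in trees for j in trees if i < j}
    next_id = len(trees)
    while len(trees) > 1:
        a, b = max(sim, key=sim.get)                          # most similar pair of nodes
        merged = (trees.pop(a), trees.pop(b))
        # similarity of the new node to every other node is the size-weighted
        # average of its children's similarities to that node
        new_sims = {}
        for k in trees:
            s_a = sim[(min(a, k), max(a, k))]
            s_b = sim[(min(b, k), max(b, k))]
            new_sims[k] = (size[a] * s_a + size[b] * s_b) / (size[a] + size[b])
        sim = {pair: s for pair, s in sim.items() if a not in pair and b not in pair}
        for k, s in new_sims.items():
            sim[(min(next_id, k), max(next_id, k))] = s
        trees[next_id] = merged
        size[next_id] = size[a] + size[b]
        next_id += 1
    return trees.popitem()[1]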
P99-1016
automatic construction of a hypernymlabeled noun hierarchy from textprevious work has shown that automatic methods can be used in building semantic lexiconsthis work goes a step further by automatically creating not just clusters of related words but a hierarchy of nouns and their hypernyms akin to the handbuilt hierarchy in wordnetwe let three judges evaluate ten internal nodes in the hyponym hierarchy that had at least twenty descendants
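The hypernym-labeling and tree-compression steps described above can be sketched as follows, assuming a simple Node class and binary leaf counts (1 if the noun was seen in an "X, Y and other Z" pattern with that hypernym); this is a sketch of the described procedure, not the authors' code, and promoted grandchildren are not re-checked against their new parent.

from collections import Counter

class Node:
    def __init__(self, children=None, noun=None):
        self.children = children or []   # internal node: list of child Nodes
        self.noun = noun                 # leaf: the noun itself
        self.hyp_counts = Counter()      # summed hypernym counts over descendants
        self.best = []                   # up to three best hypernym labels

def label(node, hypernyms_of):
    """Bottom-up: sum hypernym count vectors and keep the three most frequent
    hypernyms that occurred with at least two descendant nouns."""
    if node.noun is not None:
        node.hyp_counts = Counter(hypernyms_of.get(node.noun, {}))
    else:
        for child in node.children:
            label(child, hypernyms_of)
            node.hyp_counts.update(child.hyp_counts)
    node.best = [h for h, c in node.hyp_counts.most_common(3) if c >= 2]

def compress(node):
    """Depth-first: splice out an internal child that is unlabeled or shares
    its parent's three best hypernyms, promoting its children upward."""
    new_children = []
    for child in node.children:
        compress(child)
        if child.children and (not child.best or child.best == node.best):
            new_children.extend(child.children)
        else:
            new_children.append(child)
    node.children = new_children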
development and use of a goldstandard data set for subjectivity classifications this paper presents a case study of analyzing and improving intercoder reliability in discourse using statistical techniques corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier this paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using the statistical techniques presented in our approach is data driven we refine our understanding and presentation of the classification scheme guided by the results of the intercoder analysiswe also present the results of a probabilistic classifier developed on the resulting annotationsmuch research in discourse processing has focused on taskoriented and instructional dialogsthe task addressed here comes to the fore in other genres especially news reportingthe task is to distinguish sentences used to objectively present factual information from sentences used to present opinions and evaluationsthere are many applications for which this distinction promises to be important including text categorization and summarizationthis research takes a large step toward developing a reliably annotated gold standard to support experimenting with such applicationsthis research is also a case study of analyzing and improving manual tagging that is applicable to any tagging taskwe perform a statistical analysis that provides information that complements the information provided by cohen kappa in particular we analyze patterns of agreement to identify systematic disagreements that result from relative bias among judges because they can potentially be corrected automaticallythe corrected tags serve two purposes in this workthey are used to guide the revision of the coding manual resulting in improved kappa scores and they serve as a gold standard for developing a probabilistic classifierusing biascorrected tags as goldstandard tags is one way to define a single best tag when there are multiple judges who disagreethe coding manual and data from our experiments are available at hap wwwcsnmsuedur wiebeprojectsin the remainder of this paper we describe the classification being performed the statistical tools used to analyze the data and produce the biascorrected tags the case study of improving intercoder agreement and the results of the classifier for automatic subjectivity tagging we address evidentiality in text which concerns issues such as what is the source of information and whether information is being presented as fact or opinionthese questions are particularly important in news reporting in which segments presenting opinions and verbal reactions are mixed with segments presenting objective fact the definitions of the categories in our coding manual are intentionbased quotif the primary intention of a sentence is objective presentation of material that is factual to the reporter the sentence is objectiveotherwise the sentence is subjectivequot we focus on sentences about private states such as belief knowledge emotions etc and sentences about speech events such as speaking and writingsuch sentences may be either subjective or objectivefrom the coding manual quotsubjective speechevent sentences are used to communicate the speaker evaluations opinions emotions and speculationsthe primary intention of objective speechevent sentences on the other hand is to objectively communicate material that is factual to the reporterthe speaker in these cases is being used as a 
reliable source of informationquot following are examples of subjective and objective sentences in sentence 4 there is no uncertainty or evaluation expressed toward the speaking eventthus from one point of view one might have considered this sentence to be objectivehowever the object of the sentence is not presented as material that is factual to the reporter so the sentence is classified as subjectivelinguistic categorizations usually do not cover all instances perfectlyfor example sentences may fall on the borderline between two categoriesto allow for uncertainty in the annotation process the specific tags used in this work include certainty ratings ranging from 0 for least certain to 3 for most certainas discussed below in section 32 the certainty ratings allow us to investigate whether a model positing additional categories provides a better description of the judges annotations than a binary model doessubjective and objective categories are potentially important for many text processing applications such as information extraction and information retrieval where the evidential status of information is importantin generation and machine translation it is desirable to generate text that is appropriately subjective or objective in summarization subjectivity judgments could be included in document profiles to augment automatically produced document summaries and to help the user make relevance judgments when using a search enginein addition they would be useful in text categorizationin related work we found that article types such as announcement and opinion piece are significantly correlated with the subjective and objective classificationour subjective category is related to but differs from the statementopinion category of the switchboarddamsl discourse annotation project as well as the gives opinion category of bale model of smallgroup interactionall involve expressions of opinion but while our category specifications focus on evidentiality in text theirs focus on how conversational participants interact with one another in dialogtable 1 presents data for two judgesthe rows correspond to the tags assigned by judge 1 and the columns correspond to the tags assigned by judge 2let nj denote the number of sentences that judge 1 classifies as i and judge 2 classifies as j and let be the probability that a randomly selected sentence is categorized as i by judge 1 and j by judge 2then the maximum likelihood estimate of fiii is 11771 where n eii nii 504table 1 shows a fourcategory data configuration in which certainty ratings 0 and 1 are combined and ratings 2 and 3 are combinednote that the analyses described in this section cannot be performed on the twocategory data configuration due to insufficient degrees of freedom evidence of confusion among the classifications in table 1 can be found in the marginal totals ni and njwe see that judge 1 has a relative preference or bias for objective while judge 2 has a bias for subjectiverelative bias is one aspect of agreement among judgesa second is whether the judges disagreements are systematic that is correlatedone pattern of systematic disagreement is symmetric disagreementwhen disagreement is symmetric the differences between the actual counts and the counts expected if the judges decisions were not correlated are symmetric that is snii for i j where 5i is the difference from independenceour goal is to correct correlated disagreements automaticallywe are particularly interested in systematic disagreements resulting from relative biaswe test for 
evidence of such correlations by fitting probability models to the dataspecifically we study bias using the model for marginal homogeneity and symmetric disagreement using the model for quasisymmetrywhen there is such evidence we propose using the latent class model to correct the disagreements this model posits an unobserved variable to explain the correlations among the judges observationsthe remainder of this section describes these models in more detailall models can be evaluated using the freeware package coco which was developed by badsberg and is available at http webmathaucdkr jhbcocoa probability model enforces constraints on the counts in the datathe degree to which the counts in the data conform to the constraints is called the fit of the modelin this work model fit is reported in terms of the likelihood ratio statistic g2 and its significance the higher the g2 value the poorer the fitwe will consider model fit to be acceptable if its reference significance level is greater than 001 bias of one judge relative to another is evidenced as a discrepancy between the marginal totals for the two judges bias is measured by testing the fit of the model for marginal homogeneity 25i for all ithe larger the g2 value the greater the biasthe fit of the model can be evaluated as described on pages 293294 of bishop et al judges who show a relative bias do not always agree but their judgments may still be correlatedas an extreme example judge 1 may assign the subjective tag whenever judge 2 assigns the objective tagin this example there is a kind of symmetry in the judges responses but their agreement would be lowpatterns of symmetric disagreement can be identified using the model for quasisymmetrythis model constrains the offdiagonal counts ie the counts that correspond to disagreementit states that these counts are the product of a table for independence and a symmetric table nii ai x ai x aii such that aij iiin this formula ai x a3 is the model for independence and ai3 is the symmetric interaction termintuitively aii represents the difference between the actual counts and those predicted by independencethis model can be evaluated using coco as described on pages 289290 of bishop et al we use the latent class model to correct symmetric disagreements that appear to result from biasthe latent class model was first introduced by lazarsfeld and was later made computationally efficient by goodman goodman procedure is a specialization of the them algorithm which is implemented in the freeware program coco since its development the latent class model has been widely applied and is the underlying model in various unsupervised machine learning algorithms including autoclass the form of the latent class model is that of naive bayes the observed variables are all conditionally independent of one another given the value of the latent variablethe latent variable represents the true state of the object and is the source of the correlations among the observed variablesas applied here the observed variables are the classifications assigned by the judgeslet b d j and m be these variables and let l be the latent variablethen the latent class model is the parameters of the model are p p p pp once estimates of these parameters are obtained each clause can be assigned the most probable latent category given the tags assigned by the judgesthe them algorithm takes as input the number of latent categories hypothesized ie the number of values of l and produces estimates of the parametersfor a description of this 
process see goodman dawid skene or pedersen bruce three versions of the latent class model are considered in this study one with two latent categories one with three latent categories and one with fourwe apply these models to three data configurations one with two categories one with four categories and one with eight categories all combinations of model and data configuration are evaluated except the fourcategory latent class model with the twocategory data configuration due to insufficient degrees of freedomin all cases the models fit the data well as measured by g2the model chosen as final is the one for which the agreement among the latent categories assigned to the three data configurations is highest that is the model that is most consistent across the three data configurationsour annotation project consists of the following steps2 biascorrected tag in many cases but arguing for his or her own tag in some casesbased on the judges feedback 22 of the 504 biascorrected tags are changed and a second draft of the coding manual is written5a second corpus is annotated by the same four judges according to the new coding manualeach spends about five hours6the results of the second tagging experiment are analyzed using the methods described in section 3 and biascorrected tags are produced for the second data settwo disjoint corpora are used in steps 2 and 5 both consisting of complete articles taken from the wall street journal treebank corpus in both corpora judges assign tags to each noncompound sentence and to each conjunct of each compound sentence 504 in the first corpus and 500 in the secondthe segmentation of compound sentences was performed manually before the judges received the datajudges j and b the first two authors of this paper are nlp researchersjudge m is an undergraduate computer science student and judge d has no background in computer science or linguisticsjudge j with help from m developed the original coding instructions and judge j directed the process in step 4the analysis performed in step 3 reveals strong evidence of relative bias among the judgeseach pairwise comparison of judges also shows a strong pattern of symmetric disagreementthe twocategory latent class model produces the most consistent clusters across the data configurationsit therefore is used to define the biascorrected tagsin step 4 judge b was excluded from the interactive discussion for logistical reasonsdiscussion is apparently important because although b kappa values for the first study are on par with the others b kappa values for agreement with the other judges change very little from the first to the second study in contrast agreement among the other judges noticeably improvesbecause judge b poor performance in the second tagging experiment is linked to a difference in procedure judge b tags are excluded from our subsequent analysis of the data gathered during the second tagging experimenttable 2 shows the changes from study 1 to study 2 in the kappa values for pairwise agreement among the judgesthe best results are clearly for the two who are not authors of this paper the kappa value for the agreement between d and m considering all certainty ratings reaches 76 which allows tentative conclusions on krippendorf scale if we exclude the sentences with certainty rating 0 the kappa values for pairwise agreement between m and d and between j and m are both over 8 which allows definite conclusions on krippendorf scalefinally if we only consider sentences with certainty 2 or 3 the pairwise agreements 
among m d and j all have high kappa values 087 and overwe are aware of only one previous project reporting intercoder agreement results for similar categories the switchboarddamsl project mentioned abovewhile their kappa results are very good for other tags the opinionstatement tagging was not very successful quotthe distinction was very hard to make by labelers and accounted for a large proportion of our interlabeler errorquot in step 6 as in step 3 there is strong evidence of relative bias among judges d j and m each pairwise comparison of judges also shows a strong pattern of symmetric disagreementthe results of this analysis are presented in table 33 also as in step 3 the twocategory latent class model produces the most consistent clusters across the data configurationsthus it is used to define the biascorrected tags for the second data set as wellrecently there have been many successful applications of machine learning to discourse processing such as in this section we report the results of machine learning experiments in which we develop probablistic classifiers to automatically perform the subjective and objective classificationin the method we use for developing classifiers a search is performed to find a probability model that captures important interdependencies among featuresbecause features can be dropped and added during search the method also performs feature selectionin these experiments the system considers naive bayes full independence full interdependence and models generated from those using forward and backward searchthe model selected is the one with the highest accuracy on a heldout portion of the training data setson each fold one set is used for testing and the other nine are used for trainingfeature selection model selection and parameter estimation are performed anew on each foldthe following are the potential features considered on each folda binary feature is included for each of the following the presence in the sentence of a pronoun an adjective a cardinal number a modal other than will and an adverb other than notwe also include a binary feature representing whether or not the sentence begins a new paragraphfinally a feature is included representing cooccurrence of word tokens and punctuation marks with the subjective and objective classification4 there are many other features to investigate in future work such as features based on tags assigned to previous utterances and features based on semantic classes such as positive and negative polarity adjectives and reporting verbs the data consists of the concatenation of the two corpora annotated with biascorrected tags as described abovethe baseline accuracy ie the frequency of the more frequent class is only 51the results of the experiments are very promisingthe average accuracy across all folds is 7217 more than 20 percentage points higher than the baseline accuracyinterestingly the system performs better on the sentences for which the judges are certainin a post hoc analysis we consider the sentences from the second data set for which judges m j and d rate their certainty as 2 or 3there are 299500 such sentencesfor each fold we calculate the system accuracy on the subset of the test set consisting of such sentencesthe average accuracy of the subsets across folds is 815taking human performance as an upper bound the system has room for improvementthe average pairwise percentage agreement between d j and m and the biascorrected tags in the entire data set is 895 while the system percentage agreement with the 
biascorrected tags is 7217this paper demonstrates a procedure for automatically formulating a single best tag when there are multiple judges who disagreethe procedure is applicable to any tagging task in which the judges exhibit symmetric disagreement resulting from biaswe successfully use biascorrected tags for two purposes to guide a revision of the coding manual and to develop an automatic classifierthe revision of the coding manual results in as much as a 16 point improvement in pairwise kappa values and raises the average agreement among the judges to a kappa value of over 087 for the sentences that can be tagged with certaintyusing only simple features the classifier achieves an average accuracy 21 percentage points higher than the baseline in 10fold cross validation experimentsin addition the average accuracy of the classifier is 815 on the sentences the judges tagged with certaintythe strong performance of the classifier and its consistency with the judges demonstrate the value of this approach to developing goldstandard tagsthis research was supported in part by the office of naval research under grant number n000149510776we are grateful to matthew t bell and richard a wiebe for participating in the annotation study and to the anonymous reviewers for their comments and suggestions
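The latent-class model used above to produce bias-corrected tags has a naive-Bayes structure with the true tag as a hidden variable, estimated by EM. The sketch below is not the Goodman/CoCo implementation the paper used; the function name, the dict-of-judge-tags input format and the small symmetry-breaking perturbation are assumptions made for illustration.

import math
from collections import defaultdict

def bias_corrected_tags(annotations, num_classes=2, iterations=100):
    """Fit a latent-class model to multiple judges' tags with EM and return the
    most probable latent class for each item.
    annotations: list of dicts, one per sentence, mapping judge -> observed tag."""
    judges = sorted({j for sent in annotations for j in sent})
    tags = sorted({t for sent in annotations for t in sent.values()})
    p_l = [1.0 / num_classes] * num_classes
    # emission probabilities P(tag | judge, latent class), lightly perturbed
    # so that the latent classes can separate during EM
    p_t = {}
    for j in judges:
        for l in range(num_classes):
            weights = [1.0 + 0.1 * ((tags.index(t) + l) % 2) for t in tags]
            z = sum(weights)
            for t, w in zip(tags, weights):
                p_t[(j, t, l)] = w / z

    def posterior(sent):
        joint = [p_l[l] * math.prod(p_t[(j, t, l)] for j, t in sent.items())
                 for l in range(num_classes)]
        z = sum(joint)
        return [x / z for x in joint]

    for _ in range(iterations):
        class_counts = [1e-9] * num_classes
        emit_counts = defaultdict(lambda: 1e-9)
        for sent in annotations:
            post = posterior(sent)              # E-step
            for l in range(num_classes):
                class_counts[l] += post[l]
                for j, t in sent.items():
                    emit_counts[(j, t, l)] += post[l]
        p_l = [c / sum(class_counts) for c in class_counts]   # M-step
        for j in judges:
            for l in range(num_classes):
                z = sum(emit_counts[(j, t, l)] for t in tags)
                for t in tags:
                    p_t[(j, t, l)] = emit_counts[(j, t, l)] / z

    return [max(range(num_classes), key=lambda l: posterior(sent)[l])
            for sent in annotations]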
P99-1032
development and use of a gold-standard data set for subjectivity classifications. this paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using statistical techniques. bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier. we use a sentence-level naive bayes classifier using as features the presence or absence of particular syntactic classes, punctuation, and sentence position. we define subjective sentences as sentences expressing evaluations, opinions, emotions, and speculations
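The classifier described above searches over probability-model structures and performs feature selection; the following is only a simplified naive Bayes baseline over the binary features the paper names (pronoun, adjective, cardinal number, modal other than will, adverb other than not, paragraph start). The feature names and the add-alpha smoothing are illustrative assumptions.

import math
from collections import defaultdict

def train_naive_bayes(examples, alpha=1.0):
    """examples: list of (feature_dict, label) with binary features such as
    has_pronoun, has_adjective, has_cardinal, has_modal, has_adverb, new_paragraph."""
    labels = sorted({y for _, y in examples})
    feats = sorted({f for x, _ in examples for f in x})
    counts = defaultdict(float)
    label_counts = defaultdict(float)
    for x, y in examples:
        label_counts[y] += 1
        for f in feats:
            counts[(y, f, bool(x.get(f)))] += 1
    log_prior = {y: math.log(label_counts[y] / len(examples)) for y in labels}
    log_lik = {}
    for y in labels:
        for f in feats:
            for v in (True, False):
                # add-alpha smoothing over the two values of a binary feature
                log_lik[(y, f, v)] = math.log(
                    (counts[(y, f, v)] + alpha) / (label_counts[y] + 2 * alpha))
    return labels, feats, log_prior, log_lik

def classify(x, model):
    labels, feats, log_prior, log_lik = model
    def score(y):
        return log_prior[y] + sum(log_lik[(y, f, bool(x.get(f)))] for f in feats)
    return max(labels, key=score)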
automatic identification of noncompositional phrases noncompositional expressions present a special challenge to nlp applications we present a method for automatic identification of noncompositional expressions using their statistical properties in a text corpus our method is based on the hypothesis that when a phrase is noncomposition its mutual information differs significantly from the mutual informations of phrases obtained by substituting one of the word in the phrase with a similar word noncompositional expressions present a special challenge to nlp applicationsin machine translation wordforword translation of noncompositional expressions can result in very misleading translationsin information retrieval expansion of words in a noncompositional expression can lead to dramatic decrease in precision without any gain in recallless obviously noncompositional expressions need to be treated differently than other phrases in many statistical or corpusbased nlp methodsfor example an underlying assumption in some word sense disambiguation systems eg is that if two words occurred in the same context they are probably similarsuppose we want to determine the intended meaning of quotproductquot in quothot productquotwe can find other words that are also modified by quothotquot and then choose the meaning of quotproductquot that is most similar to meanings of these wordshowever this method fails when noncompositional expressions are involvedfor instance using the same algorithm to determine the meaning of quotlinequot in quothot linequot the words quotproductquot quotmerchandisequot quotcarquot etc would lead the algorithm to choose the quotline of productquot sense of quotlinequotwe present a method for automatic identification of noncompositional expressions using their statistical properties in a text corpusthe intuitive idea behind the method is that the metaphorical usage of a noncompositional expression causes it to have a different distributional characteristic than expressions that are similar to its literal meaningthe input to our algorithm is a collocation database and a thesauruswe briefly describe the process of obtaining this inputmore details about the construction of the collocation database and the thesaurus can be found in we parsed a 125million word newspaper corpus with minipar1 a descendent of principar and extracted dependency relationships from the parsed corpusa dependency relationship is a triple where head and modifier are words in the input sentence and type is the type of the dependency relationfor example is an example dependency tree and the set of dependency triples extracted from are shown in there are about 80 million dependency relationships in the parsed corpusthe frequency counts of dependency relationships are filtered with the loglikelihood ratio we call a dependency relationship a collocation if its loglikelihood ratio is greater than a threshold the number of unique collocations in the resulting database2 is about 11 millionusing the similarity measure proposed in we constructed a corpusbased thesaurus3 consisting of 11839 nouns 3639 verbs and 5658 adjectiveadverbs which occurred in the corpus at least 100 timeswe define the probability space to consist of all possible collocation tripleswe use is ft 14i to denote the frequency count of all the collocations that match the pattern where h and m are either words or the wild card and r is either a dependency type or the wild cardfor example to compute the mutual information in a collocation we treat a 
collocation as the conjunction of three events the mutual information of a collocation is the logarithm of the ratio between the probability of the collocation and the probability of events a b and c cooccur if we assume b and c are conditionally independent given a d type modifierlxl type l ogin this section we use several examples to demonstrate the basic idea behind our algorithmconsider the expression quotspill gutquotusing the automatically constructed thesaurus we find the following top10 most similar words to the verb quotspillquot and the noun quotgutquot spill leak 0153 pour 0127 spew 0125 dump 0118 pump 0098 seep 0096 burn 0095 explode 0094 burst 0092 spray 0091 gut intestine 0091 instinct 0089 foresight 0085 creativity 0082 heart 0079 imagination 0076 stamina 0074 soul 0073 liking 0073 charisma 0071 the collocation quotspill gutquot occurred 13 times in the 125millionword corpusthe mutual information of this collocation is 624searching the collocation database we find that it does not contain any collocation in the form nor where simvsptil is a verb similar to quotspillquot and sirnngut is a noun similar to quotgutquotthis means that the phrases such as quotleak gutquot quotpour gutquot or quotspill intestinequot quotspill instinctquot either did not appear in the corpus at all or did not occur frequent enough to pass the loglikelihood ratio testthe second example is quotred tapequotthe top10 most similar words to quotredquot and quottapequot in our thesaurus are red yellow 0164 purple 0149 pink 0146 green 0136 blue 0125 white 0122 color 0118 orange 0111 brown 0101 shade 0094 tape videotape 0196 cassette 0177 videocassette 0168 video 0151 disk 0129 recording 0117 disc 0113 footage 0111 recorder 0106 audio 0106 the following table shows the frequency and mutual information of quotred tapequot and word combinations in which one of quotredquot or quottapequot is substituted by a similar word even though many other similar combinations exist in the collocation database they have very different frequency counts and mutual information values than quotred tapequotfinally consider a compositional phrase quoteconomic impactquotthe top10 most similar words are economic financial 0305 political 0243 social 0219 fiscal 0209 cultural 0202 budgetary 02 technological 0196 organizational 019 ecological 0189 monetary 0189 impact effect 0227 implication 0163 consequence 0156 significance 0146 repercussion 0141 fallout 0141 potential 0137 ramification 0129 risk 0126 influence 0125 the frequency counts and mutual information values of quoteconomic impactquot and phrases obtained by replacing one of quoteconomicquot and quotimpactquot with a similar word are in table 4not only many combinations are found in the corpus many of them have very similar mutual information values to that of nomial distribution can be accurately approximated by a normal distribution since all the potential noncompositional expressions that we are considering have reasonably large frequency counts we assume their distributions are normallet head type modifierj k and 1 1 n the maximum likelihood estimation of the true probability p of the collocation is k n even though we do not know what p is since p is normally distributed there is n chance that it fails within the interval where zn is a constant related to the confidence level n and the last step in the above derivation is due to the fact that is very smalltable 3 shows the zn values for a sample set of confidence intervalsquoteconomic impactquotin fact the difference of 
mutual information values appear to be more important to the phrasal similarity than the similarity of individual wordsfor example the phrases quoteconomic falloutquot and quoteconomic repercussionquot are intuitively more similar to quoteconomic impactquot than quoteconomic implicationquot or quoteconomic significancequot even though quotimplicationquot and quotsignificancequot have higher similarity values to quotimpactquot than quotfalloutquot and quotrepercussionquot dothese examples suggest that one possible way to separate compositional phrases and noncompositional ones is to check the existence and mutual information values of phrases obtained by substituting one of the words with a similar worda phrase is probably noncompositional if such substitutions are not found in the collocation database or their mutual information values are significantly different from that of the phrasein order to implement the idea of separating noncompositional phrases from compositional ones with mutual information we must use a criterion to determine whether or not the mutual information values of two collocations are significantly differentalthough one could simply use a predetermined threshold for this purpose the threshold value will be totally arbitraryfurthermore such a threshold does not take into account the fact that with different frequency counts we have different levels confidence in the mutual information valueswe propose a more principled approachthe frequency count of a collocation is a random variable with binomial distributionwhen the frequency count is reasonably large a bin 50 80 90 95 98 99 zn 067 128 164 196 233 258 we further assume that the estimations of p p and p in are accuratethe confidence interval for the true probability gives rise to a confidence interval for the true mutual information the upper and lower bounds of this interval are obtained by substituting 11 with kzs vk and lz1114 in since our confidence of p falling between1vin is n we can have n confidence that the true mutual information is within the upper and lower boundwe use the following condition to determine whether or not a collocation is compositional a collocation a is noncompositional if there does not exist another collocation 3 such that 13 is obtained by substituting the head or the modifier in a with a similar word and there is an overlap between the 95 confidence interval of the mutual information values of a and 0for example the following table shows the frequency count mutual information and the lower and upper bounds of the 95 confidence interval of the true mutual information verbobject freq mutual lower upper count info bound bound make difference 1489 2928 2876 2978 make change 1779 2194 2146 2239 since the intervals are disjoint the two collocations are considered to have significantly different mutual information valuesthere is not yet a wellestablished methodology for evaluating automatically acquired lexical knowledgeone possibility is to compare the automatically identified relationships with relationships listed in a manually compiled dictionaryfor example compared automatically created thesaurus with the wordnet 1990 and roget thesaurushowever since the lexicon used in our parser is based on the wordnet the phrasal words in wordnet are treated as a single wordfor example quottake advantage ofquot is treated as a transitive verb by the parseras a result the extracted noncompositional phrases do not usually overlap with phrasal entries in the wordnettherefore we conducted the evaluation by 
manually examining sample resultsthis method was also used to evaluate automatically identified hyponyms word similarity and translations of collocations our evaluation sample consists of 5 most frequent open class words in the our parsed corpus have company make do take and 5 words whose frequencies are ranked from 2000 to 2004 path lock resort column gulfwe examined three types of dependency relationships objectverb nounnoun and adjectivenouna total of 216 collocations were extracted shown in appendix awe compared the collocations in appendix a with the entries for the above 10 words in the ntc english idioms dictionary which contains approximately 6000 definitions of idiomsfor our evaluation purposes we selected the idioms in ntceid that satisfy both of the following two conditions a the head word of the idiom is one of the above 10 words b there is a verbobject nounnoun or adjectivenoun relationship in the idiom and the modifier in the phrase is not a variablefor example quottake a stab at somethingquot is included in the evaluation whereas quottake something at face valuequot is notthere are 249 such idioms in ntceid 34 of which are also found in appendix a if we treat the 249 entries in ntceid as the gold standard the precision and recall of the phrases in appendix a are shown in table 4to compare the performance with manually compiled dictionaries we also compute the precision and recall of the entries in the longman dictionary of english idioms that satisfy the two conditions in it can be seen that the overlap between manually compiled dictionaries are quite low reflecting the fact that different lexicographers may have quite different opinion about which phrases are noncompositionalthe collocations in appendix a are classified into three categoriesthe ones marked with sign are found in ntceidthe ones marked withx are parsing errors the unmarked collocations satisfy the condition but are not found in ntceidmany of the unmarked collocation are clearly idioms such as quottake fifth amendmentquot and quottake tollquot suggesting that even the most comprehensive dictionaries may have many gaps in their coveragethe method proposed in this paper can be used to improve the coverage manually created lexical resourcesmost of the parser errors are due to the incompleteness of the lexicon used by the parserfor example quotoptquot is not listed in the lexicon as a verbthe lexical analyzer guessed it as a noun causing the erroneous collocation quot do optquotthe collocation quottrig lockquot should be quottrigger lockquotthe lexical analyzer in the parser analyzed quottriggerquot as the er form of the adjective quottrigquot duplications in the corpus can amplify the effect of a single mistakefor example the following disclaimer occurred 212 times in the corpusquotannualized average rate of return after expenses for the past 30 days not a forecast of future returnsquot the parser analyzed quota forecast of future returnsquot as s ni a forecast of future vp returnsas a result satisfied the condition duplications can also skew the mutual information of correct dependency relationshipsfor example the verbobject relationship between quottakequot and quotbridequot passed the mutual information filter because there are 4 copies of the article containing this phraseif we were able to throw away the duplicates and record only one count of quottakebridequot it would have not pass the mutual information filter the fact that systematic parser errors tend to pass the mutual information filter is both a 
curse and a blessingon the negative side there is no obvious way to separate the parser errors from true noncompositional expressionson the positive side the output of the mutual information filter has much higher concentration of parser errors than the database that contains millions of collocationsby manually sifting through the output one can construct a list of frequent parser errors which can then be incorporated into the parser so that it can avoid making these mistakes in the futuremanually going through the output is not unreasonable because each noncompositional expression has to be individually dealt with in a lexicon anywayto find out the benefit of using the dependency relationships identified by a parser instead of simple cooccurrence relationships between words we also created a database of the cooccurrence relationship between partofspeech tagged wordswe aggregated all word pairs that occurred within a 4word window of each otherthe same algorithm and similarity measure for the dependency database are used to construct a thesaurus using the cooccurrence databaseappendix b shows all the word pairs that satisfies the condition and that involve one of the 10 words have company make do take path lock resort column gulfit is clear that appendix b contains far fewer true noncompositional phrases than appendix athere have been numerous previous research on extracting collocations from corpus eg and they do not however make a distinction between compositional and noncompositional collocationsmutual information has often been used to separate systematic associations from accidental onesit was also used to compute the distributional similarity between words a method to determine the compositionality of verbobject pairs is proposed in the basic idea in there is that quotif an object appears only with one verb in a large corpus we expect that it has an idiomatic naturequot for each object noun o computes the distributed frequency df and rank the noncompositionality of o according to this valueusing the notation introduced in section 3 df is computed as follows where vi v2 vn are verbs in the corpus that took o as the object and where a and b are constantsthe first column in table 5 lists the top 40 verbobject pairs in the quotmiquot column show the result of our mutual information filterthe sign means that the verbobject pair is also consider to be noncompositional according to mutual information filter the sign means that the verbobject pair is present in our dependency database but it does not satisfy condition for each marked pairs the quotsimilar collocationquot column provides a similar collocation with a similar mutual information value the of marked pairs are not found in our collocation database for various reasonsfor example quotfinish seventhquot is not found because quotseventhquot is normalized as quotnumquot quothave a goquot is not found because quota goquot is not an entry in our lexicon and quottake advantagequot is not found because quottake advantage ofquot is treated as a single lexical item by our parserthe v marks in the quotntcquot column in table 5 indicate that the corresponding verbobject pairs is an idiom in it can be seen that none of the verbobject pairs in table 5 that are filtered out by condition is listed as an idiom in ntceidwe have presented a method to identify noncompositional phrasesthe method is based on the assumption that noncompositional phrases have a significantly different mutual information value than the phrases that are similar to their 
literal meanings. Our experiment shows that this hypothesis is generally true; however, many collocations resulting from systematic parser errors also tend to possess this property. The author wishes to thank the ACL reviewers for their helpful comments and suggestions. This research was partly supported by Natural Sciences and Engineering Research Council of Canada grant OGP121338. Y. Choueka. 1988. Looking for needles in a haystack, or locating interesting collocational expressions in large textual databases. In Proceedings of the RIAO Conference on User-Oriented Content-Based Text and Image Handling, Cambridge, MA, March 21-24. Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20:563-596. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19:61-74, March. Marti A. Hearst. 1998. Automated discovery of WordNet relations. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 131-151. MIT Press.
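Relating to the co-occurrence database mentioned above (word pairs within a 4-word window) and the mutual information filter, the following is a minimal Python sketch, not the author's implementation; the tokenisation, the right-looking window, and the frequency cutoff are assumptions made here for illustration.

```python
import math
from collections import Counter

def cooccurrence_pmi(sentences, window=4, min_count=2):
    """Collect word pairs that co-occur within `window` tokens of each other
    and score each pair with pointwise mutual information.  `sentences` is an
    iterable of token lists; tagging/normalisation is assumed upstream."""
    word_counts = Counter()
    pair_counts = Counter()
    total_tokens = 0
    total_pairs = 0
    for tokens in sentences:
        word_counts.update(tokens)
        total_tokens += len(tokens)
        for i, w in enumerate(tokens):
            # pair w with every token in the 4-word window to its right
            for v in tokens[i + 1:i + 1 + window]:
                pair_counts[(w, v)] += 1
                total_pairs += 1
    scores = {}
    for (w, v), c in pair_counts.items():
        if c < min_count:
            continue
        p_pair = c / total_pairs
        p_w = word_counts[w] / total_tokens
        p_v = word_counts[v] / total_tokens
        scores[(w, v)] = math.log(p_pair / (p_w * p_v))
    return scores

if __name__ == "__main__":
    toy = [["take", "the", "fifth", "amendment"],
           ["take", "a", "deep", "breath"],
           ["take", "the", "fifth", "amendment", "again"]]
    for pair, s in sorted(cooccurrence_pmi(toy, min_count=2).items(),
                          key=lambda kv: -kv[1]):
        print(pair, round(s, 2))
```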
P99-1041
Automatic identification of noncompositional phrases. Noncompositional expressions present a special challenge to NLP applications. We present a method for automatic identification of noncompositional expressions using their statistical properties in a text corpus. Our method is based on the hypothesis that when a phrase is noncompositional, its mutual information differs significantly from the mutual information of phrases obtained by substituting one of the words in the phrase with a similar word. We use LSA to distinguish between compositional and noncompositional verb-particle constructions and noun-noun compounds. We define a decision criterion for noncompositional phrases based on the change in the mutual information of a phrase when substituting one word for a similar one, based on an automatically constructed thesaurus.
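The decision criterion summarised here (a phrase is flagged when its mutual information differs significantly from that of phrases obtained by substituting a similar word) could be sketched roughly as follows; the (head, relation, modifier) encoding, the precomputed MI table with standard errors, the thesaurus lookup, and the z-style threshold are all assumptions of this sketch rather than the paper's actual estimator.

```python
def is_noncompositional(phrase, mi_table, similar_words, z_threshold=2.0):
    """Substitution-test sketch: flag a phrase when its mutual information
    differs significantly from that of every phrase obtained by replacing
    its head or modifier with a distributionally similar word.

    phrase        -- (head, relation, modifier) triple, e.g. ("take", "obj", "toll")
    mi_table      -- dict mapping triples to (mutual_information, std_error),
                     assumed precomputed from a parsed corpus
    similar_words -- word -> list of distributionally similar words (thesaurus)
    """
    if phrase not in mi_table:
        return False
    head, rel, mod = phrase
    mi_p, se_p = mi_table[phrase]
    # Phrases obtained by substituting one word with a similar word.
    variants = [(h, rel, mod) for h in similar_words.get(head, [])] \
             + [(head, rel, m) for m in similar_words.get(mod, [])]
    attested = [v for v in variants if v in mi_table]
    if not attested:
        # None of the similar phrases occurs in the collocation database;
        # treating that as evidence of noncompositionality is a design
        # choice of this sketch.
        return True
    # Require a significant difference from every attested similar phrase.
    for v in attested:
        mi_v, se_v = mi_table[v]
        z = abs(mi_p - mi_v) / ((se_p ** 2 + se_v ** 2) ** 0.5)
        if z < z_threshold:
            return False
    return True
```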
deep read a reading comprehension system paper describes initial work on read an automated reading comprehension system that accepts arbitrary text input and answers questions about it we have acquired a corpus of 60 and 60 test stories of to grade material each story is followed by shortanswer questions we used these to construct and evaluate a baseline system that uses pattern matching techniques augmented with additional automated linguistic processing this simple system retrieves the sentence containing the answer 3040 of the time this paper describes our initial work exploring reading comprehension tests as a research problem and an evaluation method for language understanding systemssuch tests can take the form of standardized multiplechoice diagnostic reading skill tests as well as fillintheblank and shortanswer teststypically such tests ask the student to read a story or article and to demonstrate herhis understanding of that article by answering questions about itfor an example see figure 1reading comprehension tests are interesting because they constitute quotfoundquot test material these tests are created in order to evaluate children reading skills and therefore test materials scoring algorithms and human performance measures already existfurthermore human performance measures provide a more intuitive way of assessing the capabilities of a given system than current measures of precision recall fmeasure operating curves etcin addition reading comprehension tests are written to test a range of skill levelswith proper choice of test material it should be possible to challenge systems to successively higher levels of performancefor these reasons reading comprehension tests offer an interesting alternative to the kinds of specialpurpose carefully constructed evaluations that have driven much recent research in language understandingmoreover the current stateoftheart in computerbased language understanding makes this project a good choice it is beyond current systems capabilities but tractableour it was 150 years ago this year that our nation biggest library burned to the groundcopies of all the written books of the time were kept in the library of congressbut they were destroyed by fire in 1814 during a war with the britishthat fire did not stop book loversthe next year they began to rebuild the libraryby giving it 6457 of his books thomas jefferson helped get it startedthe first libraries in the united states could be used by members onlybut the library of congress was built for all the peoplefrom the start it was our national librarytoday the library of congress is one of the largest libraries in the worldpeople can find a copy of just about every book and magazine printedlibraries have been with us since people first learned to writeone of the oldest to be found dates back to about 800 years bcthe books were written on tablets made from claythe people who took care of the books were called quotmen of the written tabletsquot simple bagofwords approach picked an appropriate sentence 3040 of the time with only a few months work much of it devoted to infrastructurewe believe that by adding additional linguistic and world knowledge sources to the system it can quickly achieve primaryschoollevel performance and within a few years quotgraduatequot to realworld applicationsreading comprehension tests can serve as a testbed providing an impetus for research in a number of areas bottlenecks for lexical and world knowledgein addition research into collaboration might lead to insights about 
intelligent tutoringfinally reading comprehension evaluates systems abilities to answer ad hoc domainindependent questions this ability supports fact retrieval as opposed to document retrieval which could augment future search engines see kupiec for an example of such workthere has been previous work on story understanding that focuses on inferential processing common sense reasoning and world knowledge required for indepth understanding of storiesthese efforts concern themselves with specific aspects of knowledge representation inference techniques or question types see lehnert or schubert in contrast our research is concerned with building systems that can answer ad hoc questions about arbitrary documents from varied domainswe report here on our initial pilot study to determine the feasibility of this taskwe purchased a small corpus of development and test materials consisting of remedial reading materials for grades 36 these materials are simulated news stories followed by shortanswer quot5wquot questions who what when where and why questionswe developed a simple modular baseline system that uses pattern matching techniques and limited linguistic processing to select the sentence from the text that best answers the querywe used our development corpus to explore several alternative evaluation techniques and then evaluated on the test set which was kept blindwe had three goals in choosing evaluation metrics for our systemfirst the evaluation should be automaticsecond it should maintain comparability with human benchmarksthird it should require little or no effort to prepare new answer keyswe used three metrics pr humsent and autsent which satisfy these constraints to varying degreespr was the precision and recall on stemmed content words2 comparing the system response at the word level to the answer key provided by the test publisherhumsent and autsent compared the sentence chosen by the system to a list of acceptable answer sentences scoring one point for a response on the list and zero points otherwisein all cases the score for a set of questions was the average of the scores for each questionfor pr the answer key from the publisher was used unmodifiedthe answer key for humsent was compiled by a human annotator i these materials consisted of levels 25 of quotthe 5 wquot written by linda miller which can be purchased from remedia publications 10135 e via linda d124 scottsdale az 85258repeated words in the answer key match or fail togetherall words are stemmed and stop words are removedat present the stopword list consists of forms of be have and do personal and possessive pronouns the conjunctions and or the prepositions to in at of the articles a and the and the relative and demonstrative pronouns this that and which who examined the texts and chose the sentence that best answered the question even where the sentence also contained additional informationfor autsent an automated routine replaced the human annotator examining the texts and choosing the sentences this time based on which one had the highest recall compared against the published answer keyfor pr we note that in figure 2 there are two content words in the answer key and sentence 1 matches both of them for 22 100 recallthere are seven content words in sentence 1 so it scores 27 29 precisionsentence 2 scores 1250 recall and 1617 precisionthe human preparing the list of acceptable sentences for humsent has a problemsentence 2 responds to the question but requires pronoun coreference to give the full answer sentence 1 contains 
the words of the answer but the sentence as a whole does not really answer the questionin this and other difficult cases we have chosen to list no answers for the human metric in which case the system receives zero points for the questionthis occurs 11 of the time in our test corpusthe question is still counted meaning that the system receives a penalty in these casesthus the highest score a system could achieve for humsent is 89given that our current system can only respond with sentences from the text this penalty is appropriatethe automated routine for preparing the answer key in autsent selects as the answer key the sentence with the highest recall thus only sentence 1 would be counted as a correct answerwe have implemented all three metricshumsent and autsent are comparable with human benchmarks since they provide a binary score as would a teacher for a student answerin contrast the precision and recall scores of pr lack such a straightforward comparabilityhowever word recall from pare closely mimics the scores of humsent and autsentthe correlation coefficient for answdrecall to humsent in our test set is 98 and from humsent to autsent is also 98with respect to ease of answer key preparation pr and autsent are clearly superior since they use the publisherprovided answer keyhumsent requires human annotation for each questionwe found this annotation to be of moderate difficultyfinally we note that precision as well as recall will be useful to evaluate systems that can return clauses or phrases possibly constructed rather than whole sentence extracts as answerssince most national standardized tests feature a large multiplechoice component many available benchmarks are multiplechoice examsalso although our shortanswer metrics do not impose a penalty for incorrect answers multiplechoice exams such as the scholastic aptitude tests doin realworld applications it might be important that the system be able to assign a confidence level to its answerspenalizing incorrect answers would help guide development in that regardwhile we were initially concerned that adapting the system to multiplechoice questions would endanger the goal of realworld applicability we have experimented with minor changes to handle the multiple choice formatinitial experiments indicate that we can use essentially the same system architecture for both shortanswer and multiple choice teststhe process of taking shortanswer reading comprehension tests can be broken down into the following subtasks a crucial component of all three of these subtasks is the representation of information in textbecause our goal in designing our system was to explore the difficulty of various reading comprehension exams and to measure baseline performance we tried to keep this initial implementation as simple as possibleour system represents the information content of a sentence as the set of words in the sentencethe word sets are considered to have no structure or order and contain unique elementsfor example the representation for is the set in la by giving it 6457 of his books thomas jefferson helped get it started lb 16457 books by get giving helped his it jefferson of started thomas extraction of information content from text both in documents and questions then consists of tokenizing words and determining sentence boundary punctuationfor english written text both of these tasks are relatively easy although not trivialsee palmer and hearst the search subtask consists of finding the best match between the word set representing the question and 
the sets representing sentences in the documentour system measures the match by size of the intersection of the two word setsfor example the question in would receive an intersection score of 1 because of the mutual set element booksbecause match size does not produce a complete ordering on the sentences of the document we additionally prefer sentences that first match on longer words and second occur earlier in the documentin this section we describe extensions to the extraction approach described abovein the next section we will discuss the performance benefits of these extensionsthe most straightforward extension is to remove function or stop words such as the of a etc from the word sets reasoning that they offer little semantic information and only muddle the signal from the more contentful wordssimilarly one can use stemming to remove inflectional affixes from the words such normalization might increase the signal from contentful wordsfor example the intersection between and would include give if inflection were removed from gave and givingwe used a stemmer described by abney a different type of extension is suggested by the fact that who questions are likely to be answered with words that denote people or organizationssimilarly when and where questions are answered with words denoting temporal and locational words respectivelyby using name taggers to identify person location and temporal information we can add semantic class symbols to the question word sets marking the type of the question and then add corresponding class symbols to the word sets whose sentences contain phrases denoting the proper type of entityfor example due to the name thomas jefferson the word set in would be extended by person as would the word set because it is a who questionthis would increase the matching score by onethe system makes use of the alembic automated named entity system for finding named entitiesin a similar vein we also created a simple common noun classification module using wordnet it works by looking up all nouns of the text and adding person or location classes if any of a noun senses is subsumed by the appropriate wordnet classwe also created a filtering module that ranks sentences higher if they contain the appropriate class identifier even though they may have fewer matching words eg if the bag representation of a sentence does not contain person it is ranked lower as an answer to a who question than sentences which do contain personfinally the system contains an extension which substitutes the referent of personal pronouns for the pronoun in the bag representationfor example if the system were to choose the sentence he gave books to the library the answer returned and scored would be thomas jefferson gave books to the library if he were resolved to thomas jeffersonthe current system uses a very simplistic pronoun resolution system whichour modular architecture and automated scoring metrics have allowed us to explore the effect of various linguistic sources of information on overall system performancewe report here on three sets of findings the value added from the various linguistic modules the questionspecific results and an assessment of the difficulty of the reading comprehension taskwe were able to measure the effect of various linguistic techniques both singly and in combination with each other as shown in figure 3 and table 1the individual modules are indicated as follows name is the alembic named tagger described abovenamehum is handtagged named entitystem is abney automatic 
stemming algorithmflit is the filtering modulepro is automatic name and personal pronoun coreferenceprolitun is handtagged full reference resolutionsem is the wordnetbased common noun semantic classificationwe computed significance using the nonparametric significance test described by noreen the following performance improvements of the answdrecall metric were statistically significant results at a confidence level of 95 base vs namestem namestem vs filtnamehumstem and filtnamehumstem vs filtprohumnamehumstemthe other adjacent performance differences in figure 3 are suggestive but not statistically significantremoving stop words seemed to hurt overall performance slightlyit is not shown herestemming on the other hand produced a small but fairly consistent improvementwe compared these results to perfect stemming which made little difference leading us to conclude that our automated stemming module worked well enoughname identification provided consistent gainsthe alembic name tagger was developed for newswire text and used here with no modificationswe created handtagged named entity data which allowed us to measure the performance of alembic the accuracy was 765 see chinchor and sundheim for a description of the standard muc scoring metricthis also allowed us to simulate perfect tagging and we were able to determine how much we might gain by improving the name tagging by tuning it to this domainas the results indicate there would be little gain from improved name tagging however some modules that seemed to have little effect with automatic name tagging provided small gains with perfect name tagging specifically wordnet common noun semantics and automatic pronoun resolutionwhen used in combination with the filtering module these also seemed to helpsimilarly the handtagged reference resolution data allowed us to evaluate automatic coreference resolutionthe latter was a combination of name coreference as determined by alembic and a heuristic resolution of personal pronouns to the most recent prior named personusing the muc coreference scoring algorithm this had a precision of 77 and a recall of 183 the use of full handtagged reference resolution caused a substantial increase of the answdrecall metricthis was because the system substitutes the antecedent for all referring expressions improving the wordbased measurethis did not however provide an increase in the sentencebased measuresfinally we plan to do similar human labeling experiments for semantic class identification to determine the potential effect of this knowledge sourceour results reveal that different questiontypes behave very differently as shown in figure 4why questions are by far the hardest because they require understanding of rhetorical structure and because answers tend to be whole clauses rather than phrases embedded in a context that matches the query closelyon the other hand who and when queries benefit from reliable person name and time extractionwho questions seem to benefit most dramatically from perfect name tagging combined with filtering and pronoun resolutionwhat questions show relatively little benefit from the various linguistic techniques probably because there are many types of what question most of which are not answered by a person time or placefinally where question results are quite variable perhaps because location expressions often do not include specific place names3 the low recall is attributable to the fact that the heuristic asigned antecedents only for names and pronouns and completely ignored 
definite noun phrases and plural pronousthese results indicate that the sample tests are an appropriate and challenging taskthe simple techniques described above provide a system that finds the correct answer sentence almost 40 of the timethis is much better than chance which would yield an average score of about 45 for the sentence metrics given an average document length of 20 sentencessimple linguistic techniques enhance the baseline system score from the low 30 range to almost 40 in all three metricshowever capturing the remaining 60 will clearly require more sophisticated syntactic semantic and world knowledge sourcesour pilot study has shown that reading comprehension is an appropriate task providing a reasonable starting level it is tractable but not trivialour next steps include standardized multiplechoice reading comprehension testthis will require some minor changes in strategyfor example in preliminary experiments our system chose the answer that had the highest sentence matching score when composed with the questionthis gave us a score of 45 on a small multiplechoice test setsuch tests require us to deal with a wider variety of question types eg what is this story aboutthis will also provide an opportunity to look at rejection measures since many tests penalize for random guessing moving from whole sentence retrieval towards answer phrase retrievalthis will allow us to improve answer word precision which provides a good measure of how much extraneous material we are still returning adding new linguistic knowledge sourceswe need to perform further hand annotation experiments to determine the effectiveness of semantic class identification and lexical semantics encoding more semantic information in our representation for both question and document sentencesthis information could be derived from syntactic analysis including noun chunks verb chunks and clause groupings cooperation with educational testing and content providerswe hope to work together with one or more major publishersthis will provide the research community with a richer collection of training and test material while also providing educational testing groups with novel ways of checking and benchmarking their testswe have argued that taking reading comprehension exams is a useful task for developing and evaluating natural language understanding systemsreading comprehension uses found material and provides humancomparable evaluations which can be computed automatically with a minimum of human annotationcrucially the reading comprehension task is neither too easy nor too hard as the performance of our pilot system demonstratesfinally reading comprehension is a task that is sufficiently close to information extraction applications such as ad hoc question answering fact verification situation tracking and document summarization that improvements on the reading comprehension evaluations will result in improved systems for these applicationswe gratefully acknowledge the contribution of lisa ferro who prepared much of the handtagged data used in these experiments
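The baseline sentence-selection strategy described above (score each story sentence by the size of its word-set intersection with the question, preferring longer matching words and, on further ties, earlier sentences) might look roughly like the sketch below; the tuple-based tie-breaking key is one plausible reading of that description, and the toy story is only illustrative.

```python
def best_answer_sentence(question_tokens, story_sentences):
    """Pick the story sentence whose bag of words best matches the question.

    question_tokens  -- list of tokens from the question
    story_sentences  -- list of token lists, one per sentence, in story order
    Returns (index, sentence_tokens) of the preferred sentence.
    """
    q_set = set(question_tokens)

    def score(item):
        idx, tokens = item
        overlap = q_set & set(tokens)
        return (
            len(overlap),                  # primary: size of the intersection
            sum(len(w) for w in overlap),  # secondary: prefer longer matching words
            -idx,                          # tertiary: prefer earlier sentences
        )

    return max(enumerate(story_sentences), key=score)

if __name__ == "__main__":
    q = "who helped start the library of congress".split()
    story = [
        "copies of all the written books of the time were kept in the library of congress".split(),
        "by giving it 6457 of his books thomas jefferson helped get it started".split(),
    ]
    idx, sent = best_answer_sentence(q, story)
    print(idx, " ".join(sent))
```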
P99-1042
Deep Read: a reading comprehension system. This paper describes initial work on Deep Read, an automated reading comprehension system that accepts arbitrary text input and answers questions about it. We have acquired a corpus of 60 development and 60 test stories of 3rd to 6th grade material; each story is followed by short-answer questions. We used these to construct and evaluate a baseline system that uses pattern matching techniques augmented with additional automated linguistic processing. This simple system retrieves the sentence containing the answer 30-40% of the time. We use a statistical bag-of-words approach, matching the question with the lexically most similar sentence in the story.
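A minimal sketch of the word-level precision/recall metric used in the evaluation above: content words of the system response are compared against the published answer key after stop-word removal and stemming. The abbreviated stop-word list and the crude suffix stripper below stand in for the paper's list and the Abney stemmer, so the exact numbers are only indicative.

```python
# Abbreviated stand-in stop-word list (the paper's list is longer).
STOP_WORDS = {"a", "the", "and", "or", "to", "in", "at", "of", "this", "that",
              "which", "who", "it", "his", "by", "is", "are", "was", "were",
              "be", "have", "has", "had", "do", "does", "did"}

def crude_stem(word):
    # Placeholder for a real stemmer: strip a few common suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def content_set(tokens):
    return {crude_stem(w.lower()) for w in tokens if w.lower() not in STOP_WORDS}

def word_pr(system_tokens, key_tokens):
    """Word-level precision and recall of a system answer against the key."""
    sys_words = content_set(system_tokens)
    key_words = content_set(key_tokens)
    overlap = sys_words & key_words
    precision = len(overlap) / len(sys_words) if sys_words else 0.0
    recall = len(overlap) / len(key_words) if key_words else 0.0
    return precision, recall

if __name__ == "__main__":
    # Mirrors the worked example in the text: 2/7 precision, 2/2 recall.
    answer_key = "thomas jefferson".split()
    response = "by giving it his books thomas jefferson helped get it started".split()
    print(word_pr(response, answer_key))
```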
corpusbased identification of nonanaphoric noun phrases coreference resolution involves finding antecedents for anaphoric discourse entities such as definite noun phrases but many definite noun phrases are not anaphoric because their meaning can be understood from general world knowledge we have developed a corpusbased algorithm for automatically identifying definite noun phrases that are nonanaphoric which has the potential to improve the efficiency and accuracy of coreference resolution systems our algorithm generates lists of nonanaphoric noun phrases and noun phrase patterns from a training corpus and uses them to recognize nonanaphoric noun phrases in new texts using 1600 muc4 terrorism news articles as the training corpus our approach achieved 78 recall and 87 precision at identifying such noun phrases in 50 test documents most automated approaches to coreference resolution attempt to locate an antecedent for every potentially coreferent discourse entity in a textthe problem with this approach is that a large number of de may not have antecedentswhile some discourse entities such as pronouns are almost always referential definite descriptionsquot may not beearlier work found that nearly 50 of definite descriptions had no prior referents and we found that number to be even higher 63 in our corpussome nonanaphoric definite descriptions can be identified by looking for syntactic clues like attached prepositional phrases or restrictive relative clausesbut other definite descriptions are nonanaphoric because readers understand their meaning due to common knowledgefor example readers of this in this work we define a definite description to be a noun phrase beginning with the paper will probably understand the real world referents of quotthe fbiquot quotthe white housequot and quotthe golden gate bridgequot these are instances of definite descriptions that a coreference resolver does not need to resolve because they each fully specify a cognitive representation of the entity in the reader mindone way to address this problem is to create a list of all nonanaphoric nps that could be used as a filter prior to coreference resolution but hand coding such a list is a daunting and intractable taskwe propose a corpusbased mechanism to identify nonanaphoric nps automaticallywe will refer to nonanaphoric definite noun phrases as existential nps our algorithm uses statistical methods to generate lists of existential noun phrases and noun phrase patterns from a training corpusthese lists are then used to recognize existential nps in new textscomputational coreference resolvers fall into two categories systems that make no attempt to identify nonanaphoric discourse entities prior to coreference resolution and those that apply a filter to discourse entities identifying a subset of them that are anaphoricthose that do not practice filtering include decision tree models that consider all possible combinations of potential anaphora and referentsexhaustively examining all possible combinations is expensive and we believe unnecessaryof those systems that apply filtering prior to coreference resolution the nature of the filtering variessome systems recognize when an anaphor and a candidate antecedent are incompatiblein sri probabilistic model a pair of extracted templates may be removed from consideration because an outside knowledge base indicates contradictory featuresother systems look for particular constructions using certain trigger wordsfor example pleonastic2 pronouns are identified by looking for 
modal adjectives or cognitive verbs in a set of patterned constructions a more recent system recognizes a large percentage of nonanaphoric definite noun phrases during the coreference resolution process through the use of syntactic cues and casesensitive rulesthese methods were successful in many instances but they could not identify them allthe existential nps that were missed were existential to the reader not because they were modified by particular syntactic constructions but because they were part of the reader general world knowledgedefinite noun phrases that do not need to be resolved because they are understood through world knowledge can represent a significant portion of the existential noun phrases in a textin our research we found that existential nps account for 63 of all definite nps and 24 of them could not be identified by syntactic or lexical meansthis paper details our method for identifying existential nps that are understood through general world knowledgeour system requires no hand coded information and can recognize a larger portion of existential nps than vieira and poesio systemto better understand what makes an np anaphoric or nonanaphoric we found it useful to classify definite nps into a taxonomywe first classified definite nps into two broad categories referential nps which have prior referents in the texts and existential nps which do notin figure 1 examples of referential nps are quotthe mass kidnappingquot quotthe terroristsquot and quotthe individualsquot while examples of existential nps are quotthe arce battalion commandquot and quotthe farabundo marti national liberation frontquot we should clarify an important pointwhen we say that a definite np is existential we say this because it completely specifies a cognitive representation of the entity in the reader mindthat is suppose quotthe fbiquot appears in both sentence 1 and sentence 7 of a textalthough there may be a cohesive relationship between the noun phrases because they both completely specify independently we consider them to be nonanaphoricdefinite noun phrases we further classified existential nps into two categories independent and associative which are distinguished by their need for contextindependent existentials can be understood in isolationassociative existentials are inherently associated with an event action object or other context3in a text about a basketball game for example we might find quotthe scorequot quotthe hoopquot and quotthe bleachersquot although they may that our independent existentials roughly equate to her new class our associative existentials to her inferable class and our referentials to her evoked class not have direct antecedents in the text we understand what they mean because they are all associated with basketball gamesin isolation a reader would not necessarily understand the meaning of quotthe scorequot because context is needed to disambiguate the intended word sense and provide a complete specificationbecause associative nps represent less than 10 of the existential nps in our corpus our efforts were directed at automatically identifying independent existentialsunderstanding how to identify independent existential nps requires that we have an understanding of why these nps are existentialwe classified independent existentials into two groups semantic and syntacticsemantically independent nps are existential because they are understood by readers who share a collective understanding of current events and world knowledgefor example we understand the meaning of 
quotthe fbiquot without needing any other informationsyntactically independent nps on the other hand gain this quality because they are modified structurallyfor example in quotthe man who shot liberty valencequot quotthe manquot is existential because the relative clause uniquely identifies its referentour goal is to build a system that can identify independent existential noun phrases automaticallyin the previous section we observed that quotexistentialismquot can be granted to a definite noun phrase either through syntax or semanticsin this section we introduce four methods for recognizing both classes of existentialswe began by building a set of syntactic heuristics that look for the structural cues of restrictive premodification and restrictive postmodificationrestrictive premodification is often found in noun phrases in which a proper noun is used as a modifier for a head noun for example quotthe yous presidentquot quotthe presidentquot itself is ambiguous but quotthe yous presidentquot is notrestrictive postmodification is often represented by restrictive relative clauses prepositional phrases and appositivesfor example quotthe president of the united statesquot and quotthe president who governs the yousquot are existential due to a prepositional phrase and a relative clause respectivelywe also developed syntactic heuristics to recognize referential npsmost nps of the form quotthe quot have an antecedent so we classified them as referentialalso if the head noun of the np appeared earlier in the text we classified the np as referentialthis method then consists of two groups of syntactic heuristicsthe first group which we refer to as the rulein heuristics contains seven heuristics that identify restrictive premodification or postmodification thus targeting existential npsthe second group referred to as the ruleout heuristics contains two heuristics that identify referential npsmost referential nps have antecedents that precede them in the textthis observation is the basis of our first method for identifying semantically independent npsif a definite np occurs in the first sentence4 of a text we assume the np is existentialusing a training corpus we create a list of presumably existential nps by collecting the first sentence of every text and extracting all definite nps that were not classified by the syntactic heuristicswe call this list the 51 extractionswhile examining the si extractions we found many similar nps for example quotthe salvadoran governmentquot quotthe guatemalan governmentquot and quotthe yous governmentquot the similarities indicate that some head nouns when premodified represent existential entitiesby using the si extractions as input to a pattern generation algorithm we built a set of existential head patterns that identify such constructionsthese patterns are of the form quotthe 5 quot such as quotthe governmentquot or quotthe salvadoran governmentquot figure 3 shows the algorithm for creating ehpsit also became clear that some existentials never appear in indefinite constructionsquotthe fbiquot quotthe contraryquot quotthe national guardquot are definite nps which are rarely if ever seen in indefinite constructionsthe chances that a reader will encounter quotan fbiquot are slim to nonethese nps appeared to be perfect candidates for a corpusbased approachto locate quotdefiniteonlyquot nps we made two passes over the corpusthe first pass produced a list of every definite np and its frequencythe second pass counted indefinite uses of all nps cataloged during the first 
passknowing how often an np was used in definite and indefinite constructions allowed us to sort the nps first by the probability of being used as a definite and second by definiteuse frequencyfor example quotthe contraryquot appeared high on this list because its head noun occurred 15 times in the training corpus and every time it was in a definite constructionfrom this we created a definiteonly list by selecting those nps which occurred at least 5 times and only in definite constructionsexamples from the three methods can be found in the appendixour methods for identifying existential nps are all heuristicbased and therefore can be incorrect in certain situationswe identified two types of common errorsto address these problems we developed a vaccineit was clear that we had a number of infections in our si list including quotthe basequot quotthe for every definite np in a text individualsquot quotthe attackquot and quotthe banksquot we noticed however that many of these incorrect nps also appeared near the bottom of our definiteindefinite list indicating that they were often seen in indefinite constructionswe used the definite probability measure as a way of detecting errors in the si and ehp listsif the definite probability of an np was above an upper threshold the np was allowed to be classified as existentialif the definite probability of an np fell below a lower threshold it was not allowed to be classified by the si or ehp methodthose nps that fell between the two thresholds were considered occasionally existentialoccasionally existential nps were handled by observing where the nps first occurred in the textfor example if the first use of quotthe guerrillasquot was in the first few sentences of a text it was usually an existential useif the first use was later it was usually a referential use because a prior definition appeared in earlier sentenceswe applied an early allowance threshold of three sentences occasionally existential nps occuring under this threshold were classified as existential and those that occurred above were left unclassifiedfigure 4 details the vaccine algorithmwe trained and tested our methods on the latin american newswire articles from muc4 the training set contained 1600 texts and the test set contained 50 textsall texts were first parsed by sundance our heuristicbased partial parser developed at the university of utahwe generated the si extractions by processing the first sentence of all training textsthis produced 849 definite npsusing these nps as input to the existential head pattern algorithm we generated 297 ehpsthe do list was built by using only those nps which appeared at least 5 times in the corpus and 100 of the time as definiteswe generated the do list in two iterations once for head nouns alone and once for full nps resulting in a list of 65 head nouns and 321 full nps6once the methods had been trained we classified each definite np in the test set as referential or existential using the algorithm in figure 5figure 6 graphically represents the main elements of the algorithmnote that we applied vaccines to the si and ehp lists but not to the do list because gaining entry to the do list is much more difficult an np must occur at least 5 times in the training corpus and every time it must occur in a definite constructionto evaluate the performance of our algorithm we handtagged each definite np in the 50 test texts as a syntactically independent existential a semantically independent existential an associative existential or a referential npfigure 8 
shows the distribution of definite np types in the test textsof the 1001 definite nps tested 63 were independent existentials so removing these nps from the coreference resolution process could have substantial savingswe measured the accuracy of our classifications using recall and precision metricsresults are shown in figure 7as a baseline measurement we considered the accuracy of classifying every definite np as existentialgiven the distribution of definite np types in our test set this would result in recall of 100 and precision of 72note that we are more interested in high measures of precision than recall because we view this method to be the precursor to a coreference resolution algorithmincorrectly removing an anaphoric np means that the coreference resolver would never have a chance to resolve it on the other hand nonanaphoric nps that slip through can still be ruled as nonanaphoric by the coreference resolverwe first evaluated our system using only the syntactic heuristics which produced only 43 recall but 92 precisionalthough the syntactic heuristics are a reliable way to identify existential definite nps they miss 57 of the true existentialswe expected the si ehp and do methods to increase coveragefirst we evaluated each method independently the results appear in rows 24 of figure 7each method increased recall to between 6169 but decreased precision to 8487all of these methods produced a substantial gain in recall at some cost in precisionnext we tried combining the methods to make sure that they were not identifying exactly the same set of existential npswhen we combined the si and ehp heuristics recall increased to 80 with precision dropping only slightly to 82when we combined all three methods recall increased to 82 without any corresponding loss of precisionthese experiments show that these heuristics substantially increase recall and are identifying different sets of existential npsfinally we tested our vaccine algorithm to see if it could increase precision without sacrificing much recallwe experimented with two variations va used an upper definite probability threshold of 70 and vi used an upper definite probability threshold of 50both variations used a lower definite probability threshold of 25the results are shown in rows 78 of figure 7both vaccine variations increased precision by several percentage points with only a slight drop in recallin previous work the system developed by vieria poesio achieved 74 recall and 85 precision for identifying quotlarger situation and unfamiliar usequot npsthis set of nps does not correspond exactly to our definition of existential nps because we consider associative nps to be existential and they do noteven so our results are slightly better than their previous resultsa more equitable comparison is to measure our system performance on only the independent existential noun phrasesusing this measure our algorithm achieved 818 recall with 856 precision using va and achieved 829 recall with 835 precision using vbwe have developed several methods for automatically identifying existential noun phrases using a training corpusit accomplishes this task with recall and precision measurements that exceed those of the earlier vieira rz poesio system while not exploiting full parse trees appositive constructions handcoded lists or case sensitive text7in addition because the system is fully automated and corpusbased it is suitable for applications that require portability across domainsgiven the large percentage of nonanaphoric discourse entities 
handled by most coreference resolvers we believe that using a system like ours to filter existential nps has the potential to reduce processing time and complexity and improve the accuracy of coreference resolution
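The corpus statistics behind the definite-only (DO) list described above (two passes over the corpus, a definite-use probability per NP, and the at-least-5-occurrences, always-definite selection rule) can be sketched as follows; representing each NP by a determiner-stripped string and the shape of the input pairs are assumptions of this sketch.

```python
from collections import Counter

def build_definite_only_list(np_occurrences, min_freq=5):
    """Sketch of the two-pass statistics behind the definite-only (DO) list.

    np_occurrences -- iterable of (np, is_definite) pairs collected from the
                      parsed training corpus; np is assumed to be normalised
                      upstream (e.g. determiner stripped)
    Returns (do_list, definite_prob): NPs seen at least `min_freq` times and
    only in definite constructions, plus each NP's probability of definite use.
    """
    definite = Counter()
    total = Counter()
    for np, is_definite in np_occurrences:
        total[np] += 1
        if is_definite:
            definite[np] += 1
    definite_prob = {np: definite[np] / total[np] for np in total}
    do_list = {np for np in total
               if total[np] >= min_freq and definite[np] == total[np]}
    return do_list, definite_prob

if __name__ == "__main__":
    occurrences = [("contrary", True)] * 15 \
                + [("guerrillas", True)] * 30 + [("guerrillas", False)] * 10
    do_list, probs = build_definite_only_list(occurrences)
    print(do_list)   # {'contrary'}
    print(probs)     # {'contrary': 1.0, 'guerrillas': 0.75}
```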
P99-1048
Corpus-based identification of nonanaphoric noun phrases. Coreference resolution involves finding antecedents for anaphoric discourse entities such as definite noun phrases, but many definite noun phrases are not anaphoric because their meaning can be understood from general world knowledge. We have developed a corpus-based algorithm for automatically identifying definite noun phrases that are nonanaphoric, which has the potential to improve the efficiency and accuracy of coreference resolution systems. Our algorithm generates lists of nonanaphoric noun phrases and noun phrase patterns from a training corpus and uses them to recognize nonanaphoric noun phrases in new texts. Using 1600 MUC-4 terrorism news articles as the training corpus, our approach achieved 78% recall and 87% precision at identifying such noun phrases in 50 test documents. We develop a system for identifying discourse-new DDs that incorporates, in addition to syntax-based heuristics aimed at recognizing predicative and established DDs, additional techniques for mining from corpora unfamiliar DDs, including proper names, larger situation, and semantically functional DDs. We develop an unsupervised learning algorithm that automatically recognizes definite NPs that are existential without syntactic modification, because their meaning is universally understood.
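Putting the pieces above together, a rough sketch of the per-NP decision (DO list first, then the S1/EHP heuristics moderated by the vaccine) is given below. The thresholds follow the Va variant reported in the paper (upper 0.7, lower 0.25, three-sentence early allowance); the function signature and the assumption that the syntactic heuristics have already been applied are choices of this sketch.

```python
def classify_definite_np(np, sentence_index, s1_or_ehp_match, do_list,
                         definite_prob, upper=0.7, lower=0.25, early=3):
    """Sketch of the existential/unclassified decision for one definite NP.

    np               -- the (normalised) noun phrase string
    sentence_index   -- 1-based index of the sentence containing the NP
    s1_or_ehp_match  -- True if the NP matched the S1 list or an existential
                        head pattern (the syntactic rule-in/rule-out
                        heuristics are assumed to have run before this point)
    do_list          -- set of definite-only NPs (no vaccine applied to it)
    definite_prob    -- NP -> probability of definite use in training data
    """
    if np in do_list:
        return "existential"
    if not s1_or_ehp_match:
        return "unclassified"        # left for the coreference resolver
    p = definite_prob.get(np, 0.0)
    if p >= upper:
        return "existential"         # vaccine lets the classification stand
    if p < lower:
        return "unclassified"        # vaccine blocks a likely infection
    # "Occasionally existential" NPs: trust the label only near the start.
    return "existential" if sentence_index <= early else "unclassified"
```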
efficient parsing for bilexical contextfree grammars and head automaton grammars stochastic parsers use grammars where each word type idiosyncratically prefers particular complements with parhead words we present parsing algorithms for two bilexical formalisms improvthe prior upper bounds of for a comspecial case that was known to allow we present an algorithm with an improved grammar constant lexicalized grammar formalisms are of both theoretical and practical interest to the computational linguistics communitysuch formalisms specify syntactic facts about each word of the languagein particular the type of arguments that the word can or must takeearly mechanisms of this sort included categorial grammar and subcategorization frames other lexicalized formalisms include besides the possible arguments of a word a naturallanguage grammar does well to specify possible head words for those argumentsquotconvenequot requires an np object but some nps are more semantically or lexically appropriate here than others and the appropriateness depends largely on the np head we use the general term bilexical for a grammar that records such factsa bilexical grammar makes many stipulations about the compatibility of particular pairs of words in particular rolesthe acceptability of quotnora convened the the authors were supported respectively under arpa grant n6600194c6043 quothuman language technologyquot and ministero delluniversita e della ricerca scientifica e tecnologica project quotmethodologies and tools of high performance systems for multimedia applicationsquot partyquot then depends on the grammar writer assessment of whether parties can be convenedseveral recent realworld parsers have improved stateoftheart parsing accuracy by relying on probabilistic or weighted versions of bilexical grammars the rationale is that soft selectional restrictions play a crucial role in disambiguation1 the chart parsing algorithms used by most of the above authors run in time 0 because bilexical grammars are enormous in practiceheavy probabilistic pruning is therefore needed to get acceptable runtimesbut in this paper we show that the complexity is not so bad after all grammars where an 0 algorithm was previously known the grammar constant can be reduced without harming the 0 propertyour algorithmic technique throughout is to propose new kinds of subderivations that are not constituentswe use dynamic programming to assemble such subderivations into a full parsethe reader is assumed to be familiar with contextfree grammarsour notation follows a contextfree grammar is a tuple g where vn and vt are finite disjoint sets of nonterminal and terminal symbols respectively and s e vn is the start symbolset p is a finite set of productions having the form a a where a e vn a e if every production in p has the form a 4 because or a a for a because e vn a e vt then the grammar is said to be in chomsky normal form 2 every language that can be generated by a cfg can also be generated by a cfg in cnfin this paper we adopt the following conventions a b c d denote symbols in vt w x y denote strings in vat and a 0 denote strings in the input to the parser will be a cfg g together with a string of terminal symbols to be parsed w d1d2 dnalso hi jk denote positive integers which are assumed to be ja quotderivesquot relation written is associated with a cfg as usualwe also use the reflexive and transitive closure of written and define l accordinglywe write a 8 a0 for a derivation in which only 3 is rewrittenwe introduce next a grammar 
formalism that captures lexical dependencies among pairs of words in vtthis formalism closely resembles stochastic grammatical formalisms that are used in several existing natural language processing systems we will specify a nonstochastic version noting that probabilities or other weights may be attached to the rewrite rules exactly as in stochastic cfg suppose g is a cfg in cnf3 we say that g is bilexical if there exists a set of quotdelexicalized nonterminalsquot vd such that vn aa a e vd a e vt and every production in p has one of the following forms thus every nonterminal is lexicalized at some terminal aa constituent of nonterminal type aa is said to have terminal symbol a as its lexical head quotinheritedquot from the constituent head child in the parse tree notice that the start symbol is necessarily a lexicalized nonterminal thence appears in every string of l it is usually convenient to define g so that the language of interest is actually l x x e l such a grammar can encode lexically specific preferencesfor example p might contain the productions in order to allow the derivation vpsolve solve two puzzles but meanwhile omit the similar productions since puzzles are not edible a goat is not solvable quotsleepquot is intransitive and quotgoatquot cannot take plural determinersthe cost of this expressiveness is a very large grammarstandard contextfree parsing algorithms are inefficient in such a casethe cky algorithm is time 0 where in the worst case i p1 ivni3 for a bilexical grammar the worst case is ipi vd i 3 i vt12 which is large for a large vocabulary vt we may improve the analysis somewhat by observing that when parsing d1 dn the cky algorithm only considers nonterminals of the form adi by restricting to the relevant productions we obtain 02 we observe that in practical applications we always have n aa parse is just a derivation of lhn and its probabilitylike that of any derivation we findis defined as the product of the probabilities of all productions used to condition inference rules in the proof treethe highestprobability derivation for any item can be reconstructed recursively at the end of the parse provided that each item maintains not only a bit indicating whether it can be derived but also the probability and instantiated root rule of its highestprobability derivation treewe now give a variant of the algorithm of 4 the variant has the same asymptotic complexity but will often be faster in practicenotice that the attachleft rule of figure 1 tries to combine the nonterminal label bdhd of a previously derived constituent with every possible nonterminal label of the form cdhthe improved version shown in figure 2 restricts cdh to be the label of a previously derived adjacent constituentthis improves speed if there are not many such constituents and we can enumerate them in 0 time apiece it is necessary to use an agenda data structure when implementing the declarative algorithm of figure 2deriving narrower items before wider ones as before will not work here because the rule halve derives narrow items from wide onesrather than parsing an input string directly it is often desirable to parse another string related by a transductionlet t be a finitestate transducer that maps a morpheme sequence w e vit to its orthographic realization a grapheme sequence fo t may realize arbitrary morphological processes including affixation local clitic movement deletion of phonological nulls forbidden or dispreferred kgrams typographical errors and mapping of multiple senses onto the same 
graphemegiven grammar g and an input ti we ask whether e twe have extended all the algorithms in this paper to this case the items simply keep track of the transducer state as welldue to space constraints we sketch only the special case of multiple sensessuppose that the input is id d1 dn and each d2 has up to g possible senseseach item now needs to track its head sense along with its head position in idwherever an item formerly recorded a head position h it must now record a pair where dh e vt is a specific sense of dhno rule in figures 12 will mention more than two such pairsso the time complexity increases by a factor of 07 head automaton grammars in time 0 in this section we show that a lengthn string generated by a head automaton grammar can be parsed in time 0we do this by providing a translation from head automaton grammars to bilexical cfgs4 this result improves on the headautomaton parsing algorithm given by alshawi which is analogous to the cky algorithm on bilexical cfgs and is likewise 0 in practice a head automaton grammar is a function h a 14 ha that defines a head automaton for each element of its domainlet vt domain and d a single head automaton is an acceptor for a language of string pairs e v x v informally if b is the leftmost symbol of zr and q e a then ha can move from state q to state q matching symbol b and removing it from the left end of zrsymmetrically if b is the rightmost symbol of zi and q e sa only by reading all of yimmediately after which it is said to be in a flip stateand then reading all of xformally a flip state is one that allows entry on a transition and that either allows exit on a transition or is a final statewe are concerned here with head automaton grammars h such that every ha is splitthese correspond to bilexical cfgs in which any derivation aa xay has the form aa xba xaythat is a word left dependents are more oblique than its right dependents and ccommand themsuch grammars are broadly applicableeven if ha is not split there usually exists a split head automaton hquot recognizing the same languageh la exists if xy e l is regular in particular lei must exist unless ha has a cycle that includes both and 4 transitionssuch cycles would be necessary for ha itself to accept a formal language such as n 0 where word a takes 2n dependents but we know of no naturallanguage motivation for ever using them in a hagone more definition will help us bound the complexitya split head automaton ha is said to be gsplit if its set of flip states denoted c qa has size g the languages that can be recognized by gsplit has are those that can be written as 1g 1 li x r where the li and ri are regular languages over vt eisner actually defined bilexical grammars in terms of the latter property6 we now present our result figure 3 specifies an 0 recognition algorithm for a head automaton grammar h in which every h is gsplitfor deterministic automata the runtime is 0a considerable improvement on the 0 result of which also assumes deterministic automataas in 4 a simple bottomup implementation will sufficefor a practical speedup add hj as an antecedent to the mid rule like our previous algorithms this one takes two steps to attach a child constituent to a parent constituentbut instead of full constituentsstrings xdy e d it uses only halfconstituents like xdi and the other halves of these constituents can be attached later because to find an accepting path for in a split head automaton one can separately find the halfpath before the flip state and the halfpath after the flip 
state these two halfpaths can subsequently be joined into an accepting path if they have the same flip state s ie one path starts where the other endsannotating our left halfconstituents with s makes this check possiblewe have formally described and given faster parsing algorithms for three practical grammatical rewriting systems that capture dependencies between pairs of wordsall three systems admit naive 0 algorithmswe give the first 0 results for the natural formalism of bilexical contextfree grammar and for alshawi head automaton grammarsfor the usual case split head automaton grammars or equivalent bilexical cfgs we replace the 0 algorithm of by one with a smaller grammar constantnote that eg all senses would restore the g 2 factorindeed this approach gives added flexibility a word sense unlike its choice of flip state is visible to the ha that reads it three models in are susceptible to the 0 method our dynamic programming techniques for cheaply attaching head information to derivations can also be exploited in parsing formalisms other than rewriting systemsthe authors have developed an 0time parsing algorithm for bilexicalized tree adjoining grammars improving the naive 0 methodthe results mentioned in 6 are related to the closure property of cfgs under generalized sequential machine mapping this property also holds for our class of bilexical cfgs
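To make the complexity discussion concrete, here is a naive CKY-style recogniser for a bilexical grammar in the normal form discussed above; the five interacting position indices (two span ends, a split point, and a head position on each side) are what produce the O(n^5) behaviour that the paper's algorithms improve on. The dictionary/tuple grammar encoding and the toy example are assumptions of this sketch, not the paper's data structures.

```python
def bilexical_cky_recognise(words, lexical, binary, start):
    """Naive recogniser for a bilexical CFG in the CNF-like form above.

    words    -- input tokens d_1 .. d_n
    lexical  -- dict: word -> set of delexicalised preterminals A such that
                A[word] -> word is a production
    binary   -- set of tuples (A, B, C, head_side):
                A[a] -> B[a] C[c] if head_side == "left"   (head from left child)
                A[a] -> B[b] C[a] if head_side == "right"  (head from right child)
    start    -- delexicalised start symbol; accept if start[d_h] spans 1..n
    Chart items are (A, h, i, j): A[d_h] derives words i..j (1-based, inclusive).
    """
    n = len(words)
    chart = set()
    for h, w in enumerate(words, start=1):
        for A in lexical.get(w, ()):
            chart.add((A, h, h, h))
    for span in range(2, n + 1):                 # O(n)
        for i in range(1, n - span + 2):         # O(n)
            j = i + span - 1
            for k in range(i, j):                # O(n) split points
                for (A, B, C, side) in binary:   # grammar constant
                    for h in range(i, k + 1):            # O(n) left heads
                        for h2 in range(k + 1, j + 1):   # O(n) right heads
                            if (B, h, i, k) in chart and (C, h2, k + 1, j) in chart:
                                head = h if side == "left" else h2
                                chart.add((A, head, i, j))
    return any((start, h, 1, n) in chart for h in range(1, n + 1))

if __name__ == "__main__":
    words = ["nora", "solve", "puzzles"]    # toy example, ignoring inflection
    lexical = {"nora": {"NP"}, "solve": {"V"}, "puzzles": {"NP"}}
    binary = {("VP", "V", "NP", "left"),    # VP[solve] -> V[solve] NP[puzzles]
              ("S", "NP", "VP", "right")}   # S[solve]  -> NP[nora]  VP[solve]
    print(bilexical_cky_recognise(words, lexical, binary, "S"))
```

As the text explains, the paper's faster algorithms avoid iterating over both head positions within a single combination step by assembling non-constituent subderivations (half-constituents), which is where the improvement over this naive loop structure comes from.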
P99-1059
Efficient parsing for bilexical context-free grammars and head automaton grammars. Several recent stochastic parsers use bilexical grammars, where each word type idiosyncratically prefers particular complements with particular head words. We present O(n^4) parsing algorithms for two bilexical formalisms, improving the prior upper bounds of O(n^5). For a common special case that was known to allow O(n^3) parsing, we present an O(n^3) algorithm with an improved grammar constant. We show that the dynamic programming algorithms for lexicalized PCFGs require O(.) states.
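The characterisation quoted earlier of a g-split head automaton as a union of g pairs of regular left- and right-dependent languages can be illustrated with a small sketch; the DFA encoding and the toy transitive-verb automaton are illustrative assumptions, not the formalism's official presentation.

```python
def dfa_accepts(dfa, symbols):
    """dfa = (start_state, final_states, transitions), where transitions maps
    (state, symbol) -> state.  Returns True if the DFA accepts the sequence."""
    start, finals, trans = dfa
    state = start
    for s in symbols:
        if (state, s) not in trans:
            return False
        state = trans[(state, s)]
    return state in finals

def gsplit_ha_accepts(pairs, left_deps, right_deps):
    """A g-split head automaton, following the characterisation above, is
    given here as g pairs (L_i, R_i) of DFAs over the terminal vocabulary.
    It accepts (left_deps, right_deps) iff some pair accepts the left and
    right dependent sequences respectively."""
    return any(dfa_accepts(L, left_deps) and dfa_accepts(R, right_deps)
               for L, R in pairs)

if __name__ == "__main__":
    # Toy head automaton for a transitive verb: exactly one NP dependent on
    # each side (all names here are illustrative assumptions).
    one_np = (0, {1}, {(0, "NP"): 1})
    verb_ha = [(one_np, one_np)]          # g = 1
    print(gsplit_ha_accepts(verb_ha, ["NP"], ["NP"]))   # True
    print(gsplit_ha_accepts(verb_ha, [], ["NP"]))       # False
```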
a statistical parser for czech this paper considers statistical parsing of czech which differs radically from english in at least two it is a inflected and it has relatively word order differences are likely to pose new problems for techniques that have been developed on english we describe our experience in building on the parsing model of our final results 80 dependency accuracy represent good progress towards the 91 accuracy of the parser on english text much of the recent research on statistical parsing has focused on english languages other than english are likely to pose new problems for statistical methodsthis paper considers statistical parsing of czech using the prague dependency treebank as a source of training and test data also show these characteristicsmany european languages exhibit fwo and hi phenomena to a lesser extentthus the techniques and results found for czech should be relevant to parsing several other languagesthis paper first describes a baseline approach based on the parsing model of which recovers dependencies with 72 accuracywe then describe a series of refinements to the model giving an improvement to 80 accuracy with around 82 accuracy on newspaperbusiness articles textthe prague dependency treebank pdt has been modeled after the penn treebank with one important exception following the praguian linguistic tradition the syntactic annotation is based on dependencies rather than phrase structuresthus instead of quotnonterminalquot symbols used at the nonleaves of the tree the pdt uses socalled analytical functions capturing the type of relation between a dependent and its governing nodethus the number of nodes is equal to the number of tokens plus one the pdt contains also a traditional morphosyntactic annotation at each word position as czech is a hi language the size of the set of possible tags is unusually high more than 3000 tags may be assigned by the czech morphological analyzerthe pdt also contains machineassigned tags and lemmas for each word for evaluation purposes the pdt has been divided into a training set and a developmentevaluation test set pair parsing accuracy is defined as the ratio of correct dependency links vs the total number of dependency links in a sentence as usual with the development test set being available during the development phase all final results has been obtained on the evaluation test set which nobody could see beforehandthe parsing model builds on model 1 of this section briefly describes the modelthe parser uses a lexicalized grammar each nonterminal has an associated headword and partofspeech we write nonterminals as x x is the nonterminal label and x is a pair where w is the associated headword and t as the pos tagsee figure 1 for an example lexicalized tree and a list of the lexicalized rules that it containseach rule has the fonnl with the exception of the top rule in the tree which has the form top h h is the headchild of the phrase which inherits the headword h from its parent p l1ln and ri are left and right modifiers of h either n or m may be zero and n the model can be considered to be a variant of probabilistic contextfree grammar in pcfgs each rule a in the cfg underlying the pcfg has an associated probability pin p is defined as a product of terms by assuming that the righthandside of the rule is generated in three steps probability i p h h where lni stopthe stop symbol is added to the vocabulary of nonterminals and the model stops generating left modifiers when it is generatedother rules in the tree contribute 
similar sets of probabilitiesthe probability for the entire tree is calculated as the product of all these terms describes a series of refinements to this basic model the addition of quotdistancequot the addition of subcategorization parameters and parameters that model whmovement estimation techniques that smooth various levels of backoff search for the highest probability tree for a sentence is achieved using a ckystyle parsing algorithmmany statistical parsing methods developed for english use lexicalized trees as a representation several emphasize the use of parameters associated with dependencies between pairs of wordsthe czech pdt contains dependency annotations but no tree structuresfor parsing czech we considered a strategy of converting dependency structures in training data to lexicalized trees then running the parsing algorithms originally developed for englisha key point is that the mapping from lexicalized trees to dependency structures is manytooneas an example figure 2 shows an input dependency structure and three different lexicalized trees with this dependency structurethe choice of tree structure is crucial in determining the independence assumptions that the parsing model makesthere are at least 3 degrees of freedom when deciding on the tree structures to provide a baseline result we implemented what is probably the simplest possible conversion scheme the baseline approach gave a result of 719 accuracy on the development test setwhile the baseline approach is reasonably successful there are some linguistic phenomena that lead to clear problemsthis section describes some tree transformations that are linguistically motivated and lead to improvements in parsing accuracyin the pdt the verb is taken to be the head of both sentences and relative clausesfigure 4 illustrates how the baseline transformation method can lead to parsing errors in relative clause casesfigure 4 shows the solution to the problem the label of the relative clause is changed to sbar and an additional vp level is added to the right of the relative pronounsimilar transformations were applied for relative clauses involving whpps whnps and whadverbials the pdt takes the conjunct to be the head of coordination structures in these cases the baseline approach gives tree structures such as that in figure 5the nonterminal label for the phrase is jp this choice of nonterminal is problematic for two reasons the jp label is assigned to all coordinated phrases for example hiding the fact that the constituent in figure 5 is an np the model assumes that left and right modifiers are generated independently of each other and as it stands will give unreasonably high probability to two unlike phrases being coordinatedto fix these problems the nonterminal label in coordination cases was altered to be the same as that of the second conjunct see figure 5a similar transformation was made for cases where a comma was the head of a phrasefigure 6 shows an additional change concerning commasthis change increases the sensitivity of the model to punctuationthis section describes some modifications to the parameterization of the model guish main clauses from relative clauses both have a verb as the head so both are labeled vp a typical parsing error due to relative and main clauses not being distinguished the solution to the problem a modification to relative clause structures in training datathe model of had conditioning variables that allowed the model to learn a preference for dependencies which do not cross verbsfrom the results 
in table 3 adding this condition improved accuracy by about 09 on the development setthe parser of used punctuation as an indication of phrasal boundariesit was found that if a constituent z has two children x and y separated by a punctuation mark then y is generally followed by a punctuation mark or the end of sentence markerthe parsers of encoded this as a hard constraintin the czech parser we added a cost of 25 2 to structures that violated this constraintthe model of section 3 made the assumption that modifiers are generated independently of each otherthis section describes a hi gram model where the context is increased to consider the previously generated modifier also describes use of bigram statisticsthe righthandside of a rule is now assumed to be generated in the following three step process where lo is defined as a special null symbolthus the previous modifier li_1 is added to the conditioning context part of speech tags serve an important role in statistical parsing by providing the model with a level of generalization as to how classes of words tend to behave what roles they play in sentences and what other classes they tend to combine withstatistical parsers of english typically make use of the roughly 50 pos tags used in the penn treebank corpus but the czech pdt corpus provides a much richer set of pos tags with over 3000 possible tags defined by the tagging system and over 1000 tags actually found in the corpususing that large a tagset with a training corpus of only 19000 sentences would lead to serious sparse data problemsit is also clear that some of the distinctions being made by the tags are more important than others for parsingwe therefore explored different ways of extracting smaller but still maximally informative pos tagsetsthe pos tags in the czech pdt corpus are encoded in 13character stringstable 1 shows the role of each characterfor example the tag nnmp1 a would be used for a word that had quotnounquot as both its main and detailed part of speech that was masculine plural nominative and whose negativeness value was quotaffirmativequotwithin the corpus each word was annotated with all of the pos tags that would be possible given its spelling using the output of a morphological analysis program and also with the single one of those tags that a statistical pos tagging program had predicted to be the correct tag table 2 shows a phrase from the corpus with the alternative possible tags and machineselected tag for each wordin the training portion of the corpus the correct tag as judged by human annotators was also providedin the baseline approach the first letter or quotmain part of speechquot of the full pos strings was used as the tagthis resulted in a tagset with 13 possible valuesa number of alternative richer tagsets were explored using various combinations of character positions from the tag stringthe most successful alternative was a twoletter tag whose first letter was always the main pos and whose second letter was the case field if the main pos was one that displays case while otherwise the second letter was the detailed posthis twoletter scheme resulted in 58 tags and provided about a 11 parsing improvement over the baseline on the development seteven richer tagsets that also included the person gender and number values were tested without yielding any further improvement presumably because the damage from sparse data outweighed the value of the additional information presentan entirely different approach rather than searching by hand for effective tagsets 
would be to use clustering to derive them automaticallywe explored two different methods bottomup and topdown for automatically deriving pos tag sets based on counts of governing and dependent tags extracted from the parse trees that the parser constructs from the training dataneither tested approach resulted in any improvement in parsing performance compared to the handdesigned quottwo letterquot tagset but the implementations of each were still only preliminary and a clustered tagset more adroitly derived might do betterone final issue regarding pos tags was how to deal with the ambiguity between possible tags both in training and testin the training data there was a choice between using the output of the pos tagger or the human annotator judgment as to the correct tagin test data the correct answer was not available but the pos tagger output could be used if desiredthis turns out to matter only for unknown words as the parser is designed to do its own tagging for words that it has seen in training at least 5 times ignoring any tag supplied with the inputfor quotunknownquot words the parser can be set either to believe the tag supplied by the pos tagger or to allow equally any of the dictionaryderived possible tags for the word effectively allowing the parse context to make the choice our tests indicated that if unknown words are treated by believing the pos tagger suggestion then scores are better if the parser is also trained on the pos tagger suggestions rather than on the human annotator correct tagstraining on the correct tags results in 1 worse performanceeven though the pos tagger tags are less accurate they are more like what the parser will be using in the test data and that turns out to be the key pointon the other hand if the parser allows all possible dictionary tags for unknown words in test material then it pays to train on the actual correct tagsin initial tests this combination of training on the correct tags and allowing all dictionary tags for unknown test words somewhat outperformed the alternative of using the pos tagger predictions both for training and for unknown test wordswhen tested with the final version of the parser on the full development set those two strategies performed at the same levelwe ran three versions of the parser over the final test set the baseline version the full model with all additions and the full model with everything but the bigram modelthe baseline system on the fithat although the science section only contributes 25 of the sentences in test data it contains much longer sentences than the other sections and therefore accounts for 38 of the dependencies in test data nal test set achieved 723 accuracythe final system achieved 800 accuracy3 a 77 absolute improvement and a 278 relative improvementthe development set showed very similar results a baseline accuracy of 719 and a final accuracy of 793table 3 shows the relative improvement of each component of the mode14table 4 shows the results on the development set by genreit is interesting to see that the performance on newswire text is over 2 better than the averaged performancethe science section of the development set is considerably harder to parse the main piece of previous work on parsing czech that we are aware of is described in this is a rulebased system which is based on a manually designed set of rulesthe system accuracy is not evaluated on a test corpus so it is difficult to compare our results to theirswe can however make some comparison of the results in this paper to those on 
parsing english describes results of 91 accuracy in recovering dependencies on section 0 of the penn wall street journal treebank using model 2 of this task is almost certainly easier for a number of reasons there was more training data wall street journal may be an easier domain than the pdt as a reasonable proportion of sentences come from a subdomain financial news which is relatively restrictedunlike model 1 model 2 of the parser takes subcategorization information into account which gives some improvement on english and might well also improve results on czechgiven these differences it is difficult to make a direct comparison but the overall conclusion seems to be that the czech accuracy is approaching results on english although it is still somewhat behindthe 80 dependency accuracy of the parser represents good progress towards english parsing performancea major area for future work is likely to be an improved treatment of morphology a natural approach to this problem is to consider more carefully how pos tags are used as word classes by the modelwe have begun to investigate this issue through the automatic derivation of pos tags through clustering or quotsplittingquot approachesit might also be possible to exploit the internal structure of the pos tags for example through incremental prediction of the pos tag being generated or to exploit the use of word lemmas effectively splitting wordword relations into syntactic dependencies and more semantic dependencies
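The two-letter tag reduction described above is easy to state as a function of the positional tag string. The sketch below is a minimal illustration: the character positions (main POS first, detailed POS second, case fifth) follow the example tag discussed in the text, the set of case-bearing main-POS letters is an assumption (the section does not enumerate them), and the example tags are made up for the demonstration.

```python
# assumed positions within the 13-character positional tag, following the example
# tag in the text (main POS, detailed POS, gender, number, case, ...)
MAIN_POS, DETAILED_POS, CASE = 0, 1, 4

# main-POS letters assumed here to display case (nouns, adjectives, pronouns,
# numerals, prepositions); the text does not list them explicitly
CASE_BEARING = set("NAPCR")

def two_letter_tag(full_tag: str) -> str:
    """Reduce a full positional tag to the two-letter scheme: the first letter is
    the main POS; the second is the case field if the main POS displays case, and
    the detailed POS otherwise."""
    main = full_tag[MAIN_POS]
    second = full_tag[CASE] if main in CASE_BEARING else full_tag[DETAILED_POS]
    return main + second

if __name__ == "__main__":
    print(two_letter_tag("NNMP1--------"))   # noun, nominative case -> "N1"
    print(two_letter_tag("VB-S---3P----"))   # verb: falls back to the detailed POS -> "VB"
```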
P99-1065
a statistical parser for czech. this paper considers statistical parsing of czech, which differs radically from english in at least two respects: it is a highly inflected language, and it has relatively free word order. these differences are likely to pose new problems for techniques that have been developed on english. we describe our experience in building on the parsing model of collins (1997). our final results, 80% dependency accuracy, represent good progress towards the 91% accuracy of the parser on english text. we use a transformed treebank from the prague dependency treebank for constituent parsing on czech.
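As a concrete illustration of the three-step, head-outward rule generation with the STOP symbol described in the Czech parsing paper above (the decomposition it inherits from Model 1), here is a minimal sketch. The probability functions are placeholders for the model's smoothed, backed-off estimates, the toy distribution in the usage example is invented purely for the demonstration, and the distance, subcategorization and bigram refinements are not shown.

```python
import math

STOP = ("STOP", None, None)   # special symbol that terminates modifier generation

def rule_log_prob(parent, head_child, left_mods, right_mods,
                  P_head, P_left, P_right):
    """log P(RHS | LHS) for one lexicalized rule under the basic head-outward
    decomposition: generate the head child, then the left modifiers (innermost
    first) until STOP, then the right modifiers until STOP, each conditioned on
    the parent and the head child."""
    logp = math.log(P_head(head_child, parent))
    for mod in list(left_mods) + [STOP]:
        logp += math.log(P_left(mod, parent, head_child))
    for mod in list(right_mods) + [STOP]:
        logp += math.log(P_right(mod, parent, head_child))
    return logp

if __name__ == "__main__":
    uniform = lambda *args: 0.25              # toy distribution, illustration only
    lp = rule_log_prob(parent=("S", "bought", "V"),
                       head_child=("VP", "bought", "V"),
                       left_mods=[("NP", "IBM", "N")],
                       right_mods=[],
                       P_head=uniform, P_left=uniform, P_right=uniform)
    print(f"log P(RHS | LHS) = {lp:.3f}")
```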
automatic identification of word translations from unrelated english and german corpora algorithms for the alignment of words in translated texts are well established however only recently new approaches have been proposed to identify word translations from nonparallel or even unrelated texts this task is more difficult because most statistical clues useful in the processing of parallel texts cannot be applied to nonparallel texts whereas for parallel texts in some studies up to 99 of the word alignments have been shown to be correct the accuracy for nonparallel texts has been around 30 up to now the current study which is based on the assumption that there is a correlation between the patterns of word cooccurrences in corpora of different languages makes a significant improvement to about 72 of word translations identified correctly starting with the wellknown paper of brown et al on statistical machine translation there has been much scientific interest in the alignment of sentences and words in translated textsmany studies show that for nicely parallel corpora high accuracy rates of up to 99 can be achieved for both sentence and word alignment of course in practice due to omissions transpositions insertions and replacements in the process of translation with real texts there may be all kinds of problems and therefore robustness is still an issue nevertheless the results achieved with these algorithms have been found useful for the cornpilation of dictionaries for checking the consistency of terminological usage in translations for assisting the terminological work of translators and interpreters and for examplebased machine translationby now some alignment programs are offered commercially translation memory tools for translators such as ibm translation manager or trados translator workbench are bundled or can be upgraded with programs for sentence alignmentmost of the proposed algorithms first conduct an alignment of sentences that is they locate those pairs of sentences that are translations of each otherin a second step a word alignment is performed by analyzing the correspondences of words in each pair of sentencesthe algorithms are usually based on one or several of the following statistical clues all these clues usually work well for parallel textshowever despite serious efforts in the compilation of parallel corpora the availability of a largeenough parallel corpus in a specific domain and for a given pair of languages is still an exceptionsince the acquisition of monolingual corpora is much easier it would be desirable to have a program that can determine the translations of words from comparable or possibly unrelated monolingual texts of two languagesthis is what translators and interpreters usually do when preparing terminology in a specific field they read texts corresponding to this field in both languages and draw their conclusions on word correspondences from the usage of the termsof course the translators and interpreters can understand the texts whereas our programs are only considering a few statistical cluesfor nonparallel texts the first clue which is usually by far the strongest of the three mentioned above is not applicable at allthe second clue is generally less powerful than the first since most words are ambiguous in natural languages and many ambiguities are different across languagesnevertheless this clue is applicable in the case of comparable texts although with a lower reliability than for parallel textshowever in the case of unrelated texts its usefulness 
may be near zerothe third clue is generally limited to the identification of word pairs with similar spellingfor all other pairs it is usually used in combination with the first cluesince the first clue does not work with nonparallel texts the third clue is useless for the identification of the majority of pairsfor unrelated languages it is not applicable anywayin this situation rapp proposed using a clue different from the three mentioned above his cooccurrence clue is based on the assumption that there is a correlation between cooccurrence patterns in different languagesfor example if the words teacher and school cooccur more often than expected by chance in a corpus of english then the german translations of teacher and school lehrer and schule should also cooccur more often than expected in a corpus of germanin a feasibility study he showed that this assumption actually holds for the language pair englishgerman even in the case of unrelated textswhen comparing an english and a german cooccurrence matrix of corresponding words he found a high correlation between the cooccurrence patterns of the two matrices when the rows and columns of both matrices were in corresponding word order and a low correlation when the rows and columns were in random orderthe validity of the cooccurrence clue is obvious for parallel corpora but as empirically shown by rapp it also holds for nonparallel corporait can be expected that this clue will work best with parallel corpora secondbest with comparable corpora and somewhat worse with unrelated corporain all three cases the problem of robustness as observed when applying the wordorder clue to parallel corpora is not severetranspositions of text segments have virtually no negative effect and omissions or insertions are not criticalhowever the cooccurrence clue when applied to comparable corpora is much weaker than the wordorder clue when applied to parallel corpora so larger corpora and wellchosen statistical methods are requiredafter an attempt with a context heterogeneity measure for identifying word translations fung based her later work also on the cooccurrence assumption by presupposing a lexicon of seed words she avoids the prohibitively expensive computational effort encountered by rapp the method described here although developed independently of fung work goes in the same directionconceptually it is a trivial case of rapp matrix permutation methodby simply assuming an initial lexicon the large number of permutations to be considered is reduced to a much smaller number of vector comparisonsthe main contribution of this paper is to describe a practical implementation based on the cooccurrence clue that yields good resultsas mentioned above it is assumed that across languages there is a correlation between the cooccurrences of words that are translations of each otherif for example in a text of one language two words a and b cooccur more often than expected by chance then in a text of another language those words that are translations of a and b should also cooccur more frequently than expectedthis is the only statistical clue used throughout this paperit is further assumed that there is a small dictionary available at the beginning and that our aim is to expand this base lexiconusing a corpus of the target language we first compute a cooccurrence matrix whose rows are all word types occurring in the corpus and whose columns are all target words appearing in the base lexiconwe now select a word of the source language whose translation is to be 
determinedusing our sourcelanguage corpus we compute a cooccurrence vector for this wordwe translate all known words in this vector to the target languagesince our base lexicon is small only some of the translations are knownall unknown words are discarded from the vector and the vector positions are sorted in order to match the vectors of the targetlanguage matrixwith the resulting vector we now perform a similarity computation to all vectors in the cooccurrence matrix of the target languagethe vector with the highest similarity is considered to be the translation of our sourcelanguage wordto conduct the simulation a number of resources were requiredthese are as the german corpus we used 135 million words of the newspaper frankfurter allgemeine zeitung and as the english corpus 163 million words of the guardian since the orientation of the two newspapers is quite different and since the time spans covered are only in part overlapping the two corpora can be considered as more or less unrelatedfor testing our results we started with a list of 100 german test words as proposed by russell which he used for an association experiment with german subjectsby looking up the translations for each of these 100 words we obtained a test set for evaluationour germanenglish base lexicon is derived from the collins gem german dictionary with about 22300 entriesfrom this we eliminated all multiword entries so 16380 entries remainedbecause we had decided on our test word list beforehand and since it would not make much sense to apply our method to words that are already in the base lexicon we also removed all entries belonging to the 100 test wordssince our corpora are very large to save disk space and processing time we decided to remove all function words from the textsthis was done on the basis of a list of approximately 600 german and another list of about 200 english function wordsthese lists were compiled by looking at the closed class words in an english and a german morphological lexicon and at word frequency lists derived from our corporaby eliminating function words we assumed we would lose little information function words are often highly ambiguous and their cooccurrences are mostly based on syntactic instead of semantic patternssince semantic patterns are more reliable than syntactic patterns across language families we hoped that eliminating the function words would give our method more generalitywe also decided to lemmatize our corporasince we were interested in the translations of base forms only it was clear that lemmatization would be usefulit not only reduces the sparsedata problem but also takes into account that german is a highly inflectional language whereas english is notfor both languages we conducted a partial lemmatization procedure that was based only on a morphological lexicon and did not take the context of a word form into accountthis means that we could not lemmatize those ambiguous word forms that can be derived from more than one base formhowever this is a relatively rare casealthough we had a contextsensitive lemmatizer for german available this was not the case for english so for reasons of symmetry we decided not to use the context featurei in cases in which an ambiguous word can be both a content and a function word preference was given to those interpretations that appeared to occur more frequentlyfor counting word cooccurrences in most other studies a fixed window size is chosen and it is determined how often each pair of words occurs within a text window of this 
sizehowever this approach does not take word order within a window into accountsince it has been empirically observed that word order of content words is often similar between languages and since this may be a useful statistical clue we decided to modify the common approach in the way proposed by rapp instead of computing a single cooccurrence vector for a word a we compute several one for each position within the windowfor example if we have chosen the window size 2 we would compute a first cooccurrence vector for the case that word a is two words ahead of another word b a second vector for the case that word a is one word ahead of word b a third vector for a directly following b and a fourth vector for a following two words after bif we added up these four vectors the result would be the cooccurrence vector as obtained when not taking word order into accounthowever this is not what we doinstead we combine the four vectors of length n into a single vector of length 4nsince preliminary experiments showed that a window size of 3 with consideration of word order seemed to give somewhat better results than other window types the results reported here are based on vectors of this kindhowever the computational methods described below are in the same way applicable to window sizes of any length with or without consideration of word orderour method is based on the assumption that there is a correlation between the patterns of word cooccurrences in texts of different languageshowever as rapp proposed this correlation may be strengthened by not using the cooccurrence counts directly but association strengths between words insteadthe idea is to eliminate wordfrequency effects and to emphasize significant word pairs by comparing their observed cooccurrence counts with their expected cooccurrence countsin the past for this purpose a number of measures have been proposedthey were based on mutual information conditional probabilities or on some standard statistical tests such as the chisquare test or the loglikelihood ratio for the purpose of this paper we decided to use the loglikelihood ratio which is theoretically well justified and more appropriate for sparse data than chisquarein preliminary experiments it also led to slightly better results than the conditional probability measureresults based on mutual information or cooccurrence counts were significantly worsefor efficient computation of the loglikelihood ratio we used the following formula2 where with parameters kj expressed in terms of corpus frequencies kli frequency of common occurrence of word a and word b k22 size of corpus corpus frequency of a corpus frequency of b all cooccurrence vectors were transformed using this formulathereafter they were normalized in such a way that for each vector the sum of its entries adds up to onein the rest of the paper we refer to the transformed and normalized vectors as association vectorsto determine the english translation of an unknown german word the association vector of the german word is computed and compared to all association vectors in the english association matrixfor comparison the correspondences between the vector positions and the columns of the matrix are determined by using the base lexiconthus for each vector in the english matrix a similarity value is computed and the english words are ranked according to these valuesit is expected that the correct translation is ranked first in the sorted listfor vector comparison different similarity measures can be consideredsalton mcgill proposed a 
number of measures such as the cosine coefficient the jaccard coefficient and the dice coefficient for the computation of related terms and synonyms ruge landauer and dumais and fung and mckeown used the cosine measure whereas grefenstette used a weighted jaccard measurewe propose here the cityblock metric which computes the similarity between two vectors x and y as the sum of the absolute differences of corresponding vector positions in a number of experiments we compared it to other similarity measures such as the cosine measure the jaccard measure the euclidean distance and the scalar product and found that the cityblock metric yielded the best resultsthis may seem surprising since the formula is very simple and the computational effort smaller than with the other measuresit must be noted however that the other authors applied their similarity measures directly to the cooccurrence vectors whereas we applied the measures to the association vectors based on the loglikelihood ratioaccording to our observations estimates based on the loglikelihood ratio are generally more reliable across different corpora and languagesthe results reported in the next section were obtained using the following procedure 1based on the word cooccurrences in the german corpus for each of the 100 german test words its association vector was computedin these vectors all entries belonging to words not found in the english part of the base lexicon were deleted2based on the word cooccurrences in the english corpus an association matrix was computed whose rows were all word types of the corpus with a frequency of 100 or higher3 and whose columns were all english words occurring as first translations of the german words in the base lexicon3using the similarity function each of the german vectors was compared to all vectors of the english matrixthe mapping between vector positions was based on the first translations given in the base lexiconfor each of the german source words the english vocabulary was ranked according to the resulting similarity value3 the limitation to words with frequencies above 99 was introduced for computational reasons to reduce the number of vector comparisons and thus speed up the programthe purpose of this limitation was not to limit the number of translation candidates consideredexperiments with lower thresholds showed that this choice has little effect on the results to our set of test words4 this means that alternative translations of a word were not consideredanother approach as conducted by fung yee would be to consider all possible translations listed in the lexicon and to give them equal weightour decision was motivated by the observation that many words have a salient first translation and that this translation is listed first in the collins gem dictionary germanenglishwe did not explore this issue further since in a small pocket dictionary only few ambiguities are listedtable 1 shows the results for 20 of the 100 german test wordsfor each of these test words the top five translations as automatically generated are listedin addition for each word its expected english translation from the test set is given together with its position in the ranked lists of computed translationsthe positions in the ranked lists are a measure for the quality of the predictions with a 1 meaning that the prediction is correct and a high value meaning that the program was far from predicting the correct wordif we look at the table we see that in many cases the program predicts the expected word with other 
possible translations immediately followingfor example for the german word hauschen the correct translations bungalow cottage house and hut are listedin other cases typical associates follow the correct translationfor example the correct translation of madchen girl is followed by boy man brother and ladythis behavior can be expected from our associationist approachunfortunately in some cases the correct translation and one of its strong associates are mixed up as for example with frau where its correct translation woman is listed only second after its strong associate mananother example of this typical kind of error is pfeifen where the correct translation whistle is listed third after linesman and refereelet us now look at some cases where the program did particularly badlyfor kohl we had expected its dictionary translation cabbage but given that a substantial part of our newspaper corpora consists of political texts we do not need to further explain why our program lists major kohl thatcher gorbachev and bush state leaders who were in office during the time period the texts were writtenin other cases such as krankheit and whisky the simulation program simply preferred the british usage of the guardian over the american usage in our test set instead of sickness the program predicted disease and illness and instead of whiskey it predicted whiskya much more severe problem is that our current approach cannot properly handle ambiguities for the german word weifi it does not predict white but instead knowthe reason is that weifi can also be third person singular of the german verb wissen which in newspaper texts is more frequent than the color whitesince our lemmatizer is not contextsensitive this word was left unlemmatized which explains the resultto be able to compare our results with other work we also did a quantitative evaluationfor all test words we checked whether the predicted translation was identical to our expected translationthis was true for 65 of the 100 test wordshowever in some cases the choice of the expected translation in the test set had been somewhat arbitraryfor example for the german word strafie we had expected street but the system predicted road which is a translation quite as goodtherefore as a better measure for the accuracy of our system we counted the number of times where an acceptable translation of the source word is ranked firstthis was true for 72 of the 100 test words which gives us an accuracy of 72in another test we checked whether an acceptable translation appeared among the top 10 of the ranked liststhis was true in 89 cases5 for comparison fung mckeown report an accuracy of about 30 when only the top candidate is countedhowever it must be emphasized that their result has been achieved under very different circumstanceson the one hand their task was more difficult because they worked on a pair of unrelated languages using smaller corpora and a random selection of test words many of which were multiword termsalso they predetermined a single translation as being correcton the other hand when conducting their evaluation fung mckeown limited the vocabulary they considered as translation candidates to a few hundred terms which obviously facilitates the taskthe method described can be seen as a simple case of the gradient descent method proposed by rapp which does not need an initial lexicon but is computationally prohibitively expensiveit can also be considered as an extension from the monolingual to the bilingual case of the wellestablished methods for 
semantic or syntactic word clustering as proposed by schiitze grefenstette ruge rapp lin and otherssome of these authors perform a shallow or full syntactical analysis before constructing the cooccurrence vectorsothers reduce the size of the cooccurrence matrices by performing a singular value decompositionhowever in yet unpublished work we found that at least for the computation of synonyms and related words neither syntactical analysis nor singular value decomposition lead to significantly better results than the approach described here when applied to the monolingual case so we did not try to include these methods in our systemnevertheless both methods are of technical value since they lead to a reduction in the size of the cooccurrence matricesfuture work has to approach the difficult problem of ambiguity resolution which has not been dealt with hereone possibility would be to semantically disambiguate the words in the corpora beforehand another to look at cooccurrences between significant word sequences instead of cooccurrences between single wordsto conclude with let us add some speculation by mentioning that the ability to identify word translations from nonparallel texts can be seen as an indicator in favor of the associationist view of human language acquisition it gives us an idea of how it is possible to derive the meaning of unknown words from texts by only presupposing a limited number of known words and then iteratively expanding this knowledge baseone possibility to get the process going would be to learn vocabulary lists as in school another to simply acquire the names of items in the physical worldi thank manfred wettler gisela zunkerrapp wolfgang lezius and anita todd for their support of this work
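The core ranking step of the method just described can be sketched compactly: transform co-occurrence counts into log-likelihood-ratio scores, normalise each vector to sum to one, and rank target words by city-block distance to the lexicon-mapped source vector. The LLR below is one standard two-by-two formulation and may differ in detail from the parameterisation used in the paper; the mapping of vector positions via the base lexicon is assumed to have been done already, and the toy data in the usage example is invented.

```python
import math

def llr(k11, freq_a, freq_b, n):
    """Log-likelihood ratio for the association of words a and b, computed from
    the joint count k11, their corpus frequencies and the corpus size n
    (a standard 2x2 contingency-table formulation)."""
    cells = [(k11, freq_a, freq_b),
             (freq_a - k11, freq_a, n - freq_b),
             (freq_b - k11, n - freq_a, freq_b),
             (n - freq_a - freq_b + k11, n - freq_a, n - freq_b)]
    return 2.0 * sum(k * math.log(k * n / (row * col))
                     for k, row, col in cells if k > 0)

def association_vector(word, cooc, freqs, n, columns):
    """LLR-transformed, normalised vector for `word` over the column words."""
    vec = [llr(cooc.get((word, c), 0), freqs[word], freqs[c], n) for c in columns]
    total = sum(vec)
    return [v / total for v in vec] if total > 0 else vec

def city_block(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def rank_translations(source_vec, target_vectors):
    """Rank target-language words by city-block distance; smallest distance first."""
    return sorted(target_vectors, key=lambda w: city_block(source_vec, target_vectors[w]))

if __name__ == "__main__":
    # toy association vectors over three already-mapped dimensions (illustration only)
    src = [0.6, 0.3, 0.1]
    targets = {"house": [0.55, 0.35, 0.10], "tree": [0.1, 0.2, 0.7]}
    print(rank_translations(src, targets))    # ['house', 'tree']
```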
P99-1067
automatic identification of word translations from unrelated english and german corpora. algorithms for the alignment of words in translated texts are well established. however, only recently new approaches have been proposed to identify word translations from nonparallel or even unrelated texts. this task is more difficult, because most statistical clues useful in the processing of parallel texts cannot be applied to nonparallel texts. whereas for parallel texts in some studies up to 99% of the word alignments have been shown to be correct, the accuracy for nonparallel texts has been around 30% up to now. the current study, which is based on the assumption that there is a correlation between the patterns of word cooccurrences in corpora of different languages, makes a significant improvement to about 72% of word translations identified correctly. we create bag-of-words context vectors around both the source and target language words and then project the source into the english target space via the current small translation dictionary. we filter out bilingual term pairs with low monolingual frequencies. we show that accurate translations can be learned for 100 german nouns that are not contained in the seed bilingual dictionary.
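The position-sensitive windowing used by the preceding paper (separate counts for each signed offset within the window, later concatenated into one long vector) can be sketched as follows; the toy token list is assumed to be already lemmatized and stripped of function words, as described above.

```python
from collections import Counter

def positional_cooccurrences(tokens, vocab, window=3):
    """Count co-occurrences separately for each signed offset in [-window, window]
    excluding zero, so that word order within the window is preserved."""
    offsets = [o for o in range(-window, window + 1) if o != 0]
    counts = {o: Counter() for o in offsets}
    vocab = set(vocab)
    for i, a in enumerate(tokens):
        for o in offsets:
            j = i + o
            if 0 <= j < len(tokens) and tokens[j] in vocab:
                counts[o][(a, tokens[j])] += 1
    return counts

def concatenated_vector(word, counts, vocab, window=3):
    """Concatenate the per-offset vectors into one vector of length 2*window*len(vocab)."""
    offsets = [o for o in range(-window, window + 1) if o != 0]
    return [counts[o][(word, c)] for o in offsets for c in vocab]

if __name__ == "__main__":
    tokens = "teacher explain lesson school pupil ask question teacher".split()
    vocab = ["school", "lesson", "question"]
    counts = positional_cooccurrences(tokens, vocab)
    print(concatenated_vector("teacher", counts, vocab))
```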
mining the web for bilingual text strand is a languageindependent system for automatic discovery of text in parallel translation on the world wide web this paper extends the preliminary strand results by adding automatic language identification scaling up by orders of magnitude and formally evaluating performance the most recent endproduct is an automatically acquired parallel corpus comprising 2491 englishfrench document pairs approximately 15 million words per language text in parallel translation is a valuable resource in natural language processingstatistical methods in machine translation typically rely on large quantities of bilingual text aligned at the document or sentence level and a number of approaches in the burgeoning field of crosslanguage information retrieval exploit parallel corpora either in place of or in addition to mappings between languages based on information from bilingual dictionaries despite the utility of such data however sources of bilingual text are subject to such limitations as licensing restrictions usage fees restricted domains or genres and dated text or such sources simply may not exist for language pairs of interestalthough the majority of web content is in english it also shows great promise as a source of multilingual contentusing figures from the babel survey of multilinguality on the web it is possible to estimate that as of june 1997 there were on the order of 63000 primarily nonenglish web servers ranging over 14 languagesmoreover a followup investigation of the nonenglish servers suggests that nearly a third contain some useful crosslanguage data such as parallel english on the page or links to parallel english pages the followup also found pages in five languages not identified by the babel study given the continued explosive increase in the size of the web the trend toward business organizations that cross national boundaries and high levels of competition for consumers in a global marketplace it seems impossible not to view multilingual content on the web as an expanding resourcemoreover it is a dynamic resource changing in content as the world changesfor example diekema et al in a presentation at the 1998 trec7 conference observed that the performance of their crosslanguage information retrieval was hurt by lexical gaps such as bosnial bosnie this illustrates a highly topical missing pair in their static lexical resource and gey et al also at trec7 observed that in doing crosslanguage retrieval using commercial machine translation systems gaps in the lexicon could make the difference between precision of 008 and precision of 083 on individual queriesresnik presented an algorithm called strand designed to explore the web as a source of parallel text demonstrating its potential with a smallscale evaluation based on the author judgmentsafter briefly reviewing the strand architecture and preliminary results this paper goes beyond that preliminary work in two significant waysfirst the framework is extended to include a filtering stage that uses automatic language identification to eliminate an important class of false positives documents that appear structurally to be parallel translations but are in fact not in the languages of interestthe system is then run on a somewhat larger scale and evaluated formally for english and spanish using measures of agreement with independent human judges precision and recall second the algorithm is scaled up more seriously to generate large numbers of parallel documents this time for english and french and again 
subjected to formal evaluation the concrete end result reported here is an automatically acquired englishfrench parallel corpus of web documents comprising 2491 document pairs approximately 15 million words per language containing little or no noisethis section is a brief summary of the strand system and previously reported preliminary results the strand architecture is organized as a pipeline beginning with a candidate generation stage that generates candidate pairs of documents that might be parallel translationsthe first implementation of the generation stage used a query to the altavista search engine to generate pages that could be viewed as quotparentsquot of pages in parallel translation by asking for pages containing one portion of anchor text containing the string quotenglishquot within a fixed distance of another anchor text containing the string quotspanishquotthis generated many good pairs of pages such as those pointed to by hyperlinks reading click here for english version and click here for spanish version as well as many bad pairs such as university pages containing links to english literature in close proximity to spanish literaturethe candidate generation stage is followed by a candidate evaluation stage that represents the core of the approach filtering out bad candidates from the set of generated page pairsit employs a structural recognition algorithm exploiting the fact that web pages in parallel translation are invariably very similar in the way they are structured hence the in strandfor example see figure 2the structural recognition algorithm first runs both documents through a transducer that reduces each to a linear sequence of tokens corresponding to html markup elements interspersed with tokens representing undifferentiated quotchunksquot of textfor example the transducer would replace the html source text acl 99 conference home page with the three tokens begin title chunk24 and end titlethe number inside the chunk token is the length of the text chunk not counting whitespace from this point on only the length of the text chunks is used and therefore the structural filtering algorithm is completely language independentgiven the transducer output for each document the structural filtering stage aligns the two streams of tokens by applying a standard widely available dynamic programming algorithm for finding an optimal alignment between two linear sequences1 this alignment matches identical markup tokens to each other as much as possible identifies runs of unmatched tokens that appear to exist only in one sequence but not the other and marks pairs of nonidentical tokens that were forced to be matched to each other in order to obtain the best alignment posknown to many programmers as different1111111111111110111111 le vmdredi 25 cacao 199640 membres nit de la seemananon oat assisth au stninaiste sr les psalm exemplars en maniac de reglentation qui visait i aida la rnininbto 6 familiarises avec pereglemsitation cannina de rechange asa progrunes de rtglanentasionlanimanurm zane bmw airedmp gannet direction 4061 de wasoonnation owes la mance en mktit aphasia dtindusthe caned pe rune de introduction de nif animus de diversification 496 6 a anation da services comma les coda yokota l snort lonstion de lindastrie a add que pseud mochainement un document stu i divenifirstse des modes de postai da service qui intrait de divers suj comas les anfmnisnes de premvion de rethange des semi les phi aggroprifs que a problemes leafs par nunn cada volontalms us code volontaire un 
ensemble dengageme scernaliffs ne faisant pas eaplidtanent pink dun mgirrio logialatif ou reglementaire cone0 pom lateen faxonnex conneller ou israluer is canpartement de ceux qui la ont pan iiistiliminent pax a pnnsuivi 69 gins analyse thincipalaffair ag6 au racentariat du conseil du treeceadroit du gaivemement de makatea n alma simplanent aux mmiciparas une solution de red i reglemenmtioe p gomernernmat au moment a is thglementation f objes true examen adru du public ks gouvornemerns i lishelle town van des approchen volonmirsat en 519014 a la rigkoausion et mans comae submits i celled les codes volontaires prima40 tin cumin menthe manages notanstent sible2 at this point if there were too many unmatched tokens the candidate pair is taken to be prima facie unacceptable and immediately filtered outotherwise the algorithm extracts from the alignment those pairs of chunk tokens that were matched to each other in order to obtain the best alignments3 it then computes the correlation between the lengths of these nonmarkup text chunksas is well known there is a reliably linear relationship in the lengths of text translations small pieces of source text translate to small pieces of target text medium to medium and large to largetherefore we can apply a standard statistical hypothesis test and if p pr and pr prfor english and spanish this translates as a simple requirement that the quotenglishquot page look more like english than spanish and that the quotspanishquot page look more like spanish than englishlanguage identification is performed on the plaintext versions of the pagescharacter 5gram models for languages under consideration are constructed using 100k characters of training data from the european corpus initiative available from the linguistic data consortium in a formal evaluation strand with the new language identification stage was run for english and spanish starting from the top 1000 hits yielded up by altavista in the candidate generation stage leading to a set of 913 candidate pairsa test set of 179 items was generated for annotation by human judges containing it was impractical to manually evaluate all pairs filtered out structurally owing to the time required for judgments and the desire for two independent judgments per pair in order to assess interjudge reliabilitythe two judges were both native speakers of spanish with high proficiency in english neither previously familiar with the projectthey worked independently using a web browser to access test pairs in a fashion that allowed them to view pairs side by sidethe judges were told they were helping to evaluate a system that identifies pages on the web that are translations of each other and were instructed to make decisions according to the following criterion is this pair of pages intended to show the same material to two different users one a reader of english and the other a reader of spanishthe phrasing of the criterion required some consideration since in previous experience with human judges and translations i have found that judges are frequently unhappy with the quality of the translations they are looking atfor present purposes it was required neither that the document pair represent a perfect translation nor even necessarily a good one strand was being tested not on its ability to determine translation quality which might or might not be a criterion for inclusion in a parallel corpus but rather its ability to facilitate the task of locating page pairs that one might reasonably include in a corpus undifferentiated 
by quality the judges were permitted three responses when computing evaluation measures page pairs classified in the third category by a human judge for whatever reason were excluded from considerationtable 1 shows agreement measures between the two judges between strand and each individual judge and the agreement between strand and the intersection of the two judges annotations that is strand evaluated against only those cases where the two judges agreed which are therefore the items we can regard with the highest confidencethe table also shows cohen lc an agreement measure that corrects for chance agreement the most important value in the table is the value of 07 for the two human judges which can be interpreted as sufficiently high to indicate that the task is reasonably well defined for every language l in that range and that d2 meet the corresponding requirement for frenchdoing so leads to the results in table 3this translates into an estimated 100 precision against 641 recall with a yield of 2491 documents approximately 15 million words per language as counted after removal of html markupthat is with a reasonable though admittedly posthoc revision of the language identification criterion comparison with human subjects suggests the acquired corpus is nontrivial and essentially noise free and moreover that the system excludes only a third of the pages that should have been keptnaturally this will need to be verified in a new evaluation on fresh data6language id across a wide range of languages is not difficult to obtaineg see the 13language set of the freely available cmu stochastic language identhis paper places acquisition of parallel text from the web on solid empirical footing making a number of contributions that go beyond the preliminary studythe system has been extended with automated language identification and scaled up to the point where a nontrivial parallel corpus of english and french can be produced completely automatically from the world wide webin the process it was discovered that the most lightweight use of language identification restricted to just the the language pair of interest needed to be revised in favor of a strategy that includes identification over a wide range of languagesrigorous evaluation using human judges suggests that the technique produces an extremely clean corpus noise estimated at between 0 and 8 even without human intervention requiring no more resources per language than a relatively small sample of text used to train automatic language identificationtwo directions for future work are apparentfirst experiments need to be done using languages that are less common on the weblikely first pairs to try include englishkorean englishitalian and englishgreekinspection of web sites those with bilingual text identified by strand and those without suggests that the strategy of using altavista to generate candidate pairs could be improved upon significantly by adding a true web crawler to quotminequot sites where bilingual text is known to be available eg sites uncovered by a first pass of the system using the altavista enginei would conjecture that for englishfrench there is an order of magnitude more bilingual text on the web than that uncovered in this early stage of researcha second natural direction is the application of webbased parallel text in applications such as lexical acquisition and crosslanguage information retrieval especially since a sideeffect of the core strand algorithm is aligned quotchunksquot ie nonmarkup segments found to correspond 
to each other based on alignment of the markuppreliminary experiments using even small amounts of these data suggest that standard techniques such as crosslanguage lexical association can uncover useful data
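A minimal sketch of the structural candidate-evaluation step described above is given below. Each page is assumed to have already been reduced to a stream of tokens, either a markup string such as 'BEGIN:TITLE' or a ('CHUNK', length) pair; Python's difflib.SequenceMatcher stands in for the diff-style dynamic-programming aligner the system uses, and the mismatch threshold is illustrative rather than the published setting. The system additionally applies a significance test to the length correlation, which is not shown here.

```python
from difflib import SequenceMatcher
import math

def structural_filter(tokens1, tokens2, max_unmatched=0.2):
    """Align the two token streams, reject the pair if too many tokens are
    unmatched, and otherwise return the Pearson correlation of the lengths of
    the text chunks that were aligned to each other."""
    kinds1 = [t if isinstance(t, str) else "CHUNK" for t in tokens1]
    kinds2 = [t if isinstance(t, str) else "CHUNK" for t in tokens2]
    blocks = SequenceMatcher(a=kinds1, b=kinds2, autojunk=False).get_matching_blocks()
    matched = sum(b.size for b in blocks)
    unmatched = (len(kinds1) - matched) + (len(kinds2) - matched)
    if unmatched > max_unmatched * (len(kinds1) + len(kinds2)):
        return False, None                       # prima facie unacceptable
    pairs = [(tokens1[b.a + k][1], tokens2[b.b + k][1])
             for b in blocks for k in range(b.size)
             if kinds1[b.a + k] == "CHUNK"]
    return True, pearson(pairs)

def pearson(pairs):
    n = len(pairs)
    if n < 2:
        return 0.0
    xs, ys = zip(*pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

if __name__ == "__main__":
    page_en = ["BEGIN:TITLE", ("CHUNK", 24), "END:TITLE", "BEGIN:P", ("CHUNK", 310), "END:P"]
    page_fr = ["BEGIN:TITLE", ("CHUNK", 27), "END:TITLE", "BEGIN:P", ("CHUNK", 340), "END:P"]
    print(structural_filter(page_en, page_fr))
```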
P99-1068
mining the web for bilingual text. strand is a language-independent system for automatic discovery of text in parallel translation on the world wide web. this paper extends the preliminary strand results by adding automatic language identification, scaling up by orders of magnitude, and formally evaluating performance. the most recent end-product is an automatically acquired parallel corpus comprising 2491 english-french document pairs, approximately 1.5 million words per language. we use structural markup information from pages, without looking at their content, to attempt to align them.
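The language-identification filter described in the paper above can be sketched with simple character n-gram models; the add-one smoothing, the training text, and the function names below are assumptions made for the illustration (the system is trained on European Corpus Initiative data, and its final version compares scores across a wider range of languages than the two shown here).

```python
import math
from collections import Counter

def char_ngram_model(training_text, n=5):
    """Train a character n-gram scorer (add-one smoothed) and return a function
    that gives a log-probability-like score of a document under the model."""
    grams = Counter(training_text[i:i + n] for i in range(len(training_text) - n + 1))
    total = sum(grams.values())
    vocab = len(grams) + 1
    def score(doc):
        return sum(math.log((grams[doc[i:i + n]] + 1) / (total + vocab))
                   for i in range(len(doc) - n + 1))
    return score

def passes_language_check(doc_1, doc_2, model_1, model_2):
    """The pairwise filtering criterion described above: the putative language-1
    page must score higher under the language-1 model than under the language-2
    model, and vice versa for the putative language-2 page."""
    return model_1(doc_1) > model_2(doc_1) and model_2(doc_2) > model_1(doc_2)

if __name__ == "__main__":
    en = char_ngram_model("the cat sat on the mat and the dog sat on the rug " * 20)
    fr = char_ngram_model("le chat est sur le tapis et le chien est sur la table " * 20)
    print(passes_language_check("the cat and the dog", "le chat et le chien", en, fr))
```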
estimators for stochastic unificationbased grammars loglinear models provide a statistically sound framework for stochastic quotunificationbasedquot grammars and stochastic versions of other kinds of grammars we describe two computationallytractable ways of estimating the parameters of such grammars from a training corpus of syntactic analyses and apply these to estimate a stochastic version of lexical probabilistic methods have revolutionized computational linguisticsthey can provide a systematic treatment of preferences in parsinggiven a suitable estimation procedure stochastic models can be quottunedquot to reflect the properties of a corpuson the other hand quotunificationbasedquot grammars can express a variety of linguisticallyimportant syntactic and semantic constraintshowever developing stochastic quotunificationbasedquot grammars has not proved as straightforward as might be hopedthe simple quotrelative frequencyquot estimator for pcfgs yields the maximum likelihood parameter estimate which is to say that it minimizes the kulbackliebler divergence between the training and estimated distributionson the other hand as abney points out the contextsensitive dependencies that quotunificationbasedquot constraints introduce render the relative frequency estimator suboptimal in general it does not maximize the likelihood and it is inconsistentabney proposes a markov random field or log linear model for subgs and the models described here are instances of abney general frameworkhowever the montecarlo parameter estimation procedure that abney proposes seems to be computationally impractical for reasonablesized grammarssections 3 and 4 describe two new estimation procedures which are computationally tractablesection 5 describes an experiment with a small lfg corpus provided to us by xerox parcthe log linear framework and the estimation procedures are extremely general and they apply directly to stochastic versions of hpsg and other theories of grammarwe follow the statistical literature in using the term feature to refer to the properties that parameters are associated with let sz be the set of all possible grammatical or wellformed analyseseach feature f maps a syntactic analysis to e s to a real value f the form of a syntactic analysis depends on the underlying linguistic theoryfor example for a pcfg co would be parse tree for a lfg co would be a tuple consisting of a cstructure an fstructure and a mapping from cstructure nodes to fstructure elements and for a chomskyian transformational grammar co would be a derivationloglinear models are models in which the log probability is a linear combination of feature values pcfgs gibbs distributions maximumentropy distributions and markov random fields are all examples of loglinear modelsa loglinear model associates each feature f3 with a realvalued parameter 03a loglinear model with m features is one in which the likelihood p of an analysis w is while the estimators described below make no assumptions about the range of the f in the models considered here the value of each feature f2 is the number of times a particular structural arrangement or configuration occurs in the analysis ranges over the natural numbersfor example the features of a pcfg are indexed by productions ie the value f of feature f is the number of times the ith production is used in the derivation w this set of features induces a treestructured dependency graph on the productions which is characteristic of markov branching processes this tree structure has the important 
consequence that simple quotrelativefrequenciesquot yield maximumlikelihood estimates for the 02extending a pcfg model by adding additional features not associated with productions will in general add additional dependencies destroy the tree structure and substantially complicate maximum likelihood estimationthis is the situation for a subg even if the features are production occurencesthe unification constraints create nonlocal dependencies among the productions and the dependency graph of a subg is usually not a treeconsequently maximum likelihood estimation is no longer a simple matter of computing relative frequenciesbut the resulting estimation procedures albeit more complicated have the virtue of applying to essentially arbitrary featuresof the production or nonproduction typethat is since estimators capable of finding maximumlikelihood parameter estimates for production features in a subg will also find maximumlikelihood estimates for nonproduction features there is no motivation for restricting features to be of the production typelinguistically there is no particular reason for assuming that productions are the best features to use in a stochastic language modelfor example the adjunct attachment ambiguity in results in alternative syntactic structures which use the same productions the same number of times in each derivation so a model with only production features would necessarily assign them the same likelihoodthus models that use production features alone predict that there should not be a systematic preference for one of these analyses over the other contrary to standard psycholinguistic resultsthere are many different ways of choosing features for a subg and each of these choices makes an empirical claim about possible distributions of sentencesspecifying the features of a subg is as much an empirical matter as specifying the grammar itselffor any given ubg there are a large number of subgs that can be constructed from it differing only in the features that each subg usesin addition to production features the stochastic lfg models evaluated below used the following kinds of features guided by the principles proposed by hobbs and bear adjunct and argument features indicate adjunct and argument attachment respectively and permit the model to capture a general argument attachment preferencein addition there are specialized adjunct and argument features corresponding to each grammatical function used in lfg there are features indicating both high and low attachment another feature indicates nonrightbranching nonterminal nodesthere is a feature for nonparallel coordinate structures each fstructure attributeatomic value pair which appears in any feature structure is also used as a featurewe also use a number of features identifying syntactic structures that seem particularly important in these corpora such as a feature identifying nps that are dates we would have liked to have included features concerning specific lexical items but we felt that our corpora were so small that the associated parameters could not be accurately estimatedsuppose wi wr is a training corpus of n syntactic analysesletting f3 f3 poi the log likelihood of the corpus cz and its derivatives are where e0 is the expected value of h under the distribution determined by the parameters 0the maximumlikelihood estimates are the 0 which maximize log 14 the chief difficulty in finding the maximumlikelihood estimates is calculating e9 which involves summing over the space of wellformed syntactic structures a there 
seems to be no analytic or efficient numerical way of doing this for a realistic subgabney proposes a gradient ascent based upon a monte carlo procedure for estimating eothe idea is to generate random samples of feature structures from the distribution po where 0 is the current parameter estimate and to use these to estimate eo and hence the gradient of the likelihoodsamples are generated as follows given a subg abney constructs a covering pcfg based upon the subg and 0 the current estimate of 0the derivation trees of the pcfg can be mapped onto a set containing all of the subg syntactic analysesmonte carlo samples from the pcfg are comparatively easy to generate and sample syntactic analyses that do not map to wellformed subg syntactic structures are then simply discardedthis generates a stream of syntactic structures but not distributed according to p abney proposes using a metropolis acceptancerejection method to adjust the distribution of this stream of feature structures to achieve detailed balance which then produces a stream of feature structures distributed according to pei while this scheme is theoretically sound it would appear to be computationally impractical for realistic subgsevery step of the proposed procedure requires a very large number of pcfg samples samples must be found that correspond to wellformed subgs many such samples are required to bring the metropolis algorithm to equilibrium many samples are needed at equilibrium to properly estimate ethe idea of a gradient ascent of the likelihood is appealinga simple calculation reveals that the likelihood is concave and therefore free of local maximabut the gradient is intractablethis motivates an alternative strategy involving a databased estimate of e9 where y is the yield belonging to the syntactic analysis w and yi y is the yield belonging to the ith sample in the training corpusthe point is that eoly yi is generally computablein fact if sz is the set of wellformed syntactic structures that have yield y then a l expectations only involves summing over the possible syntactic analyses or parses 12 of the strings in the training corpuswhile it is possible to construct ubgs for which the number of possible parses is unmanageably high for many grammars it is quite manageable to enumerate the set of possible parses and thereby directly evaluate eo iy yi therefore we propose replacing the gradient by and performing a gradient ascentof course is no longer the gradient of the likelihood function but fortunately it is the gradient of another criterion instead of maximizing the likelihood of the syntactic analyses over the training corpus we maximize the conditional likelihood of these analyses given the observed yieldsin our experiments we have used a conjugategradient optimization program adapted from the one presented in press et al regardless of the pragmatic motivation one could perhaps argue that the conditional probabilities po are as useful as the full probabilities po at least in those cases for which the ultimate goal is syntactic analysisberger et al and jelinek make this same point and arrive at the same estimator albeit through a maximum entropy argumentthe problem of estimating parameters for loglinear models is not newit is especially difficult in cases such as ours where a large sample space makes the direct computation of expectations infeasiblemany applications in spatial statistics involving markov random fields are of this nature as wellin his seminal development of the mrf approach to spatial statistics besag 
introduced a quotpseudolikelihoodquot estimator to address these difficulties and in fact our proposal here is an instance of his methodin general the likelihood function is replaced by a more manageable product of conditional likelihoods what are the asymptotics of optimizing a pseudolikelihood functionlook first at the likelihood itselffor large n where 00 is the true parameter vectorup to a constant is the negative of the kullbackleibler divergence between the true and estimated distributions of syntactic analysesas sample size grows maximizing likelihood amounts to minimizing divergenceas for pseudolikelihood so that maximizing pseudolikelihood amounts to minimizing the average divergence between the true and estimated conditional distributions of analyses given yieldsmaximum likelihood estimation is consistent under broad conditions the sequence of distributions po associated with the maximum likelihood estimator for 00 given the samples con converges to poopseudolikelihood is also consistent but in the present implementation it is consistent for the conditional distributions poo and not necessarily for the full distribution poo it is not hard to see that pseudolikelihood will not always correctly estimate poosuppose there is a feature l which depends only on yields f2 fzin this case the derivative of plo contains no information about 0in fact in this case any value of 0i gives the same conditional distribution po y 0i is irrelevant to the problem of choosing good parsesdespite the assurance of consistency pseudolikelihood estimation is prone to over fitting when a large number of features is matched against a modestsized training corpusone particularly troublesome manifestation of over fitting results from the existence of features which relative to the training set we might term quotpseudomaximalquot let us say that a feature f is pseudomaximal for a yield y if vwf e 1 f f where w is any correct parse of y ie the feature value on every correct parse of y is greater than or equal to its value on any other parse of y pseudominimal features are defined similarlyit is easy to see that if h is pseudomaximal on each sentence of the training corpus then the parameter assignment 03 oo maximizes the corpus pseudolikelihoodsuch infinite parameter values indicate that the model treats pseudomaximal features categorically ie any parse with a nonmaximal feature value is assigned a zero conditional probabilityof course a feature which is pseudomaximal over the training corpus is not necessarily pseudomaximal for all yieldsthis is an instance of over fitting and it can be addressed as is customary by adding a regularization term that promotes small values of 0 to the objective functiona common choice is to add a quadratic to the loglikelihood which corresponds to multiplying the likelihood itself by a normal distributionin our experiments we multiplied the pseudolikelihood by a zeromean normal in 01 0 with diagonal covariance and with standard deviation ay for 03 equal to 7 times the maximum value of fi found in any parse in the training corpusthus instead of maximizing the log pseudolikelihood we choose 0 to maxithe pseudolikelihood estimator described in the last section finds parameter values which maximize the conditional probabilities of the observed parses given the observed sentences in the training corpusone of the empirical evaluation measures we use in the next section measures the number of correct parses selected from the set of all possible parsesthis suggests another possible 
objective function choose 0 to maximize the number co of times the maximum likelihood parse is in fact the correct parse in the training corpusc9 is a highly discontinuous function of 0 and most conventional optimization algorithms perform poorly on itwe had the most success with a slightly modified version of the simulated annealing optimizer described in press et al this procedure is much more computationally intensive than the gradientbased pseudolikelihood procedureits computational difficulty grows rapidly with the number of featuresron kaplan and hadar shemtov at xerox parc provided us with two lfg parsed corporathe verbmobil corpus contains appointment planning dialogs while the homecentre corpus is drawn from xerox printer documentationtable 1 summarizes the basic properties of these corporathese corpora contain packed cfstructure representations of the grammatical parses of each sentence with respect to lexicalfunctional grammarsthe corpora also indicate which of these parses is in fact the correct parse because slightly different grammars were used for each corpus we chose not to combine the two corpora although we used the set of features described in section 2 for both in the experiments described belowtable 2 describes the properties of the features used for each corpusin addition to the two estimators described above we also present results from a baseline estimator in which all parses are treated as equally likely we evaluated our estimators using heldout test corpus test we used two evaluation measuresin an actual parsing application a subg might be used to identify the correct parse from the set of grammatical parses so ouorfitrestevaluation measure counts the number c of sentences in the test corpus cahest whose maximum likelihood parse under the estimated model 0 is actually the correct parseif a sentence has 1 most likely parses and one of these parses is the correct parse then we score 1 for this sentencethe second evaluation measure is the pseudolikelihood of the test corpus is the likelihood of the correct parses given their yields so pseudolikelihood measures how much of the probability mass the model puts onto the correct analysesthis metric seems more relevant to applications where the system needs to estimate how likely it is that the correct analysis lies in a certain set of possible parses eg ambiguitypreserving translation and humanassisted disambiguationto make the numbers more manageable we actually present the negative logarithm of the pseudolikelihood rather than the pseudolikelihood itselfso smaller is betterbecause of the small size of our corpora we evaluated our estimators using a 10way crossvalidation paradigmwe randomly assigned sentences of each corpus into 10 approximately equalsized subcorpora each of which was used in turn as the test corpuswe evaluated on each subcorpus the parameters that were estimated from the 9 remaining subcorpora that served as the training corpus for this runthe evaluation scores from each subcorpus were summed in order to provide the scores presented heretable 3 presents the results of the empirical evaluationthe superior performance of both estimators on the verbmobil corpus probably reflects the fact that the nonrule features were designed to match both the grammar and content of that corpusthe pseudolikelihood estimator performed better than the correctparses estimator on both corpora under both evaluation metricsthere seems to be substantial over learning in all these models we routinely improved performance by 
discarding featureswith a small number of features the correctparses estimator typically scores better than the pseudolikelihood estimator on the correctparses evaluation metric but the pseudolikelihood estimator always scores better on the pseudolikelihood evaluation metricthis paper described a loglinear model for subgs and evaluated two estimators for such modelsbecause estimators that can estimate rule features for subgs can also estimate other kinds of features there is no particular reason to limit attention to rule features in a subgindeed the number and choice of features strongly influences the performance of the modelthe estimated models are able to identify the correct parse from the set of all possible parses approximately 50 of the timewe would have liked to introduce features corresponding to dependencies between lexical itemsloglinear models are wellsuited for lexical dependencies but because of the large number of such dependencies substantially larger corpora will probably be needed to estimate such mo dels 1 alternatively it may be possible to use a simpler nonsubg model of lexical dependencies estimated from a much larger corpus as the reference distribution with parses of the test corpus that were the correct parses and log pl is the negative logarithm of the pseudolikelihood of the test corpushowever there may be applications which can benefit from a model that performs even at this levelfor example in a machineassisted translation system a model like ours could be used to order possible translations so that more likely alternatives are presented before less likely onesin the ambiguitypreserving translation framework a model like this one could be used to choose between sets of analyses whose ambiguities cannot be preserved in translation
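For reference, the quantities that the estimation discussion above appeals to can be written out explicitly in the paper's notation (ω an analysis, f_j the feature functions, θ_j the parameters, Ω the set of well-formed analyses, Ω(y) the parses with yield y). These are the standard log-linear forms, restated here only to make the discussion self-contained, not quoted from the paper. The model itself:

\[
P_\theta(\omega) \;=\; \frac{1}{Z_\theta}\exp\Big(\sum_{j=1}^{m}\theta_j f_j(\omega)\Big),
\qquad
Z_\theta \;=\; \sum_{\omega'\in\Omega}\exp\Big(\sum_{j=1}^{m}\theta_j f_j(\omega')\Big)
\]

the log likelihood of a training corpus ω_1,…,ω_n and its gradient (the term E_θ[f_j] is the intractable expectation discussed above):

\[
\log L(\theta) \;=\; \sum_{i=1}^{n}\sum_{j=1}^{m}\theta_j f_j(\omega_i) \;-\; n\log Z_\theta,
\qquad
\frac{\partial\log L(\theta)}{\partial\theta_j} \;=\; \sum_{i=1}^{n} f_j(\omega_i) \;-\; n\,E_\theta[f_j]
\]

the pseudo-likelihood (conditional) gradient obtained by restricting expectations to the parses of each observed yield y_i:

\[
\frac{\partial\log \mathrm{PL}(\theta)}{\partial\theta_j}
\;=\; \sum_{i=1}^{n}\Big(f_j(\omega_i) \;-\; E_\theta[f_j \mid Y=y_i]\Big),
\qquad
E_\theta[f_j \mid Y=y_i] \;=\; \sum_{\omega'\in\Omega(y_i)} f_j(\omega')\,P_\theta(\omega'\mid y_i)
\]

the regularized objective actually maximized (up to an additive constant from the Gaussian prior, with σ_j set to 7 times the maximum value of f_j in any training parse):

\[
\hat\theta \;=\; \arg\max_\theta\;\Big(\log\mathrm{PL}(\theta) \;-\; \sum_{j=1}^{m}\frac{\theta_j^{2}}{2\sigma_j^{2}}\Big)
\]

and the correct-parses criterion (since Z_θ is constant across the parses of a single yield, the argmax can be taken over unnormalized scores):

\[
C(\theta) \;=\; \sum_{i=1}^{n} \mathbf{1}\Big[\;\omega_i \in \arg\max_{\omega'\in\Omega(y_i)} \sum_{j=1}^{m}\theta_j f_j(\omega')\;\Big]
\]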
P99-1069
estimators for stochastic unification-based grammars. log-linear models provide a statistically sound framework for stochastic unification-based grammars and stochastic versions of other kinds of grammars. we describe two computationally tractable ways of estimating the parameters of such grammars from a training corpus of syntactic analyses, and apply these to estimate a stochastic version of lexical-functional grammar. we incorporate general linguistic principles into a log-linear model. we use parses generated by an lfg parser as input to an mrf approach.
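A minimal sketch of how the conditional-likelihood objective and gradient above can be computed when, as the paper assumes, the parses of each training yield can be enumerated. The data layout (precomputed feature-count vectors per parse) is an illustrative assumption, and this is not the authors' implementation; they used a conjugate-gradient optimizer adapted from Press et al.

import math

def conditional_log_likelihood_and_grad(theta, corpus):
    """Pseudo-likelihood objective for a log-linear parse-selection model.

    `theta` is a list of m parameters.  `corpus` is a list of items, one per
    training sentence; each item is (correct_index, parses), where `parses`
    holds the feature-count vectors (length-m lists) of *all* parses of that
    sentence's yield and `correct_index` points at the correct parse.
    Returns (log PL(theta), gradient) -- the quantities a gradient-based
    optimizer needs.  (Assumed data layout, for illustration only.)
    """
    m = len(theta)
    logpl = 0.0
    grad = [0.0] * m
    for correct_index, parses in corpus:
        # Unnormalized log scores for every parse of this yield.
        scores = [sum(t * f for t, f in zip(theta, fv)) for fv in parses]
        mx = max(scores)
        logz = mx + math.log(sum(math.exp(s - mx) for s in scores))
        probs = [math.exp(s - logz) for s in scores]
        # log P(correct parse | yield) = score(correct) - log Z(yield).
        logpl += scores[correct_index] - logz
        for j in range(m):
            expected = sum(p * fv[j] for p, fv in zip(probs, parses))
            grad[j] += parses[correct_index][j] - expected
    return logpl, grad

Any off-the-shelf gradient-based optimizer can then be run on the returned value and gradient (negated, for minimizers); the quadratic regularization term above is straightforward to add to both.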
information fusion in the context of multidocument summarization we present a method to automatically generate a concise summary by identifying and synthesizing similar elements across related text from a set of multiple documents our approach is unique in its usage of language generation to reformulate the wording of the summary information overload has created an acute need for summarizationtypically the same information is described by many different online documentshence summaries that synthesize common information across documents and emphasize the differences would significantly help readerssuch a summary would be beneficial for example to a user who follows a single event through several newswiresin this paper we present research on the automatic fusion of similar information across multiple documents using language generation to produce a concise summarywe propose a method for summarizing a specific type of input news articles presenting different descriptions of the same eventhundreds of news stories on the same event are produced daily by news agenciesrepeated information about the event is a good indicator of its importancy to the event and can be used for summary generationmost research on single document summarization particularly for domain independent tasks uses sentence extraction to produce a summary in the case of multidocument summarization of articles about the same event the original articles can include both similar and contradictory informationextracting all similar sentences would produce a verbose and repetitive summary while extracting some similar sentences could produce a summary biased towards some sourcesinstead we move beyond sentence extraction using a comparison of extracted similar sentences to select the phrases that should be included in the summary and sentence generation to reformulate them as new textour work is part of a full summarization system which extracts sets of similar sentences themes in the first stage for input to the components described hereour model for multidocument summarization represents a number of departures from traditional language generationtypically language generation systems have access to a full semantic representation of the domaina content planner selects and orders propositions from an underlying knowledge base to form text contenta sentence planner determines how to combine propositions into a single sentence and a sentence generator realizes each set of combined propositions as a sentence mapping from concepts to words and building syntactic structureour approach differs in the following ways on 3th of september 1995 120 hostages were released by bosnian serbsserbs were holding over 250 youn personnelbosnian serb leader radovan karadjic said he expected quota sign of goodwillquot from the international communityyous f16 fighter jet was shot down by bosnian serbselectronic beacon signals which might have been transmitted by a downed yous fighter pilot in bosnia were no longer being receivedafter six days ogrady downed pilot was rescued by marine forcethe mission was carried out by ch53 helicopters with an escort of missile and rocketarmed cobra helicopters information needed for clarification we developed techniques to map predicateargument structure produced by the contentplanner to the functional representation expected by fufsurge and to integrate new constraints on realization choice using surface features in place of semantic or pragmatic ones typically used in sentence generationan example summary automatically 
generated by the system from our corpus of themes is shown in figure 1we collected a corpus of themes that was divided into a training portion and a testing portionwe used the training data for identification of paraphrasing rules on which our comparison algorithm is builtthe system we describe has been fully implemented and tested on a variety of input articles there are of course many open research issues that we are continuing to explorein the following sections we provide an overview of existing multidocument summarization systems then we will detail our sentence comparison technique and describe the sentence generation componentwe provide examples of generated summaries and conclude with a discussion of evaluationautomatic summarizers typically identify and extract the most important sentences from an input articlea variety of approaches exist for determining the salient sentences in the text statistical techniques based on word distribution symbolic techniques based on discourse structure and semantic relations between words extraction techniques can work only if summary sentences already appear in the articleextraction cannot handle the task we address because summarization of multiple documents requires information about similarities and differences across articleswhile most of the summarization work has focused on single articles a few initial projects have started to study multidocument summarization documentsin constrained domains eg terrorism a coherent summary of several articles can be generated when a detailed semantic representation of the source text is availablefor example information extraction systems can be used to interpret the source textin this framework use generation techniques to highlight changes over time across input articles about the same eventin an arbitrary domain statistical techniques are used to identify similarities and differences across documentssome approaches directly exploit word distribution in the text recent work exploits semantic relations between text units for content representation such as synonymy and coreferencea spreading activation algorithm and graph matching is used to identify similarities and differences across documentsthe output is presented as a set of paragraphs with similar and unique words highlightedhowever if the same information is mentioned several times in different documents much of the summary will be redundantwhile some researchers address this problem by selecting a subset of the repetitions this approach is not always satisfactoryas we will see in the next section we can both eliminate redundancy from the output and retain balance through the selection of common informationon friday a yous f16 fighter jet was shot down by bosnian serb missile while policing the nofly zone over the regiona bosnian serb missile shot down a yous f16 over northern bosnia on fridayon the eve of the meeting a yous f16 fighter was shot down while on a routine patrol over northern bosniaogrady f16 fighter jet based in aviano italy was shot down by a bosnian serb sa6 antiaircraft missile last friday and hopes had diminished for finding him alive despite intermittent electronic signals from the area which later turned out to be a navigational beaconto avoid redundant statements in a summary we could select one sentence from the set of similar sentences that meets some criteria unfortunately any representative sentence usually includes embedded phrases containing information that is not common to other similar sentencestherefore we need to 
intersect the theme sentences to identify the common phrases and then generate a new sentencephrases produced by theme intersection will form the content of the generated summarygiven the theme shown in figure 2 how can we determine which phrases should be selected to form the summary contentfor our example theme the problem is to determine that only the phrase quoton friday yous f16 fighter jet was shot down by a bosnian serb missilequot is common across all sentencesthe first sentence includes the clause however in other sentences it appears in different paraphrased forms such as quota bosnian serb missile shot down a yous f16 on fridayquothence we need to identify similarities between phrases that are not identical in wording but do report the same factif paraphrasing rules are known we can compare the predicateargument structure of the sentences and find common partsfinally having selected the common parts we must decide how to combine phrases whether additional information is needed for clarification and how to order the resulting sentences to form the summary was shot by missilequot in order to identify theme intersections sentences must be comparedto do this we need a sentence representation that emphasizes sentence features that are relevant for comparison such as dependencies between sentence constituents while ignoring irrelevant features such as constituent orderingsince predicateargument structure is a natural way to represent constituent dependencies we chose a dependency based representation called dsynt an example of a sentence and its dsynt tree is shown in figure 3each nonauxiliary word in the sentence has a node in the dsynt tree and this node is connected to its direct dependentsgrammatical features of each word are also kept in the nodein order to facilitate comparison words are kept in canonical formin order to construct a dsynt we first run our sentences through collin robust statistical parser we developed a rulebased component that transforms the phrasestructure output of the parser to a dsynt representationfunctional words are eliminated from the tree and the corresponding syntactic features are updatedthe comparison algorithm starts with all sentence trees rooted at verbs from the input dsynt and traverses them recursively if two nodes are identical they are added to the output tree and their children are comparedonce a full phrase has been found it is added to the intersectionif nodes are not identical the algorithm tries to apply an appropriate paraphrasing rule from a set of rules described in the next sectionfor example if the phrases quotgroup of studentsquot and quotstudentsquot are compared then the omit empty head rule is applicable since quotgroupquot is an empty noun and can be dropped from the comparison leaving two identical words quotstudentsquotif there is no applicable paraphrasing rule then the comparison is finished and the intersection result is emptyall the sentences in the theme are compared in pairsthen these intersections are sorted according to their frequencies and all intersections above a given threshold result in theme intersectionfor the theme in figure 2 the intersection result is quoton friday a yous f16 fighter jet was shot down by bosnian serb missilequot1 identification of theme intersection requires collecting paraphrasing patterns which occur in our corpusparaphrasing is defined as alternative ways a human speaker can choose to quotsay the same thingquot by using linguistic knowledge paraphrasing has been widely investigated in 
the generation community considered sets of paraphrases required for text transformation in order to meet external constraints such as length or readability investigated morphologybased paraphrasing in the context of a term recognition taskhowever there is no general algorithm capable of identifying a sentence as a paraphrase of anotherin our case such a comparison is less difficult since theme sentences are a priori close semantically which significantly constrains the kinds of paraphrasing we need to checkin order to verify this assumption we analyzed paraphrasing patterns through themes of our training corpus derived from the topic detection and tracking corpus overall 200 pairs of sentences conveying the same information were analyzedwe found that 85 of the paraphrasing is achieved by syntactic and lexical transformationsexamples of paraphrasing that require world knowledge are presented below to be exact the result of the algorithm is a dsynt that linearizes as this sentence last week at zvornikquot and quotbosnian serb leaders freed about onethird of the youn personnelquot 2quotsheinbein showed no visible reaction to the rulingquot and quotsamuel sheinbein showed no reaction when chief justice aharon barak read the 32 decisionquot since quotsurfacequot level paraphrasing comprises the vast majority of paraphrases in our corpus and is easier to identify than those requiring worldknowledge we studied paraphrasing patterns in the corpuswe found the following most frequent paraphrasing categories the patterns presented above cover 82 of the syntactic and lexical paraphrases these categories form the basis for paraphrasing rules used by our intersection algorithmthe majority of these categories can be identified in an automatic wayhowever some of the rules can only be approximated to a certain degreefor example identification of similarity based on semantic relations between words depends on the coverage of the thesauruswe identify word similarity using synonym relations from wordnetcurrently paraphrasing using part of speech transformations is not supported by the systemall other paraphrase classes we identified are implemented in our algorithm for theme intersectiona property that is unique to multidocument summarization is the effect of time perspective when reading an original text it is possible to retrieve the correct temporal sequence of events which is usually available explicitlyhowever when we put pieces of text from different sources together we must provide the correct time perspective to the reader including the order of events the temporal distance between events and correct temporal referencesin singledocument summarization one of the possible orderings of the extracted information is provided by the input document itselfhowever in the case of multipledocument summarization some events may not be described in the same articlefurthermore the order between phrases can change significantly from one article to anotherfor example in a set of articles about the oklahoma bombing from our training set information about the quotbombingquot itself quotthe death tollquot and quotthe suspectsquot appear in three different orders in the articlesthis phenomenon can be explained by the fact that the order of the sentences is highly influenced by the focus of the articleone possible discourse strategy for summaries is to base ordering of sentences on chronological order of eventsto find the time an event occurred we use the publication date of the phrase referring to the eventthis gives us 
the best approximation to the order of events without carrying out a detailed interpretation of temporal references to events in the article which are not always presenttypically an event is first referred to on the day it occurredthus for each phrase we must find the earliest publication date in the theme create a quottime stampquot and order phrases in the summary according to this time stamptemporal distance between events is an essential part of the summaryfor example in the summary in figure 1 about a quotyous pilot downed in bosniaquot the lengthy duration between quotthe helicopter was shot downquot and quotthe pilot was rescuedquot is the main point of the storywe want to identify significant time gaps between events and include them in the summaryto do so we compare the time stamps of the themes and when the difference between two subsequent time stamps exceeds a certain threshold the gap is recordeda time marker will be added to the output summary for each gap for example quotaccording to a reuters report on the 1021quotanother timerelated issue that we address is normalization of temporal references in the summaryif the word quottodayquot is used twice in the summary and each time it refers to a different date then the resulting summary can be misleadingtime references such as quottodayquot and quotmondayquot are clear in the context of a source article but can be ambiguous when extracted from the articlethis ambiguity can be corrected by substitution of this temporal reference with the full timedate reference such as quot1021quotby corpus analysis we collected a set of patterns for identification of ambiguous dateshowever we currently do not handle temporal references requiring inference to resolve the input to the sentence generator is a set of phrases that are to be combined and realized as a sentenceinput features for each phrase are determined by the information recovered by shallow analysis during content planningbebecause this input structure and the requirements on the generator are quite different from typical language generators we had to address the design of the input language specification and its interaction with existing features in a new way instead of using the existing surge syntactic realization in a quotblack boxquot manneras an example consider the case of temporal modifiersthe dsynt for an input phrase will simply note that it contains a prepositional phrasefufsurge our language generator requires that the input contain a semantic role circumstantial which in turn contains a temporal featurethe labelling of the circumstantial as time allows surge to make the following decisions given a sentence such as quotafter they made an emergency landing the pilots were reported missingquot the semantic input also provides a solid basis to authorize sophisticated revisions to a base inputif the sentence planner decides to adjoin a source to the clause surge can decide to move the time circumstantial to the end of the clause leading to quotaccording to reuters on thursday night the pilots were reported missing after making an emergency landingquot without such paraphrasing ability which might be decided based on the semantic roles time and sources the system would have to generate an awkward sentence with both circumstantials appearing one after another at the front of the sentencewhile in the typical generation scenario above the generator can make choices based on semantic information in our situation the generator has only a lowlevel syntactic structure represented 
as a dsyntit would seem at first glance that realizing such an input should be easier for the syntactic realization componentthe generator in that case is left with little less to do than just linearizing the input specificationthe task we had to solve however is more difficult for two reasons 1the input specification we define must allow the sentence planner to perform revisions that is to attach new constituents to a base input specification without taking into account all possible syntactic interactions between the new constituent and existing ones 2surge relies on semantic information to make decisions and verify that these decisions are compatible with the rest of the sentence structurewhen the semantic information is not available it is more difficult to predict that the decisions are compatible with the input provided in syntactic formwe modified the input specification language for fufsurge to account for these problemswe added features that indicate the ordering of circumstantials in the outputordering of circumstantials can easily be derived from their ordering in the inputthus we label circumstantials with the features fronti and endi where i indicates the relative ordering of the circumstantial within the clausein addition if possible when mapping input phrases to a surge syntactic input the sentence planner tries to determine the semantic type of circumstantial by looking up the preposition this allows fufsurge to map the syntactic category of the circumstantial to the semantic and syntactic features expected by surgehowever in cases where the preposition is ambiguous the generator must rely solely on ordering circumstantials based on ordering found in the inputwe have modified surge to accept this type of input in all places surge checks the semantic type of the circumstantial before making choices we verified that the absence of the corresponding input feature would not lead to an inappropriate default being selectedin summary this new application for syntactic realization highlights the need for supporting hybrid inputs of variable abstraction levelsthe implementation benefited from the bidirectional nature of fuf unification in the handling of hybrid constraints and required little change to the existing surge grammarwhile we used circumstantials to illustrate the issues we also handled revision for a variety of other categories in the same mannerevaluation of multidocument summarization is difficultfirst we have not yet found an existing collection of human written summaries of multiple documents which could serve as a gold standardwe have begun a joint project with the columbia journalism school which will provide such data in the futuresecond methods used for evaluation of extractionbased systems are not applicable for a system which involves text regenerationfinally the manual effort needed to develop test beds and to judge systern output is far more extensive than for single document summarization consider that a human judge would have to read many input articles to rate the validity of a summaryconsequently the evaluation that we performed to date is limitedwe performed a quantitative evaluation of our contentselection componentin order to prevent noisy input from the theme construction component from skewing the evaluation we manually constructed 26 themes each containing 4 sentences on averagefar more training data is needed to tune the generation portionwhile we have tuned the system to perform with minor errors on the manual set of themes we have created we need 
more robust input data from the theme construction component which is still under development to train the generator before beginning large scale testingone problem in improving output is determining how to recover from errors in tools used in early stages of the process such as the tagger and the parserthe evaluation task for the content selection stage is to measure how well we identify common phrases throughout multiple sentencesour algorithm was compared against intersections extracted by human judges from each theme producing 39 sentencelevel predicateargument structuresour intersection algorithm identified 29 predicateargument structures and was able to identify correctly 69 of the subjects 74 of the main verbs and 65 of the other constituents in our list of model predicateargument structureswe present system accuracy separately for each category since identifying a verb or a subject is in most cases more important than identifying other sentence constituentsin this paper we presented an implemented algorithm for multidocument summarization which moves beyond the sentence extraction paradigmassuming a set of similar sentences as input extracted from multiple documents on the same event our system identifies common phrases across sentences and uses language generation to reformulate them as a coherent summarythe use of generation to merge similar information is a new approach that significantly improves the quality of the resulting summaries reducing repetition and increasing fluencythe system we have developed serves as a point of departure for research in a variety of directionsfirst is the need to use learning techniques to identify paraphrasing patterns in corpus dataas a first pass we found paraphrasing rules manuallythis initial set might allow us to automatically identify more rules and increase the performance of our comparison algorithmfrom the generation side our main goal is to make the generated summary more concise primarily by combining clauses togetherwe will be investigating what factors influence the combination process and how they can be computed from input articlespart of combination will involve increasing coherence of the generated text through the use of connectives anaphora or lexical relations one interesting problem for future work is the question of how much context to include from a sentence from which an intersected phrase is drawncurrently we include no context but in some cases context is crucial even though it is not a part of the intersectionthis is the case for example when the context negates or denies the embedded subclause which matches a subclause in another negating contextin such cases the resulting summary is actually falsethis occurs just once in our test cases but it is a serious errorour work will characterize the types of contextual information that should be retained and will develop algorithms for the case of negation among otherswe would like to thank yael dahannetzer for her help with surgethis material is based upon work supported by the national science foundation under grant noiri961879any opinions findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation
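A rough sketch of the pairwise theme-intersection procedure described above: recursive comparison of DSYNT nodes, paraphrasing rules tried when nodes differ, and frequency filtering of the resulting intersections. The node layout and the rule interface are illustrative assumptions, not the authors' data structures.

def intersect_nodes(a, b, paraphrase_rules):
    """Recursively intersect two DSYNT-style dependency nodes.

    Each node is assumed to be a dict {"lemma": str, "children": [node, ...]}
    (the paper's DSYNT trees also carry grammatical features, omitted here).
    `paraphrase_rules` is a list of functions; each takes the two nodes and
    returns a rewritten (a, b) pair that can be compared further, or None if
    the rule does not apply.  Rules are assumed to make progress so that the
    recursion terminates.  Returns the common subtree, or None if the nodes
    cannot be matched.
    """
    if a["lemma"] != b["lemma"]:
        for rule in paraphrase_rules:
            rewritten = rule(a, b)
            if rewritten is not None:
                return intersect_nodes(rewritten[0], rewritten[1], paraphrase_rules)
        return None  # no applicable paraphrasing rule: empty intersection
    common_children = []
    for ca in a["children"]:
        for cb in b["children"]:
            sub = intersect_nodes(ca, cb, paraphrase_rules)
            if sub is not None:
                common_children.append(sub)
                break
    return {"lemma": a["lemma"], "children": common_children}


def theme_intersection(theme_roots, paraphrase_rules, threshold=2):
    """Pairwise-intersect all verb-rooted sentence trees of a theme and keep
    the intersections whose frequency reaches `threshold` (assumed value)."""
    from collections import Counter
    from itertools import combinations

    counts = Counter()
    for a, b in combinations(theme_roots, 2):
        common = intersect_nodes(a, b, paraphrase_rules)
        if common is not None:
            counts[repr(common)] += 1  # repr as a crude structural key
    return [key for key, n in counts.items() if n >= threshold]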
P99-1071
information fusion in the context of multi-document summarization. we present a method to automatically generate a concise summary by identifying and synthesizing similar elements across related text from a set of multiple documents. our approach is unique in its usage of language generation to reformulate the wording of the summary. we observe, for the task of multi-document summarization of news articles, that extraction may be inappropriate because it may produce summaries which are overly verbose or biased towards some sources.
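A sketch of the chronological ordering strategy described in the paper above: each intersected phrase is stamped with the earliest publication date in its theme (as an approximation of when the event occurred), phrases are sorted by the stamps, and a time marker is emitted wherever the gap between consecutive stamps exceeds a threshold. The 3-day threshold and the data layout are assumptions for illustration.

from datetime import timedelta

def order_phrases(phrases, gap_threshold=timedelta(days=3)):
    """Chronologically order intersected phrases and flag large time gaps.

    `phrases` is a list of (text, publication_dates) pairs, where
    publication_dates are datetime.date objects for the articles the
    phrase's theme was drawn from (assumed input format).
    """
    stamped = sorted((min(dates), text) for text, dates in phrases)
    output = []
    previous = None
    for stamp, text in stamped:
        if previous is not None and stamp - previous > gap_threshold:
            # A significant temporal distance between events: add a marker.
            output.append("[time marker: %s]" % stamp.isoformat())
        output.append(text)
        previous = stamp
    return output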
semeval2010 task 13 tempeval2 tempeval2 comprises evaluation tasks for time expressions events and temporal relations the latter of which was split up in four sub tasks motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction easier manually annotated data were the ultimate aim of temporal processing is the automatic identification of all temporal referring expressions events and temporal relations within a texthowever addressing this aim is beyond the scope of an evaluation challenge and a more modest approach is appropriatethe 2007 semeval task tempeval1 was an initial evaluation exercise based on three limited temporal ordering and anchoring tasks that were considered realistic both from the perspective of assembling resources for development and testing and from the perspective of developing systems capable of addressing the tasks1 tempeval2 is based on tempeval1 but is more elaborate in two respects it is a multilingual task and it consists of six subtasks rather than threein the rest of this paper we first introduce the data that we are dealing withwhich gets us in a position to present the list of task introduced by tempeval2 including some motivation as to why we feel that it is a good idea to split up temporal relation classification into sub taskswe proceed by shortly describing the data resources and their creation followed by the performance of the systems that participated in the tasksthe tempeval annotation language is a simplified version of timeml2 using three timeml tags timex3 event and tlinktimex3 tags the time expressions in the text and is identical to the timex3 tag in timemltimes can be expressed syntactically by adverbial or prepositional phrases as shown in the following examplethe two main attributes of the timex3 tag are type and val both shown in the example typequotdatequot valquot20041122quot for tempeval2 we distinguish four temporal types time date duration and set the val attribute assumes values according to an extension of the iso 8601 standard as enhanced by timex2each document has one special timex3 tag the document creation time which is interpreted as an interval that spans a whole daythe event tag is used to annotate those elements in a text that describe what is conventionally referred to as an eventualitysyntactically events are typically expressed as inflected verbs although event nominals such as crash in killed by the crash should also be annotated as eventsthe most salient event attributes encode tense aspect modality and polarity informationexamples of some of these features are shown below proceedings of the 5th international workshop on semantic evaluation acl 2010 pages 5762 uppsala sweden 1516 july 2010 c2010 association for computational linguistics the relation types for the timeml tlink tag form a finegrained set based on james allens interval logic for tempeval the set of labels was simplified to aid data preparation and to reduce the complexity of the taskwe use only six relation types including the three core relations before after and overlap the two less specific relations beforeoroverlap and overlaporafter for ambiguous cases and finally the relation vague for those cases where no particular relation can be establishedtemporal relations come in two broad flavours anchorings of events to time expressions and orderings of eventsevents can be anchored to an adjacent time expression as in examples 5 and 6 or to the document creation time as in 7the country defaultede2 on debts for 
that entire yearbefore in addition events can be ordered relative to other events as in the examples below the president spokee to the nation on tuesday on the financial crisishe had conferrede2 with his cabinet regarding policy the day beforeafterwe can now define the six tempeval tasks bdetermine the extent of the events in a text as defined by the timeml event tagin addition determine the value of the features class tense aspect polarity and modalityf determine the temporal relation between two events where one event syntactically dominates the other eventof these tasks c d and e were also defined for tempeval1however the syntactic locality restriction in task c was not present in tempeval1task participants could choose to either do all tasks focus on the time expression task focus on the event task or focus on the four temporal relation tasksin addition participants could choose one or more of the six languages for which we provided data chinese english french italian korean and spanishwe feel that welldefined tasks allow us to structure the workflow allowing us to create taskspecific guidelines and using taskspecific annotation tools to speed up annotationmore importantly each task can be evaluated in a fairly straightforward way contrary to for example the problems that pop up when evaluating two complex temporal graphs for the same documentin addition tasks can be ranked allowing systems to feed the results of one task as a feature into another tasksplitting the task into substask reduces the error rate in the manual annotation and that merging the different subtask into a unique layer as a postprocessing operation provides better and more reliable results than doing a complex task all at oncethe data for the five languages were prepared independently of each other and do not comprise a parallel corpushowever annotation specifications and guidelines for the five languages were developed in conjunction with one other in many cases based on version 121 of the timeml annotation guidelines for english3not all corpora contained data for all six taskstable 1 gives the size of the training set and the relation tasks that were includedall corpora include event and timex annotationthe french corpus contained a subcorpus with temporal relations but these relations were not split into the four tasks c through f annotation proceeded in two phases a dual annotation phase where two annotators annotate each document and an adjudication phase where a judge resolves disagreements between the annotatorsmost languages used bat the brandeis annotation tool a generic webbased annotation tool that is centered around the notion of annotation taskswith the task decomposition allowed by bat it is possible to structure the complex task of temporal annotation by splitting it up in as many sub tasks as seems usefulas 3seehttpwwwtimemlorg such bat was wellsuited for tempeval2 annotationwe now give a few more details on the english and spanish data skipping the other languages for reasons that will become obvious at the beginning of section 6the english data sets were based on timebank a handbuilt gold standard of annotated texts using the timeml markup scheme4 however all event annotation was reviewed to make sure that the annotation complied with the latest guidelines and all temporal relations were added according to the tempeval2 relation tasks using the specified relation typesthe data released for the tempeval2 spanish edition is a fragment of the spanish timebank currently under developmentits documents 
are originally from the spanish part of the ancora corpus data preparation followed the annotation guidelines created to deal with the specificities of event and timex expressions in spanish for the extents of events and time expressions precision recall and the f1measure are used as evaluation metrics using the following formulas where tp is the number of tokens that are part of an extent in both key and response fp is the number of tokens that are part of an extent in the response but not in the key and fn is the number of tokens that are part of an extent in the key but not in the responsefor attributes of events and time expressions and for relation types we use an even simpler metric the number of correct answers divided by the number of answerseight teams participated in tempeval2 submitting a grand total of eighteen systemssome of these systems only participated in one or two tasks while others participated in all tasksthe distribution over the six languages was very uneven sixteen systems for english two for spanish and one for english and spanishthe results for task a recognition and normalization of time expressions are given in tables 2 and 3the results for spanish are more uniform and generally higher than the results for englishfor spanish the fmeasure for timex3 extents ranges from 088 through 091 with an average of 089 for english the fmeasure ranges from 026 through 086 for an average of 078however due to the small sample size it is hard to make any generalizationsin both languages type detection clearly was a simpler task than determining the valuethe results for task b event recognition are given in tables 4 and 5both tables contain results for both spanish and english the first part of each table contains the results for spanish and the next part the results for englishthe column headers in table 5 are abbreviations for polarity mood modality tense aspect and class note that the english team chose to include modality whereas the spanish team used moodas with the time expressions results the sample size for spanish is small but note again the higher fmeasure for event extents in spanishtable 6 shows the results for all relation tasks with the spanish systems in the first two rows and the english systems in the last six rowsrecall that for spanish the training and test sets only contained data for tasks c and d interestingly the version of the tipsem systems that were applied to the spanish data did much better on task c compared to its english cousins but much worse on task d which is rather puzzlingsuch a difference in performance of the systems could be due to differences in annotation accurateness or it could be due to some particularities of how the two languages express certain temporal aspects or perhaps the one corpus is more homogeneous than the otheragain there are not enough data points but the issue deserves further attentionfor each task the test data provided the event pairs or eventtimex pairs with the relation type set to none and participating systems would replace that value with one of the six allowed relation typeshowever participating systems were allowed to not replace none and not be penalized for itthose cases would not be counted when compiling the scores in table 6table 7 lists those systems that did not classify all relation and the percentage of relations for each task that those systems did not classifya comparison with the tempeval1 results from semeval2007 may be of interestsix systems participated in the tempeval1 tasks compared to seven or 
eight systems for tempeval2table 8 lists the average scores and the standard deviations for all the tasks that tempeval1 and tempeval2 have in commonthe results are very similar except for task d but if we take a away the one outlier then the average becomes 078 with a standard deviation of 005however we had expected that for tempeval2 the systems would score better on task c since we added the restriction that the event and time expression had to be syntactically adjacentit is not clear why the results on task c have not improvedin this paper we described the tempeval2 task within the semeval 2010 competitionthis task involves identifying the temporal relations between events and temporal expressions in textusing a subset of timeml temporal relations we show how temporal relations and anchorings can be annotated and identified in six different languagesthe markup language adopted presents a descriptive framework with which to examine the temporal aspects of natural language information demonstrating in particular how tense and temporal information is encoded in specific sentences and how temporal relations are encoded between events and temporal expressionsthis work paves the way towards establishing a broad and open standard metadata markup language for natural language texts examining events temporal expressions and their orderingsone thing that would need to be addressed in a followup task is what the optimal number of tasks istempeval2 had six tasks spread out over six languagesthis brought about some logistical challenges that delayed data delivery and may have given rise to a situation where there was simply not enough time for many systems to properly prepareand clearly the shared task was not successful in attracting systems to four of the six languagesirina prodanofthe work on the spanish corpus was supported by a eu marie curie international reintegration grant work on the english corpus was supported under the nsfcri grant 0551615 towards a comprehensive linguistic annotation of language and the nsfint0753069 project sustainable interoperability for language technology funded by the national science foundationfinally thanks to all the participants for sticking with a task that was not always as flawless and timely as it could have been in a perfect world
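Written out, the token-level extent metrics described above (with tp, fp and fn counted over tokens as defined in the text) are the usual precision, recall and harmonic-mean F1:

\[
\mathrm{precision} \;=\; \frac{tp}{tp + fp},
\qquad
\mathrm{recall} \;=\; \frac{tp}{tp + fn},
\qquad
F_1 \;=\; \frac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
\]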
S10-1010
semeval-2010 task 13: tempeval-2. tempeval-2 comprises evaluation tasks for time expressions, events and temporal relations, the latter of which was split up into four subtasks, motivated by the notion that smaller subtasks would make both data preparation and temporal relation extraction easier. manually annotated data were provided for six languages: chinese, english, french, italian, korean and spanish. one of the tasks of this workshop is to determine the temporal relation between an event and a time expression in the same sentence.
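A minimal sketch of the scoring just described: token-level precision/recall/F1 for timex and event extents, and simple accuracy for attribute values and relation types (the number of correct answers divided by the number of answers). The token-identifier encoding is an assumption for illustration.

def extent_scores(key_tokens, response_tokens):
    """Token-level precision/recall/F1 for extents.

    `key_tokens` and `response_tokens` are sets of token identifiers
    (e.g. (document, sentence, token_index) triples -- an assumed encoding)
    that fall inside an extent in the gold key and in the system response.
    """
    tp = len(key_tokens & response_tokens)
    fp = len(response_tokens - key_tokens)
    fn = len(key_tokens - response_tokens)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1


def attribute_accuracy(answers):
    """Accuracy for attribute and relation-type answers: correct answers
    divided by the number of answers the system actually gave."""
    correct = sum(1 for guess, gold in answers if guess == gold)
    return correct / len(answers) if answers else 0.0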
semeval2010 task 14 word sense induction x26disambiguation this paper presents the description and evaluation framework of semeval2010 word sense induction disambiguation task as well as the evaluation results of 26 participating systems in this task participants were required to induce the senses of 100 target words using a training set and then disambiguate unseen instances of the same words using the induced senses systems answers were evaluated in an unsupervised manner by using two clustering evaluation measures and a supervised manner in a wsd task word senses are more beneficial than simple word forms for a variety of tasks including information retrieval machine translation and others however word senses are usually represented as a fixedlist of definitions of a manually constructed lexical databaseseveral deficiencies are caused by this representation eg lexical databases miss main domainspecific senses they often contain general definitions and suffer from the lack of explicit semantic or contextual links between concepts more importantly the definitions of handcrafted lexical databases often do not reflect the exact meaning of a target word in a given context unsupervised word sense induction aims to overcome these limitations of handconstructed lexicons by learning the senses of a target word directly from text without relying on any handcrafted resourcesthe primary aim of semeval2010 wsi task is to allow comparison of unsupervised word sense induction and disambiguation systemsthe target word dataset consists of 100 words 50 nouns and 50 verbsfor each target word participants were provided with a training set in order to learn the senses of that wordin the next step participating systems were asked to disambiguate unseen instances of the same words using their learned sensesthe answers of the systems were then sent to organisers for evaluationfigure 1 provides an overview of the taskas can be observed the task consisted of three separate phasesin the first phase training phase participating systems were provided with a training dataset that consisted of a set of target word instances participants were then asked to use this training dataset to induce the senses of the target wordno other resources were allowed with the exception of nlp components for morphology and syntaxin the second phase testing phase participating systems were provided with a testing dataset that consisted of a set of target word instances participants were then asked to tag each testing instance with the senses induced during the training phasein the third and final phase the tagged test instances were received by the organisers in order to evaluate the answers of the systems in a supervised and an unsupervised frameworktable 1 shows the total number of target word instances in the training and testing set as well as the average number of senses in the gold standardthe main difference of the semeval2010 as compared to the semeval2007 sense induction task is that the training and testing data are treated separately ie the testing data are only used for sense tagging while the training data are only used for sense inductiontreating the testing data as new unseen instances ensures a realistic evaluation that allows to evaluate the clustering models of each participating systemthe evaluation framework of semeval2010 wsi task considered two types of evaluationin the first one unsupervised evaluation systems answers were evaluated according to vmeasure and paired fscore neither of these measures were used in 
the semeval2007 wsi taskmanandhar klapaftis provide more details on the choice of this evaluation setting and its differences with the previous evaluationthe second type of evaluation supervised evaluation follows the supervised evaluation of the semeval2007 wsi task in this evaluation induced senses are mapped to gold standard senses using a mapping corpus and systems are then evaluated in a standard wsd taskthe target word dataset consisted of 100 words ie50 nouns and 50 verbsthe training dataset for each target noun or verb was created by following a webbased semiautomatic method similar to the method for the construction of topic signatures specifically for each wordnet sense of a target word we created a query of the following form the consisted of the target word stemthe consisted of a disjunctive set of word lemmas that were related to the target word sense for which the query was createdthe relations considered were wordnets hypernyms hyponyms synonyms meronyms and holonymseach query was manually checked by one of the organisers to remove ambiguous wordsthe following example shows the query created for the first1 and second2 wordnet sense of the target noun failurethe created queries were issued to yahoo search api3 and for each query a maximum of 1000 pages were downloadedfor each page we extracted fragments of text that occurred in html tags and contained the target word stemin the final stage each extracted fragment of text was postagged using the genia tagger and was only retained if the pos of the target word in the extracted text matched the pos of the target word in our datasetthe testing dataset consisted of instances of the same target words from the training datasetthis dataset is part of ontonotes we used the sensetagged dataset in which sentences containing target word instances are tagged with ontonotes sensesthe texts come from various news sources including cnn abc and othersfor the purposes of this section we provide an example in which a target word has 181 instances and 3 gs sensesa system has generated a clustering solution with 4 clusters covering all instancestable 3 shows the number of common instances between clusters and gs sensesthis section presents the measures of unsupervised evaluation ie vmeasure and paired fscore let w be a target word with n instances in the testing datasetlet k cjj 1 n be a set of automatically generated clusters grouping these instances and s gii 1 m the set of gold standard classes containing the desirable groupings of w instancesvmeasure assesses the quality of a clustering solution by explicitly measuring its homogeneity and its completenesshomogeneity refers to the degree that each cluster consists of data points primarily belonging to a single gs class while completeness refers to the degree that each gs class consists of data points primarily assigned to a single cluster let h be homogeneity and c completenessvmeasure is the harmonic mean of h and c iev m 2hc hc homogeneitythe homogeneity h of a clustering solution is defined in formula 1 where h is the conditional entropy of the class distribution given the proposed clustering and h is the class entropywhen h is 0 the solution is perfectly homogeneous because each cluster only contains data points that belong to a single classhowever in an imperfect situation h depends on the size of the dataset and the distribution of class sizeshence instead of taking the raw conditional entropy vmeasure normalises it by the maximum reduction in entropy the clustering information could 
provide iehwhen there is only a single class 0 any clustering would produce a perfectly homogeneous solutioncompletenesssymmetrically to homogeneity the completeness c of a clustering solution is defined in formula 4 where h is the conditional entropy of the cluster distribution given the class distribution and h is the clustering entropywhen h is 0 the solution is perfectly complete because all data points of a class belong to the same clusterfor the clustering example in table 3 homogeneity is equal to 0404 completeness is equal to 037 and vmeasure is equal to 0386in this evaluation the clustering problem is transformed into a classification problemfor each cluster ci we generate be the set of instance pairs that exist in the automatically induced clusters and f be the set of instance pairs that exist in the gold standardprecision can be defined as the number of common instance pairs between the two sets to the total number of pairs in the clustering solution while recall can be defined as the number of common instance pairs between the two sets to the total number of pairs in the gold standard finally precision and recall are combined to produce the harmonic mean stance pairs for c1 70 for c2 71 for c3 and 5 for c4 resulting in a total of 5505 instance 2 2 pairsin the same vein we can generate 36 total the gs classes contain 5820 instance pairsthere are 3435 common instance pairs hence precision is equal to 6239 recall is equal to 5909 and paired fscore is equal to 6069in this evaluation the testing dataset is split into a mapping and an evaluation corpusthe first one is used to map the automatically induced clusters to gs senses while the second is used to evaluate methods in a wsd settingthis evaluation follows the supervised evaluation of semeval2007 wsi task with the difference that the reported results are an average of 5 random splitsthis repeated random sampling was performed to avoid the problems of the semeval2007 wsi challenge in which different splits were providing different system rankingslet us consider the example in table 3 and assume that this matrix has been created by using the mapping corpustable 3 shows that c1 is more likely to be associated with g3 c2 is more likely to be associated with g2 c3 is more likely to be associated with g3 and c4 is more likely to be associated with g1this information can be utilised to map the clusters to gs sensesparticularly the matrix shown in table 3 is normalised to produce a matrix m in which each entry depicts the estimated conditional probability pgiven an instance i of tw from the evaluation corpus a row cluster vector ic is created in which each entry k corresponds to the score assigned to ck to be the winning cluster for instance ithe product of ic and m provides a row sense vector ig in which the highest scoring entry a denotes that ga is the winning sensefor example if we produce the row cluster vector c1 08c2 01 c3 01 c400and multiply it with the normalised matrix of table 3 then we would get a row sense vector in which g3 would be the winning sense with a score equal to 043in this section we present the results of the 26 systems along with two baselinesthe first baseline most frequent sense groups all testing instances of a target word into one clusterthe second baseline random randomly assigns an instance to one out of four clustersthe number of clusters of random was chosen to be roughly equal to the average number of senses in the gsthis baseline is executed five times and the results are averagedtable 4 shows the 
vmeasure performance of the 26 systems participating in the taskthe last column shows the number of induced clusters of each system in the test setthe mfs baseline has a vmeasure equal to 0 since by definition its completeness is 1 and homogeneity is 0all systems outperform this baseline apart from one whose vmeasure is equal to 0regarding the random baseline we observe that 17 perform better which indicates that they have learned useful information better than chancetable 4 also shows that vmeasure tends to favour systems producing a higher number of clusters than the number of gs senses although vmeasure does not increase monotonically with the number of clusters increasingfor that reason we introduced the second unsupervised evaluation measure that penalises systems when they produce a higher number of clusters or a lower number of clusters than the gs number of sensestable 5 shows the performance of systems using the second unsupervised evaluation measurein this evaluation we observe that most of the systems perform better than randomdespite that none of the systems outperform the mfs baselineit seems that systems generating a smaller number of clusters than the gs number of senses are biased towards the mfs hence they are not able to perform betteron the other hand systems generating a higher number of clusters are penalised by this measuresystems generating a number of clusters roughly the same as the gs tend to conflate the gs senses lot more than the mfstable 6 shows the results of this evaluation for a 8020 test set split ie80 for mapping and 20 for evaluationthe last columns shows the average number of gs senses identified by each system in the five splits of the evaluation datasetsoverall 14 systems outperform the mfs while 17 of them perform better than randomthe ranking of systems in nouns and verbs is differentfor instance the highest ranked system in nouns is uoy while in verbs duluthmixnarrowgapit seems that depending on the partofspeech of the target word different algorithms features and parameters tuning have different impactthe supervised evaluation changes the distribution of clusters by mapping each cluster to a weighted vector of senseshence it can potentially favour systems generating a high number of homogeneous clustersfor that reason we applied a second testing set split where 60 of the testing corpus was used for mapping and 40 for evaluationreducing the size of the mapping corpus allows us to observe whether the above statement is correct since systems with a high number of clusters would suffer from unreliable mappingtable 7 shows the results of the second supervised evaluationthe ranking of participants did not change significantly ie we observe only different rankings among systems belonging to the same participantdespite that table 7 also shows that the reduction of the mapping corpus has a different impact on systems generating a larger number of clusters than the gs number of sensesfor instance uoy that generates 1154 clusters outperformed the mfs by 377 in the 8020 split and by 371 in the 6040 splitthe reduction of the mapping corpus had a minimal impact on its performancein contrast ksu kdd that generates 175 clusters was below the mfs by 649 in the 8020 split and by 783 in the 6040 splitthe reduction of the mapping corpus had a larger impact in this casethis result indicates that the performance in this evaluation also depends on the distribution of instances within the clusterssystems generating a skewed distribution in which a small number of 
homogeneous clusters tag the majority of instances and a larger number of clusters tag only a few instances are likely to have a better performance than systems that produce a more uniform distributionwe presented the description evaluation framework and assessment of systems participating in the semeval2010 sense induction taskthe evaluation has shown that the current stateoftheart lacks unbiased measures that objectively evaluate clusteringthe results of systems have shown that their performance in the unsupervised and supervised evaluation settings depends on cluster granularity along with the distribution of instances within the clustersour future work will focus on the assessment of sense induction on a taskoriented basis as well as on clustering evaluationwe gratefully acknowledge the support of the eu fp7 indect project grant no218086 the national science foundation grant nsf0715078 consistent criteria for word sense disambiguation and the gale program of the defense advanced research projects agency contract nohr001106c0022 a subcontract from the bbnagile team
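The two unsupervised measures used in the task (v-measure as the harmonic mean of homogeneity and completeness, and the paired f-score over instance pairs) can be sketched roughly as below. This is a minimal sketch: the contingency table is invented and only mimics the shape of the running example (4 clusters, 3 gold senses, 181 instances); it is not the paper's Table 3, so the printed scores will not reproduce the 0.386 and 60.69 figures quoted above.

```python
import math

def entropy(counts):
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def v_measure(table):
    """table[k][s] = number of instances of gold sense s placed in cluster k."""
    n = sum(sum(row) for row in table)
    cluster_sizes = [sum(row) for row in table]
    class_sizes = [sum(col) for col in zip(*table)]
    h_class, h_cluster = entropy(class_sizes), entropy(cluster_sizes)
    # H(C|K): conditional entropy of the class distribution given the clustering
    h_c_given_k = -sum((table[k][s] / n) * math.log(table[k][s] / cluster_sizes[k])
                       for k in range(len(table)) for s in range(len(table[k]))
                       if table[k][s] > 0)
    # H(K|C): conditional entropy of the cluster distribution given the classes
    h_k_given_c = -sum((table[k][s] / n) * math.log(table[k][s] / class_sizes[s])
                       for k in range(len(table)) for s in range(len(table[k]))
                       if table[k][s] > 0)
    h = 1.0 if h_class == 0 else 1.0 - h_c_given_k / h_class      # homogeneity
    c = 1.0 if h_cluster == 0 else 1.0 - h_k_given_c / h_cluster  # completeness
    return 0.0 if h + c == 0 else 2 * h * c / (h + c)

def paired_fscore(table):
    """Precision/recall over instance pairs shared by clusters and gold classes."""
    comb2 = lambda m: m * (m - 1) // 2
    pairs_in_clusters = sum(comb2(sum(row)) for row in table)
    pairs_in_classes = sum(comb2(sum(col)) for col in zip(*table))
    common = sum(comb2(cell) for row in table for cell in row)
    p, r = common / pairs_in_clusters, common / pairs_in_classes
    return 2 * p * r / (p + r)

# Hypothetical 4-cluster x 3-sense contingency table over 181 instances.
table = [[10, 20, 40], [1, 60, 9], [2, 3, 25], [5, 3, 3]]
print(v_measure(table), paired_fscore(table))
```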
S10-1011
semeval2010 task 14 word sense induction and disambiguation this paper presents the description and evaluation framework of semeval2010 word sense induction disambiguation task as well as the evaluation results of 26 participating systems in this task participants were required to induce the senses of 100 target words using a training set and then disambiguate unseen instances of the same words using the induced senses system answers were evaluated in an unsupervised manner by using two clustering evaluation measures and a supervised manner in a wsd task in constructing the dataset we use wordnet to first randomly select one sense of the word and then construct a set of words in relation to the first word chosen synset
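A minimal sketch of the supervised mapping step described for this task: the cluster/sense co-occurrence counts from the mapping corpus are row-normalised into estimated P(sense | cluster), and each test instance's cluster-score vector is multiplied by that matrix to pick a winning gold sense. The counts below are invented for illustration (they are not the task's Table 3, so the worked example in the text that yields 0.43 for g3 cannot be reproduced exactly here).

```python
def normalise_rows(counts):
    """counts[k][s]: co-occurrences of cluster k and gold sense s in the mapping
    corpus. Returns a matrix whose rows sum to 1, i.e. estimated P(sense | cluster)."""
    return [[c / sum(row) if sum(row) else 0.0 for c in row] for row in counts]

def map_instance(cluster_scores, M):
    """cluster_scores: the system's score for each cluster on one test instance.
    Returns the winning gold-sense index and the full sense-score vector."""
    n_senses = len(M[0])
    sense_vector = [sum(cluster_scores[k] * M[k][s] for k in range(len(M)))
                    for s in range(n_senses)]
    winner = max(range(n_senses), key=lambda s: sense_vector[s])
    return winner, sense_vector

# Hypothetical mapping counts for 4 clusters x 3 senses.
counts = [[5, 10, 50], [2, 40, 8], [1, 2, 30], [20, 3, 2]]
M = normalise_rows(counts)
print(map_instance([0.8, 0.1, 0.1, 0.0], M))
```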
semeval2012 task 8 crosslingual textual entailment for content synchronization this paper presents the first round of the on textual entailment for organized within semeval2012 the task was designed to promote research on semantic inference over texts written in different languages targeting at the same time a real application scenario participants were presented with datasets for different language pairs where multidirectional entailment relations had to be identified we report on the training and test data used for evaluation the process of their creation the participating systems the approaches adopted and the results achieved the crosslingual textual entailment task addresses textual entailment recognition under the new dimension of crosslinguality and within the new challenging application scenario of content synchronizationcrosslinguality represents a dimension of the te recognition problem that has been so far only partially investigatedthe great potential for integrating monolingual te recognition components into nlp architectures has been reported in several areas including question answering information retrieval information extraction and document summarizationhowever mainly due to the absence of crosslingual textual entailment recognitioncomponents similar improvements have not been achieved yet in any crosslingual applicationthe clte task aims at prompting research to fill this gapalong such direction research can now benefit from recent advances in other fields especially machine translation and the availability of i large amounts of parallel and comparable corpora in many languages ii open source software to compute wordalignments from parallel corpora and iii open source software to set up mt systemswe believe that all these resources can positively contribute to develop inference mechanisms for multilingual datacontent synchronization represents a challenging application scenario to test the capabilities of advanced nlp systemsgiven two documents about the same topic written in different languages the task consists of automatically detecting and resolving differences in the information they provide in order to produce aligned mutually enriched versions of the two documentstowards this objective a crucial requirement is to identify the information in one page that is either equivalent or novel with respect to the content of the otherthe task can be naturally cast as an entailment recognition problem where bidirectional and unidirectional entailment judgments for two text fragments are respectively mapped into judgments about semantic equivalence and noveltyalternatively the task can be seen as a machine translation evaluation problem where judgments about semantic equivalence and novelty depend on the possibility to fully or partially translate a text fragment into the otherthe recent advances on monolingual te on the one hand and the methodologies used in statistical machine translation on the other offer promising solutions to approach the clte taskin line with a number of systems that model the rte task as a similarity problem the standard sentence and word alignment programs used in smt offer a strong baseline for cltehowever although representing a solid starting point to approach the problem similaritybased techniques are just approximations open to significant improvements coming from semantic inference at the multilingual level taken in isolation similaritybased techniques clearly fall short of providing an effective solution to the problem of assigning directions 
to the entailment relations thanks to the contiguity between clte te and smt the proposed task provides an interesting scenario to approach the issues outlined above from different perspectives and large room for mutual improvementgiven a pair of topically related text fragments in different languages the clte task consists of automatically annotating it with one of the following entailment judgments in this task both t1 and t2 are assumed to be true statementsalthough contradiction is relevant from an applicationoriented perspective contradictory pairs are not present in the dataset created for the first round of the taskfour clte corpora have been created for the following language combinations spanishenglish italianenglish frenchenglish germanenglish the datasets are released in the xml format shown in figure 1the dataset was created following the crowdsourcing methodology proposed in which consists of the following steps only the pairs where the difference between the number of words in t1 and t2 was below a fixed threshold were retained1 the final result is a monolingual english dataset annotated with multidirectional entailment judgments which are well distributed over length different values ranging from 0 to 9 to ensure the good quality of the datasets all the collected pairs were manually checked and corrected when necessaryonly pairs with agreement between two expert annotators were retainedthe final result is a multilingual parallel entailment corpus where t1s are in 5 different languages and t2s are in englishit is worth mentioning that the monolingual english corpus a byproduct of our data collection methodology will be publicly released as a further contribution to the research community2 each dataset consists of 1000 pairs balanced across the four entailment judgments for each language combination the distribution of the four entailment judgments according to length different is shown in figure 2vertical bars represent for each length different value the proportion of pairs belonging to the four entailment classesas can be seen the length different constraint applied to the length difference in the monolingual english pairs is substantially reflected in the crosslingual datasets for all language combinationsin fact as shown in table 1 the majority of the pairs is always included in the same length different range and within this range the distribution of the four classes is substantially uniformour assumption is that such data distribution makes entailment judgments based on mere surface features such as sentence length ineffective thus encouraging the development of alternative deeper processing strategiesevaluation results have been automatically computed by comparing the entailment judgments returned by each system with those manually assigned by human annotatorsthe metric used for systems ranking is accuracy over the whole test set ie the number of correct judgments out of the total number of judgments in the test setadditionally we calculated precision recall and f1 measures for each of the four entailment judgment categories taken separatelythese scores aim at giving participants the possibility to gain clearer insights into their systems behavior on the entailment phenomena relevant to the taskfor each language combination two baselines considering the length difference between t1 and t2 have been calculated judgments returned by the two classifiers are composed into a single multidirectional judgment both the baselines have been calculated with the libsvm package 
using a linear kernel with default parametersbaseline results are reported in table 2although the four clte datasets are derived from the same monolingual enen corpus baseline results present slight differences due to the effect of translation into different languagesparticipants were allowed to submit up to five runs for each language combinationa total of 17 teams registered to participate in the task and downloaded the training setout of them 12 downloaded the test set and 10 submitted valid runseight teams produced submissions for all the language combinations while two teams participated only in the spen taskin total 92 runs have been submitted and evaluated despite the novelty and the difficulty of the problem these numbers demonstrate the interest raised by the task and the overall success of the initiativeaccuracy results are reported in table 3as can be seen from the table overall accuracy scores are quite different across language pairs with the highest result on spen which is considerably higher than the highest score on deen this might be due to the fact that most of the participating systems rely on a pivoting approach that addresses clte by automatically translating t1 in the same language of t2 regarding the deen dataset pivoting methods might be penalized by the lower quality of mt output when german t1s are translated into englishthe comparison with baselines results leads to interesting observationsfirst of all while all systems significantly outperform the lowest 1class baseline both other baselines are surprisingly hard to beatthis shows that despite the effort in keeping the distribution of the entailment classes uniform across different length different values eliminating the correlation between sentences length and correct entailment decisions is difficultas a consequence although disregarding semantic aspects of the problem features considering such information are quite effectivein general systems performed better on the spen dataset with most results above the binary baseline and half of the systems above the multiclass baselinefor the other language pairs the results are lower with only 3 out of 8 participants above the two baselines in all datasetsaverage results reflect this situation the average scores are always above the binary baseline whereas only the spen average result is higher than the multiclass baselineto better understand the behaviour of each system table 4 provides separate precision recall and f1 scores for each entailment judgment calculated over the best runs of each participating teamoverall the results suggest that the bidirectional and no entailment categories are more problematic than forward and backward judgmentsfor most datasets in fact systems performance on bidirectional and no entailment is significantly lower typically on recallexcept for the deen dataset also average f1 results on these judgments are lowerthis might be due to the fact that for all datasets the vast majority of bidirectional and no entailment judgments falls in a length different range where the distribution of the four classes is more uniform similar reasons can justify the fact that backward entailment results are consistently higher on all datasetscompared with forward entailment these judgments are in fact less scattered across the entire length different range a rough classification of the approaches adopted by participants can be made along two orthogonal dimensions namely concerning the former dimension most of the systems adopted a pivoting approach relying 
on google translate microsoft bing translator or a combination of google bing and other mt systems to produce english t2sregarding the latter dimension the compositional approach was preferred to multiclass classification the best performing system relies on a hybrid approach and a compositional strategybesides the frequent recourse to mt tools other resources used by participants include online dictionaries for the translation of single words word alignment tools partofspeech taggers np chunkers named entity recognizers stemmers stopwords lists and wikipedia as an external multilingual corpusmore in detail buap pivoting compositional adopts a pivoting method based on translating t1 into the language of t2 and vice versa similarity measures and rules are respectively used to annotate the two resulting sentence pairs with entailment judgments and combine them in a single decisionceli cross lingual compositional multiclass uses dictionaries for word matching and a multilingual corpus extracted from wikipedia for term weightingword overlap and similarity measures are then used in different approaches to the taskin one run they are used to train a classifier that assigns separate entailment judgments for each directionsuch judgments are finally composed into a single one for each pairin the other runs the same features are used for multiclass classificationdirrelcond3 cross lingual compositional uses bilingual dictionaries to translate content words into englishthen entailment decisions are taken combining directional relatedness scores between words in both directions fbk cross lingual compositional multiclass uses crosslingual matching features extracted from lexical phrase tables semantic phrase tables and dependency relations the features are used for multiclass and binary classification using svmshdu hybrid compositional uses a combination of binary classifiers for each entailment directionthe classifiers use both monolingual alignment features based on meteor alignments and crosslingual alignment features based on giza ict pivoting compositional adopts a pivoting method and the open source edits system to calculate similarity scores between monolingual english pairsseparate unidirectional entailment judgments obtained from binary classifier are combined to return one of the four valid clte judgmentsjucsenlp pivoting compositional uses microsoft bing translator7 to produce monolingual english pairsseparate lexical mapping scores are calculated considering different types of information and similarity metricsbinary entailment decisions are then heuristically combined into single decisionssagan pivoting multiclass adopts a pivoting method using google translate and trains a monolingual system based on a svm multiclass classifiera clte corpus derived from the rte3 dataset is also used as a source of additional training materialsoftcard pivoting multiclass after automatic translation with google translate uses svms to learn entailment decisions based on information about the cardinality of t1 t2 their intersection and their unioncardinalities are computed in different ways considering tokens in t1 and t2 their idf and their similarity ualacant pivoting multiclass exploits translations obtained from google translate microsoft bing translator and the apertium opensource mt platform 8 then a multiclass svm classifier is used to take entailment decisions using information about overlapping subsegments as featuresdespite the novelty of the problem and the difficulty to capture multidirectional 
entailment relations across languages the first round of the crosslingual textual entailment for content synchronization task organized within semeval2012 was a successful experiencethis year a new interesting challenge has been proposed a benchmark for four language combinations has been released baseline results have been proposed for comparison and a monolingual english dataset has been produced as a byproduct which can be useful for monolingual te researchthe interest shown by participants was encouraging 10 teams submitted a total of 92 runs for all the language pairs proposedoverall the results achieved on all datasets are encouraging with best systems significantly outperforming the proposed baselinesit is worth observing that the nature of the task which lies between semantics and machine translation led to the participation of teams coming from both these communities showing interesting opportunities for integration and mutual improvementthe proposed approaches reflect this situation with teams traditionally working on mt now dealing with entailment and teams traditionally participating in the rte challenges now dealing with crosslingual alignment techniquesour ambition for the future editions of the clte task is to further consolidate the bridge between the semantics and mt communitiesthis work has been partially supported by the ecfunded project cosyne the authors would also like to acknowledge giovanni moretti from celct for evaluation scripts and technical assistance and the volunteer translators that contributed to the creation of the dataset
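Most participants followed the compositional strategy described above: two unidirectional entailment decisions (T1→T2 and T2→T1), typically produced by binary classifiers over similarity or alignment features, are combined into a single multidirectional judgment. A minimal sketch of that combination step, with judgment labels as named in the task description:

```python
def compose_judgments(t1_entails_t2: bool, t2_entails_t1: bool) -> str:
    """Combine two unidirectional entailment decisions into one CLTE judgment."""
    if t1_entails_t2 and t2_entails_t1:
        return "bidirectional"   # T1 and T2 are semantically equivalent
    if t1_entails_t2:
        return "forward"         # T1 entails T2 but not vice versa
    if t2_entails_t1:
        return "backward"        # T2 entails T1 but not vice versa
    return "no_entailment"

# e.g. a pair where only the T2 -> T1 classifier fires:
print(compose_judgments(False, True))   # -> "backward"
```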
S12-1053
semeval2012 task 8 crosslingual textual entailment for content synchronization this paper presents the first round of the task on crosslingual textual entailment for content synchronization organized within semeval2012 the task was designed to promote research on semantic inference over texts written in different languages targeting at the same time a real application scenario participants were presented with datasets for different language pairs where multidirectional entailment relations had to be identified we report on the training and test data used for evaluation the process of their creation the participating systems the approaches adopted and the results achieved
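The two length-based baselines mentioned for this task can be approximated as follows. This is only a sketch under the assumption that a single feature, the token-length difference between T1 and T2, is fed to linear-kernel SVMs; scikit-learn's SVC is used here as a stand-in for the libsvm package named in the paper, and the helper names are invented.

```python
from sklearn.svm import SVC

def length_features(pairs):
    """One feature per (T1, T2) pair: their token-length difference."""
    return [[len(t1.split()) - len(t2.split())] for t1, t2 in pairs]

def train_multiclass_baseline(train_pairs, train_labels):
    """Single multiclass SVM over the length-difference feature."""
    clf = SVC(kernel="linear")   # default parameters, as in the paper
    clf.fit(length_features(train_pairs), train_labels)
    return clf

def train_binary_baseline(train_pairs, train_labels):
    """Two binary SVMs, one per entailment direction, whose outputs are later
    composed into a single multidirectional judgment."""
    X = length_features(train_pairs)
    fwd = SVC(kernel="linear").fit(X, [l in ("forward", "bidirectional") for l in train_labels])
    bwd = SVC(kernel="linear").fit(X, [l in ("backward", "bidirectional") for l in train_labels])
    return fwd, bwd

def accuracy(predicted, gold):
    """Official ranking metric: correct judgments over all judgments."""
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)
```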
centroidbased summarization of multiple documents sentence extraction utilitybased evaluation and user studies we present a multidocument summarizer called mead which generates summaries using cluster centroids produced by a topic detection and tracking system we also describe two new techniques based on sentence utility and subsumption which we have applied to the evaluation of both single and multiple document summaries finally we describe two user studies that test our models of multidocument summarization on october 12 1999 a relatively small number of news sources mentioned in passing that pakistani defense minister gen pervaiz musharraf was away visiting sri lankahowever all world agencies would be actively reporting on the major events that were to happen in pakistan in the following days prime minister nawaz sharif announced that in gen musharrafs absence the defense minister had been sacked and replaced by general zia addinlarge numbers of messages from various sources started to inundate the newswire about the army occupation of the capital the prime minister ouster and his subsequent placement under house arrest gen musharraf s return to his country his ascendancy to power and the imposition of military control over pakistanthe paragraph above summarizes a large amount of news from different sourceswhile it was not automatically generated one can imagine the use of such automatically generated summariesin this paper we will describe how multidocument summaries are built and evaluatedthe process of identifying all articles on an emerging event is called topic detection and tracking a large body of research in tdt has been created over the past two years allan et al 98we will present an extension of our own research on tdt radev et al 1999 to cover summarization of multidocument clustersour entry in the official tdt evaluation called cidr radev et al 1999 uses modified tfidf to produce clusters of news articles on the same eventwe developed a new technique for multidocument summarization called centroidbased summarization which uses as input the centroids of the clusters produced by c1dr to identify which sentences are central to the topic of the cluster rather than the individual articleswe have implemented cbs in a system named meadthe main contributions of this paper are the development of a centroidbased multidocument summarizer the use of clusterbased sentence utility and crosssentence informational subsumption for evaluation of single and multidocument summaries two user studies that support our findings and an evaluation of meadan event cluster produced by a tdt system consists of chronologically ordered news articles from multiple sources which describe an event as it develops over timeevent clusters range from2 to 10 documents from which mead produces summaries in the form of sentence extractsa key feature of mead is its use of cluster centroids which consist of words which are central not only to one article in a cluster but to all the articlesmead is significantly different from previous work on multidocument summarization radev mckeown 1998 carbonell and goldstein 1998 mani and bloedorn 1999 mckeown et al 1999 which use techniques such as graph matching maximal marginal relevance or language generationfinally evaluation of multidocument summaries is a difficult problemthere is not yet a widely accepted evaluation schemewe propose a utilitybased evaluation scheme which can be used to evaluate both singledocument and multidocument summariesclusterbased sentence utility 
refers to the degree of relevance of a particular sentence to the general topic of the entire cluster a utility of 0 means that the sentence is not relevant to the cluster and a 10 marks an essential sentencea related notion to cbsu is crosssentence informational subsumption which reflects that certain sentences repeat some of the information present in other sentences and may therefore be omitted during summarizationif the information content of sentence a is contained within sentence b then a becomes informationally redundant and the content of b is said to subsume that of a in the example below subsumes because the crucial information in is also included in which presents additional content quotthe courtquot quotlast augustquot and quotsentenced him to lifequotthe cluster shown in figure i shows subsumption links across two articles about recent terrorist activities in algeria an arrow from sentence a to sentence b indicates that the information content of a is subsumed by the information content of b sentences 2 4 and 5 from the first article repeat the information from sentence the full text of these articles is shown in the appendix2 in the second article while sentence 9 from the former article is later repeated in sentences 3 and 4 of the latter articlesentences subsuming each other are said to belong to the same equivalence classan equivalence class may contain more than two sentences within the same or different articlesin the following example although sentences and are not exact paraphrases of each other they can be substituted for each other without crucial loss of information and therefore belong to the same equivalence class ie i c 1 and i c iin the user study section we will take a look at the way humans perceive csis and equivalence classthursday that 18 decapitated bodies have been found by the authoritiesmaximal marginal relevance is a technique similar to csis and was introduced in carbonell and goldstein 1998in that paper mmr is used to produce summaries of single documents that avoid redundancythe authors mention that their preliminary results indicate that multiple documents on the same topic also contain redundancy but they fall short of using mmr for multidocument summarizationtheir metric is used as an enhancement to a querybased summary whereas csis is designed for queryindependent summarieswe now describe the corpus used for the evaluation of mead and later in this section we present mead algorithmafp upi afp upi ap afp ap afp upi ap pri voa ap nyt algerian terrorists threaten belgium the fbi puts osama bin laden on the most wanted list explosion in a moscow apartment building explosion in a moscow apartment building general strike in denmark toxic spill in spain for our experiments we prepared a snail corpus consisting of a total of 558 sentences in 27 documents organized in 6 clusters all extracted by cidrfour of the clusters are from usenet newsgroupsthe remaining two clusters are from the official tdt corpus2among the factors for our selection of clusters are coverage of as many news sources as possible coverage of both tdt and nontdt data coverage of different types of news and diversity in cluster sizes the test corpus is used in the evaluation in such a way that each cluster is summarized at 9 different compression rates thus giving nine times as many sample points as one would expect from the size of the corpustable 2 shows a sample centroid produced by cidr radev et al 1999 from cluster athe quotcountquot column indicates the average number of 
occurrences of a word across the entire clusterthe idf values were computed from the tdt corpusa centroid in this context is a pseudodocument which consists of words which have countidf scores above a predefined threshold in the documents that constitute the clustercidr computes countidf in an iterative fashion updating its values as more articles are inserted in a given clusterwe hypothesize that sentences that contain the words from the centroid are more indicative of the topic of the cluster2 the selection of cluster e is due to an idea by the participants in the novelty detection workshop led by james allanmead decides which sentences to include in the extract by ranking them according to a set of parametersthe input to mead is a cluster of articles and a value for the compression rate r for example if the cluster contains a total of 50 sentences and the value of r is 20 the output of mead will contain 10 sentencessentences are laid in the same order as they appear in the original documents with documents ordered chronologicallywe benefit here from the time stamps associated with each document where i is the sentence number within the clusterinput cluster of d documents 3 with n sentences 3 note that currently mead requires that sentence boundaries be markedthe system performance s is one of the numbers6 described in the previous subsectionfor 13 the value of s is 0627 for 14 s is 0833 which is between r and jin the example only two of the six possible sentence selections 14 and 24 are between r and jthree others 13 are below r while 12 is better than jto restrict system performance between 0 and i we use a mapping between r and j in such a way that when s r the normalized system performance d is equal to 0 and when s j d becomes ithe corresponding linear function7 is figure 2 shows the mapping between system performance s on the left and normalized system performance d on the right a small part of the 0i segment is mapped to the entire 01 segment therefore the difference between two systems performing at eg 0785 and 0812 can be significantexample the normalized system performance for the 14 system then becomes or 0927since the score is close to 1 the 14 system is almost as good as the interjudge agreementthe normalized system performance for the 24 system is similarly 0732 or 0963of the two systems 24 outperforms 14to use csis in the evaluation we introduce a new parameter e which tells us how much to penalize a system that includes redundant informationin the example from table 7 a summarizer with are 20 needs to pick 2 out of 12 sentencessuppose that it picks 11 and 21 if e 1 it should get full credit of 20 utility pointsif e 0 it should get no credit for the second sentence as it is subsumed by the first sentenceby varying e between 0 and i the evaluation may favor or ignore subsumptionwe ran two user experimentsfirst six judges were each given six clusters and asked to ascribe an importance score from 0 to 10 to each sentence within a particular clusternext five judges had to indicate for each sentence which other sentence if any it subsumes 8using the techniques described in section 0 we computed the crossjudge agreement for the 6 clusters for various are overall interjudge agreement was quite highan interesting drop in interjudge agreement occurs for 2030 summariesthe drop most likely results from the fact that 10 summaries are typically easier to produce because the few most important sentences in a cluster are easier to identify8 we should note that both annotation tasks were 
quite time consuming and frustrating for the users who took anywhere from 6 to 10 hours each to complete their partin the second experiment we asked users to indicate all cases when within a cluster a sentence is subsumed by anotherthe judges data on the first seven sentences of cluster a are shown in table 8the quotf scorequot indicates the number of judges who agree on the most frequent subsumptionthe t scorequot indicates that the consensus was no subsumptionwe found relatively low interjudge agreement on the cases in which at least one judge indicated evidence of subsumptionoverall out of 558 sentences there was full agreement on 292 sentences unfortunately h 291 of these 292 sentences the agreement was that there is no subsumptionwhen the bar of agreement was lowered to four judges 23 out of 406 agreements are on sentences with subsumptionoverall out of 80 in conclusion we found very high interjudge agreement in the first experiment and moderately low agreement in the second experimentwe concede that the time necessary to do a proper job at the second task is partly to blamesince the baseline of random sentence selection is already included in the evaluation formulae we used the leadbased method sentences from each cluster where c number of clusters as the baseline to evaluate our systemin table 10 we show the normalized performance of mead for the six clusters at nine compression ratesmead performed better than lead in 29 out of 54 casesnote that for the largest cluster cluster d mead outperformed lead at all compression rates showed how mead sentence scoring weights can be modified to produce summaries significantly better than the alternativeswe also looked at a property of multidocument clusters namely crosssentence information subsumption and showed how it can be used in evaluating multidocument summariesall our findings are backed by the analysis of two experiments that we performed with human subjectswe found that the interjudge agreement on sentence utility is very high while the agreement on crosssentence subsumption is moderately low ahhough promisingin the future we would like to test our multidocument summarizer on a larger corpus and improve the summarization algorithmwe would also like to explore how the techniques we proposed here can be used for multiligual multidocument summarizationwe then modified the mead algorithm to include lead information as well as centroids in this case meadlead performed better than the lead baseline in 41 caseswe are in the process of running experiments with other score formulasit may seem that utilitybased evaluation requires too much effort and is prone to low interjudge agreementwe believe that our results show that interjudge agreement is quite highas far as the amount of effort required we believe that the larger effort on the part of the judges is more or less compensated with the ability to evaluate summaries offline and at variable compression ratesalternative evaluations do not make such evaluations possiblewe should concede that a utilitybased approach is probably not feasible for querybased summaries as these are typically done only onlinewe discussed the possibility of a sentence contributing negatively to the utility of another sentence due to redundancywe should also point out that sentences can also reinforce one another positivelyfor example if a sentence mentioning a new entity is included in a summary one might also want to include a sentence that puts the entity in the context of the reit of the article or clusterwe 
presented a new multidocument summarizer meadit summarizes clusters of news articles automatically grouped by a topic detection systemmead uses information from the centroids of the clusters to select sentences that are most likely to be relevant to the cluster topicwe used a new utilitybased technique cbsu for the evaluation of mead and of summarizers in generalwe found that mead produces summaries that are similar in quality to the ones produced by humanswe also compared mead performance to an alternative method multidocument lead andwe would like to thank inderjeet mani wlodek zadrozny rie kubota ando joyce chai and nanda kambhatla for their valuable feedbackwe would also like to thank carl sable minyen kan dave evans adam budzikowski and veronika horvath for their help with the evaluation
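Two pieces of the evaluation machinery described above can be sketched compactly: the utility score of an extract, optionally discounting a selected sentence whose information is subsumed by another selected sentence (controlled by E), and the normalisation of raw system performance S onto [0, 1] between random performance R and inter-judge agreement J. This is a minimal reading of the scheme; the R and J values in the demo call are invented, so its output will not reproduce the 0.927 figure quoted in the paper.

```python
def utility_score(selected, utilities, subsumed_by=None, E=1.0):
    """Sum of judge-assigned utilities over the selected sentence ids.
    A selected sentence that is subsumed by another selected sentence only
    contributes a fraction E of its utility (E=1: ignore subsumption,
    E=0: no credit for redundant sentences)."""
    subsumed_by = subsumed_by or {}
    chosen = set(selected)
    return sum((E if subsumed_by.get(s) in chosen else 1.0) * utilities[s]
               for s in chosen)

def normalized_performance(S, R, J):
    """Map raw performance S to D in [0, 1]: D = 0 when S equals the random
    baseline R, and D = 1 when S reaches the inter-judge agreement J."""
    return (S - R) / (J - R)

# Hypothetical R and J (the paper's own values for its example are not given here):
print(normalized_performance(S=0.833, R=0.5, J=0.86))
```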
W00-0403
centroidbased summarization of multiple documents sentence extraction utilitybased evaluation and user studies we present a multidocument summarizer called mead which generates summaries using cluster centroids produced by a topic detection and tracking system we also describe two new techniques based on sentence utility and subsumption which we have applied to the evaluation of both single and multiple document summaries finally we describe two user studies that test our models of multidocument summarization our centroidbased extractive summarizer scores sentences based on sentencelevel and intersentence features which indicate the quality of the sentence as a summary sentence
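A rough sketch of the centroid-based selection idea: the cluster centroid keeps words whose average count × IDF across the cluster exceeds a threshold, and sentences are ranked by how much centroid mass they contain. MEAD's full scoring uses further parameters not spelled out in the excerpt, so this illustrates only the centroid component; documents are assumed to be lists of tokenized sentences and idf a precomputed dictionary.

```python
from collections import Counter

def build_centroid(cluster_docs, idf, threshold):
    """Average count * idf of each word across the cluster; keep words above threshold."""
    n_docs = len(cluster_docs)
    counts = Counter(w for doc in cluster_docs for sent in doc for w in sent)
    return {w: (c / n_docs) * idf.get(w, 0.0)
            for w, c in counts.items()
            if (c / n_docs) * idf.get(w, 0.0) > threshold}

def centroid_score(sentence, centroid):
    """Score a sentence by the summed centroid values of the words it contains."""
    return sum(centroid.get(w, 0.0) for w in sentence)

def extract(cluster_docs, idf, threshold, compression):
    """Pick the top r% of sentences by centroid score, then restore document order."""
    centroid = build_centroid(cluster_docs, idf, threshold)
    sents = [(d, i, s) for d, doc in enumerate(cluster_docs) for i, s in enumerate(doc)]
    k = max(1, round(compression * len(sents)))
    chosen = sorted(sents, key=lambda x: centroid_score(x[2], centroid), reverse=True)[:k]
    return [s for d, i, s in sorted(chosen, key=lambda x: (x[0], x[1]))]
```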
knowledgefree induction of morphology using latent semantic analysis morphology induction is a subproblem of important tasks like automatic learning of machinereadable dictionaries and grammar induction previous morphology induction approaches have relied solely on statistics of hypothesized stems and affixes to choose which affixes to consider legitimate relying on stemandaffix statistics rather than semantic knowledge leads to a number of problems such as the inappropriate use of valid affixes we introduce a semanticbased algorithm for learning morphology which only proposes affixes when the stem and stemplusaffix are sufficiently similar semantically we implement our approach using latent semantic analysis and show that our semanticsonly approach provides morphology induction results that rival a current stateoftheart system computational morphological analyzers have existed in various languages for years and it has been said that quotthe quest for an efficient method for the analysis and generation of wordforms is no longer an academic research topicquot however development of these analyzers typically begins with human intervention requiring time spans from days to weeksif it were possible to build such analyzers automatically without human knowledge significant development time could be savedon a larger scale consider the task of inducing machinereadable dictionaries using no humanprovided information in building an mrd quotsimply expanding the dictionary to encompass every word one is ever likely to encounter fails to take advantage of regularitiesquot hence automatic morphological analysis is also critical for selecting appropriate and nonredundant mrd headwordsfor the reasons expressed above we are interested in knowledgefree morphology inductionthus in this paper we show how to automatically induce morphological relationships between wordsprevious morphology induction approaches have focused on inflectional languages and have used statistics of hypothesized stems and affixes to choose which affixes to consider legitimateseveral problems can arise using only stemandaffix statistics valid affixes may be applied inappropriately morphological ambiguity may arise and nonproductive affixes may get accidentally pruned 1 some of these problems could be resolved if one could incorporate word semanticsfor instance quotallquot is not semantically similar to quotallyquot so with knowledge of semantics an algorithm could avoid conflating these two wordsto maintain the quotknowledgefreequot paradigm such semantics would need to be automatically inducedlatent semantic analysis landauer et at 1998 is a technique which automatically identifies semantic information from a corpuswe here show that incorporating lsabased semantics alone into the morphologyinduction process can provide results that rival a stateoftheart system based on stemandaffix statistics lerror examples are from goldsmith linguistica our algorithm automatically extracts potential affixes from an untagged corpus identifies word pairs sharing the same proposed stem but having different affixes and uses lsa to judge semantic relatedness between word pairsthis process serves to identify valid morphological relationsthough our algorithm could be applied to any inflectional language we here restrict it to english in order to perform evaluations against the humanlabeled celex database existing induction algorithms all focus on identifying prefixes suffixes and word stems in inflectional languages they also observe high frequency 
occurrences of some word endings or beginnings perform statistics thereon and propose that some of these appendages are valid morphemeshowever these algorithms differ in specificsdejean uses an approach derived from harris where wordsplitting occurs if the number of distinct letters that follows a given sequence of characters surpasses a thresholdhe uses these hypothesized affixes to resegment words and thereby identify additional affixes that were initially overlookedhis overall goal is different from ours he primarily seeks an affix inventorygoldsmith tries cutting each word in exactly one place based on probability and lengths of hypothesized stems and affixeshe applies the them algorithm to eliminate inappropriate parseshe collects the possible suffixes for each stem calling these a signature which aid in determining word classesgoldsmith later incorporates minimum description length to identify stemming characteristics that most compress the data but his algorithm otherwise remains similar in naturegoldsmith algorithm is practically knowledgefree though he incorporates capitalization removal and some word segmentationgaussier begins with an inflectional lexicon and seeks to find derivational morphologythe words and parts of speech from his inflectional lexicon serve for building relational families of words and identifying sets of word pairs and suffixes therefromgaussier splits words based on psimilarity words that agree in exactly the first p charactershe also builds a probabilistic model which indicates that the probability of two words being morphological variants is based upon the probability of their respective changes in orthography and morphosynt act ics our algorithm also focuses on inflectional languageshowever with the exception of word segmentation we provide it no human information and we consider only the impact of semanticsour approach can be decomposed into four components initially selecting candidate affixes identifying affixes which are potential morphological variants of each other computing semantic vectors for words possessing these candidate affixes and selecting as valid morphological variants those words with similar semantic vectorsto select candidate affixes we like gaussier identify psimilar wordswe insert words into a trie and extract potential affixes by observing those places in the trie where branching occursfigure 2 hypothesized suffixes are null quotsquot quotedquot quotesquot quotingquot quotequot and quotefulquot we retain only the k mostfrequent candidate affixes for subsequent processingthe value for k needs to be large enough to account for the number of expected regular affixes in any given language as well as some of the more frequent irregular affixeswe arbitrarily chose k to be 200 in our systemstage 3 stage 4 ind wor pairs that are possible morphowe next identify pairs of candidate affixes that descend from a common ancestor node in the triefor example constitutes such a pair from figure 2we call these pairs rulestwo words sharing the same root and the same affix rule such as quotcarsquot and quotcarquot form what we call a pair of potential morphological variants we define the ruleset of a given rule to be the set of all ppm vs that have that rule in commonfor instance from figure 2 the ruleset for would be the pairs quotcarscarquot and quotcarescarequot our algorithm establishes a list which identifies the rulesets for every hypothesized rule extracted from the data and then it must proceed to determine which rulesets or ppm vs describe 
true morphological relationshipsdeerwester et al showed that it is possible to find significant semantic relationships between words and documents in a corpus with virtually no human intervention this is typically done by applying singular value decomposition to a matrix m where each entry m contains the frequency of word i as seen in document j of the corpusthis methodology is referred to as latent semantic analysis and is welldescribed in the literature svds seek to decompose a matrix a into the product of three matrices you d and vt where you and vt are orthogonal matrices and d is a diagonal matrix containing the singular values of asince svd can be performed which identify singular values by descending order of size lsa truncates after finding the k largest singular valuesthis corresponds to projecting the vector representation of each word into a kdimensional subspace whose axes form k semantic directionsthese projections are precisely the rows of the matrix product ukdka typical k is 300 which is the value we usedhowever we have altered the algorithm somewhat to fit our needsfirst to stay as close to the knowledgefree scenario as possible we neither apply a stopword list nor remove capitalizationsecondly since svds are more designed to work on normallydistributed data we operate on zscores rather than countslastly instead of generating a termdocument matrix we build a termterm matrixschiitze achieved excellent performance at classifying words into quasipartofspeech classes by building and performing an svd on an nx4n termterm matrix mthe indices i and j represent the top n highest frequency wordsthe p values range from 0 to 3 representing whether the word indexed by j is positionally offset from the word indexed by i by 2 1 1 or 2 respectivelyfor example if quotthequot and quotpeoplequot were respectively the 1st and 100th highest frequency words then upon seeing the phrase quotthe peoplequot schiitze approach would increment the counts of m and mwe used schfitze general framework but tailored it to identify local semantic informationwe built an nx2n matrix and our p values correspond to those words whose offsets from word i are in the intervals 501 and 150 respectivelywe also reserve the nth position as a catchall position to account for all words that are not in the top an important issue to resolve is how large should n bewe would like to be able to incorporate semantics for an arbitrarily large number of words and lsa quickly becomes impractical on large setsfortunately it is possible to build a matrix with a smaller value of n perform an svd thereon and then fold in remaining terms since the you and v matrices of an svd are orthogonal matrices then uutvvtithis implies that avudthis means that for a new word w one can build a vector a which identifies how w relates to the top n words according to the p different conditions described abovefor example if w were one of the top n words then a would simply represent w particular row from the a matrixthe product aw avk is the projection of 6t into the kdimensional latent semantic spaceby storing an index to the words of the corpus as well as a sorted list of these words one can efficiently build a set of semantic vectors which includes each word of interestmorphologicallyrelated words frequently share similar semantics so we want to see how well semantic vectors of ppmvs correlateif we know how ppmvs correlate in comparison to other word pairs from their same rulesets we can actually determine the semanticbased probability that the 
variants are legitimatein this section we identify a measure for correlating ppmvs and illustrate how rulesetbased statistics help identify legitimate ppmvsthe cosine of the angle between two vectors v1 and v2 is given by we want to determine the correlation between each of the words of every ppmvwe use what we call a normalized cosine score as a correlationto obtain a ncs we first calculate the cosine between each semantic vector nw and the semantic vectors from 200 randomly chosen wordsby this means we obtain w correlation mean and standard deviation if v is one of w variants then we define the ncs between nw and itv to be by considering ncss for all word pairs coupled under a particular rule we can determine semanticbased probabilities that indicate which ppmvs are legitimatewe expect random ncss to be normallydistributed according to argiven that a particular ruleset contains nr ppmvs we can therefore approximate the number mean and standard deviation of true correlationsif we define to be iy e2dx then we can compute the probability that the particular correlation is legitimate it is possible that a rule can be hypothesized at the trie stage that is true under only certain conditionsa prime example of such a rule is observe from table 1 that the word quotcaresquot poorly correlates with quotcarquot yet it is true that quotesquot is a valid suffix for the words quotflashesquot quotcatchesquot quotkissesquot and many other words where the quotesquot is preceded by a voiceless sibilanthence there is merit to considering subrules that arise while performing analysis on a particular rulefor instance while evaluating the rule it is desirable to also consider potential subrules such as and one might expect that the average ncs for the subrule might be higher than the overall rule whereas the opposite will likely be true for table 2 confirms thiswe compare our algorithm to goldsmith linguistica by using celex suffixes as a gold standardcelex is a handtagged morphologicallyanalyzed database of english wordscelex has limited coverage of the words from our data set so we only considered words with frequencies of 10 or moremorphological relationships can be represented graphically as directed graphs developing a scoring algorithm to compare directed graphs is likely to be prone to disagreementstherefore we score only the vertex sets of directed graphswe will refer to these vertex sets as conflation setsfor example concern conflation set contains itself as well as quotconcernedquot quotconcernsquot and quotconcerningquot to evaluate an algorithm we sum the number of correct inserted and deleted words it predicts for each hypothesized conflation setif xu represents word w conflation set according to the algorithm and if yw represents its celexbased conflation set then however in making these computations we disregard any celex words that are not in the algorithm data set and vice versafor example suppose two algorithms were being compared on a data set where all the words from figure 3 were available except quotconcertingquot and quotconcertosquot suppose further that one algorithm proposed that abcdefgi formed a single conflation set whereas the other algorithm proposed the three sets abcdegi and fthen table 3 illustrates how the two algorithms would be scoredto explain table 3 consider algorithm one entries for aalgorithm one had proposed that xafabcdefgil when in reality ya abcdsince ix n ya i 4 and iyak4 then ca4 4the remaining values of the table can be computed accordinglyusing the values from 
table 3 we can also compute precision recall and fscoreprecision is defined to be c recall is c and fscore is the product of precision and recall divided by the average of the twofor the first algorithm the precision recall and fscore would have respectively been 13 1 and 12in the second algorithm these numbers would have been 57 56 and 1013table 4 uses the above scoring mechanism to compare between linguistica and our system note that since linguistica removes capitalization it will have a different total word count than our systemthese results suggest that semantics and lsa can play a key part in knowledgefree morphology inductionsemantics alone worked at least as well as goldsmith frequencybased approachyet we believe that semanticsbased and frequencybased approaches play complementary rolesin current work we are examining how to combine these two approaches
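The normalized cosine score at the heart of the semantic filtering step can be sketched as below: the cosine between the two variants' semantic vectors is z-scored against the cosines of the first word with a sample of randomly chosen words (200 in the paper). The vectors are assumed to be the k-dimensional LSA projections described above; how the random sample is drawn is left to the caller.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def ncs(vec_w, vec_v, random_vectors):
    """Normalized cosine score: the z-score of cos(w, v) relative to the cosines
    of w with a sample of randomly chosen words' vectors."""
    sims = [cosine(vec_w, r) for r in random_vectors]
    mu = sum(sims) / len(sims)
    sigma = math.sqrt(sum((s - mu) ** 2 for s in sims) / len(sims))
    return (cosine(vec_w, vec_v) - mu) / sigma if sigma else 0.0
```

A candidate pair of morphological variants would then be kept or discarded by comparing its NCS against the statistics estimated for its ruleset, as the paper describes.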
W00-0712
knowledgefree induction of morphology using latent semantic analysis morphology induction is a subproblem of important tasks like automatic learning of machinereadable dictionaries and grammar induction previous morphology induction approaches have relied solely on statistics of hypothesized stems and affixes to choose which affixes to consider legitimate relying on stemandaffix statistics rather than semantic knowledge leads to a number of problems such as the inappropriate use of valid affixes we introduce a semanticbased algorithm for learning morphology which only proposes affixes when the stem and stemplusaffix are sufficiently similar semantically we implement our approach using latent semantic analysis and show that our semanticsonly approach provides morphology induction results that rival a current stateoftheart system we generate a list of n candidate suffixes and use this list to identify word pairs which share the same stem we attempt to cluster morphologically related words starting with an unrefined trie search which contains a parameter of minimum possible stem length and an upper bound on potential affix candidates that is constrained by semantic similarity in a word context vector space
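The conflation-set evaluation used to compare against Linguistica can be sketched as below: for each word, correct, inserted and deleted counts compare the algorithm's conflation set with the gold (CELEX-style) one, restricted to words both resources cover, and precision, recall and F-score are computed from the totals. The example sets are invented and are not the paper's Figure 3.

```python
def conflation_scores(predicted, gold):
    """predicted, gold: dict mapping each word to its conflation set (itself included).
    Words absent from either resource are ignored, as in the paper's scoring."""
    correct = inserted = deleted = 0
    for w in predicted.keys() & gold.keys():
        X = predicted[w] & gold.keys()    # drop words the gold data does not cover
        Y = gold[w] & predicted.keys()    # and vice versa
        correct += len(X & Y)
        inserted += len(X - Y)
        deleted += len(Y - X)
    precision = correct / (correct + inserted)
    recall = correct / (correct + deleted)
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Hypothetical example: a system that lumps every word into one big conflation set.
gold = {w: {"concern", "concerns", "concerned", "concerning"}
        for w in ("concern", "concerns", "concerned", "concerning")}
gold["concert"] = {"concert", "concerts"}
gold["concerts"] = {"concert", "concerts"}
pred = {w: set(gold) for w in gold}
print(conflation_scores(pred, gold))
```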
inducing syntactic categories by context distribution clustering this paper addresses the issue of the automatic induction of syntactic categories from unannotated corpora previous techniques give good results but fail to cope well with ambiguity or rare words an algorithm context distribution clustering is presented which can be naturally extended to handle these problems in this paper i present a novel program that induces syntactic categories from comparatively small corpora of unlabelled text using only distributional informationthere are various motivations for this task which affect the algorithms employedmany nlp systems use a set of tags largely syntactic in motivation that have been selected according to various criteriain many circumstances it would be desirable for engineering reasons to generate a larger set of tags or a set of domainspecific tags for a particular corpusfurthermore the construction of cognitive models of language acquisition that will almost certainly involve some notion of syntactic category requires an explanation of the acquisition of that set of syntactic categoriesthe amount of data used in this study is 12 million words which is consistent with a pessimistic lower bound on the linguistic experience of the infant language learner in the period from 2 to 5 years of age and has had capitalisation removed as being information not available in that circumstanceprevious work falls into two categoriesa number of researchers have obtained good results using pattern recognition techniquesfinch and chater and schiitze use a set of features derived from the cooccurrence statistics of common words together with standard clustering and information extraction techniquesfor sufficiently frequent words this method produces satisfactory resultsbrown et al use a very large amount of data and a wellfounded information theoretic model to induce large numbers of plausible semantic and syntactic clustersboth approaches have two flaws they cannot deal well with ambiguity though schiitze addresses this issue partially and they do not cope well with rare wordssince rare and ambiguous words are very common in natural language these limitations are seriouswhereas earlier methods all share the same basic intuition ie that similar words occur in similar contexts i formalise this in a slightly different way each word defines a probability distribution over all contexts namely the probability of the context given the wordif the context is restricted to the word on either side i can define the context distribution to be a distribution over all ordered pairs of words the word before and the word afterthe context distribution of a word can be estimated from the observed contexts in a corpuswe can then measure the similarity of words by the similarity of their context distributions using the kullbackleibler divergence as a distance functionunfortunately it is not possible to cluster based directly on the context distributions for two reasons first the data is too sparse to estimate the context distributions adequately for any but the most frequent words and secondly some words which intuitively are very similar have radically different context distributionsboth of these problems can be overcome in the normal way by using clusters approximate the context distribution as being a probability distribution over ordered pairs of clusters multiplied by the conditional distributions of the words given the clusters i use an iterative algorithm starting with a trivial clustering with each of the k 
clusters filled with the kth most frequent word in the corpusat each iteration i calculate the context distribution of each cluster which is the weighted average of the context distributions of each word in the clusterthe distribution is calculated with respect to the k current clusters and a further ground cluster of all unclassified words each distribution therefore has 2 parametersfor every word that occurs more than 50 times in the corpus i calculate the context distribution and then find the cluster with the lowest kl divergence from that distributioni then sort the words by the divergence from the cluster that is closest to them and select the best as being the members of the cluster for the next iterationthis is repeated gradually increasing the number of words included at each iteration until a high enough proportion has been clustered for example 80after each iteration if the distance between two clusters falls below a threshhold value the clusters are merged and a new cluster is formed from the most frequent unclustered wordsince there will be zeroes in the context distributions they are smoothed using goodturing smoothing to avoid singularities in the kl divergenceat this point we have a preliminary clustering no very rare words will be included and some common words will also not be assigned because they are ambiguous or have idiosyncratic distributional propertiesambiguity can be handled naturally within this frameworkthe context distribution p of a particular ambiguous word w can be modelled as a linear combination of the context distributions of the various clusterswe can find the mixing coefficients by minimising efficients that sum to unity and the qi are the context distributions of the clustersa minimum of this function can be found using the them algorithmthere are often several local minima in practice this does not seem to be a major problemnote that with rare words the kl divergence reduces to the log likelihood of the word context distribution plus a constant factorhowever the observed context distributions of rare words may be insufficient to make a definite determination of its cluster membershipin this case under the assumption that the word is unambiguous which is only valid for comparatively rare words we can use bayes rule to calculate the posterior probability that it is in each class using as a prior probability the distribution of rare words in each classthis incorporates the fact that rare words are much more likely to be adjectives or nouns than for example pronounsi used 12 million words of the british national corpus as training data and ran this algorithm with various numbers of clusters all of the results in this paper are produced with 77 clusters corresponding to the number of tags in the claws tagset used to tag the bnc plus a distinguished sentence boundary tokenin each case the clusters induced contained accurate classes corresponding to the major syntactic categories and various subgroups of them such as prepositional verbs first names last names and so onappendix a shows the five most frequent words in a clustering with 77 clustersin general as can be seen the clusters correspond to traditional syntactic classesthere are a few errors notably the right bracket is classified with adverbial particles like quotupquotfor each word w i then calculated the optimal coefficents crtvtable 1 shows some sample ambiguous words together with the clusters with largest values of aieach cluster is represented by the most frequent member of the clusternote that 
quotusquot is a proper noun clusteras there is more than one common noun cluster for many unambiguous nouns the optimum is a mixture of the various classes with tags nn1 and ajo table 2 shows the accuracy of cluster assignment for rare wordsfor two claws tags ajo and nn1 that occur frequently among rare words in the corpus i selected all of the words that occurred n times in the corpus and at least half the time had that claws tagi then tested the accuracy of my assignment algorithm by marking it as correct if it assigned the word to a plausible cluster for ajo either of the clusters quotnewquot or quotimportantquot and for nn1 one of the clusters quottimequot quotpeoplequot quotworldquot quotgroupquot or quotfactquoti did this for n in 1 2 3 5 10 20i proceeded similarly for the brown clustering algorithm selecting two clusters for nn1 and four for ajothis can only be approximate since the choice of acceptable clusters is rather arbitrary and the bnc tags are not perfectly accurate but the results are quite clear for words that occur 5 times or less the cdc algorithm is clearly more accurateevaluation is in general difficult with unsupervised learning algorithmsprevious authors have relied on both informal evaluations of the plausibility of the classes produced and more formal statistical methodscomparison against existing tagsets is not meaningful one set of tags chosen by linguists would score very badly against another without this implying any fault as there is no gold standardi therefore chose to use an objective statistical measure the perplexity of a very simple finite state model to compare the tags generated with this clustering technique against the bnc tags which uses the claws4 tag set which had 76 tagsi tagged 12 million words of bnc text with the 77 tags assigning each word to the cluster with the highest a posteriori probability given its prior cluster distribution and its contexti then trained 2ndorder markov models on the original bnc tags on the outputs from my algorithm and for comparision on the output from the brown algorithmthe perplexities on heldout data are shown in table 3as can be seen the perplexity is lower with the model trained on data tagged with the new algorithmthis does not imply that the new tagset is better it merely shows that it is capturing statistical significant generalisationsin absolute terms the perplexities are rather high i deliberately chose a rather crude model without backing off and only the minimum amount of smoothing which i felt might sharpen the contrastthe work of chater and finch can be seen as similar to the work presented here given an independence assumptionwe can model the context distribution as being the product of independent distributions for each relative position in this case the kl divergence is the sum of the divergences for each independent distributionthis independence assumption is most clearly false when the word is ambiguous this perhaps explains the poor performance of these algorithms with ambiguous wordsthe new algorithm currently does not use information about the orthography of the word an important source of informationin future work i will integrate this with a morphologylearning programi am currently applying this approach to the induction of phrase structure rules and preliminary experiments have shown encouraging resultsin summary the new method avoids the limitations of other approaches and is better suited to integration into a complete unsupervised language acquisition system
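The core move of the clustering loop above is to compare a word's observed context distribution with each cluster's context distribution and assign the word to the nearest cluster under KL divergence. The sketch below shows that assignment step in isolation, treating context distributions as plain count arrays over ordered (left-cluster, right-cluster) pairs; the paper smooths with Good-Turing, whereas this sketch substitutes a small additive constant purely to keep the example short, and the function names are illustrative.

# a minimal sketch of assigning a word to the cluster with the lowest KL
# divergence from its observed context distribution
import numpy as np

def kl_divergence(p, q):
    return float(np.sum(p * np.log(p / q)))

def smooth(counts, eps=1e-6):
    # additive smoothing stands in for Good-Turing here, to avoid zeros
    # (and hence singularities) in the KL divergence
    probs = counts + eps
    return probs / probs.sum()

def assign_to_cluster(word_context_counts, cluster_context_dists):
    """Return the index of the cluster whose context distribution is closest
    (in KL divergence) to the word's observed context distribution."""
    p = smooth(word_context_counts)
    divergences = [kl_divergence(p, smooth(q)) for q in cluster_context_dists]
    return int(np.argmin(divergences))

# toy usage: 3 clusters, contexts over 4 x 4 ordered cluster pairs
rng = np.random.default_rng(1)
clusters = [rng.random((4, 4)) for _ in range(3)]
word_counts = rng.integers(0, 5, size=(4, 4)).astype(float)
print(assign_to_cluster(word_counts, clusters))   # index of the closest cluster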
W00-0717
inducing syntactic categories by context distribution clustering. this paper addresses the issue of the automatic induction of syntactic categories from unannotated corpora. previous techniques give good results but fail to cope well with ambiguity or rare words. an algorithm context distribution clustering is presented which can be naturally extended to handle these problems. we apply syntactic clustering and dimensionality reduction in a knowledgefree setting to obtain meaningful clusters. in our bootstrapping approach we first cluster the most distributionally reliable words and then incrementally augment each cluster with words that are distributionally similar to those already in the cluster
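For ambiguous words the record above models the observed context distribution as a linear combination of the cluster context distributions and searches for the mixing coefficients that minimize the divergence. One standard way to carry out that search is an EM update on the mixture weights, sketched below under the assumption that the word's distribution p and the cluster distributions q are given as probability arrays; this is a generic illustration, not the paper's implementation.

# a minimal sketch of estimating mixing coefficients for an ambiguous word as
# a mixture of cluster context distributions, via plain EM on the weights
import numpy as np

def estimate_mixture_weights(p, q, iterations=100):
    """p: (n,) observed context distribution of the word.
    q: (k, n) context distributions of the k clusters.
    Returns alpha, the (k,) mixing coefficients summing to one."""
    k = q.shape[0]
    alpha = np.full(k, 1.0 / k)
    for _ in range(iterations):
        mixture = alpha @ q                      # (n,) current mixture distribution
        resp = (alpha[:, None] * q) / mixture    # (k, n) responsibilities
        alpha = resp @ p                         # weight responsibilities by p
        alpha /= alpha.sum()                     # guard against numerical drift
    return alpha

# toy usage with two clusters over a 5-point context space
q = np.array([[0.5, 0.2, 0.1, 0.1, 0.1],
              [0.1, 0.1, 0.1, 0.2, 0.5]])
p = 0.7 * q[0] + 0.3 * q[1]
print(estimate_mixture_weights(p, q))   # approximately [0.7, 0.3]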
introduction to the conll2000 shared task chunking text chunking is a useful preprocessing step for parsingthere has been a large interest in recognizing nonoverlapping noun phrases and followup papers but relatively little has been written about identifying phrases of other syntactic categoriesthe conll2000 shared task attempts to fill this gaptext chunking consists of dividing a text into phrases in such a way that syntactically related words become member of the same phrasethese phrases are nonoverlapping which means that one word can only be a member of one chunk here is an example sentence np he vp reckons np the current account deficit vp will narrow pp to np only 18 billion pp in np september chunks have been represented as groups of words between square bracketsa tag next to the open bracket denotes the type of the chunkas far as we know there are no annotated corpora available which contain specific information about dividing sentences into chunks of words of arbitrary typeswe have chosen to work with a corpus with parse information the wall street journal wsj part of the penn treebank ii corpus and to extract chunk information from the parse trees in this corpuswe will give a global description of the various chunk types in the next sectionthe chunk types are based on the syntactic category part of the bracket label in the treebank p35roughly a chunk contains everything to the left of and including the syntactic head of the constituent of the same namesome treebank constituents do not have related chunksthe head of s for example is normally thought to be the verb but as the verb is already part of the vp chunk no s chunk exists in our example sentencebesides the head a chunk also contains premodifiers but no postmodifiers or argumentsthis is why the pp chunk only contains the preposition and not the argument np and the sbar chunk consists of only the complementizerthere are several difficulties when converting trees into chunksin the most simple case a chunk is just a syntactic constituent without any further embedded constituents like the nps in our examplesin some cases the chunk contains only what is left after other chunks have been removed from the constituent cfquotquot above or adjps and pps belowwe will discuss some special cases during the following description of the individual chunk typesour np chunks are very similar to the ones of ramshaw and marcus specifically possessive np constructions are split in front of the possessive marker and the handling of coordinated nps follows the treebank annotatorshowever as ramshaw and marcus do not describe the details of their conversion algorithm results may differ in difficult cases eg involving nac and nx1 an adjp constituent inside an np constituent becomes part of the np chunk form np the most volatile form in the treebank verb phrases are highly embedded see eg the following sentence which contains four vp constituentsfollowing ramshaw and marcus vtype chunks this sentence will only contain one vp chunk np mr icahn vp may not want to sell it is still possible however to have one vp chunk directly follow another np the impression np i vp have got vp is np they vp would love to do prt away pp with np itin this case the two vp constituents did not overlap in the treebankadverbsadverbial phrases become part of the vp chunk vp could very well show in contrast to ramshaw and marcus predicative adjectives of the verb are not part of the vp chunk eg in quotnp they vp are 1 paw unhappy quotin inverted sentences the auxiliary verb is 
not part of any verb phrase in the treebankconsequently it does not belong to any vp chunk but conjp not only does np your product vp have to be adjp excellent but advp chunks mostly correspond to advp constituents in the treebankhowever advps inside adjps or inside vps if in front of the main verb are assimilated into the adjp respectively vp chunkon the other hand advps that contain an np make two chunks earlier np a year advp earlier adjps inside nps are assimilated into the npand parallel to advps adjps that contain an np make two chunks old np 68 years adjp old it would be interesting to see how changing these decisions influences the chunking taskmost pp chunks just consist of one word with the partofspeech tag inthis does not mean though that finding pp chunks is completely trivialins can also constitute an sbar chunk and some pp chunks contain more than one wordthis is the case with fixed multiword prepositions such as such as because of due to with prepositions preceded by a modifier well above just after even in particularly among or with coordinated prepositions inside and outsidewe think that pps behave sufficiently differently from nps in a sentence for not wanting to group them into one class and that on the other hand tagging all np chunks inside a pp as ipp would only confuse the chunkerwe therefore chose not to handle the recognition of true pps during this first chunking stepsbar chunks mostly consist of one word with the partofspeech tag in but like multiword prepositions there are also multiword complementizers even though so that just as even if as if only ifconjunctions can consist of more than one word as well as well as instead of rather than not only but alsooneword conjunctions are not annotated as conjp in the treebank and are consequently no conjp chunks in our datathe treebank uses the prt constituent to annotate verb particles and our prt chunk does the samethe only multiword particle is on and off this chunk type should be easy to recognize as it should coincide with the partofspeech tag rp but through tagging errors it is sometimes also assigned in or rb intj is an interjection phrasechunk like no oh hello alas good griefit is quite rarethe list marker lst is even rarerexamples are 1 2 3 first second a b c it might consist of two words the number and the periodthe ucp chunk is reminiscent of the ucp constituent in the treebankarguably the conjunction is the head of the ucp so most ucp chunks consist of conjunctions like and and orucps are the rarest chunks and are probably not very useful for other nlp taskstokens outside any chunk are mostly punctuation signs and the conjunctions in ordinary coordinated phrasesthe word not may also be outside of any chunkthis happens in two cases either not is not inside the vp constituent in the treebank annotation eg in not or not is not followed by another verb as the right chunk boundary is defined by the chunk head ie the main verb in this case not is then in fact a postmodifier and as such not included in the chunk quot sbar that np there vp were nt np any major problems quot all chunks were automatically extracted from the parsed version of the treebank guided by the tree structure the syntactic constituent labels the partofspeech tags and by knowledge about which tags can be heads of which constituentshowever some trees are very complex and some annotations are inconsistentwhat to think about a vp in which the main verb is tagged as nn either we allow nns as heads of vps or we have a vp without a headthe first 
solution might also introduce errors elsewhere as ramshaw and marcus already noted quotwhile this automatic derivation process introduced a small percentage of errors on its own it was the only practical way both to provide the amount of training data required and to allow for fullyautomatic testingquotfor the conll shared task we have chosen to work with the same sections of the penn treebank as the widely used data set for base noun phrase recognition wsj sections 1518 of the penn treebank as training material and section 20 as test materia13the chunks in the data were selected to match the descriptions in the previous sectionan overview of the chunk types in the training data can be found in table 1de data sets contain tokens information about the location of sentence boundaries and information about chunk boundariesadditionally a partofspeech tag was assigned to each token by a standard pos tagger trained on the penn treebankwe used these pos tags rather than the treebank ones in order to make sure that the performance rates obtained for this data are realistic estimates for data for which no treebank pos tags are availablein our example sentence in section 2 we have used brackets for encoding text chunksin the data sets we have represented chunks with three types of tags bx first word of a chunk of type x ix noninitial word in an x chunk 0 word outside of any chunk this representation type is based on a representation proposed by ramshaw and marcus for noun phrase chunksthe three tag groups are sufficient for encoding the chunks in the data since these are nonoverlappingusing these chunk tags makes it possible to approach the chunking task as a word classification taskwe can use chunk tags for representing our example sentence in the following way the output of a chunk recognizer may contain inconsistencies in the chunk tags in case a word tagged ix follows a word tagged 0 or iy with x and y being differentthese inconsistencies can be resolved by assuming that such ix tags start a new chunkthe performance on this task is measured with three ratesfirst the percentage of detected phrases that are correct second the percentage of phrases in the data that were found by the chunker and third the fo1 rate which is equal to precisionrecall with 01 the latter rate has been used as the target for optimization4the eleven systems that have been applied to the conll2000 shared task can be divided in four groups vilain and day approached the shared task in three different waysthe most successful was an application of the alembic parser which uses transformationbased rulesjohansson uses contextsensitive and contextfree rules for transforming partofspeech tag sequences to chunk tag sequencesdejean has applied the theory refinement system allis to the shared taskin order to obtain a system which could process xml formatted data while using context information he has used three extra toolsveenstra and van den bosch examined different parameter settings of a memorybased learning algorithmthey found that modified value difference metric applied to pos information only worked besta large number of the systems applied to the conll2000 shared task uses statistical methodspla molina and prieto use a finitestate version of markov modelsthey started with using pos information only and obtained a better performance when lexical information was usedzhou tey and su implemented a chunk tagger based on hmmsthe initial performance of the tagger was improved by a postprocess correction method based on error driven 
learning and by incorporating chunk probabilities generated by a memorybased learning processthe two other statistical systems use maximumentropy based methodsosborne trained ratnaparkhi maximumentropy pos tagger to output chunk tagskoeling used a standard maximumentropy learner for generating chunk tags from words and pos tagsboth have tested different feature combinations before finding an optimal one and their final results are close to each otherthree systems use system combinationtjong kim sang trained and tested five memorybased learning systems to produce different representations of the chunk tagsa combination of the five by majority voting performed better than the individual partsvan halteren used weighted probability distribution voting for combining the results of four wpdv chunk taggers and a memorybased chunk taggeragain the combination outperformed the individual systemskudoh and matsumoto created 231 support vector machine classifiers to predict the unique pairs of chunk tagsthe results of the classifiers were combined by a dynamic programming algorithmthe performance of the systems can be found in table 2a baseline performance was obtained by selecting the chunk tag most frequently associated with a pos tagall systems outperform the baselinethe majority of the systems reached an fo1 score between 9150 and 9250two approaches performed a lot better the combination system wpdv used by van halteren and the support vector machines used by kudoh and matsumotoin the early nineties abney proposed to approach parsing by starting with finding related chunks of wordsby then church had already reported on recognition of base noun phrases with statistical methodsramshaw and marcus approached chunking by using a machine learning methodtheir work has inspired many others to study the application of learning methods to noun phrase chunking5other chunk types have not received the same attention as np chunksthe most complete work is buchholz et al which presents results for np vp pp adjp and advp chunksveenstra works with np vp and pp chunksboth he and buchholz et al use data generated by the script that produced the conll2000 shared task data setsratnaparkhi has recognized arbitrary chunks as part of a parsing task but did not report on the chunking performancepart of the sparkle project has concentrated on finding various sorts of chunks for the different languages an elaborate overview of the work done on noun phrase chunking can be found on httplcgwwwuia acbereriktresearchnpchunkinghtml we have presented an introduction to the conll2000 shared task dividing text into syntactically related nonoverlapping groups of words socalled text chunkingfor this task we have generated training and test data from the penn treebankthis data has been processed by eleven systemsthe best performing system was a combination of support vector machines submitted by taku kudoh and yuji matsumotoit obtained an fo1 score of 9348 on this taskwe would like to thank the members of the cnts language technology group in antwerp belgium and the members of the ilk group in tilburg the netherlands for valuable discussions and commentstjong kim sang is funded by the european tomorrow network learning computational grammarsbuchholz is supported by the netherlands organization for scientific research
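Because chunks are encoded with B-X, I-X and O tags, evaluation comes down to decoding each tag sequence into labelled spans, applying the convention above that an I-X following O or a chunk of a different type opens a new chunk, and then counting exactly matching spans. The sketch below illustrates that decoding and the precision/recall/F computation; it follows the conventions described here but is not the official evaluation script, and the example sequences are invented.

# a minimal sketch of decoding B-X / I-X / O tags into chunk spans and scoring them
def tags_to_chunks(tags):
    """Collect (type, start, end) spans; an I-X that follows O or a chunk of a
    different type is treated as starting a new chunk."""
    chunks, current_type, start = set(), None, None
    for i, tag in enumerate(tags):
        if tag == "O":
            if current_type is not None:
                chunks.add((current_type, start, i))
            current_type = None
        else:
            prefix, ctype = tag[0], tag[2:]
            if prefix == "B" or ctype != current_type:
                if current_type is not None:
                    chunks.add((current_type, start, i))
                current_type, start = ctype, i
    if current_type is not None:
        chunks.add((current_type, start, len(tags)))
    return chunks

def chunk_precision_recall_f(gold_tags, predicted_tags):
    gold, predicted = tags_to_chunks(gold_tags), tags_to_chunks(predicted_tags)
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if correct else 0.0
    return precision, recall, f

gold = ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP", "I-NP", "B-VP", "I-VP"]
pred = ["B-NP", "B-VP", "B-NP", "I-NP", "I-NP", "O", "B-VP", "I-VP"]
print(chunk_precision_recall_f(gold, pred))   # (0.75, 0.75, 0.75)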
W00-0726
introduction to the conll2000 shared task chunking. we give background information on the data sets, present a general overview of the systems that have taken part in the shared task, and briefly discuss their performance. the dataset is extracted from the wsj penn tree bank and contains 211727 training examples and 47377 test instances
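The baseline quoted for this task simply assigns each token the chunk tag most frequently seen with its part-of-speech tag in the training data. A minimal sketch of such a baseline follows; the data layout and function names are assumptions made for the illustration.

# a minimal sketch of the shared-task baseline: tag each token with the chunk
# tag most frequently associated with its part-of-speech tag in training
from collections import Counter, defaultdict

def train_baseline(training_data):
    """training_data: iterable of (word, pos_tag, chunk_tag) triples."""
    counts = defaultdict(Counter)
    for _word, pos, chunk in training_data:
        counts[pos][chunk] += 1
    return {pos: chunk_counts.most_common(1)[0][0]
            for pos, chunk_counts in counts.items()}

def baseline_tag(pos_tags, model, default="O"):
    return [model.get(pos, default) for pos in pos_tags]

# toy usage
train = [("he", "PRP", "B-NP"), ("reckons", "VBZ", "B-VP"),
         ("the", "DT", "B-NP"), ("deficit", "NN", "I-NP")]
model = train_baseline(train)
print(baseline_tag(["DT", "NN", "VBZ"], model))   # ['B-NP', 'I-NP', 'B-VP']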
use of support vector learning for chunk identification in this paper we explore the use of support vector machines for conll2000 shared task chunk identificationsvms are socalled large margin classifiers and are wellknown as their good generalization performancewe investigate how svms with a very large number of features perform with the classification task of chunk labellingsupport vector machines first introduced by vapnik are relatively new learning approaches for solving twoclass pattern recognition problemssvms are wellknown for their good generalization performance and have been applied to many pattern recognition problemsin the field of natural language processing svms are applied to text categorization and are reported to have achieved high accuracy without falling into overfitting even with a large number of words taken as the features first of all let us define the training data which belongs to either positive or negative class as follows xi is a feature vector of the ith sample represented by an n dimensional vector yi is the class or negative class label of the ith datain basic svms framework we try to separate the positive and negative examples by hyperplane written as b 0 wellnbe r svms find the quotoptimalquot hyperplane which separates the training data into two classes preciselywhat quotoptimalquot meansin order to define it we need to consider the margin between two classesfigures 1 illustrates this ideathe solid lines show two possible hyperplanes each of which correctly separates the training data into two classesthe two dashed lines parallel to the separating hyperplane show the boundaries in which one can move the separating hyperplane without misclassificationwe call the distance between each parallel dashed lines as marginsvms take a simple strategy that finds the separating hyperplane which maximizes its marginprecisely two dashed lines and margin can be written as svms can be regarded as an optimization problem finding w and b which minimize liwil under the constraints yirw xi 1furthermore svms have potential to cope with the linearly unseparable training datawe leave the details to the optimization problems can be rewritten into a dual form where all feature vectors appear in their dot productby simply substituting every dot product of xi and xi in dual form with any kernel function k svms can handle nonlinear hypothesesamong the many kinds of kernel functions available we will focus on the dth polynomial kernel k d use of dth polynomial kernel function allows us to build an optimal separating hyperplane which takes into account all combination of features up to d we believe svms have advantage over conventional statistical learning algorithms such as decision tree and maximum entropy models from the following two aspectsthe chunks in the conll2000 shared task are represented with job based model in which every word is to be tagged with a chunk label extended with i 0 and b each chunk type belongs to i or b tagsfor example np could be considered as two types of chunk inp or bnpin training data of conll2000 shared task we could find 22 types of chunk 1 considering all combinations of jobtags and chunk typeswe simply formulate the chunking task as a classification problem of these 22 types of chunkbasically svms are binary classifiers thus we must extend svms to multiclass classifiers in order to classify these 22 types of chunksit is precisely the number of combination becomes 23however we do not consider ilst tag since it dose not appear in training data known 
that there are mainly two approaches to extend from a binary classification task to those with k classesfirst approach is often used and typical one quotone class vs all othersquotthe idea is to build k classifiers that separate one class among from all otherssecond approach is pairwise classificationthe idea is to build k x 2 classifiers considering all pairs of classes and final class decision is given by their majority votingwe decided to construct pairwise classifiers for all the pairs of chunk labels so that the total number of classifiers becomes 2221 2 231the reasons that we use pairwise classifiers are as follows for the features we decided to use all the information available in the surrounding contexts such as the words their pos tags as well as the chunk labelsmore precisely we give the following for the features to identify chunk label ci at ith word cj where wi is the word appearing at ith word ti is the pos tag of wi and ci is the chunk label at ith wordsince the chunk labels are not given in the test data they are decided dynamically during the tagging of chunk labelsthis technique can be regarded as a sort of dynamic programming matching in which the best answer is searched by maximizing the total certainty score for the combination of tagsin using dp matching we decided to keep not all ambiguities but a limited number of themthis means that a beam search is employed and only the top n candidates are kept for the search for the best chunk tagsthe algorithm scans the test data from left to right and calls the svm classifiers for all pairs of chunk tags for obtaining the certainty scorewe defined the certainty score as the number of votes for the class obtained through the pairwise votingsince svms are vector based classifier they accept only numerical values for their featuresto cope with this constraints we simply expand all features as a binaryvalue taking either 0 or 1by taking all words and pos tags appearing in the training data as features the total dimension of feature vector becomes as large as 92837generally we need vast computational complexity and memories to handle such a huge dimension of vectorsin fact we can reduce these complexity considerably by holding only indices and values of nonzero elements since the feature vectors are usually sparse and svms only require the evaluation of dot products of each feature vectors for their trainingin addition although we could apply some cutoff threshold for the number of occurrence in the training set we decided to use everything not only pos tags but also words themselvesthe reasons are that we simply do not want to employ a kind of quotheuristicsquot and svms are known to have a good generalization performance even with very large featureswe have applied our proposed method to the test data of conll2000 shared task while training with the complete training datafor the kernel function we use the 2nd polynomial functionwe set the beam width n to 5 tentativelysvms training is carried out with the slight package which is designed and optimized to handle large sparse feature vector and large numbers of training examples it took about 1 day to train 231 classifiers with pclinux figure 1 shows the results of our experimentsthe all the values of the chunking fmeasure are almost 935especially our method performs well for the chunk types of high frequency such as np vp and ppin this paper we propose chunk identification analysis based on support vector machinesalthough we select features for learning in very straight way using 
all available features such as the words their pos tags without any cutoff threshold for the number of occurrence we archive high performance for test datawhen we use other learning methods such as decision tree we have to select feature set manually to avoid overfittingusually these feature selection depends on heuristics so that it is difficult to apply them to other classification problems in other domainsmemory based learning method can also handle all available featureshowever the function to compute the distance between the test pattern and the nearest cases in memory is usually optimized in an adhoc way through our experiments we have shown the high generalization performance and high feature selection abilities of svms
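The record above combines one binary SVM per unordered pair of chunk tags (231 classifiers for 22 tags) and lets each classifier vote for one member of its pair, the vote count serving as the certainty score fed to the beam search. The sketch below shows just that voting step, with the trained classifiers stood in for by placeholder decision functions; it illustrates the scheme rather than reproducing the authors' system.

# a minimal sketch of pairwise voting over binary classifiers
from itertools import combinations

def pairwise_vote(feature_vector, labels, classifiers):
    """classifiers: dict mapping a (label_a, label_b) pair to a function that
    returns a positive value if it prefers label_a and a negative value otherwise.
    Returns vote counts per label (the certainty score used in the beam search)."""
    votes = {label: 0 for label in labels}
    for a, b in combinations(labels, 2):
        decision = classifiers[(a, b)](feature_vector)
        votes[a if decision > 0 else b] += 1
    return votes

# toy usage with three labels and dummy decision functions
labels = ["B-NP", "I-NP", "O"]
classifiers = {pair: (lambda x, p=pair: 1.0 if p[0] == "B-NP" else -1.0)
               for pair in combinations(labels, 2)}
print(pairwise_vote([0.0], labels, classifiers))   # {'B-NP': 2, 'I-NP': 0, 'O': 1}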
W00-0730
use of support vector learning for chunk identification
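The system summarized above relies on a d-th degree polynomial kernel over very sparse binary feature vectors, which makes it enough to store only the indices of the nonzero features. A minimal sketch of evaluating that kernel on such a representation follows; the feature names in the example are invented.

# a minimal sketch of the d-th degree polynomial kernel over sparse binary
# feature vectors, stored as sets of active (value 1) feature indices
def polynomial_kernel(x1, x2, d=2):
    dot = len(x1 & x2)          # dot product of two binary vectors
    return (dot + 1) ** d       # implicitly covers feature combinations up to degree d

x1 = {"w0=confidence", "t0=NN", "t-1=DT"}
x2 = {"w0=pound", "t0=NN", "t-1=DT"}
print(polynomial_kernel(x1, x2, d=2))   # (2 + 1) ** 2 = 9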
two statistical parsing models applied to the chinese treebank this paper presents the firstever results of applying statistical parsing models to the newlyavailable chinese treebank we have employed two models one extracted and adapted from bbn sift system and a tagbased parsing model adapted from on sentences with 40 words the former model performs at 69 precision 75 recall and the latter at 77 precision and 78 recall ever since the success of hmms application to partofspeech tagging in machine learning approaches to natural language processing have steadily become more widespreadthis increase has of course been due to their proven efficacy in many tasks but also to their engineering efficacymany machine learning approaches let the data speak for itself as it were allowing the modeler to focus on what features of the data are important rather than on the complicated interaction of such features as had often been the case with handcrafted nlp systemsthe success of statistical methods in particular has been quite evident in the area of syntactic parsing most recently with the outstanding results of and on the nowstandard english test set of the penn rkeebank a significant trend in parsing models has been the incorporation of linguisticallymotivated features however it is important to note that quotlinguisticallymotivatedquot does not necessarily mean languagedependentquotoften it means just the oppositefor example almost all statistical parsers make use of lexicalized nonterminals in some way which allows lexical items indiosyncratic parsing preferences to be modeled but the paring between head words and their parent nonterminals is determined almost entirely by the training data thereby making this featurewhich models preferences of particular words of a particular languagealmost entirely languageindependentin this paper we will explore the use of two parsing models which were originally designed for english parsing on parsing chinese using the newlyavailable chinese iyeebankwe will show that the languagedependent components of these parsers are quite compact and that with little effort they can be adapted to produce promising results for chinese parsingwe also discuss directions for future workwe will briefly describe the two parsing models employed and also for a full description of the tag model see both parsing models discussed in this paper inherit a great deal from this model so we briefly describe its quotprogenitivequot features here describing only how each of the two models of this paper differ in the subsequent two sectionsthe lexicalized pcfg that sits behind model 2 of has rules of the form where p l ri and h are all lexicalized nonterminals and p inherits its lexical head from its distinguished head child h in this generative model first p is generated then its headchild h then each of the left and rightmodifying nonterminals are generated from the head outwardthe modifying nonterminals li and ri are generated conditioning on p and h as well as a distance metric and an incremental subcat frame feature note that if the modifying nonterminals were generated completely independently the model would be very impoverished but in actuality by including the distance and subcat frame features the model captures a crucial bit of linguistic reality viz that words often have welldefined sets of complements and adjuncts dispersed with some welldefined distribution in the right hand sides of a rewriting systemthe bbn model is also of the lexicalized pcfg varietyin the bbn model as with 
model 2 of modifying nonterminals are generated conditioning both on the parent p and its head child h unlike model 2 of they are also generated conditioning on the previously generated modifying nonterminal or and there is no subcat frame or distance featurewhile the bbn model does not perform at the level of model 2 of on wall street journal text it is also less languagedependent eschewing the distance metric in favor of the quotbigrams on nonterminalsquot modelthis section briefly describes the toplevel parameters used in the bbn parsing modelwe use p to denote the unlexicalized nonterminal corresponding to p in and similarly for i ri and h we now present the toplevel generation probabilities along with examples from figure 1for brevity we omit the smoothing details of bbn model for a complete description we note that all smoothing weights are computed via the technique described in the probability of generating p as the root label is predicted conditioning on only top which is the hidden root of all parse trees the probability of generating a head node h with a parent p is the probability of generating a leftmodifier ii is when generating the np for np and the probability of generating a right modwhen generating the np for np1 the probabilities for generating lexical elements are as followsthe part of speech tag of the head of the entire sentence th is computed conditioning only on the topmost symbol p2 part of speech tags of modifier constituents and tri are predicted conditioning on the modifier constituent i or ri the tag of the head constituent th and the word of the head constituent wh the head word of the entire sentence wh is predicted conditioning only on the topmost symbol p and th head words of modifier constituents wii and wri are predicted conditioning on all the context used for predicting parts of speech in as well as the parts of speech themsleves and p the original english model also included a word feature to help reduce partofspeech ambiguity for unknown words but this component of the model was removed for chinese as it was languagedependentthe probability of an entire parse tree is the product of the probabilities of generating all of the elements of that parse tree the hidden nonterminal begin is used to provide a convenient mechanism for determining the initial probability of the underlying markov process generating the modifying nonterminals the hidden nonterminal end is used to provide consistency to the underlying markov process ie so that the probabilities of all possible nonterminal sequences sum to 12this is the one place where we altered the original model as the lexical components of the head of the entire sentence were all being estimated incorrectly causing an inconsistency in the modelwe corrected the estimation of th and wh in our implementation where an element is either a constituent label a part of speech tag or a wordwe obtain maximumlikelihood estimates of the parameters of this model using frequencies gathered from the training datathe model of is based on stochastic tag in this model a parse tree is built up not out of lexicalized phrasestructure rules but by tree fragments which are lexicalized in the sense that each fragment contains exactly one lexical item in the variant of tag we use there are three kinds of elementary tree initial auxiliary and modifier and three composition operations substitution adjunction and sisteradjunctionfigure 2 illustrates all three of these operations al is an initial tree which substitutes at the leftmost node 
labeled npi 13 is an auxiliary tree which adjoins at the node labeled vpsee for a more detailed explanationsisteradjunction is not a standard tag operation but borrowed from dtree grammar in figure 2 the modifier tree 7 is sister adjoined between the nodes labeled vb and ni4multiple modifier trees can adjoin at the same place in the spirit of in stochastic tag the probability of generating an elementary tree depends on the elementary tree itself and the elementary tree it attaches tothe parameters are as follows where a ranges over initial trees 3 over auxiliary trees 7 over modifier trees and n over nodespi is the probability of beginning a derivation with a ps is the probability of substituting a at 71 pa is the probability of adjoining 13 at 77 finally pa is the probability of nothing adjoining at n our variant adds another set of parameters this is the probability of sisteradjoining y between the ith and i 1 th children of 77 since multiple modifier trees can adjoin at the same location pa is also conditioned on a flag f which indicates whether y is the first modifier tree to adjoin at that locationfor our model we break down these probabilities further first the elementary tree is generated without its anchor and then its anchor is generatedsee for more detailsduring training each example is broken into elementary trees using head rules and argumentadjunct rules similar to those of the rules are interpreted as follows a head is kept in the same elementary tree in its parent an argument is broken off into a separate initial tree leaving a substitution node and an adjunct is broken off into a separate modifier treea different rule is used for extracting auxiliary trees see for detailsxia describes a similar process and in fact our rules for the xinhua corpus are based on hersthe primary languagedependent component that had to be changed in both models was the head table used to determine heads when trainingwe modified the head rules described in for the xinhua corpus and substituted these new rules into both modelsthe model had the following additional modifications was eliminated causing parts of speech for unknown words to be predicted solely on the head relations in the model the default beam size in the probabilistic cky parsing algorithm was widenedthe default beam pruned away chart entries whose scores were not within a factor of e5 of the topranked subtree this settings and lower unknown word threshold than the defaults3 of the 400 sentences were not parsed due to timeouts andor pruning problems t3 of the 348 sentences did not get parsed due to pruning problems and 2 other sentences had length mismatches tight limit was changed to e9also the default decoder pruned away all but the top 25ranked chart entries in each cell this limit was expanded to 50the chinese treebank consists of 4185 sentences of xinhua newswire textwe blindly separated this into training devtest and test sets with a roughly 801010 split putting files 001270 into the training set 301325 into the development test set and reserving 271300 for testingsee table 1 for resultsin order to put the new chinese treebank results into context with the unmodified parsing models we present results on two test sets from the wall street journal wsjall which is the complete section 23 and wsjsmall which is the first 400 sentences of section 23 and which is roughly comparable in size to the chinese test setfurthermore when testing on wsjsmall we trained on a subset of our english training data roughly equivalent in size to our 
chinese training set we have indicated models trained on all english training with quotallquot and models trained with the reduced english training set with quotsmallquottherefore by comparing the wsjsmall results with the chinese results one can reasonably gauge the performance gap between english parsing on the penn treebank and chinese parsing on the chinese treebankthe reader will note that the modified bbn model does significantly poorer than on chinesewhile more investigation is required we suspect part of the difference may be due to the fact that currently the bbn model uses languagespecific rules to guess part of speech tags for unknown wordsthere is no question that a great deal of care and expertise went into creating the chinese treebank and that it is a source of important grammatical information that is unique to the chinese languagehowever there are definite similarities between the grammars of english and chinese especially when viewed through the lens of the statistical models we employed herein both languages the nouns adjectives adverbs and verbs have preferences for certain arguments and adjuncts and these preferencesin spite of the potentially vastlydifferent configurations of these itemsare effectively modeledas discussed in the introduction lexical items idiosyncratic parsing preferences are modeled by lexicalizing the grammar formalism using a lexicalized pcfg in one case and a lexicalized stochastic tag in the otherlinguisticallyreasonable independence assumptions are made such as the independence of grammar productions in the case of the pcfg model or the independence of the composition operations in the case of the ltag model and we would argue that these assumptions are no less reasonable for the chinese grammar than they are for that of englishwhile results for the two languages are far from equal we believe that further tuning of the head rules and analysis of development test set errors will yield significant performance gains on chinese to close the gapfinally we fully expect that absolute performance will increase greatly as additional highquality chinese parse data becomes availablethis research was funded in part by nsf grant sbr892023015we would greatly like to acknowledge the researchers at bbn who allowed us to use their model ralph weischedel scott miller lance ramshaw heidi fox and sean boisenwe would also like to thank mike collins and our advisors aravind joshi and mitch marcus
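The main language-dependent component discussed above is the head table used to pick the head child of each constituent during training. The sketch below shows a generic head finder of that kind; the table entries are illustrative placeholders, not the actual rules adapted for the Xinhua corpus.

# a minimal sketch of head-finding with a head-percolation table
HEAD_TABLE = {
    # parent label: (search direction, child labels in priority order) -- illustrative only
    "VP": ("left", ["VV", "VA", "VE", "VC", "VP"]),
    "NP": ("right", ["NN", "NR", "NT", "NP"]),
}

def find_head(parent, children):
    """children: list of child labels; returns the index of the head child."""
    direction, priorities = HEAD_TABLE.get(parent, ("left", []))
    order = range(len(children)) if direction == "left" else range(len(children) - 1, -1, -1)
    for label in priorities:        # try each priority label in turn, scanning in direction
        for i in order:
            if children[i] == label:
                return i
    # fall back to the first (or last) child if nothing in the table matches
    return 0 if direction == "left" else len(children) - 1

print(find_head("VP", ["ADVP", "VV", "NP"]))   # 1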
W00-1201
two statistical parsing models applied to the chinese treebank. this paper presents the firstever results of applying statistical parsing models to the newlyavailable chinese treebank. we have employed two models one extracted and adapted from bbn sift system and a tagbased parsing model adapted from . on sentences with 40 words the former model performs at 69 precision 75 recall and the latter at 77 precision and 78 recall. our parser operates at wordlevel with the assumption that input sentences are presegmented
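The precision and recall figures quoted here are labelled constituent scores: a constituent in the proposed parse counts as correct if a constituent with the same label and span appears in the treebank parse. A minimal sketch of that measure follows, representing constituents as (label, start, end) spans; it leaves out the exclusions (for example of punctuation) applied in full evaluations, and the toy spans are invented.

# a minimal sketch of labeled constituent precision/recall
from collections import Counter

def labeled_precision_recall(gold_spans, test_spans):
    gold, test = Counter(gold_spans), Counter(test_spans)
    matched = sum((gold & test).values())     # multiset intersection handles duplicates
    precision = matched / sum(test.values())
    recall = matched / sum(gold.values())
    return precision, recall

gold = [("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5)]
test = [("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5)]
print(labeled_precision_recall(gold, test))   # (0.666..., 0.666...)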
japanese dependency structure analysis based on support vector machines this paper presents a method of japanese dependency structure analysis based on support vector machines conventional parsing techniques based on machine learning framework such as decision trees and maximum entropy models have difficulty in selecting useful features as well as finding appropriate combination of selected features on the other hand it is wellknown that svms achieve high generalization performance even with input data of very high dimensional feature space furthermore by introducing the kernel principle svms can carry out the training in highdimensional spaces with a smaller computational cost independent of their dimensionality we apply svms to japanese dependency structure identification problem experimental results on kyoto university corpus show that our sysachieves the 8909 even with small training data dependency structure analysis has been recognized as a basic technique in japanese sentence analysis and a number of studies have been proposed for yearsjapanese dependency structure is usually defined in terms of the relationship between phrasal units called bunsetsu segments generally dependency structure analysis consists of two stepsin the first step dependency matrix is constructed in which each element corresponds to a pair of chunks and represents the probability of a dependency relation between themthe second step is to find the optimal combination of dependencies to form the entire sentencein previous approaches these probabilites of dependencies are given by manually constructed ruleshowever rulebased approaches have problems in coverage and consistency since there are a number of features that affect the accuracy of the final results and these features usually relate to one anotheron the other hand as largescale tagged corpora have become available these days a number of statistical parsing techniques which estimate the dependency probabilities using such tagged corpora have been developedthese approaches have overcome the systems based on the rulebased approachesdecision trees and maximum entropy models have been applied to dependency or syntactic structure analysishowever these models require an appropriate feature selection in order to achieve a high performancein addition acquisition of an efficient combination of features is difficult in these modelsin recent years new statistical learning techniques such as support vector machines and boosting are proposedthese techniques take a strategy that maximize the margin between critical examples and the separating hyperplanein particular compared with other conventional statistical learning algorithms svms achieve high generalization even with training data of a very high dimensionfurthermore by optimizing the kernel function svms can handle nonlinear feature spaces and carry out the training with considering combinations of more than one featurethanks to such predominant nature svms deliver stateoftheart performance in realworld applications such as recognition of handwritten letters or of three dimensional imagesin the field of natural language processing svms are also applied to text categorization and are reported to have achieved to maximize this margin we should minimize in other words this problem becomes equivalent to solving the following optimization problem furthermore this optimization problem can be rewritten into the dual form problem find the lagrange multipliers ai 0 so that in this dual form problem xi with nonzero ai is 
called a support vectorfor the support vectors w and b can thus be expressed as follows w e aiyi xi b w xi yixiesvs the elements of the set svs are the support vectors that lie on the separating hyperplanesfinally the decision function f 1 can be written as high accuracy without falling into overfitting even with a large number of words taken as the features in this paper we propose an application of svms to japanese dependency structure analysiswe use the features that have been studied in conventional statistical dependency analysis with a little modification on themlet us define the training data which belong either to positive or negative class as follows xi is a feature vector of ith sample which is represented by an n dimensional vector e right now yi is a scalar value that specifies the class or negative class of ith dataformally we can define the pattern recognition problem as a learning and building process of the decision function in basic svms framework we try to separate the positive and negative examples in the training data by a linear hyperplane written as b 0 wernbert it is supposed that the farther the positive and negative examples are separated by the discrimination function the more accurately we could separate unseen test examples with high generalization performancelet us consider two hyperplanes called separating hyperplanes distance from the separating hyperplane to the point xi can be written as in the case where we cannot separate training examples linearly quotsoft marginquot method forgives some classification errors that may be caused by some noise in the training examplesfirst we introduce nonnegative slack variables and are rewritten as in this case we minimize the following value instead of 111w112 the first term in specifies the size of margin and the second term evaluates how far the training data are away from the optimal separating hyperplanec is the parameter that defines the balance of two quantitiesif we make c larger the more classification errors are neglectedthough we omit the details here minimization of is reduced to the problem to minimize the objective function under the following constraintsusually the value of c is estimated experimentallyin general classification problems there are cases in which it is unable to separate the training data linearlyin such cases the training data could be separated linearly by expanding all combinations of features as new ones and projecting them onto a higherdimensional spacehowever such a naive approach requires enormous computational overheadlet us consider the case where we project the training data x onto a higherdimensional space by using projection function cio 1as we pay attention to the objective function and the decision function these functions depend only on the dot products of the input training vectorsif we could calculate the dot products from xi and x2 directly without considering the vectors and projected onto the higherdimensional space we can reduce the computational complexity considerablynamely we can reduce the computational overhead if we could find the function k that satisfies 4 k on the other hand since we do not need itself for actual learning and classification in general is a mapping into hilbert space all we have to do is to prove the existence of cl that satisfies provided the function k is selected properlyit is known that holds if and only if the function k satisfies the mercer condition in this way instead of projecting the training data onto the highdimensional space we can 
decrease the computational overhead by replacing the dot products which is calculated in optimization and classification steps with the function k such a function k is called a kernel functionamong the many kinds of kernel functions available we will focus on the dth polynomial kernel use of dth polynomial kernel function allows us to build an optimal separating hyperplane which takes into account all combination of features up to d using a kernel function we can rewrite the decision function asthis section describes a general formulation of the probability model and parsing techniques for japanese statistical dependency analysisfirst of all we let a sequence of chunks be b1 b2 bni by b and the sequence dependency pattern be dep dep dep by d where dep j means that the chunk bi depends on the chunk biin this framework we suppose that the dependency sequence d satisfies the following constraintsstatistical dependency structure analysis is defined as a searching problem for the dependency pattern d that maximizes the conditional probability p of the input sequence under the abovementioned constraintsif we assume that the dependency probabilities are mutually independent p could be rewritten as that bi depends on bi fij is an n dimensional feature vector that represents various kinds of linguistic features related with the chunks bi and bjwe obtain dbt taking into all the combination of these probabilitiesgenerally the optimal solution dbt can be identified by using bottomup algorithm such as cyk algorithmsekine suggests an efficient parsing technique for japanese sentences that parses from the end of a sentencewe apply sekine technique in our experimentsin order to use svms for dependency analysis we need to prepare positive and negative examples since svms is a binary classifierwe adopt a simple and effective method for our purpose out of all combination of two chunks in the training data we take a pair of chunks that are in a dependency relation as a positive example and two chunks that appear in a sentence but are not in a dependency relation as a negative example b ktfkiesvs shows that the distance between test data and the separating hyperplane is put into the sigmoid function assuming it represents the probability value of the dependency relationwe adopt this method in our experiment to transform the distance measure obtained in svms into a probability function and analyze dependency structure with a framework of conventional probability model 2features that are supposed to be effective in japanese dependency analysis are head words and their partsofspeech particles and inflection forms of the words that appear at the end of chunks distance between two chunks existence of punctuation marksas those are solely defined by the pair of chunks we refer to them as static featuresjapanese dependency relations are heavily constrained by such static features since the inflection forms and postpositional particles constrain the dependency relationhowever when a sentence is long and there are more than one possible dependents static features by themselves cannot determine the correct dependencylet us look at the following example watashiha konohonwo motteiru joseiwo sagasiteiru itop this bookacc have ladyacc be looking for in this example quotkonohonwoquot may modify either of quotmotteiruquot or quotsagasiteiruquot and cannot be determined only with the static featureshowever quotjoseiwo quot can modify the only the verb quotsagasiteiruquotknowing such information is quite useful for resolving 
syntactic ambiguity since two accusative noun phrses hardly modify the same verbit is possible to use such information if we add new features related to other modifiersin the above case the chunk quotsagasiteiruquot can receive a new feature of accusative modification during the parsing process which precludes the chunk quotkonohonwoquot from modifying quotsagasiteiruquot since there is a strict constraint about doubleaccusative modification that will be learned from training exampleswe decided to take into consideration all such modification information by using functional words or inflection forms of modifiersusing such information about modifiers in the training phase has no difficulty since they are clearly available in a treebankon the other hand they are not known in the parsing phase of the test datathis problem can be easily solved if we adopt a bottomup parsing algorithm and attach the modification information dynamically to the newly constructed phrases as we describe later we apply a beam search for parsing and it is possible to keep several intermediate solutions while suppressing the combinatorial explosionwe refer to the features that are added incrementally during the parsing process as dynamic featureswe use kyoto university text corpus consisting of articles of mainichi newspaper annotated with dependency structure7958 sentences from the articles on january 1st to january 7th are used for the training data and 1246 sentences from the articles on january 9th are used for the test datafor the kernel function we used the polynomial function we set the soft margin parameter c to be 1the feature set used in the experiments are shown in table 1the static features are basically taken from uchimoto list with little modificationin table 1 head means the rightmost content word in a chunk whose partofspeech is not a functional categorytype mewls the rightmost functional word or the inflectional form of the rightmost predicate if there is no functional word in the chunkthe static features include the information on existence of brackets question marks and punctuation marks etcbesides there are features that show the relative relation of two chunks such as distance and existence of brackets quotation marks and punctuation marks between themfor dynamic features we selected functional words or inflection forms of the rightmost predicates in the chunks that appear between two chunks and depend on the modifieeconsidering data sparseness problem we apply a simple filtering based on the partofspeech of functional words we use the lexical form if the word pos is particle adverb adnominal or conjunctionwe use the inflection form if the word has inflectionwe use the pos tags for otherstable 2 shows the result of passing accuracy under the condition k 5 and d 3 this table shows two types of dependency accuracy a and bthe training data size is measured by the number of sentencesthe accuracy a means the accuracy of the entire dependency relationssince japanese is a headfinal language the second chunk from the end of a sentence always modifies the last chunkthe accuracy b is calculated by excluding this dependency relationhereafter we use the accuracy a if it is not explicitly specified since this measure is usually used in other literaturetable3 shows the accuracy when only static features are usedgenerally the results with dynamic feature set is better than the results without themthe results with dynamic features constantly outperform that with static features onlyin most of cases the 
improvements is significantin the experiments we restrict the features only from the chunks that appear between two chunks being in consideration however dynamic features could be also taken from the chunks that appear not between the two chunksfor example we could also take into consideration the chunk that is modified by the right chunk or the chunks that modify the left chunkwe leave experiment in such a setting for the future workfigure 1 shows the relationship between the size of the training data and the parsing accuracythis figure shows the accuracy of with and without the dynamic featuresthe parser achieves 8652 accuracy for test data even with small training data this is due to a good characteristic of svms to cope with the data sparseness problemfurthermore it achieves almost 100 accuracy for the training data showing that the training data are completely separated by appropriate combination of featuresgenerally selecting those specific features of the training data tends to cause overfitting and accuracy for test data may fallhowever the svms method achieve a high accuracy not only on the training data but also on the test datawe claim that this is due to the high generalization ability of svmsin addition observing at the learning curve further improvement will be possible if we increase the size of the training datatable 4 shows the relationship between the dimension of the kernel function and the parsing accuracy under the condition k 5as a result the case of d 4 gives the best accuracywe could not carry out the training in realistic time for the case of d 1this result supports our intuition that we need a combination of at least two featuresin other words it will be hard to confirm a dependency relation with only the features of the modifier or the modfieeit is natural that a dependency relation is decided by at least the information from both of two chunksin addition further improvement has been possible by considering combinations of three or more featuressekine gives an interesting report about the relationship between the beam width and the parsing accuracygenerally high parsing accuracy is expected when a large beam width is employed in the dependency structure analysishowever the result is against our intuitionthey report that a beam width between 3 and 10 gives the best parsing accuracy and parsing accuracy falls down with a width larger than 10this result suggests that japanese dependency structures may consist of a series of local optimization processeswe evaluate the relationship between the beam width and the parsing accuracytable 5 shows their relationships under the condition d 3 along with the changes of the beam width from k 1 to 15the best parsing accuracy is achieved at k 5 and the best sentence accuracy is achieved at k 5 and k 7we have to consider how we should set the beam width that gives the best parsing accuracywe believe that the beam width that gives the best passing accuracy is related not only with the length of the sentence but also with the lexical entries and partsofspeech that comprise the chunksinstead of learning a single classifier using all training data we can make n classifiers dividing all training data by n and the final result is decided by their votingthis approach would reduce computational overheadthe use of multiprocessing computer would help to reduce their training time considerably since all individual training can be carried out in parallelto investigate the effectiveness of this method we perform a simple experiment dividing 
all training data by 4 the final dependency score is given by a weighted average of each scoresthis simple voting approach is shown to achieve the accuracy of 8866 which is nearly the same accuracy achieved 5540 training sentencesin this experiment we simply give an equal weight to each classifierhowever if we optimized the voting weight more carefully the further improvements would be achieved uchimoto and sekine report that using kyoto university corpus for their training and testing they achieve around 872 accuracy by building statistical model based on maximum entropy frameworkfor the training data we used exactly the same data that they used in order to make a fair comparisonin our experiments the accuracy of 8909 is achieved using same training dataour model outperforms uchimoto model as far as the accuracies are comparedalthough uchimoto suggests that the importance of considering combination of features in me framework we must expand these combination by introducing new feature setuchimoto heuristically selects quoteffectivequot combination of featureshowever such a manual selection does not always cover all relevant combinations that are important in the determination of dependency relationwe believe that our model is better than others from the viewpoints of coverage and consistency since our model learns the combination of features without increasing the computational complexityif we want to reconsider them all we have to do is just to change the kernel functionthe computational complexity depends on the number of support vectors not on the dimension of the kernel functionthe simplest and most effective way to achieve better accuracy is to increase the training datahowever the proposed method that uses all candidates that form dependency relation requires a great amount of time to compute the separating hyperplane as the size of the training data increasesthe experiments given in this paper have actually taken long training time 3to handle large size of training data we have to select only the related portion of examples that are effective for the analysisthis will reduce the training overhead as well as the analysis timethe committeebased approach discussed section 47 is one method of coping with this problemfor future research to reduce the computational overhead we will work on methods for sample selection as follows some pairs of chunks need not consider since there is no possibility of dependency between them from grammatical constraintssuch pairs of chunks are not necessary to use as negative examples in the training phasefor example a chunk within quotation marks may not modify a chunk that locates outside of the quotation marksof course we have to be careful in introducing such constraints and they should be learned from existing corpus integration with other simple models suppose that a computationally light and moderately accuracy learning model is obtainable we can use the system to output some redundant parsing results and use only those results for the positive and negative examplesthis is another way to reduce the size of training datawe can start with a small size of training data with a small size of feature setthen by analyzing heldout training data and selecting the features that affect the passing accuracythis kind of gradual increase of training data and feature set will be another method for reducing the computational overheadthis paper proposes japanese dependency analysis based on support vector machinesthrough the experiments with japanese bracketed 
corpus the proposed method achieves a high accuracy even with a small training data and outperforms existing methods based on maximum entropy models 3with alphaserver 8400 it took 15 days to train with 7958 sentences training data the result shows that japanese dependency analysis can be effectively performed by use of svms due to its good generalization and nonoverfitting characteristics
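To make the classification machinery above concrete, the following is a minimal sketch, assuming toy inputs, of the decision function with a d-th degree polynomial kernel and of the sigmoid used to map the signed distance from the separating hyperplane to a pseudo-probability of a dependency relation. The function names, the toy support vectors and the sigmoid slope beta are illustrative and are not taken from the paper.

```python
import math

def poly_kernel(x1, x2, d=3):
    """K(x1, x2) = (x1 . x2 + 1)^d: implicitly takes into account all
    combinations of features up to degree d without ever building the
    expanded feature space."""
    dot = sum(a * b for a, b in zip(x1, x2))
    return (dot + 1.0) ** d

def decision_value(x, support_vectors, alphas, labels, b, d=3):
    """f(x) = sum_i alpha_i y_i K(x_i, x) + b, summed over support vectors only."""
    return sum(a * y * poly_kernel(sv, x, d)
               for sv, a, y in zip(support_vectors, alphas, labels)) + b

def dependency_probability(x, support_vectors, alphas, labels, b, d=3, beta=1.0):
    """Squash the signed distance through a sigmoid so it can be plugged into the
    product-of-probabilities dependency model as p(chunk i depends on chunk j)."""
    return 1.0 / (1.0 + math.exp(-beta * decision_value(x, support_vectors, alphas, labels, b, d)))

if __name__ == "__main__":
    # Tiny toy problem: two support vectors over binary feature vectors that
    # stand in for indicators such as head word, particle, inflection, distance.
    svs    = [[1, 0, 1, 0], [0, 1, 0, 1]]
    alphas = [0.8, 0.8]
    labels = [+1, -1]      # +1: the chunk pair is a dependency, -1: it is not
    b      = 0.0
    x      = [1, 0, 1, 1]  # feature vector for a candidate modifier/head pair
    print(decision_value(x, svs, alphas, labels, b))
    print(dependency_probability(x, svs, alphas, labels, b))
```

In the full system the support vectors, their weights and the bias would come from solving the quadratic optimisation problem over the chunk-pair training examples; at parsing time only kernel evaluations against the support vectors are needed, which is why the cost depends on the number of support vectors rather than on the dimensionality of the expanded feature space.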
W00-1303
japanese dependency structure analysis based on support vector machinesthis paper presents a method of japanese dependency structure analysis based on support vector machines conventional parsing techniques based on machine learning framework such as decision trees and maximum entropy models have difficulty in selecting useful features as well as finding appropriate combination of selected featureson the other hand it is wellknown that svms achieve high generalization performance even with input data of very high dimensional feature spacefurthermore by introducing the kernel principle svms can carry out the training in highdimensional spaces with a smaller computational cost independent of their dimensionalitywe apply svms to japanese dependency structure identification problemexperimental results on kyoto university corpus show that our system achieves the accuracy of 8909 even with small training data we introduce a new type of feature called dynamic features which are created dynamically during the parsing process
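As a companion sketch (assumed, not the authors' code), the following shows how the positive and negative training examples described in the paper could be constructed: every rightward-pointing pair of chunks in a training sentence becomes one example, labelled +1 if the treebank marks the first chunk as depending on the second and -1 otherwise. The chunk representation, the feature names and the distance bins are simplified stand-ins for the paper's static feature set.

```python
def make_examples(sentence, heads):
    """Return (features, label) pairs for one sentence; heads[i] is the index
    of the chunk that chunk i depends on (None for the final chunk, which has
    no head because Japanese is head-final)."""
    examples = []
    for i in range(len(sentence)):
        for j in range(i + 1, len(sentence)):   # dependencies only point rightwards
            label = +1 if heads[i] == j else -1
            examples.append((extract_features(sentence, i, j), label))
    return examples

def extract_features(sentence, i, j):
    """Static features of the candidate pair (modifier i, head j): head words,
    chunk-final functional word or inflection form, and binned distance."""
    mod, head = sentence[i], sentence[j]
    return {
        "mod_head_word": mod["head_word"],
        "mod_type": mod["type"],           # rightmost particle or inflection form
        "head_head_word": head["head_word"],
        "head_type": head["type"],
        "distance": "1" if j - i == 1 else ("2-5" if j - i <= 5 else "6+"),
    }

if __name__ == "__main__":
    # The example sentence from the text: watashi-ha kono-hon-wo motteiru
    # josei-wo sagasiteiru ("I am looking for a lady who has this book"),
    # with "kono" folded into the hon-wo chunk for brevity.
    sentence = [
        {"head_word": "watashi",     "type": "ha"},
        {"head_word": "hon",         "type": "wo"},
        {"head_word": "motteiru",    "type": "ru"},
        {"head_word": "josei",       "type": "wo"},
        {"head_word": "sagasiteiru", "type": "ru"},
    ]
    heads = [4, 2, 3, 4, None]
    for feats, label in make_examples(sentence, heads):
        print(label, feats)
```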
enriching the knowledge sources used in a maximum entropy partofspeech tagger this paper presents results for a maximumentropybased part of speech tagger which achieves superior performance principally by enriching the information sources used for tagging in particular we get improved results by incorporating these features more extensive treatment of capitalization for unknown words features for the disambiguation of the tense forms of verbs features for disambiguating particles from prepositions and adverbs the best resulting accuracy for the tagger on the penn treebank is 9686 overall and 8691 on previously unseen words there are now numerous systems for automatic assignment of parts of speech employing many different machine learning methodsamong recent top performing methods are hidden markov models maximum entropy approaches and transformationbased learning an overview of these and other approaches can be found in manning and schiitze however all these methods use largely the same information sources for tagging and often almost the same features as well and as a consequence they also offer very similar levels of performancethis stands in contrast to the engcg tagger which achieves better performance by using lexical and contextual information sources and generalizations beyond those available to such statistical taggers as samuelsson and voutilainen demonstrate we thank dan klein and michael saunders for useful discussions and the anonymous reviewers for many helpful commentsthis paper explores the notion that automatically built tagger performance can be further improved by expanding the knowledge sources available to the taggerwe pay special attention to unknown words because the markedly lower accuracy on unknown word tagging means that this is an area where significant performance gains seem possiblewe adopt a maximum entropy approach because it allows the inclusion of diverse sources of information without causing fragmentation and without necessarily assuming independence between the predictorsa maximum entropy approach has been applied to partofspeech tagging before but the approach ability to incorporate nonlocal and nonhmmtaggertype evidence has not been fully exploredthis paper describes the models that we developed and the experiments we performed to evaluate themwe started with a maximum entropy based tagger that uses features very similar to the ones proposed in ratnaparkhi the tagger learns a loglinear conditional probability model from tagged text using a maximum entropy methodthe model assigns a probability for every tag t in the set t of possible tags given a word and its context h which is usually defined as the sequence of several words and tags preceding the wordthis model can be used for estimating the probability of a tag sequence tit given a sentence w1 w as usual tagging is the process of assigning the maximum likelihood tag sequence to a string of wordsthe idea of maximum entropy modeling is to choose the probability distribution p that has the highest entropy out of those distributions that satisfy a certain set of constraintsthe constraints restrict the model to behave in accordance with a set of statistics collected from the training datathe statistics are expressed as the expected values of appropriate functions defined on the contexts h and tags t in particular the constraints demand that the expectations of the features for the model match the empirical expectations of the features over the training datafor example if we want to constrain the model to 
tag make as a verb or noun with the same frequency as the empirical model induced by the training data we define the features fl 1 iff w make and t nn f2 1 iff w make and t vb some commonly used statistics for part of speech tagging are how often a certain word was tagged in a certain way how often two tags appeared in sequence or how often three tags appeared in sequencethese look a lot like the statistics a markov model would usehowever in the maximum entropy framework it is possible to easily define and incorporate much more complex statistics not restricted to ngram sequencesthe constraints in our model are that the expectations of these features according to the joint distribution p are equal to the expectations of the features in the empirical distribution e p ei5 having defined a set of constraints that our model should accord with we proceed to find the model satisfying the constraints that maximizes the conditional entropy of p the intuition is that such a model assumes nothing apart from that it should satisfy the given constraintsfollowing berger et al we approximate p the joint distribution of contexts and tags by the product of are the empirical distribution of histories h and the conditional distribution p p p p then for the example above our constraints would be the following for j e 12 this approximation is used to enable efficient computationthe expectation for a feature f is where h is the space of possible contexts h when predicting a part of speech tag t since the contexts contain sequences of words and tags and other information the space h is hugebut using this approximation we can instead sum just over the smaller space of observed contexts x in the training sample because the empirical prior i5 is zero for unseen contexts h the model that is a solution to this constrained optimization task is an exponential model with the parametric form where the denominator is a normalizing term the parameters xi correspond to weights for the features fjwe will not discuss in detail the characteristics of the model or the parameter estimation procedure used improved iterative scalingfor a more extensive discussion of maximum entropy methods see berger et al and jelinek however we note that our parameter estimation algorithm directly uses equation ratnaparkhi suggests use of an approximation summing over the training data which does not sum over possible tags however we believe this passage is in error such an estimate is ineffective in the iterative scaling algorithmfurther we note that expectations of the form appear in ratnaparkhi in our baseline model the context available when predicting the part of speech tag of a word wi in a sentence of words w1 wn with tags t1 t is wi11the features that define the constraints on the model are obtained by instantiation of feature templates as in ratnaparkhi special feature templates exist for rare words in the training data to increase the model prediction capacity for unknown wordsthe actual feature templates for this model are shown in the next tablethey are a subset of the features used in ratnaparkhi nofeature type template general feature templates can be instantiated by arbitrary contexts whereas rare feature templates are instantiated only by histories where the current word wi is rarerare words are defined to be words that appear less than a certain number of times in the training data in order to be able to throw out features that would give misleading statistics due to sparseness or noise in the data we use two different cutoff 
values for general and rare feature templates as seen in table 1 the features are conjunctions of a boolean function on the history h and a boolean function on the tag t features whose first conjuncts are true for more than the corresponding threshold number of histories in the training data are included in the modelthe feature templates in ratnaparkhi that were left out were the ones that look at the previous word the word two positions before the current and the word two positions after the currentthese features are of the same form as template 4 in table 1 but they look at words in different positionsour motivation for leaving these features out was the results from some experiments on successively adding feature templatesadding template 4 to a model that incorporated the general feature templates 1 to 3 only and the rare feature templates 58 significantly increased the accuracy on the development set from 960 to 9652the addition of a feature template that looked at the preceding word and the current tag to the resulting model slightly reduced the accuracythe model was trained and tested on the partofspeech tagged wsj section of the penn treebankthe data was divided into contiguous parts sections 020 were used for training sections 2122 as a development test set and sections 2324 as a final test setthe data set sizes are shown below together with numbers of unknown wordsthe testing procedure uses a beam search to find the tag sequence with maximal probability given a sentencein our experiments we used a beam of size 5increasing the beam size did not result in improved accuracythe preceding tags for the word at the beginning of the sentence are regarded as having the pseudotag nain this way the information that a word is the first word in a sentence is available to the taggerwe do not have a special endofsentence symbolwe used a tag dictionary for known words in testingthis was built from tags found in the training data but augmented so as to capture a few basic systematic tag ambiguities that are found in englishnamely for regular verbs the ed form can be either a vbd or a vbn and similarly the stem form can be either a vbp or vbhence for words that had occurred with only one of these tags in the training data the other was also included as possible for assignmentthe results on the test set for the baseline model are shown in table 3this table also shows the results reported in ratnaparkhi for conveniencethe accuracy figure for our model is higher overall but lower for unknown wordsthis may stem from the differences between the two models feature templates thresholds and approximations of the expected values for the features as discussed in the beginning of the section or may just reflect differences in the choice of training and test sets the differences are not great enough to justify any definite statement about the different use of feature templates or other particularities of the model estimationone conclusion that we can draw is that at present the additional word features used in ratnaparkhi looking at words more than one position away from the current do not appear to be helping the overall performance of the modelsa large number of words including many of the most common words can have more than one syntactic categorythis introduces a lot of ambiguities that the tagger has to resolvesome of the ambiguities are easier for taggers to resolve and others are hardersome of the most significant confusions that the baseline model made on the test set can be seen in table 5the row 
labels in table 5 signify the correct tags and the column labels signify the assigned tagsfor example the number 244 in the position is the number of words that were nns but were incorrectly assigned the jj categorythese particular confusions shown in the table account for a large percentage of the total error table 6 shows part of the baseline model confusion matrix for just unknown wordstable 4 shows the baseline model overall assignment accuracies for different parts of speechfor example the accuracy on nouns is greater than the accuracy on adjectivesthe accuracy on nnps is a surprisingly low 411tagger errors are of various typessome are the result of inconsistency in labeling in the training data which usually reflects a lack of linguistic clarity or determination of the correct part of speech in contextfor instance the status of various noun premodifiers is of this typesome such as errors between nnnnpnnpsnns largely reflect difficulties with unknown wordsbut other cases such as vbnnbd and vbnbpnn represent systematic tag ambiguity patterns in english for which the right answer is invariably clear in context and for which there are in general good structural contextual clues that one should be able to use to disambiguatefinally in another class of cases of which the most prominent is probably the rpinrb ambiguity of words like up out and on the linguistic distinctions while having a sound empirical basis are quite subtle and often require semantic intuitionsthere are not good syntactic cues for the correct tag within this classification the greatest hopes for tagging improvement appear to come from minimizing errors in the second and third classes of this classificationin the following sections we discuss how we include additional knowledge sources to help in the assignment of tags to forms of verbs capitalized unknown words particle words and in the overall accuracy of part of speech assignmentsthe accuracy of the baseline model is markedly lower for unknown words than for previously seen onesthis is also the case for all other taggers and reflects the importance of lexical information to taggers in the best accuracy figures published for corpusbased taggers known word accuracy is around 97 whereas unknown word accuracy is around 85in following experiments we examined ways of using additional features to improve the accuracy of tagging unknown wordsas previously discussed in mikheev it is possible to improve the accuracy on capitalized words that might be proper nouns or the first word in a sentence etcfor example the error on the proper noun category accounts for a significantly larger percent of the total error for unknown words than for known wordsin the baseline model of the unknown word error 413 is due to words being nnp and assigned to some other category or being of other category and assigned nnpthe percentage of the same type of error for known words is 162the incorporation of the following two feature schemas greatly improved nnp accuracy conversely empirically it was found that the prefix features for rare words were having a net negative effect on accuracywe do not at present have a good explanation for this phenomenonthe addition of the features and and the removal of the prefix features considerably improved the accuracy on unknown words and the overall accuracythe results on the test set after adding these features are shown below 9676 8676 unknown word error is reduced by 15 as compared to the baseline modelit is important to note that is composed of information 
already known to the tagger in some sensethis feature can be viewed as the conjunction of two features one of which is already in the baseline model and the other of which is the negation of a feature existing in the baseline model since for words at the beginning of a sentence the preceding tag is the pseudotag na and there is a feature looking at the preceding tageven though our maximum entropy model does not require independence among the predictors it provides for free only a simple combination of feature weights and additional interaction terms are needed to model nonadditive interactions between featurestwo of the most significant sources of classifier errors are the vbnvbd ambiguity and the vbpvb ambiguityas seen in table 5 vbnnbd confusions account for 69 of the total word errorthe vbpvb confusions are a smaller 37 of the errorsin many cases it is easy for people to determine the correct formfor example if there is a to infinitive or a modal directly preceding the vbvbp ambiguous word the form is certainly nonfinitebut often the modal can be several positions away from the current position still obvious to a human but out of sight for the baseline modelto help resolve a vbvbp ambiguity in such cases we can add a feature that looks at the preceding several words but not across another verb and activates if there is a to there a modal verb or a form of do let make or help rather than having a separate feature look at each preceding position we define one feature that looks at the chosen number of positions to the leftthis both increases the scope of the available history for the tagger and provides a better statistic because it avoids fragmentationwe added a similar feature for resolving vbdnbn confusionsit activates if there is a have or be auxiliary form in the preceding several positions the form of these two feature templates was motivated by the structural rules of english and not induced from the training data but it should be possible to look for quotpredictorsquot for certain parts of speech in the preceding words in the sentence by for example computing association strengthsthe addition of the two feature schemas helped reduce the vbnbp and vbdvbn confusionsbelow is the performance on the test set of the resulting model when features for disambiguating verb forms are added to the model of section 2the number of vbnbp confusions was reduced by 231 as compared to the baselinethe number of vbdvbn confusions was reduced by 1239683 8687as discussed in section 13 above the task of determining rbrpin tags for words like down out up is difficult and in particular examples there are often no good local syntactic indicatorsfor instance in we find the exact same sequence of parts of speech but is a particle use of on while is a prepositional useconsequently the accuracy on the rarer rp category is as low as 415 for the baseline model a kim took on the monster b kim sat on the monsterwe tried to improve the tagger capability to resolve these ambiguities through adding information on verbs preferences to take specific words as particles or adverbs or prepositionsthere are verbs that take particles more than others and particular words like out are much more likely to be used as a particle in the context of some verb than other words ambiguous between these tagswe added two different feature templates to capture this information consisting as usual of a predicate on the history h and a condition on the tag t the first predicate is true if the current word is often used as a particle and 
if there is a verb at most 3 positions to the left which is quotknownquot to have a good chance of taking the current word as a particlethe verbparticle pairs that are known by the system to be very common were collected through analysis of the training data in a preprocessing stagethe second feature template has the form the last verb is v and the current word is w and w has been tagged as a particle and the current tag is t the last verb is the pseudosymbol na if there is no verb in the previous three positionsthese features were some help in reducing the rbinrp confusionsthe accuracy on the rp category rose to 443although the overall confusions in this class were reduced some of the errors were increased for example the number of ins classified as rbs rose slightlythere seems to be still considerable room to improve these results though the attainable accuracy is limited by the accuracy with which these distinctions are marked in the penn treebank the next table shows the final performance on the test setfor ease of comparison the accuracies of all models on the test and development sets are shown in table 7we note that accuracy is lower on the development setthis presumably corresponds with charniak observation that section 23 of the penn treebank is easier than some otherstable 8 shows the different number of feature templates of each kind that have been instantiated for the different models as well as the total number of features each model hasit can be seen that the features which help disambiguate verb forms which look at capitalization and the first of the feature templates for particles are a very small number as compared to the features of the other kindsthe improvement in classification accuracy therefore comes at the price of adding very few parameters to the maximum entropy model and does not result in increased model complexityeven when the accuracy figures for corpusbased partofspeech taggers start to look extremely similar it is still possible to move performance levels upthe work presented in this paper explored just a few information sources in addition to the ones usually used for taggingwhile progress is slow because each new feature applies only to a limited range of cases nevertheless the improvement in accuracy as compared to previous results is noticeable particularly for the individual decisions on which we focusedthe potential of maximum entropy methods has not previously been fully exploited for the task of assignment of parts of speechwe incorporated into a maximum entropybased tagger more linguistically sophisticated features which are nonlocal and do not look just at particular positions in the textwe also added features that model the interactions of previously employed predictorsall of these changes led to modest increases in tagging accuracythis paper has thus presented some initial experiments in improving tagger accuracy through using additional information sourcesin the future we hope to explore automatically discovering information sources that can be profitably incorporated into maximum entropy partofspeech prediction
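The following is a minimal sketch, under assumed feature names, toy weights and an assumed window size, of the conditional loglinear model p(t|h) = exp(sum_j lambda_j f_j(h,t)) / Z(h) and of two of the richer predictors discussed above: a feature conjoining capitalisation with non-sentence-initial position for NNP, and the predicate that scans leftwards for to, a modal, or a form of do/let/make/help, without crossing another verb, before favouring the base form VB. Real weights would come from improved iterative scaling; the numbers below are placeholders.

```python
import math

MODALS = {"can", "could", "may", "might", "must", "shall", "should", "will", "would"}
TRIGGERS = {"to", "do", "does", "did", "let", "lets", "make", "makes", "made",
            "help", "helps", "helped"}
VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}

def f_cap_not_initial(history, tag):
    """1 iff the current word is capitalised, is not sentence-initial,
    and the proposed tag is NNP."""
    i, words = history["i"], history["words"]
    return int(i > 0 and words[i][0].isupper() and tag == "NNP")

def f_vb_licensor(history, tag, window=8):
    """1 iff a to / modal / do-let-make-help form occurs within `window` words
    to the left with no intervening verb, and the proposed tag is VB."""
    if tag != "VB":
        return 0
    i, words, tags = history["i"], history["words"], history["tags"]
    for j in range(i - 1, max(i - 1 - window, -1), -1):
        w = words[j].lower()
        if w in TRIGGERS or w in MODALS or tags[j] in ("TO", "MD"):
            return 1
        if tags[j] in VERB_TAGS:
            return 0                      # another verb blocks the dependency
    return 0

FEATURES = [f_cap_not_initial, f_vb_licensor]
LAMBDAS  = [1.2, 0.9]                     # toy weights, not estimated values
TAGSET   = ["NN", "NNP", "VB", "VBP"]

def p_tag(history):
    """Conditional distribution over the (toy) tag set for one position."""
    scores = {t: math.exp(sum(l * f(history, t) for l, f in zip(LAMBDAS, FEATURES)))
              for t in TAGSET}
    z = sum(scores.values())
    return {t: s / z for t, s in scores.items()}

if __name__ == "__main__":
    words = ["They", "promised", "to", "back", "the", "bill"]
    tags  = ["PRP", "VBD", "TO", None, None, None]        # left context already decided
    print(p_tag({"i": 3, "words": words, "tags": tags}))  # VB wins for "back"
```

Bundling the whole leftward scan into a single predicate, rather than one feature per position, is exactly the move argued for above: it widens the usable history without fragmenting the statistics.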
W00-1308
enriching the knowledge sources used in a maximum entropy partofspeech taggerthis paper presents results for a maximumentropybased part of speech tagger which achieves superior performance principally by enriching the information sources used for taggingin particular we get improved results by incorporating these features more extensive treatment of capitalization for unknown words features for the disambiguation of the tense forms of verbs features for disambiguating particles from prepositions and adverbsthe best resulting accuracy for the tagger on the penn treebank is 9686 overall and 8691 on previously unseen wordswe achieve 969 on seen words and 869 on unseen with a memm
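A minimal sketch (assumed, not the paper's implementation) of the decoding procedure described above: words are tagged left to right, the missing left context at the start of a sentence is represented by the pseudo-tag NA, and only the `beam` most probable partial tag sequences are kept at each position (the paper uses a beam of size 5). The callback name p_tag and its signature are my own; any conditional model returning a {tag: probability} dictionary would do.

```python
import math

def beam_search_tag(words, p_tag, beam=5):
    hyps = [(0.0, [])]                    # (log probability, tags assigned so far)
    for i in range(len(words)):
        candidates = []
        for logp, tags in hyps:
            prev_tags = (["NA", "NA"] + tags)[-2:]   # two preceding tags, NA-padded
            for tag, prob in p_tag(words, i, prev_tags).items():
                if prob > 0.0:
                    candidates.append((logp + math.log(prob), tags + [tag]))
        hyps = sorted(candidates, key=lambda h: h[0], reverse=True)[:beam]
    return max(hyps, key=lambda h: h[0])[1]

if __name__ == "__main__":
    # Toy model: dictionary lookup with a small ambiguity on "back".
    lexicon = {"to": {"TO": 1.0}, "back": {"VB": 0.6, "NN": 0.4},
               "the": {"DT": 1.0}, "bill": {"NN": 1.0}}
    model = lambda words, i, prev: lexicon.get(words[i].lower(), {"NN": 1.0})
    print(beam_search_tag(["to", "back", "the", "bill"], model))
```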
evaluation metrics for generation certain generation applications may profit from the use of stochastic methods in developing stochastic methods it is crucial to be able to quickly assess the relative merits of different approaches or models in this paper we present several types of intrinsic metrics which we have used for baseline quantitative assessment this quantitative assessment should then be augmented to a fuller evaluation that examines qualitative aspects to this end we describe an experiment that tests correlation between the quantitative metrics and human qualitative judgment the experiment confirms that intrinsic metrics cannot replace human evaluation but some correlate significantly with human judgments of quality and understandability and can be used for evaluation during development for many applications in natural language generation the range of linguistic expressions that must be generated is quite restricted and a grammar for a surface realization component can be fully specified by handmoreover in many cases it is very important not to deviate from very specific output in generation in which case handcrafted grammars give excellent controlin these cases evaluations of the generator that rely on human judgments or on human annotation of the test corpora are quite sufficienthowever in other nlg applications the variety of the output is much larger and the demands on the quality of the output are somewhat less stringenta typical example is nlg in the context of machine translationanother reason for relaxing the quality of the output may be that not enough time is available to develop a full grammar for a new target language in nlgin all these cases stochastic methods provide an alternative to handcrafted approaches to nlgto our knowledge the first to use stochastic techniques in an nlg realization module were langkilde and knight and as is the case for stochastic approaches in natural language understanding the research and development itself requires an effective intrinsic metric in order to be able to evaluate progressin this paper we discuss several evaluation metrics that we are using during the development of fergus fergus a realization module follows knight and langkilde seminal work in using an ngram language model but we augment it with a treebased stochastic model and a lexicalized syntactic grammarthe metrics are useful to us as relative quantitative assessments of different models we experiment with however we do not pretend that these metrics in themselves have any validityinstead we follow work done in dialog systems and attempt to find metrics which on the one hand can be computed easily but on the other hand correlate with empirically verified human judgments in qualitative categories such as readabilitythe structure of the paper is as followsin section 2 we briefly describe the architecture of fergus and some of the modulesin section 3 we present four metrics and some results obtained with these metricsin section 4 we discuss the for experimental validation of the metrics using human judgments and present a new metric based on the results of these experimentsin section 5 we discuss some of the many problematic issues related to the use of metrics and our metrics in particular and discuss ongoing workfergus is composed of three modules the tree chooser the unraveler and the linear precedence chooser the input to the system is a dependency tree as shown in figure 21 note that the nodes are unordered and are labeled only with lexemes not with any sort of 
syntactic annotations2 the tree chooser uses a stochastic tree model to choose syntactic properties for the nodes in the input structurethis step can be seen as analogous to quotsupertaggingquot except that now supertags must be found for words in a tree rather than for words in a linear sequencethe tree chooser makes the simplifying assumptions that the choice of a tree for a node depends only on its daughter nodes thus allowing for a topdown algorithmthe tree chooser draws on a tree model which is a analysis in terms of syntactic dependency for 1000000 words of the wall street journal 3 the supertagged tree which is output from the tree chooser still does not fully determine the surface string because there typically are different ways to attach a daughter node to her mother the unraveler therefore uses the xtag grammar of english to produce a lattice of all possible linearizations that are compatible with the supertagged treespecifically the daughter nodes are ordered with respect to the head at each level of the derivation treein cases where the xtag grammar allows a daughter node to be attached at more than one place in the mother supertag a disjunction of all these positions is assigned to the daughter nodea bottomup algorithm then constructs a lattice that encodes the strings represented by each level of the derivation treethe lattice at the root of the derivation tree is the result of the unravelerfinally the lp chooser chooses the most likely traversal of this lattice given a linear language the sentence generated by this tree is a predicative noun constructionthe xtag grammar analyzes these as being headed by the nounratherthanbythe copula and we follow the xtag analysishowever it would of course also be possible to use a grammar that allows for the copulaheaded analysis2in the system that we used in the experiments described in section 3 all words need to be present in the input representation fully inflectedfurthermore there is no indication of syntactic role at allthis is of course unrealistic for applications see section 5 for further remarks3this was constructed from the penn free i3ank using some heuristics since the penn tree bank does not contain full headdependent information as a result of the use of heuristics the tree model is not fully correct estimate there was no cost for phase the second model the lattice output from the unraveler encodes all possible word sequences permitted by the supertagged dependency structurewe rank these word sequences in the order of their likelihood by composing the lattice with a finitestate machine representing a trigram language modelthis model has been constructed from the 10000000 words wsj training corpuswe pick the best path through the lattice resulting from the composition using the viterbi algorithm and this top ranking word sequence is the output of the lp chooser and the generatorwe have used four different baseline quantitative metrics for evaluating our generatorthe first two metrics are based entirely on the surface stringthe next two metrics are based on a syntactic representation of the sentencewe employ two metrics that measure the accuracy of a generated stringthe first metric simple accuracy is the same string distance metric used for measuring speech recognition accuracythis metric has also been used to measure accuracy of mt systems it is based on string edit distance between the output of the generation system and the reference corpus stringsimple accuracy is the number of insertion deletion and substitutions errors 
between the reference strings in the test corpus and the strings produced by the generation modelan alignment algorithm using substitution insertion and deletion of tokens as operations attempts to match the generated string with the reference stringeach of these operations is assigned a cost value such that a substitution operation is cheaper than the combined cost of a deletion and an insertion operationthe alignment algorithm attempts to find the set of operations that minimizes the cost of aligning the generated string to the reference stringthe metric is summarized in equation r is the number of tokens in the target stringconsider the following examplethe target sentence is on top the generated sentence belowthe third line represents the operation needed to transform one sentence into another a period is used to indicate that no operation is needed there was no cost estimate for the there was estimate for phase the d d second phase second no cost i s note that the metric is symmetricwhen we tally the results we obtain the score shown in the first column of table 1note that if there are insertions and deletions the number of operations may be larger than the number of tokens involved for either one of the two stringsas a result the simple string accuracy metric may benegative the simple string accuracy metric penalizes a misplaced token twice as a deletion from its expected position and insertion at a different positionthis is particularly worrisome in our case since in our evaluation scenario the generated sentence is a permutation of the tokens in the reference stringwe therefore use a second metric generation string accuracy shown in equation which treats deletion of a token at one location in the string and the insertion of the same token at another location in the string as one single movement error this is in addition to the remaining insertions and deletions in our example sentence we see that the insertion and deletion of no can be collapsed into one movehowever the wrong positions of cost and of phase are not analyzed as two moves since one takes the place of the other and these two tokens still result in one deletion one substitution and one insertion5 thus the generation string accuracy depenalizes simple moves but still treats complex moves harshlyoverall the scores for the two metrics introduced so far are shown in the first two columns of table 1while the stringbased metrics are very easy to apply they have the disadvantage that they do not reflect the intuition that all token moves are not equally quotbadquotconsider the subphrase estimate for phase the second of the sentence in while this is 14adit seems better than an alternative such as estimate phase for the secondthe difference between the two strings is that the first scrambled string but not the second can be read off from the dependency tree for the sentence without violation of projectivity ie without then time simple string accuracy would have 6 errors instead or 5 but the generation string accuracy would have 3 errors instead of 4 speaking creating discontinuous constituentsit has long been observed that the dependency trees of a vast majority of sentences in the languages of the world are projective so that a violation of projectivity is presumably a more severe error than a word order variation that does not violate projectivitywe designed the treebasedaccuracy metrics in order to account for this effectinstead of comparing two strings directly we relate the two strings to a dependency tree of the reference 
stringfor each treelet of the reference dependency tree we construct strings of the head and its dependents in the order they appear in the reference string and in the order they appear in the result stringwe then calculate the number of substitutions deletions and insertions as for the simple string accuracy and the number of substitutions moves and remaining deletions and insertions as for the generation string metrics for all treelets that form the dependency treewe sum these scores and then use the values obtained in the formulas given above for the two stringbased metrics yielding the simple tree accuracy and generation tree accuracythe scores for our example sentence are shown in the last two columns of table 1here we summarize two experiments that we have performed that use different tree modelsthe simple accuracy generation accuracy simple tree accuracy and generation tree accuracy for the two experiments are tabulated in table 2the test corpus is a randomly chosen subset of 100 sentences from the section 20 of wsjthe dependency structures for the test sentences were obtained automatically from converting the penn treebank phrase structure trees in the same way as was done to create the training corpusthe average length of the test sentences is 167 words with a longest sentence being 24 words in lengthas can be seen the supertagbased model improves over the baseline lr model on all four baseline quantitative metricswe have presented four metrics which we can compute automaticallyin order to determine whether the metrics correlate with independent notions understandability or quality we have performed evaluation experiments with human subjectsin the webbased experiment we ask human subjects to read a short paragraph from the wsjwe present three or five variants of the last sentence of this paragraph on the same page and ask the subject to judge them along two dimensions the 35 variants of each of 6 base sentences are constructed by us to sample multiple values of each intrinsic metric as well as to contrast differences between the intrinsic measuresthus for one sentence quottumblequot two of the five variants have approximately identical values for each of the metrics but with the absolute values being high and medium respectivelyfor two other sentences we have contrasting intrinsic values for tree and string based measuresfor the _final sentence we have contrasts between the string measures with tree measures being approximately equalten subjects who were researchers from att carried out the experimenteach subject made a total of 24 judgmentsgiven the variance between subjects we first normalized the datawe subtracted the mean score for each subject from each observed score and then divided this by standard deviation of the scores for that subjectas expected our data showed strong correlations between normalized understanding and quality judgments for each sentence variant 094 p 005in contrast both of the tree metrics were significant 051 and are 048 for tree accuracy and generation tree accuracy for both p 005 045 and are 042 for tree accuracy and generation tree accuracy for both p 005a second aim of our qualitative evaluation was to test various models of the relationship between intrinsic variables and qualitative user judgmentswe proposed a number ofmodels7inwhich aratiou conibinations of intrinsic metrics were used to predict user judgments of understanding and qualitywe conducted a series of linear regressions with normalized judgments of understanding and quality as the 
dependent measures and as independent measures different combinations of one of our four metrics with sentence length and with the quotproblemquot variables that we used to define the string metrics one sentence variant was excluded from the data set on the grounds that the severely quotmangledquot sentence happened to turn out wellformed and with nearly the same meaning as the target sentencethe results are shown in table 3we first tested models using one of our metrics as a single intrinsic factor to explain the dependent variablewe then added the quotproblem variables and could boost the explanatory power while maintaining significancein table 3 we show only some combinations which show that the best results were obtained by combining the simple tree accuracy with the number of substitutions and the sentence lengthas we can see the number of substitutions has animportant effectonekplanatorypower while that of sentence length is much more modest furthermore the number of substitutions has more explanatory power than the number of moves the two regressions for understanding and writing show very similar resultsnormalized understanding was best modeled as normalized understanding 14728simple tree accuracy 01015substitutions 00228 length 02127this model was significant f 662 p 0005the model is plotted in figure 3 with the data point representing the removed outlier at the top of the diagramthis model is also intuitively plausiblethe simple tree metric was designed to measure the quality of a sentence and it has a positive coefficienta substitution represents a case in the string metrics in which not only a word is in the wrong place but the word that should have been in that place is somewhere elsetherefore substitutionsmore than moves or insertions or deletions represent grave cases of word order anomaliesthus it is plausible to penalize them separatelyoundfinally it is also plausible that longer sentences are more difficult to understand so that length has a negative coefficientwe now turn to model for qualitynormalized quality 12134simple tree accuracy 00839substitutions 00280 length 00689this model was also significant f 723 p 0005the model is plotted in figure 4 with the data point representing the removed outlier at the top of the diagramthe quality model is plausible for the same reasons that the understanding model isa further goal of these experiments was to obtain one or two metrics which can be automatically computed and which have been shown to significantly correlate with relevant human judgmentswe use as a starting point the two linear models for normalized understanding and quality given above but we make two changesfirst we observe that while it is plausible to model human judgments by penalizing long sentences this seems unmotivated in an accuracy metric we do not want to give a perfectly generated longer sentence a lower score than a perfectly generated shorter sentencewe therefore use models that just use the simple tree accuracy and the number of substitutions as independent variablessecond we note that once we have done so a perfect sentence gets a score of 08689 or 06639 we therefore divide by this score to assure that a perfect sentence gets a score of 1we obtain the following new metrics we reevaluated our system and the baseline model using the new metrics in order to verify whether the more motivated metrics we have developed still show that fergis improves performance over the baselinethis is indeed the case the results are summarized in table 4
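To make the string metrics above reproducible, here is a minimal sketch (assumed, not the authors' implementation) of simple string accuracy, 1 - (I + D + S) / R, and of generation string accuracy, in which the deletion of a token at one position paired with the insertion of the same token elsewhere is collapsed into a single movement error. The alignment costs are illustrative; the only requirement taken from the text is that a substitution is cheaper than an insertion plus a deletion.

```python
from collections import Counter

def align(reference, generated, sub_cost=1.0, ins_cost=0.75, del_cost=0.75):
    """Dynamic-programming alignment; returns operation counts plus the
    multisets of inserted and deleted tokens."""
    n, m = len(reference), len(generated)
    cost = [[0.0] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0], back[i][0] = i * del_cost, "del"
    for j in range(1, m + 1):
        cost[0][j], back[0][j] = j * ins_cost, "ins"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if reference[i - 1] == generated[j - 1]:
                choices = [(cost[i - 1][j - 1], "match")]
            else:
                choices = [(cost[i - 1][j - 1] + sub_cost, "sub")]
            choices += [(cost[i - 1][j] + del_cost, "del"),
                        (cost[i][j - 1] + ins_cost, "ins")]
            cost[i][j], back[i][j] = min(choices)
    ops, inserted, deleted = Counter(), Counter(), Counter()
    i, j = n, m
    while i > 0 or j > 0:
        op = back[i][j]
        ops[op] += 1
        if op in ("match", "sub"):
            i, j = i - 1, j - 1
        elif op == "del":
            deleted[reference[i - 1]] += 1
            i -= 1
        else:
            inserted[generated[j - 1]] += 1
            j -= 1
    return ops, inserted, deleted

def string_accuracies(reference, generated):
    ops, inserted, deleted = align(reference, generated)
    R = len(reference)
    I, D, S = ops["ins"], ops["del"], ops["sub"]
    simple = 1.0 - (I + D + S) / R
    # A move: the same token deleted somewhere and inserted somewhere else.
    moves = sum(min(inserted[t], deleted[t]) for t in set(inserted) & set(deleted))
    generation = 1.0 - ((I - moves) + (D - moves) + moves + S) / R
    return simple, generation

if __name__ == "__main__":
    ref = "there was no cost estimate for the second phase".split()
    gen = "there was estimate for phase the second no cost".split()
    print(string_accuracies(ref, gen))
```

On the example pair from the text this alignment gives five simple-accuracy errors (two deletions, two insertions, one substitution) and four generation-accuracy errors, matching the counts discussed above.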
W00-1401
evaluation metrics for generationcertain generation applications may profit from the use of stochastic methodsin developing stochastic methods it is crucial to be able to quickly assess the relative merits of different approaches or modelsin this paper we present several types of intrinsic metrics which we have used for baseline quantitative assessmentthis quantitative assessment should then be augmented to a fuller evaluation that examines qualitative aspectsto this end we describe an experiment that tests correlation between the quantitative metrics and human qualitative judgmentthe experiment confirms that intrinsic metrics cannot replace human evaluation but some correlate significantly with human judgments of quality and understandability and can be used for evaluation during developmentwe propose simple string accuracy as a baseline evaluation metric for natural language generation
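The two fitted models reported for this study can be restated as a small calculation. The coefficients are the ones quoted in the text; their signs are reconstructed from the accompanying discussion (a positive weight on simple tree accuracy, penalties for substitutions and for sentence length), so the exact form should be treated as an assumption, and the function names and example inputs are illustrative.

```python
def predicted_understanding(simple_tree_accuracy, substitutions, length):
    # Normalized understanding ~ 1.4728 * STA - 0.1015 * #subs - 0.0228 * length - 0.2127
    return 1.4728 * simple_tree_accuracy - 0.1015 * substitutions - 0.0228 * length - 0.2127

def predicted_quality(simple_tree_accuracy, substitutions, length):
    # Normalized quality ~ 1.2134 * STA - 0.0839 * #subs - 0.0280 * length - 0.0689
    return 1.2134 * simple_tree_accuracy - 0.0839 * substitutions - 0.0280 * length - 0.0689

if __name__ == "__main__":
    # A hypothetical 17-word sentence with simple tree accuracy 0.8 and one substitution.
    print(predicted_understanding(0.8, 1, 17))   # about 0.48
    print(predicted_quality(0.8, 1, 17))         # about 0.34
```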
robust applied morphological generation natural language generation sysit often advantageous to have a separate component that deals purely with morphological processing we present such a component a fast and robust morphological generator for english based on finitestate techniques that generates a word form given a specification of the lemma partofspeech and the type of inflection required we describe how this morphological generator is used in a prototype system for automatic simplification of english newspaper text and discuss practical morphological and orthographic issues we have encountered in generation of unrestricted text within this application most approaches to natural language generation ignore morphological variation during word choice postponing the computation of the actual word forms to be output to a final stage sometimes termed clinearisationthe advantage of this setup is that the syntacticlexical realisation component does not have to consider all possible word forms corresponding to each lemma in practice it is advantageous to have morphological generation as a postprocessing component that is separate from the rest of the nlg systema benefit is that since there are no competing claims on the representation framework from other types of linguistic and nonlinguistic knowledge the developer of the morphological generator is free to express morphological information in a perspicuous and elegant mannera further benefit is that localising morphological knowledge in a single component facilitates more systematic and reliable updatingfrom a software engineering perspective modularisation is likely to reduce system development costs and increase system reliabilityas an individual module the morphological generator will be more easily shareable between several different nlg applications and integrated into new onesfinally such a generator can be used on its own in other types of applications that do not contain a standard nlg syntacticlexical realisation component such as text simplification in this paper we describe a fast and robust generator for the inflectional morphology of english that generates a word form given a specification of a lemma a partofspeech label and an inflectional typethe morphological generator was built using data from several large corpora and machine readable dictionariesit does not contain an explicit lexicon or wordlist but instead comprises a set of morphological generalisations together with a list of exceptions for specific word formsthis organisation into generalisations and exceptions can save time and effort in system development since the addition of new vocabulary that has regular morphology does not require any changes to the generatorin addition the generalisationexception architecture can be used to specifyand also overridepreferences in cases where a lemma has more than one possible surface word form given a particular inflectional type and pos labelthe generator is packaged up as a unix filter making it easy to integrate into applicationsit is based on efficient finitestate techniques and is implemented using the widely available unix flex utility the generator is freely available to the nlg research community the paper is structured as followssection 2 describes the morphological generator and evaluates its accuracysection 3 outlines how the generator is put to use in a prototype system for automatic simplification of text and discusses a number of practical morphological and orthographic issues that we have encounteredsection 4 
relates our work to that of others and we conclude with directions for future workthe morphological generator covers the productive english affixes s for the plural form of nouns and the third person singular present tense of verbs and ed for the past tense en for the past participle and ing for the present participle forms of verbsthe generator is implemented in flexthe standard use of flex is to construct canners programs that recognise lexical patterns in text a flex descriptionthe highlevel description of a scanner that flex takes as inputconsists of a set of rules pairs of regular expression patterns and actions consisting of arbitrary c codeflex creates as output a c program which at runtime scans a text looking for occurrences of the regular expressionswhenever it finds one it executes the corresponding c codeflex is part of the berkeley unix distribution and as a result flex programs are very portablethe standard version of flex works with any iso8559 character set unicode support is also availablethe morphological generator expects to receive as input a sequence of tokens of the form lemma inflection_label where lemma specifies the lemma of the word form to be generated inflection specifies the type of inflection and label specifies the pos of the word formthe pos labels follow the same pattern as in the lancaster claws tag sets with noun tags starting with n etcthe symbols and _ are delimitersan example of a morphological generator rule is given in we do not curreutly cover comparative and superlative forms of adjectives or adverbs since t heir pro ind ivit is much less predictablelireturn the lefthand side of the rule is a regular expressionthe braces signify exactly one occurrence of an element of the character set abbreviated by the symbol a we assume here that a abbreviates the upper and lower case letters of the alphabetthe next symbol specifies that there must be a sequerire of one anniore characters each belonging to the character set abbreviated by adouble quotes indicate literal character symbolsthe righthand side of the rule gives the c code to be executed when an input string matches the regular expressionwhen the flex rule matches the input addressi8_1v for example the c function np_vord_form is called to determine the word form corresponding to the input the function deletes the inflection type and pos label specifications and the delimiters removes the last character of the lemma and finally attaches the characters es the word form generated is thus addressesof course not all plural noun inflections are correctly generated by the rule in since there are many irregularities and subregularitiesthese are dealt with using additional more specific rulesthe order in which these rules are applied to the input follows exactly the order in which the rules appear in the flex descriptionthis makes for a very simple and perspicuous way to express generalizations and exceptionsfor instance the rule in generates the plural form of many english nouns that originate from latin such as stimuluswith the input stimulusfs_n the output is stimuli rather than the incorrect stimuluses that would follow from the application of the more general rule in by ensuring that this rule precedes the rule in in the description nouns such as stimutus get the correct plural form inflectionsome other words in this class though do not have the latinate plural form in these cases the generator contains rules specifying the correct forms as exceptionsthe rules constitutingquotthe generator do not 
the rules constituting the generator do not necessarily have to be mutually exclusive so they can be used to capture the inflectional morphology of lemmata that have more than one possible inflected form given a specific pos label and inflectional type an example of this is the multiple inflections of the noun cactus which has not only the latinate plural form cacti but also the english plural form cactuses in addition inflections of some words differ according to dialect for example the past participle form of the verb to bear is borne in british english whereas in american english the preferred word form is born in cases where there is more than one possible inflection for a particular input lemma the order of the rules in the flex description determines the inflectional preference for example with the noun cactus the fact that the rule in precedes the one in causes the generator to output the word form cacti rather than cactuses even though both rules are applicable2 it is important to note though that the generator will always choose between multiple inflections there is no way for it to output all possible word forms for a particular input3 2 rule choice based on ordering in the description can in fact be overridden by arranging for the second or subsequent match to cover a larger part of the input so that the longest match heuristic applies but note that the rules in and will always match the same input span 3 flex does not allow the use of rules that have identical lefthand side regular expressions an important issue concerning morphological generation that is closely related to that of inflectional preference is consonant doubling this phenomenon occurring mainly in british english involves the doubling of a consonant at the end of a lemma when the lemma is inflected for example the past tenseparticiple inflection of the verb to travel is travelled in british english where the final consonant of the lemma is doubled before the suffix is attached in american english the past tenseparticiple inflection of the verb to travel is usually spelt traveled consonant doubling is triggered on the basis of both orthographic and phonological information when a word ends in one vowel followed by one consonant and the last part of the word is stressed in general the consonant is doubled however there are exceptions to this and in any case the input to the morphological generator does not contain information about stress consider the flex rule in where the symbols c and v abbreviate the character sets consisting of consonants and vowels respectively given the input submit+ed_v this rule correctly generates submitted however the verb to exhibit does not undergo consonant doubling so this rule will generate incorrectly the word form exhibitted in order to ensure that the correct inflection of a verb is generated the morphological generator uses a list of lemmata that allow consonant doubling extracted automatically from the british national corpus the list is checked before inflecting verbs given the fact that there are many more verbs that do not allow consonant doubling listing the verbs that do is the most economic solution an added benefit is that if a lemma does allow consonant doubling but is not included in the list then the word form generated will still be correct with respect to american english
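A rough Python sketch of the consonant-doubling strategy described above: a list of doubling lemmata (extracted from the BNC in the real system) is consulted before the verbal suffix is attached. The set contents and function name here are placeholders.

```python
# in the real generator this set is extracted automatically from the BNC;
# the entries below are placeholders
DOUBLING_LEMMAS = {"travel", "submit", "refer"}

def past_form(lemma: str) -> str:
    """Attach -ed, doubling the final consonant only for listed lemmas."""
    if lemma.endswith("e"):
        return lemma + "d"
    if lemma in DOUBLING_LEMMAS:
        return lemma + lemma[-1] + "ed"
    return lemma + "ed"  # unlisted verbs: no doubling, a valid American English spelling

print(past_form("submit"))   # submitted
print(past_form("exhibit"))  # exhibited
print(past_form("travel"))   # travelled (British English preference)
```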
the morphological generator comprises a set of approximately 1650 rules expressing morphological regularities subregularities and exceptions for specific words also around 350 lines of c and flex code for program initialisation and defining the functions called by the rule actions the rule set is in fact obtained by automatically reversing a morphological analyser this is a much enhanced version of the analyser originally developed for the gate system minnen and carroll describe in detail how the reversal is performed the generator executable occupies around 700kb on disc the analyser and therefore the generator includes exception lists derived from wordnet in addition we have incorporated data acquired semiautomatically from the following corpora and machine readable dictionaries the lob corpus the penn treebank the susanne corpus the spoken english corpus the oxford psycholinguistic database and the computerusable version of the oxford advanced learner dictionary of current english minnen and carroll report an evaluation of the accuracy of the morphological generator with respect to the celex lexical database this threw up a small number of errors which we have now fixed we have rerun the celexbased evaluation against the past tense past and present participle and third person singular present tense inflections of verbs and all plural nouns after excluding multiword entries we were left with 38882 out of the original 160595 word forms for each of these word forms we fed the corresponding input to the generator we compared the generator output with the original celex word forms producing a list of mistakes apparently made by the generator which we then checked by hand in a number of cases either the celex lemmatisation was wrong in that it disagreed with the relevant entry in the cambridge international dictionary of english or the output of the generator was correct even though it was not identical to the word form given in celex we did not count these cases as mistakes we also found that celex is inconsistent with respect to consonant doubling for example it includes the word form pettifogged a rare word meaning to be overly concerned with small unimportant details whereas it omits many consonant doubled words that are much more common for example the bnc contains around 850 occurrences of the word form programming tagged as a verb but this form is not present in celex the form programing does occur in celex but does not in the bnc we did not count these cases as mistakes either of the remaining 359 mistakes 346 concerned word forms that do not occur at all in the 100m words of the bnc we categorised these as irrelevant for practical applications and so discarded them thus the type accuracy of the morphological analyser with respect to the celex lexical database is 99.97 the token accuracy is 99.98 with respect to the 14825661 relevant tokens in the bnc we tested the processing speed of the generator on a sun ultra 10 workstation in order to discount program startup times we used input files of 400k and 800k tokens and recorded the difference in timings we took the averages of 10 runs despite its wide coverage the morphological generator is very fast it generates at a rate of more than 80000 words per second it is likely that a modest increase in speed could be obtained by specifying optimisation levels in flex and gcc that are higher than the defaults
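The type and token accuracy figures above could be computed along the following lines, assuming lists of (generated form, CELEX gold form) pairs and BNC frequency counts; the data and function name are hypothetical.

```python
def type_and_token_accuracy(pairs, bnc_counts):
    """pairs: list of (generated, gold); bnc_counts: gold form -> BNC frequency."""
    correct = [(gen, gold) for gen, gold in pairs if gen == gold]
    type_acc = len(correct) / len(pairs)
    total_tokens = sum(bnc_counts.get(gold, 0) for _, gold in pairs)
    correct_tokens = sum(bnc_counts.get(gold, 0) for _, gold in correct)
    token_acc = correct_tokens / total_tokens if total_tokens else 1.0
    return type_acc, token_acc

# toy illustration: a mistake on a form that never occurs in the corpus
# lowers type accuracy but leaves token accuracy untouched
pairs = [("addresses", "addresses"), ("stimuli", "stimuli"), ("oxes", "oxen")]
counts = {"addresses": 900, "stimuli": 120, "oxen": 0}
print(type_and_token_accuracy(pairs, counts))  # (0.666..., 1.0)
```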
the morphological generator forms part of a prototype system for automatic simplification of english newspaper text the goal is to help people with aphasia to better understand english newspaper text the system comprises two main components an analysis module which downloads the source newspaper texts from the web and computes syntactic analyses for the sentences in them and a simplification module which operates on the output of the analyser to improve the comprehensibility of the text syntactic simplification operates on the syntax trees produced in the analysis phase for example converting sentences in the passive voice to active and splitting long sentences at appropriate points a subsequent lexical simplification stage replaces difficult or rare content words with simpler synonyms the analysis component contains a morphological analyser and it is the base forms of words that are passed through the system this eases the task of the lexical simplification module the final processing stage in the system is therefore morphological generation using the generator described in the previous section we are currently testing the components of the simplification system on a corpus of 1000 news stories downloaded from the sunderland echo in our testing we have found that newly encountered vocabulary only rarely necessitates any modification to the generator source if the word has regular morphology then it is handled by the rules expressing generalisations also a sideeffect of the fact that the generator is derived from the analyser is that the two modules have exactly the same coverage and are guaranteed to stay in step with each other this is important in the context of an applied system the accuracy of the generator is quite sufficient for this application our experience is that typographical mistakes in the original newspaper text are much more common than errors in morphological processing some orthographic phenomena span more than one word these cannot be dealt with in morphological generation since this works strictly a word at a time we have therefore implemented a final orthographic postprocessing stage consider the sentence brian cookman is the attraction at the king 's arms on saturday night and he will be back on sunday night for a acoustic jam session this is incorrect orthographically because the determiner in the final noun phrase should be an as in an acoustic jam session in fact an must be used if the following word starts with a vowel sound and a otherwise we achieve this again using a filter implemented in flex with a set of general rules keying off the next word first letter together with a list of exceptions collected using the pronunciation information in the oaldce supplemented by further cases found in the bnc in the case of abbreviations or acronyms we key off the pronunciation of the first letter considered in isolation similarly the orthography of the genitive marker cannot be determined without taking context into account since it depends on the identity of the last letter of the preceding word in the sentence above we need only eliminate the space before the genitive marking obtaining king's arms but following the newspaper style guide if the preceding word ends in s or z we have to reduce the marker as in for example stacey edwards' skilful fingers the generation of contractions presents more of a problem for example changing he will to he'll would make the sentence more idiomatic but there are cases where this type of contraction is not permissible since these cases seem to be dependent on syntactic context and we have syntactic structure from the analysis phase we are in a good position to make the correct choice however we have not yet tackled this issue and currently take the conservative approach of not contracting in any circumstances
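As a rough Python sketch of the orthographic postprocessing just described (the real filter is again written in flex), the a/an choice keys off the next word's first letter plus an exception list, and a detached genitive marker is attached and reduced after s or z. The exception lists and regular expressions below are illustrative, not the system's actual data.

```python
import re

AN_EXCEPTIONS = {"hour", "honest", "mp"}          # vowel sound despite consonant letter
A_EXCEPTIONS = {"use", "usual", "one", "unique"}  # consonant sound despite vowel letter

def choose_article(next_word: str) -> str:
    w = next_word.lower()
    if w in A_EXCEPTIONS:
        return "a"
    if w in AN_EXCEPTIONS or w[0] in "aeiou":
        return "an"
    return "a"

def postprocess(text: str) -> str:
    # fix the indefinite article against the following word
    text = re.sub(r"\b[Aa]n?(?= (\w+))",
                  lambda m: choose_article(m.group(1)), text)
    # attach a detached genitive marker, reducing it after s or z
    text = re.sub(r"(\w+) 's",
                  lambda m: m.group(1) + ("'" if m.group(1)[-1] in "sz" else "'s"),
                  text)
    return text

print(postprocess("a acoustic jam session at the king 's arms"))
# -> an acoustic jam session at the king's arms
```

Keying off the written first letter plus exceptions mirrors the description above; preserving the case of a capitalised article is one of the details this sketch ignores.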
we are following a wellestablished line of research into the use of finitestate techniques for lexical and shallow syntactic nlp tasks lexical transducers have been used extensively for morphological analysis and in theory a finitestate transducer implementing an analyser can be reversed to produce a generator however we are not aware of published research on finitestate morphological generators establishing whether in practice they perform with similar efficiency to morphological analysers quantifying their typetoken accuracy with respect to an independent extensive gold standard and indicating how easily they can be integrated into larger systems furthermore although a number of finitestate compilation toolkits are publicly available or can be licensed for research use associated largescale linguistic descriptions for example english morphological lexicons are usually commercial products and are therefore not freely available to the nlg research community the work reported here is also related to work on lexicon representation and morphological processing using the datr representation language however we adopt less of a theoretical and more of an engineering perspective focusing on morphological generation in the context of widecoverage practical nlg applications there are also parallels to research in the twolevel morphology framework although in contrast to our approach this framework has required exhaustive lexica and handcrafted morphological grammars in addition to orthographic descriptions the sri core language engine uses a set of declarative segmentation rules which are similar in content to our rules and are used in reverse to generate word forms the system however is not freely available again requires an exhaustive stem lexicon and the rules are not compiled into an efficiently executable finitestate machine but are only interpreted the work that is perhaps the most similar in spirit to ours is that of the ladl group in their compilation of large lexicons of inflected word forms into finitestate transducers the resulting analysers run at a comparable speed to our generator and the executables are of similar size however a full form lexicon is unwieldy and inconvenient to update and a system derived from it cannot cope gracefully with unknown words because it does not contain generalisations about regular or subregular morphological behaviour the morphological components of current widelyused nlg systems tend to consist of hardwired procedural code that is tightly bound to the workings of the rest of the system for instance the nigel grammar contains lisp code that classifies verb noun and adjective endings and these classes are picked up by further code inside the kpml system itself which performs inflectional generation by stripping off variable length trailing strings and concatenating suffixes morphologically subregular forms must be entered explicitly in the lexicon as well as irregular ones the situation is similar in fufsurge morphological generation in the surge grammar being performed by procedures which inspect lemma endings strip off trailing strings when appropriate and concatenate suffixes in current nlg systems orthographic information is distributed throughout the lexicon and is applied via the grammar or by hardwired code this makes orthographic processing difficult to decouple from the rest of the system compromising maintainability and ease of reuse for example in surge markers for aan usage can be added to lexical entries for nouns to indicate that their initial sound is consonant or vowellike and is contrary to what their orthography would suggest the appropriate indefinite article is inserted by procedures associated with the
grammar in drafter2 an aan feature can be associated with any lexical entry and its value is propagated up to the np level through leftmost rule daughters in the grammar both of these systems interleave orthographic processing with other processes in realisation in addition neither has a mechanism for stating exceptions for whole subclasses of words for example those starting us followed by a vowel such as use and usual which must be preceded by a rather than an kpml appears not to perform this type of processing at all we are not aware of any literature describing nlg systems that generate contractions however interesting linguistic research in this direction is reported by pullum and zwicky we have described a generator for english inflectional morphology the main features of the generator are wide coverage and high accuracy it incorporates data from several large corpora and machine readable dictionaries an evaluation has shown the error rate to be very low robustness the generator does not contain an explicit lexicon or wordlist but instead comprises a set of morphological generalisations together with a list of exceptions for specific words unknown words are very often handled correctly by the generalisations maintainability and ease of use the organisation into generalisations and exceptions can save development time since addition of new vocabulary that has regular morphology does not require any changes to be made the generator is packaged up as a unix filter making it easy to integrate into applications speed and portability the generator is based on efficient finitestate techniques and implemented using the widely available unix flex utility freely available the morphological generator and the orthographic postprocessor are freely available to the nlg research community in future work we intend to investigate the use of phonological information in machine readable dictionaries for a more principled solution to the consonant doubling problem we also plan to further increase the flexibility of the generator by including an option that allows the user to choose whether it has a preference for generating british or american english this work was funded by uk epsrc project grl53175 pset practical simplification of english text and by an epsrc advanced fellowship to the second author the original version of the morphological analyser was kindly provided to us by the university of sheffield gate project chris brew dale gerdemann adam kilgarriff and ehud reiter have suggested improvements to the analyser generator thanks also to the anonymous reviewers for insightful comments
W00-1427
robust applied morphological generationin practical natural language generation systems it is often advantageous to have a separate component that deals purely with morphological processingwe present such a component a fast and robust morphological generator for english based on finitestate techniques that generates a word form given a specification of the lemma partofspeech and the type of inflection requiredwe describe how this morphological generator is used in a prototype system for automatic simplification of english newspaper text and discuss practical morphological and orthographic issues we have encountered in generation of unrestricted text within this application
limitations of cotraining for natural language learning from large datasets cotraining is a weakly supervised learning paradigm in which the redundancy of the learning task is captured by training two classifiers using separate views of the same data this enables bootstrapping from a small set of labeled training data via a large set of unlabeled data this study examines the learning behavior of cotraining on natural language processing tasks that typically require large numbers of training instances to achieve usable performance levels using base noun phrase bracketing as a case study we find that cotraining reduces by 36 the difference in error between classifiers and supervised clastrained on a labeled version all available data however degradation in the quality of the bootstrapped data arises as an obstacle to further improvement to address this we propose a moderately supervised variant of cotraining in which a human corrects the mistakes made during automatic labeling our analysis suggests that corrected cotraining and similar moderately supervised methods may help cotraining scale to large natural language learning tasks cotraining is a weakly supervised paradigm for learning a classification task from a small set of labeled data and a large set of unlabeled data using separate but redundant views of the datawhile previous research has investigated the theoretical basis of cotraining this study is motivated by practical concernswe seek to apply the cotraining paradigm to problems in natural language learning with the goal of reducing the amount of humanannotated data required for developing natural language processing componentsin particular many natural language learning tasks contrast sharply with the classification tasks previously studied in conjunction with cotraining in that they require hundreds of thousands rather than hundreds of training examplesconsequently our focus on natural language learning raises the question of how cotraining scales when a large number of training examples are required to achieve usable performance levelsthis case study of cotraining for natural language learning addresses the scalability question using the task of base noun phrase identificationfor this task cotraining reduces by 36 the difference in error between classifiers trained on 500 labeled examples and classifiers trained on 211000 labeled exampleswhile this result is satisfying further investigation reveals that deterioration in the quality of the labeled data accumulated by cotraining hinders further improvementwe address this problem with a moderately supervised variant corrected cotraining that employs a human annotator to correct the errors made during bootstrappingcorrected cotraining proves to be quite successful bridging the remaining gap in accuracyanalysis of corrected cotraining illuminates an interesting tension within weakly supervised learning between the need to bootstrap accurate labeled data and the need to cover the desired taskwe evaluate one approach using corrected cotraining to resolving this tension and as another approach we suggest combining weakly supervised learning with active learning the next section of this paper introduces issues and concerns surrounding cotrainingsections 3 and 4 describe the base noun phrase bracketing task and the application of cotraining to the task respectivelysection 5 contains an evaluation of cotraining for base noun identificationthe cotraining paradigm applies when accurate classification hypotheses for a task can be learned 
from either of two sets of features of the data each called a viewfor example blum and mitchell describe a web page classification task in which the goal is to determine whether or not a given web page is a university faculty member home pagefor this task they suggest the following two views the words contained in the text of the page for example research interests or publications the words contained in links pointing to the page for example my advisorthe intuition behind blum and mitchell cotraining algorithm ct is that two views of the data can be used to train two classifiers that can help each othereach classifier is trained using one view of the labeled datathen it predicts labels for instances of the unlabeled databy selecting its most confident predictions and adding the corresponding instances with their predicted labels to the labeled data each classifier can add to the other available training datacontinuing the above example web pages pointed to by my advisor links can be used to train the page classifier while web pages about research interests and publications can be used to train the link classifierinitial studies of cotraining focused on the applicability of the cotraining paradigm and in particular on clarifying the assumptions needed to ensure the effectiveness of the ct algorithmblum and mitchell presented a pacstyle analysis of cotraining introducing the concept of compatibility between the target function and the unlabeled data that is the target function should assign the same label to an instance regardless of which view it seesthey made two additional important points first that each view of the data should itself be sufficient for learning the classification task and repeat until done train classifier h1 on view v1 of l train classifier h2 on view v2 of l allow h1 to posit labels for examples in you allow h2 to posit labels for examples in you add hi most confidently labeled examples to l add h2 most confidently labeled examples to l second that the views should be conditionally independent of each other in order to be usefulthey proved that under these assumptions a task that is learnable with random classification noise is learnable with cotrainingin experiments with the ct algorithm they noticed that it is important to preserve the distribution of class labels in the growing body of labeled datafinally they demonstrated the effectiveness of cotraining on a web page classification task similar to that described abovecollins and singer were concerned that the ct algorithm does not strongly enforce the requirement that hypothesis functions should be compatible with the unlabeled datathey introduced an algorithm coboost that directly minimizes mismatch between views of the unlabeled data using a combination of ideas from cotraining and adaboost nigam and ghani performed the most thorough empirical investigation of the desideratum of conditional independence of views underlying cotrainingtheir experiments suggested that view independence does indeed affect the performance of cotraining but that ct when compared to other algorithms that use labeled and unlabeled data such as them may still prove effective even when an explicit feature split is unknown provided that there is enough implicit redundancy in the datain contrast to previous investigations of the theoretical basis of cotraining this study is motivated by practical concerns about the application of weakly supervised learning to problems in natural language learning many nll tasks contrast in two ways with the 
web page classification task studied in previous work on cotrainingfirst the web page task factors naturally into page and link views while other nll tasks may not have such natural viewssecond many nll problems require hundreds of thousands of training examples while the web page task can be learned using hundreds of examplesconsequently our focus on natural language learning introduces new questions about the scalability of the cotraining paradigmfirst can cotraining be applied to learning problems without natural factorizations into viewsnigam and ghani study suggests a qualified affirmative answer to this question for a text classification task designed to contain redundant information however it is desirable to continue investigation of the issue for largescale nll taskssecond how does cotraining scale when a large number of training examples are required to achieve usable performance levelsit is plausible to expect that the ct algorithm will not scale well due to mistakes made by the view classifiersto elaborate the view classifiers may occasionally add incorrectly labeled instances to the labeled dataif many iterations of ct are required for learning the task degradation in the quality of the labeled data may become a problem in turn affecting the quality of subsequent view classifiersfor largescale learning tasks the effectiveness of cotraining may be dulled over timefinally we note that the accuracy of automatically accumulated training data is an important issue for many bootstrapping learning methods riloff and jones suggesting that the rewards of understanding and dealing with this issue may be significantbase noun phrases are traditionally defined as nonrecursive noun phrases ienps that do not contain npsbase noun phrase identification is the task of locating the base nps in a sentence from the words of the sentence and their partofspeech tagsbase noun phrase identification is a crucial component of systems that employ partial syntactic analysis including information retrieval and question answering systemsmany corpusbased methods have been applied to the task including statistical methods transformationbased learning rote sequence learning memorybased sequence learning and memorybased learning among othersour case study employs a wellknown bracket representation introduced by ramshaw and marcus wherein each word of a sentence is tagged with one of the following tags i meaning the word is within a bracket 0 meaning the word is not within a bracket or b meaning the word is within a bracket but not the same bracket as the preceding word ie the word begins a new bracketthus the bracketing task is transformed into a word tagging taskfigure 2b repeats the example sentence showing the job tag representationtraining examples for job tagging have the form where wo is the focus word and to is its syntactic category tagwords to the left and right of the focus word are included for contextfinally is the job tag of wofigure 2c illustrates a few instances taken from the example sentencewe chose naive bayes classifiers for the study first because they are convenient to use and indeed have been used in previous cotraining studies and second because they are particularly wellsuited to cotraining by virtue of calculating probabilities for each predictionfor an instance x the classifier determines the maximum a posteriori label as followsin experiments with these naive bayes job classifiers we found that very little accuracy was sacrificed when the word information was ignored by the 
classifier2 we therefore substitute the simpler term p for p abovethe probabilities p are estimated from the training data by determining the fraction of the instances labeled 1 that have syntactic here n denotes the frequency of event x in the training datathis estimate smoothes the training probability by including virtual samples for each partofspeech tag to apply cotraining the base np classification task must first be factored into viewsfor the job instances a view corresponds to a subset of the set of indices k k the most natural views are perhaps k of and 0 k indicating that one classifier looks at the focus tag and the tags to its left while the other looks at the focus tag and the tags to its rightnote that these views certainly violate the desideratum of conditional independence between view features since both include the focus tagother views such as leftright views omitting the focus tag for example may be more theoretically attractive but we found that the leftright views including focus proved most effectual in practicethe job tagging task requires some minor modifications to the ct algorithmfirst it is impractical for the cotraining classifiers to predict labels for each instance from the enormous set of unlabeled datainstead a smaller data pool is maintained fed with randomly selected instances from the larger set3 second the job tagging task is a ternary rather than a binary classificationfurthermore the distribution of labels in the training data is more unbalanced than the distribution of positive and negative examples in the web page task namely 539 of examples are labeled i 440 0 and 21 bsince it is impractical to add say 27 i 22 0 and 1 b to the labeled data at each step of cotraining instead instances are selected by first choosing a label 1 at random according to the label distribution then adding the instance 3this standard modification was introduced by blum and mitchell in an effort to cover the underlying distribution of unlabeled instances however nigam and ghani found it to be unnecessary in their experiments train classifier h1 on view v1 of l train classifier h2 on view v2 of l transfer randomly selected examples from you to you until you for he h2 allow h to posit labels for all examples in you repeat g times select label 1 at random according to dl most confidently labeled 1 to the labeled datathis procedure preserves the distribution of labels in the labeled data as instances are labeled and addedthe modified ct algorithm is presented in figure 3we evaluate cotraining for job classification using a standard data set assembled by ramshaw and marcus from sections 15 18 and 20 of the penn treebank wall street journal corpus training instances consist of partofspeech tag and job label for a focus word along with contexts of two partofspeech tags to the left and right of the focusour goal accuracy of 9517 is the performance of a supervised job classifier trained on the correctly labeled version of the full training datafor initial labeled data the first l instances of the training data are given their correct labelswe determined the best setting for the parameters of the ct algorithm by testing multiple values l varied from 10 to 5000 then you from 200 to 5000 then g from 1 to 50the best setting in terms of effectiveness of cotraining in improving the accuracy of the classifier was l 500you 1000g 5these values are used throughout the evaluation unless noted otherwisecotrainingwe observe the progress of the cotraining process by determining at each iteration 
the accuracy of the cotraining classifiers over the test datawe also record the accuracy of the growing body of labeled datathese measurements can be plotted to depict a learning curve indicating the progress of cotraining as the classifier accuracy changesfigure 4 presents two representative curves one for the left context classifier and one for the labeled dataas shown cotraining results in improvement in test accuracy over the initial classifier after about 160 iterations reducing by 36 the difference in error between the cotraining classifier and the goal classifierunfortunately the improvement in test accuracy does not continue as cotraining progresses rather performance peaks then declines somewhat before stabilizing at around 925we hypothesize that this decline is due to degradation in the quality of the labeled datathis hypothesis is supported by figure 4b indicating that labeled data accuracy decreases steadily before stabilizing at around 94note that the accuracy of the classifier stabilizes at a point a bit lower than the stable accuracy of the labeled data as would be expected if labeled data quality hinders further improvement from cotrainingfurthermore cotraining for base np identification seems to be quite sensitive to the ct parameter settingsfor example with l 200 the cotraining classifiers appear not to be accurate enough to sustain cotraining while with l 1000 they are too accurate in the sense that cotraining contributes very little accuracy before the labeled data deteriorates in the next sections we address the problems of data degradation and parameter sensitivity for cotrainingcorrected cotrainingas shown above the degradation of the labeled data introduces a scalability problem for cotraining because successive view classifiers use successively poorer quality data for traininga straightforward solution to this problem is to have a human anized as cotraining achieves 9503 accuracy just 014 away from the goal after 600 iterations additionally the human annotator reviews 6000 examples and corrects only 358thus by limiting the number of unlabeled examples under consideration with the hope of forcing broader task coverage we achieve essentially the goal accuracy in fewer iterations and with fewer correctionssurprisingly the error rate of the view classifiers per iteration remains essentially unchanged despite the reduction of the pool of unlabeled examples to choose fromwe believe the preceding experiment illuminates a fundamental tension in weakly supervised learning between automatically obtaining reliable training data and adequately covering the learning task this tension suggests that combining weakly supervised learning methods with active learning methods might be a fruitful endeavoron one hand the goal of weakly supervised learning is to bootstrap a classifier from small amounts of labeled data and large amounts of unlabeled data often by automatically labeling some of the unlabeled dataon the other hand the goal of active learning is to process training examples in the order in which they are most useful or informative to the classifier usefulness is commonly quantified as the learner uncertainty about the class of an example this neatly dovetails with the criterion for selecting instances to label in ct we envision a learner that would alternate between selecting its most certain unlabeled examples to label and present to the human for acknowledgment and selecting its most uncertain examples to present to the human for annotationideally efficient automatic 
bootstrapping would be complemented by good coverage of the taskwe leave evaluation of this possibility to future workthis case study explored issues involved with applying cotraining to the natural language processing task of identifying base noun phrases particularly the scalability of cotraining for largescale problemsour experiments indicate that cotraining is an effective method for learning bracketers from small amounts of labeled datanaturally the resulting classifier does not perform as well as a fully supervised classifier trained on hundreds of times as much labeled data but if the difference in accuracy is less important than the effort required to produce the labeled training data cotraining is especially attractivefurthermore our experiments support the hypothesis that labeled data quality is a crucial issue for cotrainingour moderately supervised variant corrected cotraining maintains labeled data quality without unduly increasing the burden on the human annotatorcorrected cotraining bridges the gap in accuracy between weak initial classifiers and fully supervised classifiersfinally as an approach to resolving the tension in weakly supervised learning between accumulating accurate training data and covering the desired task we suggest combining weakly supervised methods such as cotraining or selftraining with active learningthanks to three anonymous reviewers for their comments and suggestionsthis work was supported in part by darpa tides contract n6600100c8009 and nsf grants 9454149 0081334 and 0074896
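To make the modified co-training algorithm described earlier in this study concrete, here is a compact Python sketch of that loop: a labeled seed set, a small pool replenished from the large unlabeled set, and each view classifier adding its most confident predictions while the label distribution of the labeled data is preserved. The classifier interface, parameter defaults and helper names are assumptions for illustration, not the authors' implementation.

```python
import random
from collections import Counter

def cotrain(labeled, unlabeled, make_view_clf, views,
            pool_size=1000, growth=5, iterations=100):
    """labeled: list of (instance, label); unlabeled: list of instances;
    make_view_clf(view) must return an object with fit(X, y) and
    predict_proba(x) -> {label: confidence} (an assumed interface)."""
    pool = []
    label_dist = Counter(y for _, y in labeled)
    labels, weights = zip(*label_dist.items())
    for _ in range(iterations):
        # train one classifier per view on the current labeled data
        clfs = []
        for view in views:
            clf = make_view_clf(view)
            clf.fit([view(x) for x, _ in labeled], [y for _, y in labeled])
            clfs.append(clf)
        # replenish the data pool from the large unlabeled set
        while len(pool) < pool_size and unlabeled:
            pool.append(unlabeled.pop(random.randrange(len(unlabeled))))
        # each classifier labels pool instances, preserving the label distribution
        for view, clf in zip(views, clfs):
            scored = [(x, clf.predict_proba(view(x))) for x in pool]
            for _ in range(growth):
                if not scored:
                    break
                target = random.choices(labels, weights=weights)[0]
                best = max(scored, key=lambda s: s[1].get(target, 0.0))
                labeled.append((best[0], target))
                pool.remove(best[0])
                scored.remove(best)
    return labeled
```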
W01-0501
limitations of cotraining for natural language learning from large datasetscotraining is a weakly supervised learning paradigm in which the redundancy of the learning task is captured by training two classifiers using separate views of the same datathis enables bootstrapping from a small set of labeled training data via a large set of unlabeled datathis study examines the learning behavior of cotraining on natural language processing tasks that typically require large numbers of training instances to achieve usable performance levelsusing base noun phrase bracketing as a case study we find that cotraining reduces by 36 the difference in error between classifiers trained on 500 labeled examples and supervised classifiers trained on a labeled version of all available datahowever degradation in the quality of the bootstrapped data arises as an obstacle to further improvementto address this we propose a moderately supervised variant of cotraining in which a human corrects the mistakes made during automatic labelingour analysis suggests that corrected cotraining and similar moderately supervised methods may help cotraining scale to large natural language learning taskswe show that the quality of the automatically labeled training data is crucial for cotraining to perform well because too many tagging errors prevent a high performing model from being learned
classifying the semantic relations in noun compounds via a domainspecific lexical hierarchy we are developing corpusbased techniques for identifying semantic relations at an intermediate level of description in this paper we describe a classification algorithm for identifying relationships between twoword noun compounds we find that a very simple approach using a machine learning algorithm and a domainspecific lexical hierarchy successfully generalizes from training instances performing better on previously unseen words than a baseline consisting of training on the words themselves we are exploring empirical methods of determining semantic relationships between constituents in natural languageour current project focuses on biomedical text both because it poses interesting challenges and because it should be possible to make inferences about propositions that hold between scientific concepts within biomedical texts one of the important challenges of biomedical text along with most other technical text is the proliferation of noun compoundsa typical article title is shown below it consists a cascade of four noun phrases linked by prepositions openlabeled longterm study of the efficacy safety and tolerability of subcutaneous sumatriptan in acute migraine treatmentthe real concern in analyzing such a title is in determining the relationships that hold between different concepts rather than on finding the appropriate attachments and before we tackle the prepositional phrase attachment problem we must find a way to analyze the meanings of the noun compoundsour goal is to extract propositional information from text and as a step towards this goal we classify constituents according to which semantic relationships hold between themfor example we want to characterize the treatmentfordisease relationship between the words of migraine treatment versus the methodoftreatment relationship between the words of aerosol treatmentthese relations are intended to be combined to produce larger propositions that can then be used in a variety of interpretation paradigms such as abductive reasoning or inductive logic programming note that because we are concerned with the semantic relations that hold between the concepts as opposed to the more standard syntaxdriven computational goal of determining left versus right association this has the fortuitous effect of changing the problem into one of classification amenable to standard machine learning classification techniqueswe have found that we can use such algorithms to classify relationships between twoword noun compounds with a surprising degree of accuracya oneoutofeighteen classification using a neural net achieves accuracies as high as 62by taking advantage of lexical ontologies we achieve strong results on noun compounds for which neither word is present in the training setthus we think this is a promising approach for a variety of semantic labeling tasksthe reminder of this paper is organized as follows section 2 describes related work section 3 describes the semantic relations and how they were chosen and section 4 describes the data collection and ontologiesin section 5 we describe the method for automatically assigning semantic relations to noun compounds and report the results of experiments using this methodsection 6 concludes the paper and discusses future workseveral approaches have been proposed for empirical noun compound interpretationlauer and dras point out that there are three components to the problem identification of the compound from within 
the text syntactic analysis of the compound and the interpretation of the underlying semanticsseveral researchers have tackled the syntactic analysis usually using a variation of the idea of finding the subconstituents elsewhere in the corpus and using those to predict how the larger compounds are structuredwe are interested in the third task interpretation of the underlying semanticsmost related work relies on handwritten rules of one kind or anotherfinin examines the problem of noun compound interpretation in detail and constructs a complex set of rulesvanderwende uses a sophisticated system to extract semantic information automatically from an online dictionary and then manipulates a set of handwritten rules with handassigned weights to create an interpretationrindflesch et al use handcoded rule based systems to extract the factual assertions from biomedical textlapata classifies nominalizations according to whether the modifier is the subject or the object of the underlying verb expressed by the head noun1 in the related subarea of information extraction the main goal is to find every instance of particular entities or events of interestthese systems use empirical techniques to learn which terms signal entities of interest in order to fill in predefined templatesour goals are more general than those of information extraction and so this work should be helpful for that taskhowever our approach will not solve issues surrounding previously unseen proper nouns which are often important for information extraction tasksthere have been several efforts to incorporate lexical hierarchies into statistical processing primarily for the problem of prepositional phrase attachmentthe current standard formulation is given a verb followed by a noun and a prepositional phrase represented by the tuple v n1 p n2 determine which of v or n1 the pp consisting of p and n2 attaches to or is most closely associated withbecause the data is sparse empirical methods that train on word occurrences alone have been supplanted by algorithms that generalize one or both of the nouns according to classmembership measures but the statistics are computed for the particular preposition and verbit is not clear how to use the results of such analysis after they are found the semantics of the relationship between the terms must still be determinedin our framework we would cast this problem as finding the relationship r that best characterizes the preposition and the np that follows it and then seeing if the categorization algorithm determines their exists any relationship r or rthe algorithms used in the related work reflect the fact that they condition probabilities on a particular verb and nounresnik use classes in wordnet and a measure of conceptual association to generalize over the nounsbrill and resnik use brills transformationbased algorithm along with simple counts within a lexical hierarchy in order to generalize over individual wordsli and abe use a minimum description lengthbased algorithm to find an optimal tree cut over wordnet for each classification problem finding improvements over both lexical association and conceptual association and equaling the transformationbased resultsour approach differs from these in that we are using machine learning techniques to determine which level of the lexical hierarchy is appropriate for generalizing across nounsin this work we aim for a representation that is intermediate in generality between standard case roles and the specificity required for information extractionwe have 
created a set of relations that are sufficiently general to cover a significant number of noun compounds but that can be domain specific enough to be useful in analysiswe want to support relationships between entities that are shown to be important in cognitive linguistics in particular we intend to support the kinds of inferences that arise from talmys force dynamics it has been shown that relations of this kind can be combined in order to determine the directionality of a sentence in the medical domain this translates to for example mapping a sentence into a representation showing that a chemical removes an entity that is blocking the passage of a fluid through a channelthe problem remains of determining what the appropriate kinds of relations arein theoretical linguistics there are contradictory views regarding the semantic properties of noun compounds levi argues that there exists a small set of semantic relationships that ncs may implydowning argues that the semantics of ncs cannot be exhausted by any finite listing of relationshipsbetween these two extremes lies warrens taxonomy of six major semantic relations organized into a hierarchical structurewe have identified the 38 relations shown in table 1we tried to produce relations that correspond to the linguistic theories such as those of levi and warren but in many cases these are inappropriatelevis classes are too general for our purposes for example she collapses the location and time relationships into one single class in and therefore field mouse and autumnal rain belong to the same classwarrens classification schema is much more detailed and there is some overlap between the top levels of warrens hierarchy and our set of relationsfor example our because for flu virus corresponds to her causerresult of hay fever and our person afflicted can be thought as warrens belongingpossessor of gunmanwarren differentiates some classes also on the basis of the semantics of the constituents so that for example the time relationship is divided up into timeanimate entity of weekend guests and timeinanimate entity of sunday paperour classification is based on the kind of relationships that hold between the constituent nouns rather than on the semantics of the head nounsfor the automatic classification task we used only the 18 relations for which an adequate number of examples were found in the current collectionmany ncs were ambiguous in that they could be described by more than one semantic relationshipin these cases we simply multilabeled them for example cell growth is both activity and change tumor regression is endingreduction and change and bladder dysfunction is location and defectour approach handles this kind of multilabeled classificationtwo relation types are especially problematicsome compounds are noncompositional or lexicalized such as vitamin k and e2 protein others defy classification because the nouns are subtypes of one anotherthis group includes migraine headache guinea pig and hbv carrierwe placed all these ncs in a catchall categorywe also included a wrong category containing word pairs that were incorrectly labeled as ncs2 the relations were found by iterative refinement based on looking at 2245 extracted compounds and finding commonalities among themlabeling was done by the authors of this paper and a biology student the ncs were classified out of contextwe expect to continue development and refinement of these relationship types based on what ends up clearly being use2the percentage of the word pairs extracted that were 
not true ncs was about 6 some examples are treat migraine ten patient headache morewe do not know however how many ncs we missedthe errors occurred when the wrong label was assigned by the tagger ful downstream in the analysisthe end goal is to combine these relationships in ncs with more that two constituent nouns like in the example intranasal migraine treatment of section 1to create a collection of noun compounds we performed searches from medline which contains references and abstracts from 4300 biomedical journalswe used several query terms intended to span across different subfieldswe retained only the titles and the abstracts of the retrieved documentson these titles and abstracts we ran a partofspeech tagger and a program that extracts only sequences of units tagged as nounswe extracted ncs with up to 6 constituents but for this paper we consider only ncs with 2 constituentsthe unified medical language system is a biomedical lexical resource produced and maintained by the national library of medicine we use the metathesaurus component to map lexical items into unique concept ids 3 the umls also has a mapping from these cuis into the mesh lexical hierarchy we mapped the cuis into mesh termsthere are about 19000 unique main terms in mesh as well as additional modifiersthere are 15 main subhierarchies in mesh each corresponding to a major branch of medical ontologyfor example tree a corresponds to anatomy tree b to organisms and so onthe longer the name of the mesh term the longer the path from the root and the more precise the descriptionfor example migraine is c10228140546800525 that is c c10 c10228 and so onwe use the mesh hierarchy for generalization across classes of nouns we use it instead of the other resources in the umls primarily because of meshs hierarchical structurefor these experiments we considered only those noun compounds for which both nouns can be mapped into mesh terms resulting in a total of 2245 ncsbecause we have defined noun compound relation determination as a classification problem we can make use of standard classification algorithmsin particular we used neural networks to classify across all relations simultaneously shown in boldface are those used in the experiments reported on hererelation id numbers are shown in parentheses by the relation namesthe second column shows the number of labeled examples for each class the last row shows a class consisting of compounds that exhibit more than one relationthe notation and indicates the directionality of the relationsfor example because indicates that the first noun causes the second and because indicates the conversewe ran the experiments creating models that used different levels of the mesh hierarchyfor example for the nc flu vaccination flu maps to the mesh term d48085479429154349 and vaccination to g3770670310890flu vaccination for model 4 would be represented by a vector consisting of the concatenation of the two descriptors showing only the first four levels d48085479 g3770670310 when a word maps to a general mesh term zeros are appended to the end of the descriptor to stand in place of the missing values the numbers in the mesh descriptors are categorical values we represented them with indicator variablesthat is for each variable we calculated the number of possible categories c and then represented an observation of the variable as a sequence of c binary variables in which one binary variable was one and the remaining c 1 binary variables were zerowe also used a representation in which the words 
themselves were used as categorical input variables for this collection of ncs there were 1184 unique nouns and therefore the feature vector for each noun had 1184 componentsin table 3 we report the length of the feature vectors for one noun for each modelthe entire nc was described by concatenating the feature vectors for the two nouns in sequencethe ncs represented in this fashion were used as input to a neural networkwe used a feedforward network trained with conjugate gradient descent number corresponds to the level of the mesh hierarchy used for classificationlexical nn is neural network on lexical and lexical log reg is logistic regression on nnacc1 refers to how often the correct relation is the topscoring relation acc2 refers to how often the correct relation is one of the top two according to the neural net and so onguessing would yield a result of 0077the network had one hidden layer in which a hyperbolic tangent function was used and an output layer representing the 18 relationsa logistic sigmoid function was used in the output layer to map the outputs into the interval the number of units of the output layer was the number of relations and therefore fixedthe network was trained for several choices of numbers of hidden units we chose the bestperforming networks based on training set error for each of the modelswe subsequently tested these networks on heldout testing datawe compared the results with a baseline in which logistic regression was used on the lexical featuresgiven the indicator variable representation of these features this logistic regression essentially forms a table of logodds for each lexical itemwe also compared to a method in which the lexical indicator variables were used as input to a neural networkthis approach is of interest to see to what extent if any the meshbased features affect performancenote also that this lexical neuralnetwork approach is feasible in this setting because the number of unique words is limited such an approach would not scale to larger problemsin table 4 and in figure 1 we report the results from these experimentsneural network using lexical features only yields 62 accuracy on average across all 18 relationsa neural net trained on model 6 using the mesh terms to represent the nouns yields an accuracy of 61 on average across all 18 relationsnote that reasonable performance is also obtained for model 2 which is a much more general representationtable 4 shows that both methods achieve up to 78 accuracy at including the correct relation among the top three hypothesizedmulticlass classification is a difficult problem in this problem a baseline in which testing set performance on the best models for each mesh level levels of the mesh hierarchy the algorithm guesses yields about 5 accuracywe see that our method is a significant improvement over the tabular logisticregressionbased approach which yields an accuracy of only 31 percentadditionally despite the significant reduction in raw information content as compared to the lexical representation the meshbased neural network performs as well as the lexicalbased neural networkfigure 2 shows the results for each relationmeshbased generalization does better on some relations and lexical on others it turns out that the test set for relationship 7 is dominated by ncs containing the words alleles and mrna and that all the ncs in the training set containing these words are assigned relation label 7a similar situation is seen for relation 22 timein the test set examples the second noun is either 
recurrence season or timein the training set these nouns appear only in ncs that have been labeled as belonging to relation 22on the other hand if we look at relations 14 and 15 we find a wider range of words and in some cases the words in the test set are not present in the training setin relationship 14 for example vaccine appears 6 times in the test set in the training set ncs with vaccine in it have also been classified as instrument as object as subtype of and as wrong other words in the test set for 14 are varicella which is present in the trainig set only in varicella serology labeled as attribute of clinical study drainage which is in the training set only as location and activity other test set words such as immunisation and carcinogen do not appear in the training set at allin other words it seems that the meshkbased categorization does better when generalization is requiredadditionally this data set is dense in the sense that very few testing words are not present in the training datathis is of course an unrealistic situation and we wanted to test the robustness of the method in a more realistic settingthe results reported in table 4 and in figure 1 were obtained splitting the data into 50 training and 50 testing for each relation and we had a total of 855 training points and 805 test pointsof these only 75 examples in the testing set consisted of ncs in which both words were not present in the training setwe decided to test the robustness of the meshbased model versus the lexical model in the case of unseen words we are also interested in seeing the relative importance of the first versus the second nountherefore we split the data into 5 training and 95 testing and partitioned the testing set into 4 subsets as follows table 5 and figures 3 and 4 present the accuracies for these test set partitionsfigure 3 shows that the meshbased models are more robust than the lexical when the number of unseen words is high and when the size of training set is smallin this more realistic situation the mesh models are able to generalize over previously unseen wordsfor unseen words lexical reduces to guessing4 figure 4 shows the accuracy for the mesh basedmodel for the the four cases of table 5it is interesting to note that the accuracy for case 1 is much higher than the accuracy for case 2 this seems to indicate that the second noun is more important for the classification that the first onewe have presented a simple approach to corpusbased assignment of semantic relations for noun compoundsthe main idea is to define a set of relations that can hold between the terms and use standard machine learning techniques and a lexical hierarchy to generalize from training instances to new examplesthe initial results are quite promisingin this task of multiclass classification we achieved an accuracy of about 60these results can be compared with vanderwende note that for unseen words the baseline lexicalbased logistic regression approach which essentially builds a tabular representation of the logodds for each class also reduces to random guessingtesting set performances for different partitions on the test set levels of the mesh hierarchy els accuracies and the dashed lines represent the corresponding lexical accuraciesthe accuracies are smaller than the previous case of table 4 because the training set is much smaller but the point of interest is the difference in the performance of mesh vs lexical in this more difficult settingnote that lexical for case 4 reduces to random guessingtesting set 
performances for different partitions on the test set for the meshbased model levels of the mesh hierarchy who reports an accuracy of 52 with 13 classes and lapata whose algorithm achieves about 80 accuracy for a much simpler binary classificationwe have shown that a classbased representation performes as well as a lexicalbased model despite the reduction of raw information content and despite a somewhat errorful mapping from terms to conceptswe have also shown that representing the nouns of the compound by a very general representation achieves a reasonable performance of aout 52 accuracy on averagethis is particularly important in the case of larger collections with a much bigger number of unique words for which the lexicalbased model is not a viable optionour results seem to indicate that we do not lose much in terms of accuracy using the more compact mesh representationwe have also shown how meshbesed models out perform a lexicalbased approach when the number of training points is small and when the test set consists of words unseen in the training datathis indicates that the mesh models can generalize successfully over unseen wordsour approach handles mixedclass relations naturallyfor the mixed class defect in location the algorithm achieved an accuracy around 95 for both defect and location simultaneouslyour results also indicate that the second noun is more important in determining the relationships than the first onein future we plan to train the algorithm to allow different levels for each noun in the compoundwe also plan to compare the results to the tree cut algorithm reported in which allows different levels to be identified for different subtreeswe also plan to tackle the problem of noun compounds containing more than two termswe would like to thank nu lai for help with the classification of the noun compound relationsthis work was supported in part by nsf award number iis9817353
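The representation and classifier described above can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: each noun is mapped to a (made-up) MeSH-style tree code, truncated to a chosen hierarchy level, and the two truncated codes are expanded into concatenated indicator vectors feeding a small feed-forward network with a tanh hidden layer. The codes, compounds, relation labels, and the use of scikit-learn's lbfgs solver (standing in for the conjugate gradient training mentioned above) are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the MeSH-based noun compound
# classifier: each noun becomes a truncated MeSH-style tree code, the two
# codes become concatenated indicator vectors, and a small tanh network
# predicts the relation.  All codes, compounds, and labels are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.neural_network import MLPClassifier

def mesh_feature(noun, mesh_codes, level=2):
    """Truncate an illustrative tree code such as 'C23.888.592' to `level` parts."""
    code = mesh_codes.get(noun, "UNKNOWN")
    return ".".join(code.split(".")[:level])

def featurize(compounds, mesh_codes, level=2):
    # one categorical feature per noun position; DictVectorizer expands them
    # into the concatenated indicator representation described in the text
    return [{"n1": mesh_feature(n1, mesh_codes, level),
             "n2": mesh_feature(n2, mesh_codes, level)}
            for n1, n2 in compounds]

# hypothetical (modifier, head) pairs, invented tree codes, placeholder labels
mesh_codes = {"flu": "C02.782.620", "vaccine": "D20.215.894",
              "headache": "C23.888.592", "treatment": "E02.760"}
train_X = featurize([("flu", "vaccine"), ("headache", "treatment")], mesh_codes)
train_y = ["purpose", "instrument"]

vec = DictVectorizer()
X = vec.fit_transform(train_X)

# tanh hidden layer as in the paper; lbfgs stands in for conjugate gradient
clf = MLPClassifier(hidden_layer_sizes=(50,), activation="tanh",
                    solver="lbfgs", max_iter=500)
clf.fit(X, train_y)
print(clf.predict(vec.transform(featurize([("flu", "vaccine")], mesh_codes))))
```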
W01-0511
classifying the semantic relations in noun compounds via a domainspecific lexical hierarchy. we are developing corpusbased techniques for identifying semantic relations at an intermediate level of description. in this paper we describe a classification algorithm for identifying relationships between twoword noun compounds. we find that a very simple approach using a machine learning algorithm and a domainspecific lexical hierarchy successfully generalizes from training instances, performing better on previously unseen words than a baseline consisting of training on the words themselves. we classify noun compounds from the domain of medicine using 13 classes that describe the semantic relation between the head noun and the modifier in a given noun compound. we use a discriminative classifier to assign 18 relations for noun compounds from biomedical text and achieve 60 accuracy
is knowledgefree induction of multiword unit dictionary headwords a solved problem we seek a knowledgefree method for inducing multiword units from text corpora for use as machinereadable dictionary headwords we provide two major evaluations of nine existing collocationfinders and illustrate the continuing need for improvement we use latent semantic analysis to make modest gains in performance but we show the significant challenges encountered in trying this approach a multiword unit is a connected collocation a sequence of neighboring words whose exact and unambiguous meaning or connotation cannot be derived from the meaning or connotation of its components in other words mwus are typically noncompositional at some linguistic levelfor example phonological noncompositionality has been observed where words like got gat and to tu change phonetically to got to gave when combinedwe have interest in inducing headwords for machinereadable dictionaries so our interest is in semantic rather than phonological noncompositionalityas an example of semantic noncompositionality consider compact disk one could not deduce that it was a music medium by only considering the semantics of compact and disk mwus may also be nonsubstitutable andor nonmodifiable nonsubstitutability implies that substituting a word of the mwu with its synonym should no longer convey the same original content compact disk does not readily imply denselypacked disk nonmodifiability on the other hand suggests one cannot modify the mwus structure and still convey the same content compact disk does not signify disk that is compact mwu dictionary headwords generally satisfy at least one of these constraintsfor example a compositional phrase would typically be excluded from a hardcopy dictionary since its constituent words would already be listedthese strategies allow hardcopy dictionaries to remain compactas mentioned we wish to find mwu headwords for machinereadable dictionaries although space is not an issue in mrds we desire to follow the lexicographic practice of reducing redundancyas sproat indicated quotsimply expanding the dictionary to encompass every word one is ever likely to encounter is wrong it fails to take advantage of regularitiesquot our goal is to identify an automatic knowledgefree algorithm that finds all and only those collocations where it is necessary to supply a definitionknowledgefree means that the process should proceed without human input this seems like a solved problemmany collocationfinders exist so one might suspect that most could suffice for finding mwu dictionary headwordsto verify this we evaluate nine existing collocationfinders to see which best identifies valid headwordswe evaluate using two completely separate gold standards wordnet and a compendium of internet dictionariesalthough webbased resources are dynamic and have better coverage than wordnet we show that wordnetbased scores are comparable to those using internet mrdsyet the evaluations indicate that significant improvement is still needed in mwuinductionas an attempt to improve mwu headword induction we introduce several algorithms using latent semantic analysis lsa is a technique which automatically induces semantic relationships between wordswe use lsa to try to eliminate proposed mwus which are semantically compositionalunfortunately this does not helpyet when we use lsa to identify substitutable delimitersthis suggests that in a language with mwus we do show modest performance gains whitespace one might prefer to begin at the wordfor 
decades researchers have explored various techniques for identifying interesting collocationsthere have essentially been three separate kinds of approaches for accomplishing this taskthese approaches could be broadly classified into segmentationbased wordbased and knowledgedriven or wordbased and probabilisticwe will illustrate strategies that have been attempted in each of the approachessince we assume knowledge of whitespace and since many of the first and all of the second categories rely upon human input we will be most interested in the third categorysome researchers view mwufinding as a natural byproduct of segmentationone can regard text as a stream of symbols and segmentation as a means of placing delimiters in that stream so as to separate logical groupings of symbols from one anothera segmentation process may find that a symbol stream should not be delimited even though subcomponents of the stream have been seen elsewherein such cases these larger units may be mwusthe principal work on segmentation has focused either on identifying words in phonetic streams or on tokenizing asian and indian languages that do not normally include word delimiters in their orthography such efforts have employed various strategies for segmentation including the use of hidden markov models minimum description length dictionarybased approaches probabilistic automata transformationbased learning and text compressionsome of these approaches require significant sources of human knowledge though others especially those that follow data compression or hmm schemes do notthese approaches could be applied to languages where word delimiters exist however in such languages it seems more prudent to simply take advantage of delimiters rather than introducing potential errors by trying to find word boundaries while ignoring knowledge of the level and identify appropriate word combinationssome researchers start with words and propose mwu induction methods that make use of parts of speech lexicons syntax or other linguistic structure for example justeson and katz indicated that the patterns noun noun and adj noun are very typical of mwusdaille also suggests that in french technical mwus follow patterns such as noun de nounquot to find word combinations that satisfy such patterns in both of these situations necessitates the use of a lexicon equipped with part of speech tagssince we are interested in knowledgefree induction of mwus these approaches are less directly related to our workfurthermore we are not really interested in identifying constructs such as general noun phrases as the above rules might generate but rather in finding only those collocations that one would typically need to definethe third category assumes at most whitespace and punctuation knowledge and attempts to infer mwus using word combination probabilitiestable 1 shows nine commonlyused probabilistic mwuinduction approachesin the table fx and px signify frequency and probability of a word xa variable xy indicates a word bigram and 4xy indicates its expected frequency at randoman overbar signifies a variables complementfor more details one can consult the original sources as well as ferreira and pereira and manning and schütze prior to applying the algorithms we lemmatize using a weaklyinformed tokenizer that knows only that whitespace and punctuation separate wordspunctuation can either be discarded or treated as wordssince we are equally interested in finding units like dr and yous we opt to treat punctuation as wordsonce we tokenize we use 
churchs suffix array approach to identify word ngrams that occur at least t times we then rankorder the ngram list in accordance to each probabilistic algorithmthis task is nontrivial since most algorithms were originally suited for finding twoword collocationswe must therefore decide how to expand the algorithms to identify general ngrams we can either generalize or approximatesince generalizing requires exponential compute time and memory for several of the algorithms approximation is an attractive alternativeone approximation redefines x and y to be respectively the word sequences w1w2 wi and wi1wi2wn where i is chosen to maximize pxpythis has a natural interpretation of being the expected probability of concatenating the two most probable substrings in order to form the larger unitsince it can be computed rapidly with low memory costs we use this approximationtwo additional issues need addressing before evaluationthe first regards document sourcingif an ngram appears in multiple sources its likelihood of accuracy should increasethis is particularly true if we are looking for mwu headwords for a general versus specialized dictionaryphrases that appear in one source may in fact be general mwus but frequently they are textspecific unitshence precision gained by excluding singlesource ngrams may be worth losses in recallwe will measure this tradeoffsecond evaluating with punctuation as words and applying no filtering mechanism may unfairly bias against some algorithmspre or postprocessing of ngrams with a linguistic filter has shown to improve some induction algorithms performance since we need knowledgepoor induction we cannot use humansuggested filtering rules as in section 22yet we can filter by pruning ngrams whose beginning or ending word is among the top n most frequent wordsthis unfortunately eliminates acronyms like yous and phrasal verbs like throw up however discarding some words may be worthwhile if the final list of ngrams is richer in terms of mrd headwordswe therefore evaluate with such an automatic filter arbitrarily choosing n75a natural scoring standard is to select a language and evaluate against headwords from existing dictionaries in that languageothers have used similar standards but to our knowledge none to the extent described herewe evaluate thousands of hypothesized units from an unconstrained corpusfurthermore we use two separate evaluation gold standards wordnet and a collection of internet mrdsusing two gold standards helps valid mwusit also provides evaluation using both static and dynamic resourceswe choose to evaluate in english due to the wealth of linguistic resourcesin particular we use a randomlyselected corpus the first five columns as informationlike consisting of a 67 million word subset of the trec similarly since the last four columns share databases properties of the frequency approach we will refer table 2 illustrates a sample of rankordered output to them as frequencylike from each of the different algorithms note that algorithms in the first four columns reflects our interest in general word dictionaries so produce results that are similar to each other as do results we obtain may differ from results we might those in the last four columnsalthough the mutual have obtained using terminology lexicons information results seem to be almost in a class of if our gold standard contains k mwus with their own they actually are similar overall to the corpus frequencies satisfying threshold our first four sets of results therefore we will refer to figure of 
merit is given by where pi equals ihi and hi is the number of hypothesized mwus required to find the ith correct mwuthis fom corresponds to area under a precisionrecall curvewordnet has definite advantages as an evaluation resourceit has in excess of 50000 mwus is freely accessible widely used and is in electronic formyet it obviously cannot contain every mwufor instance our corpus contains 177331 ngrams discards any proposed capitalized ngram whose uncapitalized version is not in wordnetthe second mode n disregards all capitalized ngramstable 3 illustrates algorithmic performance as compared to the 2610 mwus from wordnetthe first double column illustrates outofthebox performance on all 177331 possible ngramsthe second double column shows crosssourcing only hypothesizing mwus that appear in at least two separate datasets but being evaluated against all of the 2610 valid unitsdouble columns 3 and 4 show effects from highfrequency filtering the ngrams of the first and second columns respectivelyas table 3 suggests for every condition the informationlike algorithms seem to perform best at identifying valid general mwu headwordsmoreover they are enhanced when crosssourcing is considered but since much of their strength comes from identifying proper nouns filtering has little or even negative impacton the other hand data sourcethey also improve significantly with filteringoverall though after the algorithms are judged even the best score of 0265 is far short of the maximum possible namely 10since wordnet is static and cannot report on all of a corpus ngrams one may expect different performance by using a more allencompassing dynamic resourcethe internet houses dynamic resources which can judge practically every induced ngramwith permission and sufficient time one can repeatedly query websites that host large collections of mrds and evaluate each ngramhaving approval we queried onelookcom acronymfindercom and infopleasecomthe first website interfaces with over 600 electronic dictionariesthe second is devoted to identifying proper acronymsthe third focuses on world facts such as historical figures and organization namesto minimize disruption to websites by reducing the total number of queries needed for evaluation we use an evaluation approach from the information retrieval community each algorithm reports its top 5000 mwu choices and the union of these choices is looked up on the internetvalid mwus identified at any website are assumed to be the only valid units in the data i the frequencylike approaches are independent of algorithms are then evaluated based on this showed how one could compute latent semantic collectionalthough this strategy for evaluation is vectors for any word in a corpus using the same approach we evaluation tractabletable 4 shows the algorithms compute semantic vectors for every proposed word performance ngram cx x x since lsa involves word though internet dictionaries and wordnet are counts we can also compute semantic vectors completely separate gold standards results are surprisingly consistentone can conclude that wordnet may safely be used as a gold standard in future mwu headword evaluationsalso scp have virtually identical results and seem to best identify mwu headwords yet there is still significant room for improvementcan performance be improvednumerous strategies could be exploredan idea we discuss here tries using induced semantics to rescore the output of the best algorithm and eliminate semantically compositional or modifiable mwu hypothesesdeerwester et al 
introduced latent semantic analysis as a computational technique for inducing semantic relationships between words and documentsit forms highdimensional vectors using word counts and uses singular value decomposition to project those vectors into an optimal kdimensional semantic subspace following an approach from schütze we for cs subcomponentsthese can either include or exclude qx n cs countswe seek to see if induced sema4ics can help eliminate incorrectlychosen mwusas will be shown the effort using semantics in this nature has a very small payoff for the expended costnoncompositionality is a key component of valid mwus so we may desire to emphasize ngrams that are semantically noncompositionalsuppose we wanted to determine if c were noncompositionalthen given some meaning function i c should satisfy an equation like where h combines the semantics of cs subcomponents and g measures semantic differencesif c were a bigram then if g is defined to be ab if h is the sum of c and d and if t is set to log pe then equation would become the pointwise mutual information of the bigramif g were defined to be b and if habn and ifx we essentially get zscoresthese formulations suggest that several of the probabilistic algorithms we have seen include noncompositionality measures alreadyhowever since the probabilistic algorithms rely only on distributional information obtained by considering juxtaposed words they tend to incorporate a significant amount of nonsemantic information such as syntaxcan semanticonly rescoring helpto find out we must select g h and t since we want to eliminate mwus that are compositional we want hs output to correlate well with c when there is compositionality and correlate poorly otherwisefrequently lsa vectors are correlated using the cosine between them a large cosine indicates strong correlation so large values for g1cos should signal weak correlation or noncompositionality h could represent a weighted vector sum of the components required for this taskthis seems to be a significant semantic vectors with weights set to either 10 componentyet there is still another maybe or the reciprocal of the words frequencies semantic compositionality is not always badtable 5 indicates several results using these interestingly this is often the caseconsider settingsas the first four rows indicate and as vice_president organized crime and desired noncompositionality is more apparent for marine_corpsalthough these are mwus one 52x than for 52xyet performance overall is horrible particularly considering we are rescoring zscore output whose score was 0269rescoring caused fivefold degradationwhat happens if we instead emphasize compositionalityrows 58 illustrate the effect there is a significant recovery in performancethe most reasonable explanation for this is that if mwus and their components are strongly correlated the components may rarely occur except in context with the mwuit takes about 20 hours to compute the 52x for each possible ngram combinationsince the probabilistic algorithms already identify ngrams that share strong distributional properties with their components it seems imprudent to exhaust resources on this lsabased strategy for noncompositionalitythese findings warrant some discussionwhy did noncompositionality failcertainly there is the possibility that better choices for g h and t could yield improvementswe actually spent months trying to find an optimal combination as well as a strategy for coupling lsabased scores with the zscores but without availanother possibility 
although lsa can find semantic relationships it may not make semantic decisions at the level would still expect that the first is related to president the second relates to crime and the last relates to marinesimilarly tokens such as johns_hopkins and elvis are anaphors for johns_hopkins_university and elvis_presley so they should have similar meaningsthis begs the question can induced semantics help at allthe answer is yes the key is using lsa where it does best finding things that are similar or substitutablefor every collocation cx1x2xi1xixi1xn we attempt to find other similar patterns in the data x1x2xi1yxi1xnif xi and y are semantically related chances are that c is substitutablesince lsa excels at finding semantic correlations we can compare 52xi and sly to see if c is substitutablewe use our earlier approach for performing the comparison namely for every word w we compute cos for 200 randomly chosen words r this allows for computation of a correlaton mean and standard deviation between w and other wordsas before we then compute a normalized cosine score between words of interest defined by with this setup we now look for substitutivitynote that phrases may be substitutable and still be headword if their substitute phrases are themselves mwusfor example dioxide in carbon_dioxide is semantically similar to monoxide in carbon_monoxidemoreover there are other important instances of valid substitutivity however guilty and innocent are semantically related but pleaded_guilty and pleaded_innocent are not mwuswe would like to emphasize only ngrams whose substitutes are valid mwusto show how we do this using lsa suppose we want to rescore a list l whose entries are potential mwusfor every entry x in l we seek out all other entries whose sorted order is less than some maximum value that have all but one word in commonfor example suppose x is bachelor__s_degree the only other entry that matches in all but one word is master__s_degree if the semantic vectors for bachelor and master have a normalized cosine score greater than a threshold of 20 we then say that the two mwus are in each others substitution setto rescore we assign a new score to each entry in substitution seteach element in the substitution set gets the same scorethe score is derived using a combination of the previous zscores for each element in the substitution setthe combining function may be an averaging or a computation of the median the maximum or something elsethe maximum outperforms the average and the median on our databy applying in to our data we observe a small but visible improvement of 13 absolute to 282 it is also possible that other improvements could be gained using other combining strategiesthis paper identifies several new results in the area of mwufindingwe saw that mwu headword evaluations using wordnet provide similar results to those obtained from far more extensive webbased resourcesthus one could safely use wordnet as a gold standard for future evaluationswe also noted that informationlike algorithms particularly zscores scp and x2 seem to perform best at finding mrd headwords regardless of filtering mechanism but that improvements are still neededwe proposed two new lsabased approaches which attempted to address issues of noncompositionality and nonsubstitutivityapparently either current algorithms already capture much noncompositionality or lsabased models of noncompositionality are of little helplsa does help somewhat as a model of substitutivityhowever lsabased gains are small compared to the effort 
required to obtain them. the authors would like to thank the anonymous reviewers for their comments and insights
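As a concrete illustration of the n-gram approximation described above (splitting an n-gram into the two adjacent substrings whose probability product is maximal, then scoring that pair with a two-word association measure), here is a minimal sketch. The toy counts, the probability floor for unseen sequences, and the choice of pointwise mutual information as the bigram score are illustrative assumptions rather than the authors' exact setup.

```python
# Minimal sketch of the n-gram approximation: split w1..wn into the two
# adjacent substrings x, y maximizing p(x)p(y), then score the pair with a
# two-word association measure (pointwise mutual information here).
# Counts are toy values; a real run would use corpus frequencies.
import math
from collections import Counter

def best_split(ngram, prob):
    """Split w1..wn into (x, y) maximizing prob(x) * prob(y)."""
    candidates = [(ngram[:i], ngram[i:]) for i in range(1, len(ngram))]
    return max(candidates, key=lambda xy: prob(xy[0]) * prob(xy[1]))

def pmi(ngram, prob):
    x, y = best_split(ngram, prob)
    return math.log(prob(ngram) / (prob(x) * prob(y)), 2)

# toy corpus statistics over word sequences (tuples)
counts = Counter({("compact",): 40, ("disk",): 60, ("compact", "disk"): 30,
                  ("hot",): 80, ("dog",): 50, ("stand",): 70,
                  ("hot", "dog"): 20, ("dog", "stand"): 6,
                  ("hot", "dog", "stand"): 5})
total = sum(counts.values())

def prob(seq):
    return counts.get(tuple(seq), 0.5) / total   # crude floor for unseen sequences

for ng in [("compact", "disk"), ("hot", "dog", "stand")]:
    print(ng, round(pmi(ng, prob), 2))
```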
W01-0513
is knowledgefree induction of multiword unit dictionary headwords a solved problem? we seek a knowledgefree method for inducing multiword units from text corpora for use as machinereadable dictionary headwords. we provide two major evaluations of nine existing collocationfinders and illustrate the continuing need for improvement. we use latent semantic analysis to make modest gains in performance but we show the significant challenges encountered in trying this approach. we show that wordnet is as effective an evaluation resource as the web for mwe detection methods despite its inherent size limitations and static nature. we compare the semantic vector of a phrase and the vectors of its component words in two ways: one includes the phrase's contexts in the construction of the semantic vectors of the parts and one does not
latent semantic analysis for text segmentation this paper describes a method for linear text segmentation that is more accurate or at least as accurate as stateoftheart methods intersentence similarity is estimated by latent semantic analysis boundary locations are discovered by divisive clustering test results show lsa is a more accurate similarity measure than the the aim of linear text segmentation is to partition a document into blocks such that each segment is coherent and consecutive segments are about different topicsthis procedure is useful in information retrieval summarisation text understanding anaphora resolution language modelling and text navigation this paper presents a new algorithm for segmenting written textthe method builds on previous work by choi the primary distinction is the use of latent semantic analysis in formulating the similarity matrixwe discovered that lsa is a more accurate measure of similarity than the cosine metric stemming does not always improve segmentation accuracy and ranking is crucial to cosine but not lsaa text segmentation algorithm has three main partsfirst the input text is divided into elementary blockssecond a similarity metric identifies blocks that are about the same topicfinally topic boundaries are discovered by a clustering algorithman elementary block is the smallest text segment that can describe an entire topic eg sentences paragraphs and arbitrarysized segments linguistic theories and work in information retrieval suggest a coherent text segment is represented by paragraphswe argue that a paragraph can address multiple topics and is motivated by content writing style and presentationthus a topic segment is a collection of sentencesthis view is supported by previous work in text segmentation a similarity metric estimates the likelihood of two segments describing the same topicexisting methods fall into one of two categorieslexical cohesion methods stem from the work of halliday and hasan in which a coherent topic segment is believed to contain parts with similar vocabularyimplementations of this use word stem repetition context vectors entity repetition thesaurus relations spreading activation over dictionary a word distance model or a word frequency model to detect cohesionthese methods are typically applied in information retrieval to segment written textmultisource methods use cue phrases prosodic features ellipsis anaphora syntactic features language models and lexical cohesion metrics to detect topic boundariesfeatures are combined using decision trees probabilistic models and maximum entropy models the aim is to improve segmentation accuracy by combining multiple indicators of topic shiftthese methods are typically applied in topic detection and tracking to segment transcribed text and broadcast news storiestopic boundaries are discovered by merging consecutive elementary blocks that are about the same topicexisting algorithms used a sliding window lexical chains dynamic programming agglomerative clustering and divisive clustering to determine the optimal segmentationthe main difficulty in clustering is automatic termination ie determining the number of topic boundaries in a documentthe input to our algorithm is a list of tokenised sentences s 818content words are identified by removing punctuation marks and stopwords from s a term frequency vector f is then constructed for each sentence i fij denotes the number of times content word j occurs in sthe c99 algorithm uses the cosine metric to compute a nxn similarity matrix m for s 
represents the similarity between s and sithe assumption is two sentences with similar word usage are likely to be about the same topicthis idea has two main problemsfirst the estimate is inaccurate for short passagessecond synonyms are considered negative evidence eg car e s and automobile e si implies s and si are dissimilarthe first problem was addressed by replacing with its rank rii the idea is the difference in magnitude is inaccurate thus one can only use the order as evidence for segmentationconsider x xlx2x3 136 as the length of three objectsif x was measured with an ordinary ruler one can conclude that x2 is three times longer than x1this is a quantitative analysis of x ie the quantity is significanthowever if the ruler was warped but the order of the markings is preserved one can only conclude that x1 combin depart department departthus similar surface forms are considered positive evidence in the similarity estimatewe propose that latent semantic analysis offers a better solution to the term matching problemlsa stems from work in information retrieval where the main difficulty is formulating a similarity metric that associates a user query with the relevant documents in a databasethe basic keyword search approach retrieves all documents which contain some or all of the query termsthis is inaccurate since the same concept may be described using different termsto circumvent this jing and croft developed an association thesaurus for matching semantically related wordsxu and croft offered a trainable method call local context analysis which replaces each query term with frequently cooccurring wordsroughly speaking lca computes a word cooccurrence matrix c for a training corpusa threshold is then applied such that large values in c are replaced by 1 and other values become 0each row c can be considered as a feature vector for word ithe meaning of a text is approximated by the sum of the word feature vectorssimilarity between two texts is estimated by the distance between the corresponding feature vectors lsa is a classification approach to query expansionthe method is similar to lca in that the quotmeaningquot of a word w is represented by its relation to other wordsthe primary distinction is lsa applies principle components analysis to a word similarity matrix to identify the best features for distinguishing dissimilar wordslike lca the meaning of a text is computed as the sum of the word feature vectorstext similarity is measured by the cosine of the corresponding feature vectorslsa has been shown to match human similarity judgements on a wide range of tasks lsa is trained on a set of texts a y1 with vocabulary twi turdanxm matrix a is calculated in which ai is the number of times to occurs in sithe values are scaled according to a general form of inverse document frequency singular value decomposition or svd is then applied to yield b uevt where xt denotes the transposed matrix of xthe columns of you and v are the eigenvectors of bbt and btb respectivelythe diagonal values of e are the corresponding singular values ie the nonnegative square roots of the eigenvalues of bbtthese are sorted in descending orderbbt is a word similarity matrix where the quotmeaningquot of a word to is expressed in terms of its dotproduct with all other words w1 107as a classification problem the eigenvectors in you are the principle axes for distinguishing the word feature vectors or rows in bbtin other words the first k columns of you or ak is the best approximation of bbt in kdimensional spaceak is 
the kdimensional lsa space for athe ith row in ak or ak is the lsa feature vector for word toapplying svd to w has three main benefitsfirst ak is a concise representation of w thus storage and computational complexity of the similarity metric is reducedsecond words which occur in similar contexts are represented by similar feature vectors in akfinally noise in w is removed by simply omitting the less salient dimensions in youa sentence s is represented by its term frequency vector a where li is the frequency of term j in sgiven ak the quotmeaningquot of s is computed by eq3informally s is represented by the sum of the lsa feature vectorsintersentence similarity is estimated by the cosine of the corresponding x since ak is derived from the cooccurrence matrix a the size of each training text si e a is crucial to its performancework in information retrieval uses sz document since the aim is to distinguish entire textssz paragraph is popular in psychology experimentshowever we suspect the segmentation task may benefit from sz sentencethus two training corpora were derived from the brown corpus annotations were first removed to leave a set of tokenised raw text this was partitioned into 35000 paragraphs or 104000 sentences as two training corporathe parameter k adjusts the accuracy of aka large k implies minor differences in the feature space are significantthus they should be taken into account in the formulation of akthis is appropriate when the vocabulary is small and there is sufficient training dataa small k is used when a is sparse and the values in a are inaccurateonce the similarity matrix m is calculated for the input text s the image ranking procedure in c99 is then applied to obtain a rank matrix are rzi is the proportion of neighbours of mzi with a lower value than mzithe motivation for applying image ranking in the new algorithm is to test whether a quantitative or qualitative interpretation of the similarity values has any impact on segmentation accuracythe hypothesis is that lsa similarity values are more accurate than cosine similarity valuesthus image ranking should have a smaller impact on lsa than the cosine metricthe input matrix x can either be the similarity matrix m or the rank matrix r depending on whether ranking is applied to m topic boundaries are identified by the divisive clustering procedure in c99a topic segment tk is defined by its start and end sentences sz and si or its range tk i jthe number of intersentence similarity values in tk is o itk12 ie2the sum of the values in tk is 3 ezet1 k eebk x thus the average intersentence similarity value for a segmentation t t t is defined as the divisive clustering algorithm begins by considering the entire input document s as a coherent topic segmentthis is partitioned into two segments t ftl t21 at a sentence boundary that maximises it ie the most prominent topic boundarythe recursive procedure proceeds until s can no longer be subdividedthe optimal segmentation is signalled by a sharp change in itfor implementation details and optimisations see the following experiments aim to establish the relationship between linguistic processes and segmentation error ratethe test procedure is based on that presented in which was derived from work in tdt and previous experiments in text segmentation the task is to find the most prominent topic boundaries in a concatenated textthe accuracy of a segmentation algorithm is assessed by the experiment package described in a test sample is a concatenation of ten text segmentseach segment is 
the first n sentences of a randomly selected document from a subset2 of the brown corpus table 1 presents the corpus statisticsa sample is characterised by the range of n tij is a set of samples with i 6 sentencesthe second set of experiments focused on lsa as a similarity metricthe cosine metric in c99 was replaced by lsaten different lsa spaces were examinedwe discovered that lsa is twice as accurate as the cosine metricthe results also showed vocabulary difference between paragraphs is a good feature for training a similarity metricfurther investigation into the relationship between ranking lsa dimensionality and error rate revealed that lsa values become less accurate as more dimensions are incorporated into the feature vectorsthis implies the training data is noisyhowever with ranking error rate decreasesthis shows the order of lsa values becomes more accurate when more features are usedfuture work will focus on document specific lsa and the termination strategy of the new algorithmtest results have shown the termination procedure in c99 works well on lsa similarity values but not on the ranked valueswe suspect the threshold selection method has to be modifiedin terms of clustering dynamic programming approaches will be examinedfinally a lsa procedure for computing document specific similarity values will be evaluatedthanks are due to the anonymous reviewers for their invaluable comments masao utiyama and hitoshi isahara for providing the u00 algorithm and detailed results marti hearst for guidance on the evaluation problem mary mcgee wood for support and hcrc for making this work possible
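A minimal sketch of the LSA similarity matrix and a single divisive split may help make the procedure concrete. Unlike the system described above, which trains its LSA space on an external corpus (the Brown corpus) and uses C99's ranking and termination strategy, this toy version builds the space from the input sentences themselves and returns only the most prominent boundary; the example sentences and the dimensionality k are arbitrary.

```python
# Minimal sketch of LSA-based inter-sentence similarity and one divisive split.
# Assumptions: the LSA space is derived from the input itself rather than an
# external training corpus, no ranking is applied, and only one boundary is found.
import numpy as np

def lsa_word_vectors(sentences, k=2):
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))          # term-by-sentence counts
    for j, s in enumerate(sentences):
        for w in s:
            A[idx[w], j] += 1.0
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return {w: U[idx[w], :k] for w in vocab}            # k-dimensional word features

def similarity_matrix(sentences, word_vecs):
    vecs = [sum(word_vecs[w] for w in s) for s in sentences]   # sentence = sum of word vectors
    vecs = [v / (np.linalg.norm(v) + 1e-9) for v in vecs]
    return np.array([[float(u @ v) for v in vecs] for u in vecs])

def best_boundary(M):
    """Return the split point b maximizing average within-segment similarity."""
    n = len(M)
    def mu(segs):
        area = sum((j - i + 1) ** 2 for i, j in segs)
        mass = sum(M[i:j + 1, i:j + 1].sum() for i, j in segs)
        return mass / area
    return max(range(1, n), key=lambda b: mu([(0, b - 1), (b, n - 1)]))

sentences = [["quake", "hit", "region"], ["rescue", "teams", "quake"],
             ["markets", "fell", "today"], ["stocks", "markets", "drop"]]
M = similarity_matrix(sentences, lsa_word_vectors(sentences, k=2))
print("most prominent boundary before sentence", best_boundary(M))
```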
W01-0514
latent semantic analysis for text segmentation. this paper describes a method for linear text segmentation that is more accurate or at least as accurate as stateoftheart methods. intersentence similarity is estimated by latent semantic analysis, boundary locations are discovered by divisive clustering. test results show lsa is a more accurate similarity measure. we use all vocabulary words to compute lowdimensional document vectors
corpus variation and parser performance most work in statistical parsing has focused on a single corpus the wall street journal portion of the penn treebank while this has allowed for quantitative comparison of parsing techniques it has left open the question of how other types of text might affect parser performance and how portable parsing models are across corpora we examine these questions by comparing results for the brown and wsj corpora and also consider which parts of the parser probability model are particularly tuned to the corpus on which it was trained this leads us to a technique for pruning parameters to reduce the size of the parsing model the past several years have seen great progress in the field of natural language parsing through the use of statistical methods trained using large corpora of handparsed training datathe techniques of charniak collins and ratnaparkhi achieved roughly comparable results using the same sets of training and test datain each case the corpus used was the penn treebank handannotated parses of wall street journal articlesrelatively few quantitative parsing results have been reported on other corpora for results on switchboard as well as collins et al for results on czech and hwa for bootstrapping from wsj to atisthe inclusion of parses for the brown corpus in the penn treebank allows us to compare parser performance across corporain this paper we examine the following questions our investigation of these questions leads us to a surprising result about parsing the wsj corpus over a third of the model parameters can be eliminated with little impact on performanceaside from crosscorpus considerations this is an important finding if a lightweight parser is desired or memory usage is a considerationa great deal of work has been done outside of the parsing community analyzing the variations between corpora and different genres of textbiber investigated variation in a number syntactic features over genres or registers of languageof particular importance to statistical parsers is the investigation of frequencies for verb subcategorizations such as roland and jurafsky roland et al find that subcategorization frequencies for certain verbs vary significantly between the wall street journal corpus and the mixedgenre brown corpus but that they vary less so between genrebalanced british and american corporaargument structure is essentially the task that automatic parsers attempt to solve and the frequencies of various structures in training data are reflected in a statistical parser probability modelthe variation in verb argument structure found by previous research caused us to wonder to what extent a model trained on one corpus would be useful in parsing anotherthe probability models of modern parsers include not only the number and syntactic type of a word arguments but lexical information about their fillersalthough we are not aware of previous comparisons of the frequencies of argument fillers we can only assume that they vary at least as much as the syntactic subcategorization frameswe take as our baseline parser the statistical model of model 1 of collins the model is a historybased generative model in which the probability for a parse tree is found by expanding each node in the tree in turn into its child nodes and multiplying the probabilities for each action in the derivationit can be thought of as a variety of lexicalized probabilistic contextfree grammar with the rule probabilities factored into three distributionsthe first distribution gives 
probability of the syntactic category h of the head child of a parent node with category p head word hhw with the head tag hht the head word and head tag of the new node h are defined to be the same as those of its parentthe remaining two distributions generate the nonhead children one after the othera special stop symbol is generated to terminate the sequence of children for a given parenteach child is generated in two steps first its syntactic category c and head tag cht are chosen given the parent and head child features and a function a representing the distance from the head child then the new child head word chw is chosen for each of the three distributions the empirical distribution of the training data is interpolated with less specific backoff distributions as we will see in section 5further details of the model including the distance features used and special handling of punctuation conjunctions and base noun phrases are described in collins the fundamental features of used in the probability distributions are the lexical heads and head tags of each constituent the cooccurrences of parent nodes and their head children and the cooccurrences of child nodes with their head siblings and parentsthe probability models of charniak magerman and ratnaparkhi differ in their details but are based on similar featuresmodels 2 and 3 of collins add some slightly more elaborate features to the probability model as do the additions of charniak to the model of charniak our implementation of collins model 1 performs at 86 precision and recall of labeled parse constituents on the standard wall street journal training and test setswhile this does not reflect the stateoftheart performance on the wsj task achieved by the more the complex models of charniak and collins we regard it as a reasonable baseline for the investigation of corpus effects on statistical parsingwe conducted separate experiments using wsj data brown data and a combination of the two as training materialfor the wsj data we observed the standard division into training and test setsfor the brown data we reserved every tenth sentence in the corpus as test data using the other nine for trainingthis may underestimate the difficulty of the brown corpus by including sentences from the same documents in training and test setshowever because of the variation within the brown corpus we felt that a single contiguous test section might not be representativeonly the subset of the brown corpus available in the treebank ii bracketing format was usedthis subset consists primarily of various fiction genrescorpus sizes are shown in results for the brown corpus along with wsj results for comparison are shown in table 2the basic mismatch between the two corpora is shown in the significantly lower performance of the wsjtrained model on brown data than on wsj data a model trained on brown data only does significantly better despite the smaller size of the training setcombining the wsj and brown training data in one model improves performance further but by less than 05 absolutesimilarly adding the brown data to the wsj model increased performance on wsj by less than 05thus even a large amount of additional data seems to have relatively little impact if it is not matched to the test materialthe more varied nature of the brown corpus also seems to impact results as all the results on brown are lower than the wsj resultthe parsers cited above all use some variety of lexical dependency feature to capture statistics on the cooccurrence of pairs of words being 
found in parentchild relations within the parse treethese word pair relations also called lexical bigrams are reminiscent of dependency grammars such as melcuk and the link grammar of sleator and temperley in collins model 1 the word pair statistics occur in the distribution where hhw represent the head word of a parent node in the tree and chw the head word of its childbecause this is the only part of the model that involves pairs of words it is also where the bulk of the parameters are foundthe large number of possible pairs of words in the vocabulary make the training data necessarily sparsein order to avoid assigning zero probability to unseen events it is necessary to smooth the training datathe collins model uses linear interpolation to estimate probabilities from empirical distributions of varying specificities where p represents the empirical distribution derived directly from the counts in the training datathe interpolation weights a1 a2 are chosen as a function of the number of examples seen for the conditioning events and the number of unique values seen for the predicted variableonly the first distribution in this interpolation scheme involves pairs of words and the third component is simply the probability of a word given its part of speechbecause the word pair feature is the most specific in the model it is likely to be the most corpusspecificthe vocabularies used in corpora vary as do the word frequenciesit is reasonable to expect word cooccurrences to vary as wellin order to test this hypothesis we removed the distribution p from the parsing model entirely relying on the interpolation of the two less specific distributions in the parser we performed crosscorpus experiments as before to determine whether the simpler parsing model might be more robust to corpus effectsresults are shown in table 3perhaps the most striking result is just how little the elimination of lexical bigrams affects the baseline system performance on the wsj corpus decreases by less than 05 absolutemoreover the performance of a wsjtrained system without lexical bigrams on brown test data is identical to the wsjtrained system with lexical bigramslexical cooccurrence statistics seem to be of no benefit when attempting to generalize to a new corpusthe relatively high performance of a parsing model with no lexical bigram statistics on the wsj task led us to explore whether it might be possible to significantly reduce the size of the parsing model by selectively removing parameters without sacrificing performancesuch a technique reduces the parser memory requirements as well as the overhead of loading and storing the model which could be desirable for an application where limited computing resources are availablesignificant effort has gone into developing techniques for pruning statistical language models for speech recognition and we borrow from this work using the weighted difference technique of seymore and rosenfeld this technique applies to any statistical model which estimates probabilities by backing off that is using probabilities from a less specific distribution when no data are available are available for the full distribution as the following equations show for the general case here e is the event to be predicted h is the set of conditioning events or history a is a backoff weight and h is the subset of conditioning events used for the less specific backoff distributionbo is the backoff set of events for which no data are present in the specific distribution p1in the case of ngram language 
modeling e is the next word to be predicted and the conditioning events are the n 1 preceding wordsin our case the specific distribution p1 of the backoff model is pcw of equation 1 itself a linear interpolation of three empirical distributions from the training datathe less specific distribution p2 of the backoff model is pcw2 of equation 2 an interpolation of two empirical distributionsthe backoff weight a is simply 1 a1 in our linear interpolation modelthe seymorerosenfeld pruning technique can be used to prune backoff probability models regardless of whether the backoff weights are derived from linear interpolation weights or discounting techniques such as goodturingin order to ensure that the model probabilities still sum to one the backoff weight a must be adjusted whenever a parameter is removed from the modelin the seymorerosenfeld approach parameters are pruned according to the following criterion where p represents the new backed off probability estimate after removing p from the model and adjusting the backoff weight and n is the count in the training datathis criterion aims to prune probabilities that are similar to their backoff estimates and that are not frequently usedas shown by stolcke this criterion is an approximation of the relative entropy between the original and pruned distributions but does not take into account the effect of changing the backoff weight on other events probabilitiesadjusting the threshold 0 below which parameters are pruned allows us to successively remove more and more parametersresults for different values of 0 are shown in table 4the complete parsing model derived from the wsj training set has 735850 parameters in a total of nine distributions three levels of backoff for each of the three distributions ph p and pthe lexical bigrams are contained in the most specific distribution for premoving all these parameters reduces the total model size by 43the results show a gradual degradation as more parameters are prunedthe ten lexical bigrams with the highest scores for the pruning metric are shown in table 5 for wsj and table 6the pruning metric of equation 3 has been normalized by corpus size to allow comparison between wsj and brownthe only overlap between the two sets is for pairs of unknown word tokensthe wsj bigrams are almost all specific to finance are all word pairs that are likely to appear immediately adjacent to one another and are all children of the base np syntactic categorythe brown bigrams which have lower correlation values by our metric include verbsubject and prepositionobject relations and seem more broadly applicable as a model of englishhowever the pairs are not strongly related semantically no doubt because the first term of the pruning criterion favors the most frequent words such as forms of the verbs quotbequot and quothavequotour results show strong corpus effects for statistical parsing models a small amount of matched training data appears to be more useful than a large amount of unmatched datathe standard wsj task seems to be simplified by its homogenous styleadding training data from from an unmatched corpus does not hurt but does not help a great deal eitherin particular lexical bigram statistics appear to be corpusspecific and our results show that they are of no use when attempting to generalize to new training datain fact they are of surprisingly little benefit even for matched training and test data removing them from the model entirely reduces performance by less than 05 on the standard wsj parsing taskour 
selective pruning technique allows for a more fine grained tuning of parser model size and would be particularly applicable to cases where large amounts of training data are available but memory usage is a considerationin our implementation pruning allowed models to run within 256mb that unpruned required larger machinesthe parsing models of charniak and collins add more complex features to the parsing model that we use as our baselinean area for future work is investigation of the degree to which such features apply across corpora or on the other hand further tune the parser to the peculiarities of the wall street journalof particular interest are the automatic clusterings of lexical cooccurrences used in charniak and magerman crosscorpus experiments could reveal whether these clusters uncover generally applicable semantic categories for the parser useacknowledgments this work was undertaken as part of the framenet project at icsi with funding from national science foundation grant itrhci 0086132
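The weighted-difference pruning criterion can be illustrated with a small sketch. This is not the parser's implementation: it applies the criterion to a toy backoff model in which explicit conditional probabilities back off, with weight alpha, to a single less specific distribution, and it recomputes the backoff weight after hypothetically removing each entry. The probabilities, counts, and threshold below are invented for illustration.

```python
# Minimal sketch of Seymore-Rosenfeld style weighted-difference pruning.
# Toy backoff model: explicit p1(e|h) entries back off, with weight alpha(h),
# to a less specific distribution p2(e).  An entry is pruned when
# count(e,h) * [log p1(e|h) - log p1'(e|h)] < theta, where p1' is the
# backed-off estimate after that entry is removed.
import math

def backoff_weight(explicit_h, p2):
    """alpha(h) so that explicit entries plus backoff mass sum to one."""
    covered = sum(p2[e] for e in explicit_h)
    return (1.0 - sum(explicit_h.values())) / (1.0 - covered)

def prune(explicit, p2, counts, theta):
    pruned = {}
    for h, dist in explicit.items():
        kept = dict(dist)
        for e, p in dist.items():
            without = {k: v for k, v in dist.items() if k != e}
            alpha_new = backoff_weight(without, p2)
            backed_off = alpha_new * p2[e]
            score = counts[(h, e)] * (math.log(p) - math.log(backed_off))
            if score < theta:
                del kept[e]            # similar to its backoff estimate and rarely used
        pruned[h] = kept
    return pruned

# hypothetical head-word / child-word probabilities, backoff distribution, counts
explicit = {"sell": {"shares": 0.4, "stake": 0.1}}
p2 = {"shares": 0.05, "stake": 0.08, "company": 0.87}
counts = {("sell", "shares"): 50, ("sell", "stake"): 3}
print(prune(explicit, p2, counts, theta=5.0))
```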
W01-0521
corpus variation and parser performance. most work in statistical parsing has focused on a single corpus, the wall street journal portion of the penn treebank. while this has allowed for quantitative comparison of parsing techniques it has left open the question of how other types of text might affect parser performance and how portable parsing models are across corpora. we examine these questions by comparing results for the brown and wsj corpora and also consider which parts of the parser probability model are particularly tuned to the corpus on which it was trained. this leads us to a technique for pruning parameters to reduce the size of the parsing model. we show that the accuracy of parsers trained on the penn treebank degrades when applied to different genres and domains. we report results on sentences of 40 or less words on all the brown corpus sections combined for which we obtain 803810 recallprecision when training only on data from the wsj corpus and 839848 when training on data from the wsj corpus and all sections of the brown corpus
assigning timestamps to eventclauses we describe a procedure for arranging into a timeline the contents of news stories describing the development of some situation we describe the parts of the system that deal with 1 breaking sentences into eventclauses and 2 resolving both explicit and implicit temporal references evaluations show a performance of 52 compared to humans linguists who have analyzed news stories noticed that narratives1 are about more than one event and these events are temporally orderedthough it seems most logical to recapitulate events in the order in which they happened ie in chronological order the events are often presented in a different sequencethe same paper states that it is important to reconstruct the underlying event order2 for narrative analysis to assign meaning to the sequence in which the events are narrated at the level of discourse structureif the underlying event structure cannot be reconstructed it may well be impossible to understand the narrative at all let alone assign meaning to its structureseveral psycholinguistic experiments show the influence of eventarrangement in news stories on the ease of comprehension by readersduszak had readers reconstruct a news story from the randomized sentencesaccording to his experiments readers have a default strategy by whichin the absence of cues to the contrarythey reimpose chronological order on events in the discoursethe problem of reconstructing the chronological order of events becomes more complicated if we have to deal with separate news stories written at different times and describing the development of some situation as is the case for multidocument summarizationby judicious definition one can make this problem easy or hardselecting only specific items to assign timepoints to and then measuring correctness on them alone may give high performance but leave much of the text unassignedwe address the problem of assigning a timepoint to every clause in the textour approach is to break the news stories into their constituent events and to assign timestampseither timepoints or timeintervalsto these eventswhen assigning timestamps we analyze both implicit time references and explicit ones such as on monday in 1998 etcthe result of the work is a prototype program which takes as input set of news stories broken into separate sentences and produces as output a text that combines all the events from all the articles organized in chronological orderas data we used a set of news stories about an earthquake in afghanistan that occurred at the end of may in 1998these news stories were taken from cnn abc and apw websites for the duc2000 meetingthe stories were all written within one weeksome of the texts were written on the same dayin addition to a description of the may earthquake these texts contain references to another earthquake that occurred in the same region in february 1998to divide sentences into eventclauses we use contex a parser that produces a syntactic parse tree augmented with semantic labelscontex uses machine learning techniques to induce a grammar from a given treebanksa sample output of contex is given in appendix 1to divide a sentence into eventclauses the parse tree output by contex is analyzed from left to right the cat field for each node provides the necessary information about whether the node under consideration forms a part of its upper level event or whether it introduces a new eventcat features that indicate new events are sclause ssnt ssubclause spartclause srelclausethese features mark 
clauses which contain both subject and predicate the above procedure classifies a clause containing more than one verb as a simple clausesuch clauses are treated as one event and only one timepoint will be assigned to themthis is fine when the second verb is used in the same tense as the first but may be wrong in some cases as in he lives in this house now and will stay here for one more yearthere are no such clauses in the analyzed data so we ignore this complication for the presentthe parse tree also gives information about the tense of verbs used later for time assignmentin order to facilitate subsequent processing we wish to rephrase relative clauses as full independent sentenceswe therefore have to replace pronouns where it is possible by their antecedentsvery often the parser gives information about the referential antecedents therefore we introduced the rule if it is possible to identify the referent put it into the eventclause here the antecedent for which is identified as the relief and gives which was costing lives instead of which was costing livesfortunately in most cases our rule works correctlyalthough the eventidentifier works reasonably well breaking text into eventclauses needs further investigationtable 1 shows the performance of the systemtwo kinds of mistakes are made by the event identifier those caused by contex and those caused by the fact that our clause identifier does too shallow analysis of the parse treeaccording to time is expressed at different levelsin the morphology and syntax of the verb phrase in time adverbials whether lexical or phrasal and in the discourse structure of the stories above the sentencefor the present work we use slightly modified time representations suggested in formats used for time representation we use two anchoring time points we require that the first sentence for each article contains time informationfor example the date information is in boldwe denote by ti the reference timepoint for the article where i 3 yyyyyear number dddabsolute number of the day within the year wumber of the day in a week if it is impossible to point out the day of the week then w is assigned 0 means that it is the time point of article ithe symbol ti is used as a comparative timepoint if the time the article was written is unknownthe information in brackets gives the exact date the article was written which is the main anchor point for the timestamperthe information about hours minutes and seconds is ignored for the present2last time point assigned in the same sentence while analyzing different eventclauses within the same sentence we keep track of what timepoint was most recently assigned within this sentenceif needed we can refer to this timepointin case the most recent time information assigned is not a date but an interval we record information about both time boundarieswhen the program proceeds to the next sentence the variable for the most recently assigned date becomes undefinedin most cases this assumption works correctly the last time interval assigned for sentence 52 is 19985301998710 which gives an approximate range of days when the previous earthquake happenedbut the information in sentence 53 is about the recent earthquake and not about the previous one of 3 months earlier which is why it would be a mistake to point monday and tuesday within that rangemani and wilson point out over half of the errors made by his timestamper were due to propagation of spreading of an incorrect event time to neighboring eventsthe rule of dropping the most recently 
assigned date as an anchor point when proceeding to the next sentence very often helps us to avoid this problemthere are however cases where dropping the most recent time as an anchor when proceeding to the next sentence causes errors it is clear that sentence 49 is the continuation of sentence 48 and refers to the same time point in this case our rule assigns the wrong time to 491still we retain this rule because it is more frequently correct than incorrectfirst the text divided into eventclauses is run through a program that extracts all the datestamps in most cases this program does not miss any datestamps and extracts only the correct onesthe only cases in which it did not work properly for the texts were here the modal verb may was assumed to be the month given that it started with a capital lettertuberculosis is already common in the area where people live in close quarters and have poor hygiene here the noun quarters which in this case is used in the sense immediate contact or close range was assumed to be used in the sense the fourth part of a measure of time after extracting all the datephrases we proceed to time assignmentwhen assigning a time to an event we select the time to be either the most recently assigned date or if the value of the most recently assigned date is undefined to the date of the articlewe use a set of rules to perform this selectionthese rules can be divided into two main categories those that work for sentences containing explicit date information and those that work for sentences that do notif the dayoftheweek used in the eventclause is the same as that of the article and there no words before it could signal that the described event happened earlier or will happen later then the timepoint of the article is assigned to this eventif before or after a dayoftheweek there is a wordwords signaling that the event happened earlier of will happen later then the timepoint is assigned in accordance with this signalword and the most recently assigned date if it is definedif the dayoftheweek used in the eventclause is not the same as that of the article then if there are words pointing out that the event happened before the article was written or the tense used in the clause is past then the time for the eventclause is assigned in accordance with this word or the most recent day corresponding to the current dayoftheweek is chosenif the signalword points out that the event will happen after the article was written or the tense used in the clause is future then the time for the eventclause is assigned in accordance with the signal word or the closest subsequent day corresponding to the current dayoftheweek helicopters evacuated 50 of the most seriously injured to emergency medical centersthe time for article 5 is so the time assigned to this eventclause is 531 19981511 19981522the rules are the same as for a dayoftheweek but in this case a timerange is assigned to the eventclausethe left boundary of the range is the first day of the month the right boundary is the last day of the month and though it is possible to figure out the days of weeks for these boundaries this aspect is ignored for the present earthquake in the same region killed 2300 people and left thousands of people homelessthe time for article 4 is so the time assigned to this eventclause is 481 19983201998600in the analyzed corpus there is a case where the presence of a name of month leads to a wrong timestamping because of february a wrong timeinterval is assigned to clause 633 namely 19983201998600as this 
eventclause is a description of the latest news as compared to some figures it should have the timepoint of the articlesuch cases present a good possibility for the use of machine learning techniques to disambiguate between the cases where we should take into account datephrase information and where notwe might have datestamps where the words weeks days months years are used with modifiersfor example remote mountainous area rocked three months earlier by another massive quake 524 that claimed some 2300 victimsin eventclause 523 the expression three months earlier is usedit is clear that to get the time for the event it is not enough to subtract 3 months from the time of the article because the above expression gives an approximate range within which this event could happen and not a particular datefor such cases we invented the following rule for event 523 the time range will be 19985301998710 if the modifier used with weeks days months or years is several then the multiplier used in is equal to 2if an eventclause does not contain any datephrase but contains one of the words when since after before etc it might mean that this clause refers to an event the time of which can be used as a reference point for the event under analysisin this case we ask the user to insert the time for this reference event manuallythis rule can cause problems in cases where after or before are used not as temporal connectors but as spatial ones though in the analyzed texts we did not face this problemif the current eventclause refers to a timepoint in presentpast perfect tense then an openended timeinterval is assigned to this eventthe starting point is unknown the endpoint is either the most recently assigned date or the timepoint of the articleif the current eventclause contains a verb in future tense then the openended timeinterval assigned to this eventclause has the starting point at either the most recently assigned date or the date of the articleother tenses that can be identified with the help of contex are present and past indefinitein the analyzed data all the verbs in present indefinite are given the most recently assigned date the situation with past indefinite is much more complicated and requires further investigation of more datanews stories usually describe the events that already took place at some time in the past which is why even if the day when the event happened is not over past tense is very often used for the description this means that very often an eventclause containing a verb in past indefinite tense can be assigned the most recently assigned date it might prove useful to use machine learned rules for such casesif there is no verb in the eventclause then the most recently assigned date is assigned to the eventclausewe ran the timestamper program on two types of data list of eventclauses extracted by the event identifier and list of eventclauses created manuallytables 2 and 3 show the resultsin the former case we analyzed only the correctly identified clausesone can see that even on manually created data the performance of the timestamper is not 100whysome errors are caused by assigning the time based on the datephrase present in the eventclause when this datephrase is not an adverbial time modifier but an attributefor example the third event describes the may 30 earthquake but the time interval given for this event is 19983201998600 it might be possible to use machine learned rules to correct such casesone more significant source of errors is the writing style when the reader sees 
early this morning he or she tends to assign to this clause the time of the article but later as seeing looked for two days realizes that the time of the clause containing early this morning is two days earlier than the time of the articleit seems that the errors caused by the writing style can hardly be avoidedif an event happened at some timepoint but according to the information in the sentence we can assign only a timeinterval to this event then we say that the timeinterval is assigned correctly if the necessary timepoint is within this timeintervalafter stamping all the news stories from the analyzed set we arrange the eventclauses from all the articles into a chronological orderafter doing that we obtain a new set of eventclauses which can easily be divided into two subsetsthe first one containing all the references to the february earthquake the second one containing the list of eventclauses in chronological order describing what happened in maysuch a text where all the events are organized in a chronological order might be very helpful in multidocument summarization where it is important to include into the final summary not only the most important information but also the most recent onethe output of the presented system gives the information about the timeorder of the events described in several documentsseveral linguistic and psycholinguistic studies deal with the problem of timearrangement of different textsthe research presented in these studies highlights many problems but does not solve themas for computational applications of time theories most work was done on temporal expressions that appear in scheduling dialogues there are many constraints on temporal expressions in this domainthe most relevant prior work is who implemented their system on news stories introduced rules spreading timestamps obtained with the help of explicit temporal expressions throughout the whole article and invented machine learning rules for disambiguating between specific and generic use of temporal expressions they also mention a problem of disambiguating between temporal expression and proper name as in usa todaybell notices more research is needed on the effects of time structure on news comprehensionthe hypothesis that the noncanonical news format does adversely affect understanding is a reasonable one on the basis of comprehension research into other narrative genres but the degree to which familiarity with news models may mitigate these problems is unclearthis research can greatly improve the performance of timestamper and might lead to a list of machine learning rules for time detectionin this paper we made an attempt to not just analyze and decode temporal expressions but to apply this analysis throughout the whole text and assign timestamps to such type of clauses which later could be used as separate sentences in various natural language applications for example in multidocument summarization text number of manually number of time point percentage of number created eventclauses correctly assigned to correct manually created clauses assignment target 1 7 6 8571 target 2 27 20 7407 target 3 5 4 8000 target 4 28 26 9285 target 5 33 30 9091 target 6 58 37 6379 total 158 123 7785
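To make the time representation and the day-of-week rules described above concrete, here is a minimal Python sketch. The YYYY-DDD-W encoding follows the description in the text, but the class and function names, the ISO weekday numbering, and the single `direction` argument standing in for the signal-word and tense tests are illustrative assumptions, not the original system's code.

```python
import calendar
import datetime
from dataclasses import dataclass

WEEKDAYS = {"monday": 0, "tuesday": 1, "wednesday": 2, "thursday": 3,
            "friday": 4, "saturday": 5, "sunday": 6}

@dataclass
class TimePoint:
    """The YYYY-DDD-W encoding: year, absolute day number within the year,
    and day of week (0 when the weekday cannot be determined)."""
    year: int
    day_of_year: int
    weekday: int = 0

    @classmethod
    def from_date(cls, d: datetime.date) -> "TimePoint":
        # Monday=1 .. Sunday=7 here; the paper's exact numbering may differ.
        return cls(d.year, d.timetuple().tm_yday, d.isoweekday())

    def __str__(self) -> str:
        return f"{self.year}{self.day_of_year:03d}{self.weekday}"

def resolve_weekday(article_date: datetime.date, weekday_name: str,
                    direction: str = "past") -> datetime.date:
    """Resolve a weekday mentioned in a clause against the article date:
    the most recent matching day for past-oriented clauses, the closest
    subsequent one for future-oriented clauses."""
    target = WEEKDAYS[weekday_name.lower()]
    delta = (article_date.weekday() - target) % 7
    if delta == 0:
        return article_date                      # same weekday as the article
    if direction == "past":
        return article_date - datetime.timedelta(days=delta)
    return article_date + datetime.timedelta(days=7 - delta)

def month_interval(year: int, month: int):
    """A bare month name yields an interval spanning the whole month."""
    last = calendar.monthrange(year, month)[1]
    return (datetime.date(year, month, 1), datetime.date(year, month, last))

article = datetime.date(1998, 6, 2)              # a Tuesday
print(TimePoint.from_date(resolve_weekday(article, "Saturday", "past")))
print(month_interval(1998, 2))                   # (1998-02-01, 1998-02-28)
```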
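The treatment of vague durations such as "three months earlier" and of tense-based open intervals could be sketched as follows. The half-unit uncertainty window is an illustrative assumption: the text describes approximate ranges (and the "several" → 2 convention) without giving an exact formula, so only the general shape of the rule is reproduced here.

```python
import datetime

UNIT_DAYS = {"day": 1, "week": 7, "month": 30, "year": 365}

def vague_earlier(anchor: datetime.date, count, unit: str):
    """ "<count> <unit>s earlier" -> an approximate (start, end) interval.
    The modifier 'several' is mapped to a count of 2, as described above."""
    n = 2 if count == "several" else int(count)
    offset = n * UNIT_DAYS[unit]
    slack = UNIT_DAYS[unit] // 2          # assumed half-unit uncertainty window
    centre = anchor - datetime.timedelta(days=offset)
    return (centre - datetime.timedelta(days=slack),
            centre + datetime.timedelta(days=slack))

def open_interval(tense: str, anchor: datetime.date):
    """Present/past perfect -> open start, ending at the anchor date;
    future -> open end, starting at the anchor date."""
    if tense == "future":
        return (anchor, None)
    return (None, anchor)

article = datetime.date(1998, 5, 31)
print(vague_earlier(article, 3, "month"))      # roughly mid-February to mid-March 1998
print(open_interval("past_perfect", article))  # (None, 1998-05-31)
```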
W01-1313
Assigning time-stamps to event-clauses. We describe a procedure for arranging into a timeline the contents of news stories describing the development of some situation. We describe the parts of the system that deal with (1) breaking sentences into event-clauses and (2) resolving both explicit and implicit temporal references. Evaluations show a performance of 52%, compared to humans. We infer time values based on the most recently assigned date or the date of the article.
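As a complement to the summary above, a minimal sketch of the final merging step: event-clauses stamped with dates or date intervals are drawn from several articles and sorted into one chronological list. The (start, end) interval representation and the choice to order interval-valued or open stamps by their start point are assumptions for illustration, not necessarily the convention used in the paper.

```python
import datetime

def chronological(clauses):
    """`clauses` is an iterable of (interval, text) pairs, where interval is
    a (start, end) tuple and either bound may be None (open interval)."""
    far_past = datetime.date.min
    return sorted(clauses, key=lambda c: c[0][0] or far_past)

stamped = [
    ((datetime.date(1998, 5, 30), datetime.date(1998, 5, 30)), "quake strikes"),
    ((datetime.date(1998, 2, 4),  datetime.date(1998, 2, 4)),  "earlier quake kills 2300"),
    ((datetime.date(1998, 6, 1),  datetime.date(1998, 6, 1)),  "helicopters evacuate injured"),
]
for interval, text in chronological(stamped):
    print(interval[0], text)
```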
building a discoursetagged corpus in the framework of rhetorical structure theory and anne anderson 1997 the reliability of a dialogue structure coding linguistics 1332 giacomo ferrari 1998 preliminary steps toward the creation of a discourse and text in of the first international conference on language the advent of largescale collections of annotated data has marked a paradigm shift in the research community for natural language processingthese corpora now also common in many languages have accelerated development efforts and energized the communityannotation ranges from broad characterization of documentlevel information such as topic or relevance judgments to discrete analysis of a wide range of linguistic phenomenahowever rich theoretical approaches to discoursetext analysis have yet to be applied on a large scaleso far the annotation of discourse structure of documents has been applied primarily to identifying topical segments intersentential relations and hierarchical analyses of small corpora in this paper we recount our experience in developing a large resource with discourselevel annotation for nlp researchour main goal in undertaking this effort was to create a reference corpus for communitywide usetwo essential considerations from the outset were that the corpus needed to be consistently annotated and that it would be made publicly available through the linguistic data consortium for a nominal fee to cover distribution coststhe paper describes the challenges we faced in building a corpus of this level of complexity and scope including selection of theoretical approach annotation methodology training and quality assurancethe resulting corpus contains 385 documents of american english selected from the penn treebank annotated in the framework of rhetorical structure theorywe believe this resource holds great promise as a rich new source of textlevel information to support multiple lines of research for language understanding applicationstwo principle goals underpin the creation of this discoursetagged corpus 1 the corpus should be grounded in a particular theoretical approach and 2 it should be sufficiently large enough to offer potential for widescale use including linguistic analysis training of statistical models of discourse and other computational linguistic applicationsthese goals necessitated a number of constraints to our approachthe theoretical framework had to be practical and repeatable over a large set of documents in a reasonable amount of time with a significant level of consistency across annotatorsthus our approach contributes to the community quite differently from detailed analyses of specific discourse phenomena in depth such as anaphoric relations or style types analysis of a single text from multiple perspectives or illustrations of a theoretical model on a single representative text our annotation work is grounded in the rhetorical structure theory framework we decided to use rst for three reasons can play a crucial role in building natural language generation systems and text summarization systems can be used to increase the naturalness of machine translation outputs and can be used to build essayscoring systems that provide students with discoursebased feedback we suspect that rst trees can be exploited successfully in the context of other applications as wellin the rst framework the discourse structure of a text can be represented as a tree defined in terms of four aspects supporting or background unit of informationbelow we describe the protocol that we 
used to build consistent rst annotationsthe first step in characterizing the discourse structure of a text in our protocol is to determine the elementary discourse units which are the minimal building blocks of a discourse treemann and thompson state that rst provides a general way to describe the relations among clauses in a text whether or not they are grammatically or lexically signalled yet applying this intuitive notion to the task of producing a large consistently annotated corpus is extremely difficult because the boundary between discourse and syntax can be very blurrythe examples below which range from two distinct sentences to a single clause all convey essentially the same meaning packaged in different ways in example 1 there is a consequential relation between the first and second sentencesideally we would like to capture that kind of rhetorical information regardless of the syntactic form in which it is conveyedhowever as examples 24 illustrate separating rhetorical from syntactic analysis is not always easyit is inevitable that any decision on how to bracket elementary discourse units necessarily involves some compromisesreseachers in the field have proposed a number of competing hypotheses about what constitutes an elementary discourse unitwhile some take the elementary units to be clauses others take them to be prosodic units turns of talk sentences intentionally defined discourse segments or the contextually indexed representation of information conveyed by a semiotic gesture asserting a single state of affairs or partial state of affairs in a discourse world regardless of their theoretical stance all agree that the elementary discourse units are nonoverlapping spans of textour goal was to find a balance between granularity of tagging and ability to identify units consistently on a large scalein the end we chose the clause as the elementary unit of discourse using lexical and syntactic clues to help determine boundaries relative clauses nominal postmodifiers or clauses that break up other legitimate edus are treated as embedded discourse units a relaxation of controls on exports to the soviet bloc is questioningwsj_2326 finally a small number of phrasal edus are allowed provided that the phrase begins with a strong discourse marker such as because in spite of as a result of according towe opted for consistency in segmenting sacrificing some potentially discourserelevant phrases in the processonce the elementary units of discourse have been determined adjacent spans are linked together via rhetorical relations creating a hierarchical structurerelations may be mononuclear or multinuclearmononuclear relations hold between two spans and reflect the situation in which one span the nucleus is more salient to the discourse structure while the other span the satellite represents supporting informationmultinuclear relations hold among two or more spans of equal weight in the discourse structurea total of 53 mononuclear and 25 multinuclear relations were used for the tagging of the rst corpusthe final inventory of rhetorical relations is data driven and is based on extensive analysis of the corpusalthough this inventory is highly detailed annotators strongly preferred keeping a higher level of granularity in the selections available to them during the tagging processmore extensive analysis of the final tagged corpus will demonstrate the extent to which individual relations that are similar in semantic content were distinguished consistently during the tagging processthe 78 relations 
used in annotating the corpus can be partitioned into 16 classes that share some type of rhetorical meaning attribution background because comparison condition contrast elaboration enablement evaluation explanation joint mannermeans topiccomment summary temporal topicchangefor example the class explanation includes the relations evidence explanationargumentative and reason while topiccomment includes problemsolution questionanswer statementresponse topiccomment and commenttopicin addition three relations are used to impose structure on the tree textualorganization span and sameunit our methodology for annotating the rst corpus builds on prior corpus work in the rhetorical structure theory framework by marcu et al because the goal of this effort was to build a highquality consistently annotated reference corpus the task required that we employ people as annotators whose primary professional experience was in the area of language analysis and reporting provide extensive annotator training and specify a rigorous set of annotation guidelinesthe annotators hired to build the corpus were all professional language analysts with prior experience in other types of data annotationthey underwent extensive handson training which took place roughly in three phasesduring the orientation phase the annotators were introduced to the principles of rhetorical structure theory and the discoursetagging tool used for the project the tool enables an annotator to segment a text into units and then build up a hierarchical structure of the discoursein this stage of the training the focus was on segmenting hard copy texts into edus and learning the mechanics of the toolin the second phase annotators began to explore interpretations of discourse structure by independently tagging a short document based on an initial set of tagging guidelines and then meeting as a group to compare resultsthe initial focus was on resolving segmentation differences but over time this shifted to addressing issues of relations and nuclearitythese exploratory sessions led to enhancements in the tagging guidelinesto reinforce new rules annotators retagged the documentduring this process we regularly tracked interannotator agreement in the final phase the annotation team concentrated on ways to reduce differences by adopting some heuristics for handling higher levels of the discourse structurewiebe et al present a method for automatically formulating a single best tag when multiple judges disagree on selecting between binary featuresbecause our annotators had to select among multiple choices at each stage of the discourse annotation process and because decisions made at one stage influenced the decisions made during subsequent stages we could not apply wiebe et als methodour methodology for determining the best guidelines was much more of a consensusbuilding process taking into consideration multiple factors at each stepthe final tagging manual over 80 pages in length contains extensive examples from the corpus to illustrate text segmentation nuclearity selection of relations and discourse cuesthe manual can be downloaded from the following web site httpwwwisiedumarcudiscoursethe actual tagging of the corpus progressed in three developmental phasesduring the initial phase of about four months the team created a preliminary corpus of 100 tagged documentsthis was followed by a onemonth reassessment phase during which we measured consistency across the group on a select set of documents and refined the annotation rulesat this point we decided 
to proceed by presegmenting all of the texts on hard copy to ensure a higher overall quality to the final corpuseach text was presegmented by two annotators discrepancies were resolved by the author of the tagging guidelinesin the final phase all 100 documents were retagged with the new approach and guidelinesthe remainder of the corpus was tagged in this mannerannotators developed different strategies for analyzing a document and building up the corresponding discourse treethere were two basic orientations for document analysis hard copy or graphical visualization with the toolhard copy analysis ranged from jotting of notes in the margins to marking up the document into discourse segmentsthose who preferred a graphical orientation performed their analysis simultaneously with building the discourse structure and were more likely to build the discourse tree in chunks rather than incrementallywe observed a variety of annotation styles for the actual building of a discourse treetwo of the more representative styles are illustrated below discourse tree by immediately attaching the current node to a previous nodewhen building the tree in this fashion the annotator must anticipate the upcoming discourse structure possibly for a large spanyet often an appropriate choice of relation for an unseen segment may not be obvious from the current unit that needs to be attachedthat is why annotators typically used this approach on short documents but resorted to other strategies for longer documents2the annotator segments multiple units at a time then builds discourse subtrees for each sentenceadjacent sentences are then linked and larger subtrees begin to emergethe final tree is produced by linking major chunks of the discourse corp18 this is in part because of the effect19 of having to average the number of shares outstanding20 she said21 in addition22 mrs lidgerwood said23 norfolk is likely to draw down its cash initially24 to finance the purchases25 and thus forfeit some interest income26 wsj_1111 the discourse subtree for this text fragment is given in figure 1using style 1 the annotator upon segmenting unit 17 must anticipate the upcoming example relation which spans units 1726however even if the annotator selects an incorrect relation at that point the tool allows great flexibility in changing the structure of the tree later onusing style 2 the annotator segments each sentence and builds up corresponding subtrees for spans 16 1718 1921 and 2226the elaborationobjectattributeembedded structurethis strategy allows the annotator to see the emerging discourse structure more globally thus it was the preferred approach for longer documentsconsider the text fragment below consisting of four sentences and 11 edus still analysts do not expect the buyback to significantly affect pershare earnings in the short term16 the impact will not be that great17 said graeme lidgerwood of first boston second and third subtrees are then linked via an explanationargumentative relation after which the fourth subtree is linked via an elaborationadditional relationthe resulting span 1726 is finally attached to node 16 as an example satellitea number of steps were taken to ensure the quality of the final discourse corpusthese involved two types of tasks checking the validity of the trees and tracking interannotator consistencyannotators reviewed each tree for syntactic and semantic validitysyntactic checking involved ensuring that the tree had a single root node and comparing the tree to the document to check for missing 
sentences or fragments from the end of the textsemantic checking involved reviewing nuclearity assignments as well as choice of relation and level of attachment in the treeall trees were checked with a discourse parser and tree traversal program which often identified errors undetected by the manual validation processin the end all of the trees worked successfully with these programswe tracked interannotator agreement during each phase of the project using a method developed by marcu et al for computing kappa statistics over hierarchical structuresthe kappa coefficient has been used extensively in previous empirical studies of discourse it measures pairwise agreement among a set of coders who make category judgments correcting for chance expected agreementthe method described in marcu et al maps hierarchical structures into sets of units that are labeled with categorial judgmentsthe strengths and shortcomings of the approach are also discussed in detail thereresearchers in content analysis suggest that values of kappa 08 reflect very high agreement while values between 06 and 08 reflect good agreementtable 1 shows average kappa statistics reflecting the agreement of three annotators at various stages of the tasks on selected documentsdifferent sets of documents were chosen for each stage with no overlap in documentsthe statistics measure annotation reliability at four levels elementary discourse units hierarchical spans hierarchical nuclearity and hierarchical relation assignmentsat the unit level the initial scores and final scores represent agreement on blind segmentation and are shown in boldfacethe interim june and november scores represent agreement on hard copy presegmented textsnotice that even with presegmenting the agreement on units is not 100 perfect because of human errors that occur in segmenting with the toolas table 1 shows all levels demonstrate a marked improvement from april to november ranging from about 077 to 092 at the span level from 070 to 088 at the nuclearity level and from 060 to 079 at the relation levelin particular when relations are combined into the 16 rhetoricallyrelated classes discussed in section 22 the november results of the annotation process are extremely goodthe fewerrelations column shows the improvement in scores on assigning relations when they are grouped in this manner with november results ranging from 078 to 082in order to see how much of the improvement had to do with presegmenting we asked the same three annotators to annotate five previously unseen documents in january without reference to a presegmented documentthe results of this experiment are given in the last row of table 1 and they reflect only a small overall decline in performance from the november resultsthese scores reflect very strong agreement and represent a significant improvement over previously reported results on annotating multiple texts in the rst framework table 2 reports final results for all pairs of taggers who doubleannotated four or more documents representing 30 out of the 53 documents that were doubletaggedresults are based on presegmented documentsour team was able to reach a significant level of consistency even though they faced a number of challenges which reflect differences in the agreement scores at the various levelswhile operating under the constraints typical of any theoretical approach in an applied environment the annotators faced a task in which the complexity increased as support from the guidelines tended to decreasethus while rules for segmenting 
were fairly precise annotators relied on heuristics requiring more human judgment to assign relations and nuclearityanother factor is that the cognitive challenge of the task increases as the tree takes shapeit is relatively straightforward for the annotator to make a decision on assignment of nuclearity and relation at the interclausal level but this becomes more complex at the intersentential level and extremely difficult when linking large segmentsthis tension between task complexity and guideline underspecification resulted from the practical application of a theoretical model on a broad scalewhile other discourse theoretical approaches posit distinctly different treatments for various levels of the discourse rst relies on a standard methodology to analyze the document at all levelsthe rst relation set is rich and the concept of nuclearity somewhat interpretivethis gave our annotators more leeway in interpreting the higher levels of the discourse structure thus introducing some stylistic differences which may prove an interesting avenue of future researchthe rst corpus consists of 385 wall street journal articles from the penn treebank representing over 176000 words of textin order to measure interannotator consistency 53 of the documents were doubletaggedthe documents range in size from 31 to 2124 words with an average of 45814 words per documentthe final tagged corpus contains 21789 edus with an average of 5659 edus per documentthe average number of words per edu is 81the articles range over a variety of topics including financial reports general interest stories businessrelated news cultural reviews editorials and letters to the editorin selecting these documents we partnered with the linguistic data consortium to select penn treebank texts for which the syntactic bracketing was known to be of high caliberthus the rst corpus provides an additional level of linguistic annotation to supplement existing annotated resourcesfor details on obtaining the corpus annotation software tagging guidelines and related documentation and resources see httpwwwisiedumarcudiscoursea growing number of groups have developed or are developing discourseannotated corpora for textthese can be characterized both in terms of the kinds of features annotated as well as by the scope of the annotationfeatures may include specific discourse cues or markers coreference links identification of rhetorical relations etcthe scope of the annotation refers to the levels of analysis within the document and can be characterized as follows topical segments linking of large text segments via specific relations or defining text objects with a text architecture developing corpora with these kinds of rich annotation is a laborintensive effortbuilding the rst corpus involved more than a dozen people on a full or parttime basis over a oneyear time frame annotation of a single document could take anywhere from 30 minutes to several hours depending on the length and topicretagging of a large number of documents after major enhancements to the annotation guidelines was also time consumingin addition limitations of the theoretical approach became more apparent over timebecause the rst theory does not differentiate between different levels of the tree structure a fairly finegrained set of relations operates between edus and edu clusters at the macrolevelthe procedural knowledge available at the edu level is likely to need further refinement for higherlevel text spans along the lines of other work which posits a few macrolevel 
relations for text segments such as ferrari or meyer moreover using the rst approach the resultant tree structure like a traditional outline imposed constraints that other discourse representations would notin combination with the tree structure the concept of nuclearity also guided an annotator to capture one of a number of possible stylistic interpretationswe ourselves are eager to explore these aspects of the rst and expect new insights to appear through analysis of the corpuswe anticipate that the rst corpus will be multifunctional and support a wide range of language engineering applicationsthe added value of multiple layers of overt linguistic phenomena enhancing the penn treebank information can be exploited to advance the study of discourse to enhance language technologies such as text summarization machine translation or information retrieval or to be a testbed for new and creative natural language processing techniques
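To make the discourse representation described above concrete, here is a minimal sketch of an RST-style tree over EDUs, with mononuclear (nucleus plus satellite) and multinuclear groupings. The class layout and the particular relation labels chosen for the toy example are illustrative assumptions; they are not the annotation tool's internal format.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class EDU:
    text: str

@dataclass
class RSTNode:
    relation: str                                   # e.g. "elaboration-additional"
    nuclei: List[Union["RSTNode", EDU]]             # more than one nucleus => multinuclear
    satellites: List[Union["RSTNode", EDU]] = field(default_factory=list)

# A toy analysis of three EDUs from the fragment discussed above:
# [Mrs Lidgerwood said] [Norfolk is likely to draw down its cash initially]
# [to finance the purchases]
tree = RSTNode(
    relation="attribution",
    satellites=[EDU("Mrs Lidgerwood said")],
    nuclei=[RSTNode(
        relation="enablement",
        nuclei=[EDU("Norfolk is likely to draw down its cash initially")],
        satellites=[EDU("to finance the purchases")],
    )],
)

def edus(node):
    """Yield the EDUs of a subtree, nuclei before satellites."""
    for child in node.nuclei + node.satellites:
        if isinstance(child, EDU):
            yield child
        else:
            yield from edus(child)

print([e.text for e in edus(tree)])
```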
W01-1605
Building a discourse-tagged corpus in the framework of Rhetorical Structure Theory. We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications. In our discourse tree bank, only 26% of Contrast relations are indicated by cue phrases, while in NTC7 about 70% of Contrast relations were indicated by cue phrases. Our corpus contains 385 Wall Street Journal articles annotated following Rhetorical Structure Theory.
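The "very high consistency" claim above is measured with the kappa coefficient. A minimal sketch of kappa for two coders over flat category judgments is shown below; the hierarchical version used in the study additionally maps each tree to a set of labelled units before comparing, which this sketch does not attempt.

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over parallel category judgments."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Two annotators labelling ten spans as Nucleus (N) or Satellite (S).
a = list("NNSNSNNSNN")
b = list("NNSNNNNSSN")
print(round(kappa(a, b), 2))  # 0.52
```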
nltk the natural language toolkit nltk the natural language toolkit is a suite of open source program modules tutorials and problem sets providing readytouse computational linguistics courseware nltk covers symbolic and statistical natural language processing and is interfaced to annotated corpora students augment and replace existing components learn structured programming by example and manipulate sophisticated models from the outset teachers of introductory courses on computational linguistics are often faced with the challenge of setting up a practical programming component for student assignments and projectsthis is a difficult task because different computational linguistics domains require a variety of different data structures and functions and because a diverse range of topics may need to be included in the syllabusa widespread practice is to employ multiple programming languages where each language provides native data structures and functions that are a good fit for the task at handfor example a course might use prolog for parsing perl for corpus processing and a finitestate toolkit for morphological analysisby relying on the builtin features of various languages the teacher avoids having to develop a lot of software infrastructurean unfortunate consequence is that a significant part of such courses must be devoted to teaching programming languagesfurther many interesting projects span a variety of domains and would require that multiple languages be bridgedfor example a student project that involved syntactic parsing of corpus data from a morphologically rich language might involve all three of the languages mentioned above perl for string processing a finite state toolkit for morphological analysis and prolog for parsingit is clear that these considerable overheads and shortcomings warrant a fresh approachapart from the practical component computational linguistics courses may also depend on software for inclass demonstrationsthis context calls for highly interactive graphical user interfaces making it possible to view program state observe program execution stepbystep and even make minor modifications to programs in response to what if questions from the classbecause of these difficulties it is common to avoid live demonstrations and keep classes for theoretical presentations onlyapart from being dull this approach leaves students to solve important practical problems on their own or to deal with them less efficiently in office hoursin this paper we introduce a new approach to the above challenges a streamlined and flexible way of organizing the practical component of an introductory computational linguistics coursewe describe nltk the natural language toolkit which we have developed in conjunction with a course we have taught at the university of pennsylvaniathe natural language toolkit is available under an open source license from httpnltksfnetnltk runs on all platforms supported by python including windows os x linux and unixthe most basic step in setting up a practical component is choosing a suitable programming languagea number of considerations influenced our choicefirst the language must have a shallow learning curve so that novice programmers get immediate rewards for their effortssecond the language must support rapid prototyping and a short developtest cycle an obligatory compilation step is a serious detractionthird the code should be selfdocumenting with a transparent syntax and semanticsfourth it should be easy to write structured programs ideally 
objectoriented but without the burden associated with languages like cfinally the language must have an easytouse graphics library to support the development of graphical user interfacesin surveying the available languages we believe that python offers an especially good fit to the above requirementspython is an objectoriented scripting language developed by guido van rossum and available on all platforms python offers a shallow learning curve it was designed to be easily learnt by children as an interpreted language python is suitable for rapid prototypingpython code is exceptionally readable and it has been praised as executable pseudocode python is an objectoriented language but not punitively so and it is easy to encapsulate data and methods inside python classesfinally python has an interface to the tk graphics toolkit and writing graphical interfaces is straightforwardseveral criteria were considered in the design and implementation of the toolkitthese design criteria are listed in the order of their importanceit was also important to decide what goals the toolkit would not attempt to accomplish we therefore include an explicit set of nonrequirements which the toolkit is not expected to satisfyease of usethe primary purpose of the toolkit is to allow students to concentrate on building natural language processing systemsthe more time students must spend learning to use the toolkit the less useful it isconsistencythe toolkit should use consistent data structures and interfacesextensibilitythe toolkit should easily accommodate new components whether those components replicate or extend the toolkits existing functionalitythe toolkit should be structured in such a way that it is obvious where new extensions would fit into the toolkits infrastructuredocumentationthe toolkit its data structures and its implementation all need to be carefully and thoroughly documentedall nomenclature must be carefully chosen and consistently usedsimplicitythe toolkit should structure the complexities of building nlp systems not hide themtherefore each class defined by the toolkit should be simple enough that a student could implement it by the time they finish an introductory course in computational linguisticsmodularitythe interaction between different components of the toolkit should be kept to a minimum using simple welldefined interfacesin particular it should be possible to complete individual projects using small parts of the toolkit without worrying about how they interact with the rest of the toolkitthis allows students to learn how to use the toolkit incrementally throughout a coursemodularity also makes it easier to change and extend the toolkitcomprehensivenessthe toolkit is not intended to provide a comprehensive set of toolsindeed there should be a wide variety of ways in which students can extend the toolkitefficiencythe toolkit does not need to be highly optimized for runtime performancehowever it should be efficient enough that students can use their nlp systems to perform real tasksclevernessclear designs and implementations are far preferable to ingenious yet indecipherable onesthe toolkit is implemented as a collection of independent modules each of which defines a specific data structure or taska set of core modules defines basic data types and processing systems that are used throughout the toolkitthe token module provides basic classes for processing individual elements of text such as words or sentencesthe tree module defines data structures for representing tree structures over text 
such as syntax trees and morphological treesthe probability module implements classes that encode frequency distributions and probability distributions including a variety of statistical smoothing techniquesthe remaining modules define data structures and interfaces for performing specific nlp tasksthis list of modules will grow over time as we add new tasks and algorithms to the toolkitthe parser module defines a highlevel interface for producing trees that represent the structures of textsthe chunkparser module defines a subinterface for parsers that identify nonoverlapping linguistic groups in unrestricted textfour modules provide implementations for these abstract interfacesthe srparser module implements a simple shiftreduce parserthe chartparser module defines a flexible parser that uses a chart to record hypotheses about syntactic constituentsthe pcfgparser module provides a variety of different parsers for probabilistic grammarsand the rechunkparser module defines a transformational regularexpression based implementation of the chunk parser interfacethe tagger module defines a standard interface for augmenting each token of a text with supplementary information such as its part of speech or its wordnet synset tag and provides several different implementations for this interfacethe fsa module defines a data type for encoding finite state automata and an interface for creating automata from regular expressionsdebugging time is an important factor in the toolkits ease of useto reduce the amount of time students must spend debugging their code we provide a type checking module which can be used to ensure that functions are given valid argumentsthe type checking module is used by all of the basic data types and processing classessince type checking is done explicitly it can slow the toolkit downhowever when efficiency is an issue type checking can be easily turned off and with type checking is disabled there is no performance penaltyvisualization modules define graphical interfaces for viewing and manipulating data structures and graphical tools for experimenting with nlp tasksthe drawtree module provides a simple graphical interface for displaying tree structuresthe drawtree edit module provides an interface for building and modifying tree structuresthe drawplot graph module can be used to graph mathematical functionsthe drawfsa module provides a graphical tool for displaying and simulating finite state automatathe drawchart module provides an interactive graphical tool for experimenting with chart parsersthe visualization modules provide interfaces for interaction and experimentation they do not directly implement nlp data structures or taskssimplicity of implementation is therefore less of an issue for the visualization modules than it is for the rest of the toolkitthe classifier module defines a standard interface for classifying texts into categoriesthis interface is currently implemented by two modulesthe classifiernaivebayes module defines a text classifier based on the naive bayes assumptionthe classifiermaxent module defines the maximum entropy model for text classification and implements two algorithms for training the model generalized iterative scaling and improved iterative scalingthe classifierfeature module provides a standard encoding for the information that is used to make decisions for a particular classification taskthis standard encoding allows students to experiment with the differences between different text classification algorithms using identical feature setsthe 
classifierfeatureselection module defines a standard interface for choosing which features are relevant for a particular classification taskgood feature selection can significantly improve classification performancethe toolkit is accompanied by extensive documentation that explains the toolkit and describes how to use and extend itthis documentation is divided into three primary categories tutorials teach students how to use the toolkit in the context of performing specific taskseach tutorial focuses on a single domain such as tagging probabilistic systems or text classificationthe tutorials include a highlevel discussion that explains and motivates the domain followed by a detailed walkthrough that uses examples to show how nltk can be used to perform specific tasksreference documentation provides precise definitions for every module interface class method function and variable in the toolkitit is automatically extracted from docstring comments in the python source code using epydoc technical reports explain and justify the toolkits design and implementationthey are used by the developers of the toolkit to guide and document the toolkits constructionstudents can also consult these reports if they would like further information about how the toolkit is designed and why it is designed that waynltk can be used to create student assignments of varying difficulty and scopein the simplest assignments students experiment with an existing modulethe wide variety of existing modules provide many opportunities for creating these simple assignmentsonce students become more familiar with the toolkit they can be asked to make minor changes or extensions to an existing modulea more challenging task is to develop a new modulehere nltk provides some useful starting points predefined interfaces and data structures and existing modules that implement the same interfaceas an example of a moderately difficult assignment we asked students to construct a chunk parser that correctly identifies base noun phrase chunks in a given text by defining a cascade of transformational chunking rulesthe nltk rechunkparser module provides a variety of regularexpression based rule types which the students can instantiate to construct complete rulesfor example chunkrule excises verbs from existing chunks splitrule splits any existing chunk that contains a singular noun followed by determiner into two pieces and mergerule combines two adjacent chunks where the first chunk ends and the second chunk starts with adjectivesthe chunking tutorial motivates chunk parsing describes each rule type and provides all the necessary code for the assignmentthe provided code is responsible for loading the chunked partofspeech tagged text using an existing tokenizer creating an unchunked version of the text applying the chunk rules to the unchunked text and scoring the resultstudents focus on the nlp task only providing a rule set with the best coveragein the remainder of this section we reproduce some of the cascades created by the studentsthe first example illustrates a combination of several rule types chunkrule chunkrule chunkrule unchunkrule unchunkrule mergerule splitrule the next example illustrates a bruteforce statistical approachthe student calculated how often each partofspeech tag was included in a noun phrasethey then constructed chunks from any sequence of tags that occurred in a noun phrase more than 50 of the timein the third example the student constructed a single chunk containing the entire text and then excised all elements 
that did not belongnltk provides graphical tools that can be used in class demonstrations to help explain basic nlp concepts and algorithmsthese interactive tools can be used to display relevant data structures and to show the stepbystep execution of algorithmsboth data structures and control flow can be easily modified during the demonstration in response to questions from the classsince these graphical tools are included with the toolkit they can also be used by studentsthis allows students to experiment at home with the algorithms that they have seen presented in classthe chart parsing tool is an example of a graphical tool provided by nltkthis tool can be used to explain the basic concepts behind chart parsing and to show how the algorithm workschart parsing is a flexible parsing algorithm that uses a data structure called a chart to record hypotheses about syntactic constituentseach hypothesis is represented by a single edge on the charta set of rules determine when new edges can be added to the chartthis set of rules controls the overall behavior of the parser the chart parsing tool demonstrates the process of parsing a single sentence with a given grammar and lexiconits display is divided into three sections the bottom section displays the chart the middle section displays the sentence and the top section displays the partial syntax tree corresponding to the selected edgebuttons along the bottom of the window are used to control the execution of the algorithmthe main display window for the chart parsing tool is shown in figure 1this tool can be used to explain several different aspects of chart parsingfirst it can be used to explain the basic chart data structure and to show how edges can represent hypotheses about syntactic constituentsit can then be used to demonstrate and explain the individual rules that the chart parser uses to create new edgesfinally it can be used to show how these individual rules combine to find a complete parse for a given sentenceto reduce the overhead of setting up demonstrations during lecture the user can define a list of preset chartsthe tool can then be reset to any one of these charts at any timethe chart parsing tool allows for flexible control of the parsing algorithmat each step of the algorithm the user can select which rule or strategy they wish to applythis allows the user to experiment with mixing different strategies the user can exercise finegrained control over the algorithm by selecting which edge they wish to apply a rule tothis flexibility allows lecturers to use the tool to respond to a wide variety of questions and allows students to experiment with different variations on the chart parsing algorithmnltk provides students with a flexible framework for advanced projectstypical projects involve the development of entirely new functionality for a previously unsupported nlp task or the development of a complete system out of existing and new modulesthe toolkits broad coverage allows students to explore a wide variety of topicsin our introductory computational linguistics course topics for student projects included text generation word sense disambiguation collocation analysis and morphological analysisnltk eliminates the tedious infrastructurebuilding that is typically associated with advanced student projects by providing students with the basic data structures tools and interfaces that they needthis allows the students to concentrate on the problems that interest themthe collaborative opensource nature of the toolkit can provide 
students with a sense that their projects are meaningful contributions and not just exercisesseveral of the students in our course have expressed interest in incorporating their projects into the toolkitfinally many of the modules included in the toolkit provide students with good examples of what projects should look like with well thoughtout interfaces clean code structure and thorough documentationthe probabilistic parsing module was created as a class project for a statistical nlp coursethe toolkit provided the basic data types and interfaces for parsingthe project extended these adding a new probabilistic parsing interface and using subclasses to create a probabilistic version of the context free grammar data structurethese new components were used in conjunction with several existing components such as the chart data structure to define two implementations of the probabilistic parsing interfacefinally a tutorial was written that explained the basic motivations and concepts behind probabilistic parsing and described the new interfaces data structures and parserswe used nltk as a basis for the assignments and student projects in cis530 an introductory computational linguistics class taught at the university of pennsylvaniacis530 is a graduate level class although some advanced undergraduates were also enrolledmost students had a background in either computer science or linguistics students were required to complete five assignments two exams and a final projectall class materials are available from the course website httpwwwcisupenneducis530the experience of using nltk was very positive both for us and for the studentsthe students liked the fact that they could do interesting projects from the outsetthey also liked being able to run everything on their computer at homethe students found the extensive documentation very helpful for learning to use the toolkitthey found the interfaces defined by nltk intuitive and appreciated the ease with which they could combine different components to create complete nlp systemswe did encounter a few difficulties during the semesterone problem was finding large clean corpora that the students could use for their assignmentsseveral of the students needed assistance finding suitable corpora for their final projectsanother issue was the fact that we were actively developing nltk during the semester some modules were only completed one or two weeks before the students used themas a result students who worked at home needed to download new versions of the toolkit several times throughout the semesterluckily python has extensive support for installation scripts which made these upgrades simplethe students encountered a couple of bugs in the toolkit but none were serious and all were quickly correctedthe computational component of computational linguistics courses takes many formsin this section we briefly review a selection of approaches classified according to the target audiencelinguistics studentsvarious books introduce programming or computing to linguiststhese are elementary on the computational side providing a gentle introduction to students having no prior experience in computer scienceexamples of such books are using computers in linguistics and programming for linguistics java technology for language researchers grammar developersinfrastructure for grammar development has a long history in unificationbased grammar frameworks from dcg to hpsg recent work includes a concurrent development has been the finite state toolkits such as the xerox toolkit 
this work has found widespread pedagogical applicationother researchers and developersa variety of toolkits have been created for research or rd purposesexamples include the cmucambridge statistical language modeling toolkit the emu speech database system the general architecture for text engineering the maxent package for maximum entropy models and the annotation graph toolkit although not originally motivated by pedagogical needs all of these toolkits have pedagogical applications and many have already been used in teachingnltk provides a simple extensible uniform framework for assignments projects and class demonstrationsit is well documented easy to learn and simple to usewe hope that nltk will allow computational linguistics classes to include more handson experience with using and building nlp components and systemsnltk is unique in its combination of three factorsfirst it was deliberately designed as courseware and gives pedagogical goals primary statussecond its target audience consists of both linguists and computer scientists and it is accessible and challenging at many levels of prior computational skillfinally it is based on an objectoriented scripting language supporting rapid prototyping and literate programmingwe plan to continue extending the breadth of materials covered by the toolkitwe are currently working on nltk modules for hidden markov models language modeling and tree adjoining grammarswe also plan to increase the number of algorithms implemented by some existing modules such as the text classification modulefinding suitable corpora is a prerequisite for many student assignments and projectswe are therefore putting together a collection of corpora containing data appropriate for every module defined by the toolkitnltk is an open source project and we welcome any contributionsreaders who are interested in contributing to nltk or who have suggestions for improvements are encouraged to contact the authorswe are indebted to our students for feedback on the toolkit and to anonymous reviewers jee bang and the workshop organizers for comments on an earlier version of this paperwe are grateful to mitch marcus and the department of computer and information science at the university of pennsylvania for sponsoring the work reported here
W02-0109
nltk the natural language toolkitnltk the natural language toolkit is a suite of open source program modules tutorials and problem sets providing readytouse computational linguistics coursewarenltk covers symbolic and statistical natural language processing and is interfaced to annotated corporastudents augment and replace existing components learn structured programming by example and manipulate sophisticated models from the outsetnltk the natural language toolkit is a suite of python modules providing many nlp data types processing tasks corpus samples and readers together with animated algorithms tutorials and problem sets
tuning support vector machines for biomedical named entity recognition we explore the use of support vector machines for biomedical named entity recognition to make the svm training with the available largest corpus the genia corpus tractable we propose to split the nonentity class into subclasses using partofspeech information in addition we explore new features such as word cache and the states of an hmm trained by unsupervised learning experiments on the genia corpus show that our class splitting technique not only enables the training with the genia corpus but also improves the accuracy the proposed new features also contribute to improve the accuracy we compare our svmbased recognition system with a system using maximum entropy tagging method application of natural language processing is now a key research topic in bioinformaticssince it is practically impossible for a researcher to grasp all of the huge amount of knowledge provided in the form of natural language eg journal papers there is a strong demand for biomedical information extraction which extracts knowledge automatically from biomedical papers using nlp techniques the process called named entity recognition which finds entities that fill the information slots eg proteins dnas rnas cells etc in the biomedical context is an important building block in such biomedical ie systemsconceptually named entity recognition consists of two tasks identification which finds the region of a named entity in a text and classification which determines the semantic class of that named entitythe following illustrates biomedical named entity recognitionthus ciitaprotezn not only activates the expression of class ii genesdna but recruits another b cellspecific coactivator to increase transcriptional activity of class ii promotersdna in machine learning approach has been applied to biomedical named entity recognition however no work has achieved sufficient recognition accuracyone reason is the lack of annotated corpora for training as is often the case of a new domainnobata et al and collier et al trained their model with only 100 annotated paper abstracts from the medline database and yamada et al used only 77 annotated paper abstractsin addition it is difficult to compare the techniques used in each study because they used a closed and different corpusto overcome such a situation the genia corpus has been developed and at this time it is the largest biomedical annotated corpus available to public containing 670 annotated abstracts of the medline databaseanother reason for low accuracies is that biomedical named entities are essentially hard to recognize using standard feature sets compared with the named entities in newswire articles thus we need to employ powerful machine learning techniques which can incorporate various and complex features in a consistent waysupport vector machines and maximum entropy method are powerful learning methods that satisfy such requirements and are applied successfully to other nlp tasks in this paper we apply support vector machines to biomedical named entity recognition and train them with the genia corpuswe formulate the named entity recognition as the classification of each word with context to one of the classes that represent region and named entitys semantic classalthough there is a previous work that applied svms to biomedical named entity task in this formulation their method to construct a classifier using svms onevsrest fails to train a classifier with entire genia corpus since the cost of svm training 
is superlinear to the size of training sampleseven with a more feasible method pairwise which is employed in we cannot train a classifier in a reasonable time because we have a large number of samples that belong to the nonentity class in this formulationto solve this problem we propose to split the nonentity class to several subclasses using partofspeech informationwe show that this technique not only enables the training feasible but also improves the accuracyin addition we explore new features such as word cache and the states of an unsupervised hmm for named entity recognition using svmsin the experiments we show the effect of using these features and compare the overall performance of our svmbased recognition system with a system using the maximum entropy method which is an alternative to the svm methodthe genia corpus is an annotated corpus of paper abstracts taken from the medline databasecurrently 670 abstracts are annotated with named entity tags by biomedical experts and made available to public 1 these 670 abstracts are a subset of more than 5000 abstracts obtained by the query human and blood cell and transcription factor to the medline databasetable 1 shows basic statistics of the genia corpussince the genia corpus is intended to be extensive there exist 24 distinct named entity classes in the corpus2 our task is to find a named entity region in a paper abstract and correctly select its class out of these 24 classesthis number of classes is relatively large compared with other corpora used in previous studies and compared with the named entity task for newswire articlesthis indicates that the task with the genia corpus is hard apart from the difficulty of the biomedical domain itselfwe formulate the named entity task as the classification of each word with context to one of the classes that represent region information and named entitys semantic classseveral representations to encode region information are proposed and examined in this paper we employ the simplest bio representation which is also used in we modify this representation in section 51 in order to accelerate the svm trainingin the bio representation the region information is represented as the class prefixes b and i and a class ob means that the current word is at the beginning of a named entity i means that the current word is in a named entity and o means the word is not in a named entityfor each named entity class c class because and ic are producedtherefore if we have n named entity classes the bio representation yields 2n 1 classes which will be the targets of a classifierfor instance the following corresponds to the annotation number of glucocorticoid receptorsprotein in support vector machines are powerful methods for learning a classifier which have been applied successfully to many nlp tasks such as base phrase chunking and partofspeech tagging the svm constructs a binary classifier that outputs 1 or 1 given a sample vector x e right nowthe decision is based on the separating hyperplane as followsthe class for an input x c is determined by seeing which side of the space separated by the hyperplane w x b 0 the input lies ongiven a set of labeled training samples xi right now yi 1 1 the svm training tries to find the optimal hyperplane ie the hyperplane with the maximum marginmargin is defined as the distance between the hyperplane and the training samples nearest to the hyperplanemaximizing the margin insists that these nearest samples exist on both sides of the separating hyperplane and the hyperplane lies 
exactly at the midpoint of these support vectorsthis margin maximization tightly relates to the fine generalization power of svmsassuming that wxib 1 at the support vectors without loss of generality the svm training can be formulated as the following optimization problem3 the solution of this problem is known to be written as follows using only support vectors and weights for themin the svm learning we can use a function k called a kernel function instead of the inner product in the above equationintroducing a kernel function means mapping an original input x using st k to another usually a higher dimensional feature spacewe construct the optimal hyperplane in that spaceby using kernel functions we can construct a nonlinear separating surface in the original feature spacefortunately such nonlinear training does not increase the computational cost if the calculation of the kernel function is as cheap as the inner producta polynomial function defined as d is popular in applications of svms to nlps because it has an intuitively sound interpretation that each dimension of the mapped space is a 3for many realworld problems where the samples may be inseparable we allow the constraints are broken with some penaltyin the experiments we use socalled 1norm soft margin formulation described as subject to yi 1 ei i 1 l ei 0 i 1 l conjunction of d features in the original sampleas described above the standard svm learning constructs a binary classifierto make a named entity recognition system based on the bio representation we require a multiclass classifieramong several methods for constructing a multiclass svm we use a pairwise method proposed by kre13el instead of the onevsrest method used in and extend the bio representation to enable the training with the entire genia corpushere we describe the onevsrest method and the pairwise method to show the necessity of our extensionboth onevsrest and pairwise methods construct a multiclass classifier by combining many binary svmsin the following explanation k denotes the number of the target classes onevsrest construct k binary svms each of which determines whether the sample should be classified as class i or as the other classesthe output is the class with the maximum f in equation 1 pairwise construct k2 binary svms each of which determines whether the sample should be classified as class i or as class jeach binary svm has one vote and the output is the class with the maximum votesbecause the svm training is a quadratic optimization program its cost is superlinear to the size of the training samples even with the tailored techniques such as smo and kernel evaluation caching let l be the number of the training samples then the onevsrest method takes time in k osvmthe bio formulation produces one training sample per word and the training with the genia corpus involves over 100000 training samples as can be seen from table 1therefore it is apparent that the onevsrest method is impractical with the genia corpuson the other hand if target classes are equally distributed the pairwise method will take time in k2 os vmthis method is worthwhile because each training is much faster though it requires the training of 2 times more classifiersit is also reported that the pairwise method achieves higher accuracy than other methods in some benchmarks an input x to an svm classifier is a feature representation of the word to be classified and its contextwe use a bitvector representation each dimension of which indicates whether the input matches with 4 named entity 
recognition using me a certain featurethe following illustrates the well model used features for the named entity recognition taskthe maximum entropy method with which we compare our svmbased method defines the probability that the class is c given an input vector x as follows where z is a normalization constant and fi is a feature functiona feature function is defined in the same way as the features in the svm learning except that it includes c in it like f wikif x contains previously assigned classes then the most probable searched by using the viterbitype algorithmwe use the maximum entropy tagging method described in for the experiments which is a variant of modified to use hmm state featuresin the above definitions k is a relative word position from the word to be classifieda negative value represents a preceding words position and a positive value represents a following words positionnote that we assume that the classification proceeds left to right as can be seen in the definition of the preceding class featurefor the svm classification we does not use a dynamic argmaxtype classification such as the viterbi algorithm since it is difficult to define a good comparable value for the confidence of a prediction such as probabilitythe consequences of this limitation will be discussed with the experimental resultsfeatures usually form a group with some variables such as the position unspecifiedin this paper we instantiate all features ie instantiate for all i for a group and a positionthen it is convenient to denote a set of features for a group g and a position k as gk using this notation we write a feature set as w1 w0 pre1 pre0 pc14 this feature description derives the following input vector5 in section 33 we described that if target classes are equally distributed the pairwise method will reduce the training costin our case however we have a very unbalanced class distribution with a large number of samples belonging to the class o this leads to the same situation with the onevsrest method ie if lo is the number of the samples belonging to the class o then the most dominant part of the training takes time in k osvmone solution to this unbalanced class distribution problem is to split the class o into several subclasses effectivelythis will reduce the training cost for the same reason that the pairwise method worksin this paper we propose to split the nonentity class according to partofspeech information of the wordthat is given a partofspeech tag set pos we produce new pos classes op p possince we use a pos tagger that outputs 45 penn treebanks pos tags in this paper we have new 45 subclasses which correspond to nonentity regions such as onns ojj and odt splitting by pos information seems useful for improving the system accuracy as well because in the named entity recognition we must discriminate between nouns in named entities and nouns in ordinal noun phrasesin the experiments we show this class splitting technique not only enables the feasible training but also improves the accuracyin addition to the standard features we explore word cache feature and hmm state feature mainly to solve the data sparseness problemalthough the genia corpus is the largest annotated corpus for the biomedical domain it is still small compared with other linguistic annotated corpora such as the penn treebankthus the data sparseness problem is severe and must be treated carefullyusually the data sparseness is prevented by using more general features that apply to a broader set of instances while polynomial 
kernels in the svm learning can effectively generate feature conjunctions kernel functions that can effectively generate feature disjunctions are not knownthus we should explicitly add dimensions for such general featuresthe word cache feature is defined as the disjunction of several word features as we intend that the word cache feature captures the similarities of the patterns with a common key word such as followswe use a left word cache defined as lwcki wc_k0i and a right word cache defined as rwcki wc1ki for patterns like and in the above example respectivelykazama et al proposed to use as features the viterbi state sequence of a hidden markov model to prevent the data sparseness problem in the maximum entropy tagging modelan hmm is trained with a large number of unannotated texts by using an unsupervised learning methodbecause the number of states of the hmm is usually made smaller than v the viterbi states give smoothed but maximally informative representations of word patterns tuned for the domain from which the raw texts are takenthe hmm feature is defined in the same way as the word feature as follows hmmki 1 if the viterbi state for wk is the ith state in the hmms states w 0 otherwise in the experiments we train an hmm using raw medline abstracts in the genia corpus and show that the hmm state feature can improve the accuracytowards practical named entity recognition using svms we have tackled the following implementation issuesit would be impossible to carry out the experiments in a reasonable time without such effortsparallel training the training of pairwise svms has trivial parallelism ie each svm can be trained separatelysince computers with two or more cpus are not expensive these days parallelization is very practical solution to accelerate the training of pairwise svmsfast winner finding although the pairwise method reduces the cost of training it greatly increases the number of classifications needed to determine the class of one samplefor example for our experiments using the genia corpus the bio representation with class splitting yields more than 4000 classification pairsfortunately we can stop classifications when a class gets k 1 votes and this stopping greatly saves classification time moreover we can stop classifications when the current votes of a class is greater than the others possible votessupport vector caching in the pairwise method though we have a large number of classifiers each classifier shares some support vectors with other classifiersby storing the bodies of all support vectors together and letting each classifier have only the weights we can greatly reduce the size of the classifierthe sharing of support vectors also can be exploited to accelerate the classification by caching the value of the kernel function between a support vector and a classifiee sampleto conduct experiments we divided 670 abstracts of the genia corpus into the training part and the test part 6 texts are tokenized by using penn treebanks tokenizeran hmm for the hmm state features was trained with raw abstracts of the genia corpus 7 the number of states is 160the vocabulary for the word feature is constructed by taking the most frequent 10000 words from the above raw abstracts the prefixsuffixprefix list by taking the most frequent 10000 prefixessuffixessubstrings8 the performance is measured by precision recall and fscore which are the standard measures for the named entity recognitionsystems based on the bio representation may produce an inconsistent class sequence such as o 
bdna irna owe interpret such outputs as follows once a named entity starts with because then we interpret that the named entity with class c ends only when we see another b or o tagwe have implemented smo algorithm and techniques described in for soft margin svms in c programming language and implemented support codes for pairwise classification and parallel training in java programming languageto obtain pos information required for features and class splitting we used an english pos tagger described in first we show the effect of the class splitting described in section 51varying the size of training data we compared the change in the training time and the accuracy with and without the class splittingwe used a feature set hw pre suf sub posi22pc21 and the inner product kernel9 the training time was measured on a machine with four 700mhz pentiumiiis and 16gb ramtable 2 shows the results of the experimentsfigure 1 shows the results graphicallywe can see that without splitting we soon suffer from superlinearity of the svm training while with splitting we can handle the training with over 100000 samples in a reasonable timeit is very important that the splitting technique does not sacrifice the accuracy for speed rather improves the accuracyin this experiment we see the effect of the word cache feature and the hmm state feature described in section 34the effect is assessed by the accuracy gain observed by adding each feature set to a base feature set and the accuracy degradation observed by subtracting it from a base setthe first column in table 3 shows an adding case where the base feature set is w22the columns and show subtracting cases where the base feature set is hw pre suf sub pos hmmik k lwck rwck pc21 with k 2 and k 3 respectivelythe kernel function is the inner productwe can see that word cache and hmm state features surely improve the recognition accuracyin the table we also included the accuracy change for other standard featurespreceeding classes and suffixes are definitely helpfulon the other hand the substring feature is not effective in our settingalthough the effects of partofspeech tags and prefixes are not so definite it can be said that they are practically effective since they show positive effects in the case of the maximum performancein this set of experiments we compare our svmbased system with a named entity recognition system based on the maximum entropy methodfor the svm system we used the feature set hw pre suf pos hmmi3 3 lwc3 rwc3 pc21 which is shown to be the best in the previous experimentthe compared system is a maximum entropy tagging model described in though it supports several character type features such as number and hyphen and some conjunctive features such as word ngram we do not use these features to compare the performance under as close a condition as possiblethe feature set used in the maximum entropy system is expressed as hwpresufposhmmi22 pc2110 both systems use the bio representation with splittingtable 4 shows the accuracies of both systemsfor the svm system we show the results with the inner product kernel and several polynomial kernelsthe row all shows the accuracy from the view10when the width becomes 3 3 the accuracy degrades point of the identification task which only finds the named entity regionsthe accuracies for several major entity classes are also shownthe svm system with the 2dimensional polynomial kernel achieves the highest accuracythis comparison may be unfair since a polynomial kernel has the effect of using conjunctive features 
while the me system does not use such conjunctive featuresnevertheless the facts we can introduce the polynomial kernel very easily there are very few parameters to be tuned11 we could achieve the higher accuracy show an advantage of the svm systemit will be interesting to discuss why the svm systems with the inner product kernel are outperformed by the me systemwe here discuss two possible reasonsthe first is that the svm system does not use a dynamic decision such as the viterbi algorithm while the me system uses itto see this we degrade the me system so that it predicts the classes deterministically without using the viterbi algorithmwe found that this system only marks 5154 in fscorethus it can be said that a dynamic decision is important for this named entity taskhowever although a method to convert the outputs of a binary svm to probabilistic values is proposed the way to obtain meaningful probabilistic values needed in viterbitype algorithms from the outputs of a multiclass svm is unknownsolving this problem is certainly a part of the future workthe second possible reason is that the svm system in this paper does not use any cutoff or feature truncation method to remove data noise while the me system uses a simple feature cutoff method12 we observed that the me system without the cutoff only marks 4911 in 11c s r and d 12features that occur less than 10 times are removedfscorethus such a noise reduction method is also importanthowever the cutoff method for the me method cannot be applied without modification since as described in section 34 the definition of the features are different in the two approachesit can be said the features in the me method is finer than those in svmsin this sense the me method allows us more flexible feature selectionthis is an advantage of the me methodthe accuracies achieved by both systems can be said high compared with those of the previous methods if we consider that we have 24 named entity classeshowever the accuracies are not sufficient for a practical usethough higher accuracy will be achieved with a larger annotated corpus we should also explore more effective features and find effective feature combination methods to exploit such a large corpus maximallywe have described the use of support vector machines for the biomedical named entity recognition taskto make the training of svms with the genia corpus practical we proposed to split the nonentity class by using pos informationin addition we explored the new types of features word cache and hmm states to avoid the data sparseness problemin the experiments we have shown that the class splitting technique not only makes training feasible but also improves the accuracywe have also shown that the proposed new features also improve the accuracy and the svm system with the polynomial kernel function outperforms the mebased systemwe would like to thank dr jindong kim for providing us easytouse preprocessed training data
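As a concrete illustration of the class-splitting technique described above, the sketch below relabels each plain O tag with the part-of-speech tag of the word, producing subclasses such as O-DT and O-NNS while leaving B and I tags unchanged. The helper name and example tags are illustrative only.

    def split_o_class(bio_tags, pos_tags):
        # replace each plain O tag with a POS-specific subclass such as O-NNS,
        # leaving B-* and I-* tags unchanged
        return [('O-' + pos if tag == 'O' else tag)
                for tag, pos in zip(bio_tags, pos_tags)]

    print(split_o_class(['O', 'B-PROTEIN', 'I-PROTEIN', 'O'],
                        ['DT', 'NN', 'NNS', 'IN']))
    # -> ['O-DT', 'B-PROTEIN', 'I-PROTEIN', 'O-IN']

With the 45 Penn Treebank tags this replaces the single dominant O class by 45 smaller subclasses, which is what makes the pairwise SVM training tractable on the full GENIA corpus, and the experiments above show it also improves accuracy.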
W02-0301
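The pairwise multi-class scheme and the polynomial kernel used in the system above can be sketched as follows. The binary classifiers are passed in as plain functions, and all names and toy values are illustrative; the authors' own implementation (SMO in C with Java support code) is not reproduced here.

    from collections import Counter
    from itertools import combinations

    def poly_kernel(x, z, d=2):
        # d-th degree polynomial kernel; its feature space corresponds to
        # conjunctions of up to d of the original binary features
        return (sum(xi * zi for xi, zi in zip(x, z)) + 1) ** d

    def pairwise_classify(x, classes, binary_svms):
        # binary_svms[(a, b)](x) returns either a or b; each pair casts one vote
        votes = Counter()
        for pair in combinations(classes, 2):   # k(k-1)/2 classifiers in total
            votes[binary_svms[pair](x)] += 1
        return votes.most_common(1)[0][0]

    classes = ['B-PROTEIN', 'I-PROTEIN', 'O']
    # toy binary classifiers that always prefer the alphabetically smaller class
    binary_svms = {p: (lambda x, p=p: min(p)) for p in combinations(classes, 2)}
    print(pairwise_classify([1, 0, 1], classes, binary_svms))   # -> 'B-PROTEIN'

As noted in the implementation discussion above, the voting loop can be cut short once some class has collected enough votes that no other class can overtake it (fast winner finding).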
tuning support vector machines for biomedical named entity recognitionwe explore the use of support vector machines for biomedical named entity recognitionto make the svm training with the available largest corpus the genia corpus tractable we propose to split the nonentity class into subclasses using partofspeech informationin addition we explore new features such as word cache and the states of an hmm trained by unsupervised learningexperiments on the genia corpus show that our class splitting technique not only enables the training with the genia corpus but also improves the accuracythe proposed new features also contribute to improve the accuracywe compare our svmbased recognition system with a system using maximum entropy tagging methodfor protein name recognition we achieve scores of 0.492 0.664 and 0.565 for precision recall and fscore respectivelywe use a feature set containing lexical information pos tags affixes and their combinations in order to recognise and classify terms into a set of general biological classes used within the genia project
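The interpretation rule for inconsistent BIO output stated in the paper above (an entity opened by a B-c tag ends only at the next B or O tag, whatever classes appear on intervening I tags) can be written as a small decoder. The function name and example sequence are illustrative.

    def bio_to_spans(tags):
        # decode a BIO tag sequence into (start, end, class) spans, applying the
        # repair rule for inconsistent sequences such as O B-DNA I-RNA O
        spans, start, cls = [], None, None
        for i, tag in enumerate(tags):
            if tag.startswith('B-') or tag == 'O':
                if start is not None:            # close the currently open entity
                    spans.append((start, i, cls))
                    start, cls = None, None
                if tag.startswith('B-'):
                    start, cls = i, tag[2:]
            # any I-* tag simply extends the currently open entity, if there is one
        if start is not None:
            spans.append((start, len(tags), cls))
        return spans

    print(bio_to_spans(['O', 'B-DNA', 'I-RNA', 'O']))   # -> [(1, 3, 'DNA')]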
machine transliteration of names in arabic texts we present a transliteration algorithm based on sound and spelling mappings using finite state machines the transliteration models can be trained on relatively small lists of names we introduce a new spellingbased model that is much more accurate than stateoftheart phoneticbased models and can be trained on easiertoobtain training data we apply our transliteration algorithm to the transliteration of names from arabic into english we report on the accuracy of our algorithm based on exactmatching criterion and based on humansubjective evaluation we also compare the accuracy of our system to the accuracy of human translators human translators and machine translation systems are often faced with the task of transliterating phrases like person names and locationstransliteration is the process of replacing words in the source language with their approximate phonetic or spelling equivalents in the target languagetransliterating names between languages that use similar alphabets and sound systems is often very simple since the phrase mostly remains the samehowever the transliteration becomes far more difficult when transliterating between languages with very different sound and writing systemswhen transliterating a name from arabic into english there are two types of transliterations transliteration of an arab name into englishtypically many variations of the transliterated name are acceptablethis is especially true when transliterating between two languages with many phonetic incompatibilities such as arabic and englishfor example the arab name quot_a ycisrquot1 can reasonably be transliterated in any of the following ways yasir yassir yaser yasser etctransliterating names from arabic into english in either direction is a difficult task mainly due to the differences in their sound and writing systemsfor instance vowels in arabic come in two varieties long and shortshort vowels are rarely written in arabic in newspaper text which makes pronunciation highly ambiguousalso because of the differences in their sound inventory there is no onetoone correspondence between arabic sounds and english soundsfor example english p and b are both mapped into arabic quot_ bquot arabic quotc and quota hquot into english h and so onin this paper we describe arabictoenglish name transliteration system using probabilistic finite state machines2 that address both the transliteration of arab and foreign names into englishkawtrakul et al present a back transliteration system from thai into english in the context of document retrievalin their approach loan words are first segmented into syllables using a combination of rules and statistical techniquesthen syllables are mapped to phonemes based on some transcription rulesthe phoneme sequence of the loan word is compared to the phonetic sequence of a set of english words found in a phonetic dictionary and the word with the most similar phonetic sequence is selected as the transliterationthe approach described by kawtrakul et al requires a phonetic dictionary of english in order to match phonetic sequencesonly those words with known phonetic sequences in the dictionary can be mapped by the transliteration systemalso applying such technique to arabic will most likely fail because without short vowels the pronunciation is highly ambiguous and so is its corresponding phonetic sequencearbabi et al describe an algorithm for the forwardtransliteration of arab names into a number of romance and germanic languages including english 
french and spanishthe transliteration process starts by vowelizing the given arab name by inserting the appropriate short vowels which originally are not written but necessary for the correct pronunciation of the namesthen the vowelized arab name is converted into its phonetic roman representation using a parser and table lookupthe phonetic representation is then used in a table lookup to produce the spelling in the desired languagethe vowelization rules described by arbabi et al apply only to arab names that conform to strict arabic morphological rulesany name that does not conform to the morphological rules is ignored and hence no transliteration will be attemptedthis restriction limits the applicability of this approach since many person and organization names do not conform to morphological rules especially loan words and foreign namesstalls and knight present an arabictoenglish backtransliteration system based on the sourcechannel frameworkthe transliteration process is based on a generative model of how an english name is transliterated into arabicit consists of several steps each defined as a probabilistic model represented as a finite state machinefirst an english word w is generated according to its unigram probabilities pethen the english word w is pronounced with probability p which is collected directly from an english pronunciation dictionaryfinally the english phoneme sequence is converted into arabic writing with probability p which we discuss in details in section 4the pronunciation model p converts english letter sequences into english sound sequencesthe model proposed by stalls and knight uses a pronunciation dictionary to do this conversiontherefore only words with known pronunciations in the dictionary can be transliteratedone way to overcome this limitation is to train a model that can map any given english letter sequence into its corresponding english sound sequencethis mapping is a complex task because of the mismatch between english spelling and english pronunciationthis difficulty coupled with the difficulty of mapping arabic letter sequences to english sound sequences renders this choice unattractiveinstead we propose a spellingbased model that maps directly into arabic letter sequences which can be trained on an englisharabic name list as we describe in section 5but before we present any further details we describe our evaluation data nextour evaluation corpora consist of two data sets a development test set and a blind onethe two sets consist of a list of person names extracted from arabic newspaper articlesthe development test set contains 854 names and the blind test set contains 218 the person names are then manually transliterated into englishthe transliterations are then thoroughly reviewed and any obvious mistakes corrected3 the corrected transliterations form the goldstandard we will compare our results withwe would like to investigate the suitability of the models proposed here for back and forwardtransliterationtherefore each name in the list is classified in one of three categories arabic for names of arabic origin english for names of english origin and other for names of other origins including chinese russian indian etcthe names were classified by a bilingual speaker the classification is not always clear cutin some cases the first name of a person might be of one category and the last name of another in such cases the category is chosen based on the identity of the person if it is known otherwise the category of the last name is chosenthe 
distribution of person according to this model the probability of transliterating arabic word a into english word w is given by the following equation the actual transliteration process is a graphsearch problem through millions of possible mappings to find the best path with english word sequence w that maximizes ppone serious limitation of the phoneticbased model described above is that only english words with known pronunciations can be producedfor backtransliterating person names of english origin this is not a big problem because many of such names are typically found in the dictionaryhowever applying this technique to transliterate names of origins other than english is not going to work because many such names are not likely to be in the dictionary despite the fact that the dictionary has more than 100100 entries in it as shown in table 2moreover if we want to apply this technique to transliterate a name into a language other than english a large pronunciation dictionary is needed for that language which is not easily obtainablealso human translators often transliterate words based on how they are spelled in the source languagefor example graham is typically transliterated by humans into arabic as quotare orcihamquot and not as jrcimquot also both quot hwjzquot and 134 hywzn occur in our corpus as possible transliterations for hughes to backtransliterate such instances one would need to consider spellingbased mappings not just sound mappingsto address these limitations we propose a new spellingbased model that can be used alone or in conjunction with the phoneticbased modelthe new model outperforms the phoneticbased model even when evaluated on names found in the phonetic dictionary as we will discuss in more detail in section 8the spellingbased model we propose directly maps english letter sequences into arabic letter sequences with probability p which is trained on an englisharabic name list without the need for english pronunciationssince no pronunciations are needed this list is easily obtainable for many language pairswe also extend the model p to include a letter trigram model in addition to the word unigram modelthis makes it possible to generate words that are not already defined in the word unigram model but obey english patternsthe word unigram model can be trained on any list of wordswhen trained on a list of person names the transliterations will be most accurate for person namesfor the experiments reported in this paper the unigram model was trained on the list of names from the cmu dictionarythe letter trigram is also trained on the same listthe transliteration score according to this model is given by for a given arabic name a the actual transliteration process is carried out by searching for the english word sequence that maximizes ps in our spellingbased model a sequence of one or more english letters is mapped to a sequence of zero dictionary presented by the category of each nameoverall is a weighted average of the three categories or more arabic lettersenglish letter sequences are typically longer than their arabic equivalents for many reasonsfirst because arabic short vowels are not written and need to be quotguessedquot by the modelsecond english names often have silent letters that mostly are not reflected in the arabic equivalent this phenomenon was also reflected in the learned modelhere is an example of some of the parameters learned during training here are some examples of the letter sequence alignments for pairs of arabic nametop transliteration as 
provided by our systemexample i given the name quotr luc sdamquot its top transliteration was saddam and the letter sequence alignment was 6to reduce the parameters to be estimated and prevent data sparseness without loss of any practical modeling power english letter sequences were restricted to a maximum of 3 letters while arabic ones were restricted to a maximum of 2 lettersexample iii given the name quot wbuhciymrquot its top transliteration was oppenheimer and the letter sequence alignment wasthe phoneticbased and spellingbased models can be linearly combined into a single transliteration modelthe transliteration score for an english word w given an arabic word a is a linear combination of the phoneticbased and the spellingbased transliteration scores as follows7in this section we discuss two different techniques that were used to improve the transliteration accuracyin the first technique the given word to be transliterated is preprocessed to correct any typos and spelling errorsthe spelling correction model described in section 71 is also implemented using a finite state machine which can be easily added to the transliteration composition pipelinein the second technique to improve transliterations transliterations are postprocessed to filter any unlikely transliterations as described in section 72typos and misspellings are very common in arabic newspapers especially in online editionstypical tor the experiments reported in this paper we used a 05 with and without spelling correctionthe results shown here are for the phoneticbased modelthe topl results considers whether the correct answer is the top candidate or not while the top20 results considers whether the correct answer is among the top20 candidates typos stem from replacing a letter with another that has a similar shape especially when they are mapped to adjacent keys on the keyboard layout these letters have very different sounds and without being corrected names with those typos will most likely be transliterated incorrectlyfor example the name wuzcilysquot is a misspelled version of the name quotly4c jwuzcilysquot spaces are reliably used in arabic to separate words with very few exceptionsarabic employs a cursive writing system so typically letters in the same word are connected to each othermost letters can be connected from both sides but some can be connected only from the right sideafter any of these letters a space might be incorrectly deleted bytr mndlswnquot or inserted additionally there are common misspellings that can be found even in the most respected arabic newspapers eg interchanging one form of an alif with another especially at the beginning of a word or interchanging quotaquot and quotsquot at the end of a word etcthese kinds of typos and misspellings are more common than we expectedfor example 5 of the names in our development test set were misspelledhuman translators seem to be able to recover from name misspellings when transliterating a name they are familiar withour human subject was able to transliterate the name quot4 bwrysquot correctly even though it was misspelled as quotlty brwysquot therefore we believe that we need to model misspellings explicitly rather than hope that they will not because wrong transliterationswe model misspellings by using an additional finitestate machine at the end of the cascade of finite state machineswe would like to estimate the parameters in this model empiricallybut since we do not have enough misspellingscorrect spelling pairs to train this model the weights were 
set manuallythe use of this spelling correction model slightly improves our transliterations as shown in table 3as we have described in section 5 the p model is a combination of a word unigram model and a letter trigram modelthe latter is needed in order to be able to generate words that are not in the word unigram modelhowever despite being trained on a long list of names the letter trigram model occasionally produce unlikely candidatesunlikely candidates can be eliminated by filtering out candidates with zero web countsthe webbased filtering is useful only for our spellingbased model since all candidates generated by the phoneticbased model are in the pronunciation dictionary and all have nonzero web countsa comparison of the transliteration accuracy with and without the webbased filtering is shown in table 4in this section we present a comparison of the accuracy of the phoneticbased model the spellingbased model and the linear combination in transliterating names from arabic into english on the development and test setswe also present the transliteration accuracy of human translators on the same taskthe results presented in section 81 and section 82 are based on the exactmatching criterion we also show the accuracy based on humansubjective evaluation in section 83we wanted to know how well human translators do in this taskso we asked a bilingual speaker to transliterate the names in both data sets given the context they appear within in the arabic documentthen the transliterations provided by the human subject are compared with those in the goldstandardthe accuracy of the transliterations provided by the human translator is shown in table 5examples of the transliteration errors made by the human subject are shown in table 6we first show in section 821 the overall accuracy of the phoneticbased model the spellingbased model and the linear combination of themthen in section 822 we show how the presence of names in the pronunciation dictionary affects the transliterations obtained using our modelswe also present some transliteration errors made by our algorithm in section 823table 7 shows the transliteration accuracy of the spellingbased model the phoneticbased model and the linear combination on the development and blind test setthe spellingbased model was by far more accurate than the phoneticbased model in all three categories and on both data setsbecause it combines the transliterations of the two models we expected the linear combination to be the most accuratehowever this was not the casethe linear combination was slightly worse than the spellingbased model when considering only the top candidate and slightly better when considering the top20 candidateswe believe that the reason is that equal weights were given to the phoneticbased and spellingbased models in the combinationweighting the spellingbased model higher will most likely give more accurate transliterations822 phoneticbased vs spelling based on names in the dictionary as we have described in section 4 the phoneticbased model uses a pronunciation dictionary to convert an english phoneme sequence to an english word sequenceconsequently only words with known pronunciations can be generated using this modeltherefore the spellingbased model generally has a higher transliteration accuracybut does the spellingbased model generate more accurate transliterations for words with known pronunciationswe expected the answer to this question to be nobut much to our surprise the spellingbased model produced more accurate transliterations on 
all categories as shown in table 8when top20 transliterations were considered the spellingbased model was slightly less accurateas expected the transliterations for names in the pronunciation dictionary are much more accurate than those that are not in itthis is because the word unigram model p was trained on names in the dictionarytable 9 shows some examples of the transliteration errors made by our transliteration algorithmsome of the errors occurred were in fact not errors but rather acceptable alternative transliterationshowever many were true errorsthe humansubjective evaluation described in section 83 helps distinguish between these two casesthe evaluation results presented so far consider a transliteration correct only if it matches the goldstandardin some cases where more than one possible transliteration is acceptable this criterion is too rigidto address such cases we must ask a human subject to determine the correctness of transliterationswe asked a native speaker of english with good knowledge of arabic to decide whether any given transliteration is correct or notthis humanbased evaluation is done for both the transliterations provided by the human translators and by our transliteration systemthe human subject was presented with the names in the arabic script their goldstandard transliterations and the transliteration that we are evaluatingfor our transliteration algorithm the human subject was provided with the top 20 transliteration candidates as wellthe accuracy of the human translator based on the humansubjective evaluation is shown in table 10the accuracy of our transliteration models based on the humansubjective evaluation is shown in table 11the human translator accuracy based on the humansubjective evaluation was higher than the exactmatching accuracy by about 11most of the increase came from the forwardtransliteration of arab namesthis was expected because for arab names typically many variant transliterations are acceptablethis was also reflected on the humansubjective evaluation of our spellingbased modelhowever the accuracy of our phoneticbased model remains almost the same as in the case of the exactmatching evaluationthis is because names that can be found in the dictionary have only a single spelling that for the most part agrees with our goldstandardalso most of the names in the dictionary are english names and with english names the human evaluator was rigid mostly accepting only the exactmatching spellingwe have presented and evaluated a transliteration algorithm using phoneticbased and spellingbased nation on the development and blind test sets by categorythe evaluation is based on humansubjective evaluation modelsthis algorithm is most accurate on backtransliterating english namesthe reason for this is that most names in the dictionary are of english originhence the language model was mostly trained on english namesone way to improve transliterations of nonenglish names is to train the language model on a list of nonenglish names in addition to the dictionary namesour current models do not deal with the issue of metathesis in person names across languagesmetathesis in person names into arabic is often a result of wrong transliterations by the person who transliterated in the original name in arabicfor example the name dostoevsky was found in our arabic corpus transliterated as dystwyfskyquot and dystwyfksyquot the name ordzhonikidze was found transliterated as cirdjion ykydzyquot and quot si ati cirdjyk yrt ydztquot this causes incorrect transliterations of 
theses names by our systemthe transliteration accuracy on the blind test set for both our system and the human translator is significantly higher than the development test setthis is because the blind set is mostly of highly frequent prominent politicians whereas the development set contains also names of writers and less common political figures and hence are less likely to be in our unigram language model
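The spelling-based model described above scores a candidate English spelling with a word unigram model extended by a letter trigram model, so that spellings outside the name list can still be generated. The sketch below shows one plausible way to combine the two; the exact combination and smoothing used by the authors are not specified in the text, so the back-off scheme, the add-alpha smoothing, and the 27-symbol alphabet here are assumptions.

    import math
    from collections import defaultdict

    def train_letter_trigrams(names):
        tri_counts, ctx_counts = defaultdict(int), defaultdict(int)
        for name in names:
            padded = '##' + name.lower() + '#'       # '#' marks word boundaries
            for i in range(2, len(padded)):
                tri_counts[padded[i - 2:i + 1]] += 1
                ctx_counts[padded[i - 2:i]] += 1
        return tri_counts, ctx_counts

    def letter_trigram_logprob(word, tri_counts, ctx_counts, alpha=0.5, vocab=27):
        # add-alpha smoothing over an assumed alphabet of 26 letters plus '#'
        padded = '##' + word.lower() + '#'
        logp = 0.0
        for i in range(2, len(padded)):
            tri, ctx = padded[i - 2:i + 1], padded[i - 2:i]
            logp += math.log((tri_counts[tri] + alpha) /
                             (ctx_counts[ctx] + alpha * vocab))
        return logp

    def word_logprob(word, unigram_logprobs, tri_counts, ctx_counts):
        # back off to the letter trigram model for words outside the unigram list
        if word in unigram_logprobs:
            return unigram_logprobs[word]
        return letter_trigram_logprob(word, tri_counts, ctx_counts)

    tri, ctx = train_letter_trigrams(['yasser', 'yaser', 'yassir', 'yasir'])
    print(letter_trigram_logprob('yasse', tri, ctx))

The full spelling-based score multiplies this word probability by the letter-sequence mapping probability P(a|w), which the actual system represents as a probabilistic finite state machine and is not sketched here.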
W02-0505
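The linear combination of the phonetic-based and spelling-based scores described in the paper above can be written as a one-line mixture. The experiments reported there weight the two models equally; the lambda parameter and function names below are illustrative.

    def combined_score(w, a, spelling_score, phonetic_score, lam=0.5):
        # linear combination of the two transliteration scores; lam = 0.5
        # corresponds to weighting the two models equally
        return lam * spelling_score(w, a) + (1.0 - lam) * phonetic_score(w, a)

    def best_candidate(a, candidates, spelling_score, phonetic_score, lam=0.5):
        return max(candidates,
                   key=lambda w: combined_score(w, a, spelling_score,
                                                phonetic_score, lam))

The results above suggest that weighting the spelling-based model more heavily than 0.5 would likely give more accurate transliterations.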
machine transliteration of names in arabic textswe present a transliteration algorithm based on sound and spelling mappings using finite state machinesthe transliteration models can be trained on relatively small lists of nameswe introduce a new spellingbased model that is much more accurate than stateoftheart phoneticbased models and can be trained on easiertoobtain training datawe apply our transliteration algorithm to the transliteration of names from arabic into englishwe report on the accuracy of our algorithm based on exactmatching criterion and based on humansubjective evaluationwe also compare the accuracy of our system to the accuracy of human translatorswe transliterate named entities in arabic text to english by combining phoneticbased and spellingbased models and reranking candidates with fullname web counts named entities coreference and contextual web countswe show that the use of outside linguistic resources such as www counts of transliteration candidates can greatly boost transliteration accuracyout spellingbased model directly maps english letter sequences into arabic letter sequences with associated probability that are trained on a small englisharabic name list without the need for english pronunciations
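The web-based filtering step described in the paper above simply drops candidate spellings with zero web counts before ranking. In the sketch below, web_count stands in for whatever external count lookup is used and is not defined here, and the fall-back behaviour when every candidate is filtered out is an assumption rather than something stated in the paper.

    def filter_by_web_counts(candidates, web_count):
        # drop candidates that never occur on the web; web_count(w) is assumed to
        # return a non-negative count for the candidate spelling w
        kept = [w for w in candidates if web_count(w) > 0]
        return kept if kept else list(candidates)   # assumed fall-back: keep all

As noted above, this filter only helps the spelling-based model, since every candidate produced by the phonetic-based model already comes from the pronunciation dictionary and has a non-zero count.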
unsupervised discovery of morphemes we present two methods for unsupervised segmentation of words into morphemelike units the model utilized is especially suited for languages with a rich morphology such as finnish the first method is based on the minimum description length principle and works online in the second method maximum likelihood optimization is used the quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysis experiments on both finnish and english corpora show that the presented methods perform well compared to a current stateoftheart system according to linguistic theory morphemes are considered to be the smallest meaningbearing elements of language and they can be defined in a languageindependent mannerhowever no adequate languageindependent definition of the word as a unit has been agreed upon if effective methods can be devised for the unsupervised discovery of morphemes they could aid the formulation of a linguistic theory of morphology for a new languageit seems that even approximative automated morphological analysis would be beneficial for many natural language applications dealing with large vocabulariesfor example in text retrieval it is customary to preprocess texts by returning words to their base forms especially for morphologically rich languagesmoreover in large vocabulary speech recognition predictive models of language are typically used for selecting the most plausible words suggested by an acoustic speech recognizer consider for example the estimation of the standard ngram model which entails the estimation of the probabilities of all sequences of n wordswhen the vocabulary is very large say 100 000 words the basic problems in the estimation of the language model are if words are used as basic representational units in the language model the number of basic units is very high and the estimated word ngrams are poor due to sparse data due to the high number of possible word forms many perfectly valid word forms will not be observed at all in the training data even in large amounts of textthese problems are particularly severe for languages with rich morphology such as finnish and turkishfor example in finnish a single verb may appear in thousands of different forms the utilization of morphemes as basic representational units in a statistical language model instead of words seems a promising courseeven a rough morphological segmentation could then be sufficienton the other hand the construction of a comprehensive morphological analyzer for a language based on linguistic theory requires a considerable amount of work by expertsthis is both slow and expensive and therefore not applicable to all languagesthe problem is further compounded as languages evolve new words appear and grammatical changes take placeconsequently it is important to develop methods that are able to discover a morphology for a language based on unsupervised analysis of large amounts of dataas the morphology discovery from untagged corpora is a computationally hard problem in practice one must make some assumptions about the structure of wordsthe appropriate specific assumptions are somewhat languagededependentfor example for english it may be useful to assume that words consist of a stem often followed by a suffix and possibly preceded by a prefixby contrast a finnish word typically consists of a stem followed by multiple suffixesin addition compound words are common containing an alternation of stems and 
suffixes eg the word kahvinjuojallekin 1moreover one may ask whether a morphologically complex word exhibits some hierarchical structure or whether it is merely a flat concatenation of stems and sufficesmany existing morphology discovery algorithms concentrate on identifying prefixes suffixes and stems ie assume a rather simple inflectional morphologydejean concentrates on the problem of finding the list of frequent affixes for a language rather than attempting to produce a morphological analysis of each wordfollowing the work of zellig harris he identifies possible morpheme boundaries by looking at the number of possible letters following a given sequence of letters and then utilizes frequency limits for accepting morphemesgoldsmith concentrates on stemsuffixlanguages in particular indoeuropean languages and tries to produce output that would match as closely as possible with the analysis given by a human morphologisthe further assumes that stems form groups that he calls signatures and each signature shares a set of possible affixeshe applies an mdl criterion for model optimizationthe previously discussed approaches consider only individual words without regard to their contexts or to their semantic contentin a different approach schone and jurafsky utilize the context of each term to obtain a semantic representation for it using lsathe division to morphemes is then accepted only when the stem and stemaffix are sufficiently similar semanticallytheir method is shown to improve on the performance of goldsmiths linguistica on celex a morphologically analyzed english corpusin the related field of text segmentation one can sometimes obtain morphemessome of the approaches remove spaces from text and try to identify word boundaries utilizing eg entropybased measures as in word induction from natural language text without word boundaries is also studied in where mdlbased model optimization measures are usedviterbi or the forwardbackward algorithm is used for improving the segmentation of the corpus2also de marcken studies the problem of learning a lexicon but instead of optimizing the cost of the whole corpus as in de marcken starts with sentencesspaces are included as any other charactersutterances are also analyzed in where optimal segmentation for an utterance is sought so that the compression effect over the segments is maximalthe compression effect is measured in what the authors call description length gain defined as the relative reduction in entropythe viterbi algorithm is used for searching for the optimal segmentation given a modelthe input utterances include spaces and punctuation as ordinary charactersthe method is evaluated in terms of precision and recall on word boundary predictionbrent presents a general modular probabilistic model structure for word discovery he uses a minimum representation length criterion for model optimization and applies an incremental greedy search algorithm which is suitable for online learning such that children might employin this work we use a model where words may consist of lengthy sequences of segmentsthis model is especially suitable for languages with agglutinative morphological structurewe call the segments morphs and at this point no distinction is made between stems and affixesthe practical purpose of the segmentation is to provide a vocabulary of language units that is smaller and generalizes better than a vocabulary consisting of words as they appear in textsuch a vocabulary could be utilized in statistical language modeling eg for speech 
recognitionmoreover one could assume that such a discovered morph vocabulary would correspond rather closely to linguistic morphemes of the languagewe examine two methods for unsupervised learning of the model presented in sections 2 and 3the cost function for the first method is derived from the minimum description length principle from classic information theory which simultaneously measures the goodness of the representation and the model complexityincluding a model complexity term generally improves generalization by inhibiting overlearning a problem especially severe for sparse dataan incremental search algorithm is utilized that applies a hierarchical splitting strategy for wordsin the second method the cost function is defined as the maximum likelihood of the data given the modelsequential splitting is applied and a batch learning algorithm is utilizedin section 4 we develop a method for evaluating the quality of the morph segmentations produced by the unsupervised segmentation methodseven though the morph segmentations obtained are not intended to correspond exactly to the morphemes of linguistic theory a basis for comparison is provided by existing linguistically motivated morphological analyses of the wordsboth segmentation methods are applied to the segmentation of both finnish and english wordsin section 5 we compare the results obtained from our methods to results produced by goldsmiths linguistica on the same datathe task is to find the optimal segmentation of the source text into morphsone can think of this as constructing a model of the data in which the model consists of a vocabulary of morphs ie the codebook and the data is the sequence of textwe try to find a set of morphs that is concise and moreover gives a concise representation for the datathis is achieved by utilizing an mdl cost functionthe total cost consists of two parts the cost of the source text in this model and the cost of the codebooklet m be the morph codebook and d m1m2 mn the sequence of morph tokens that makes up the string of wordswe then define the total cost c as the cost of the source text is thus the negative loglikelihood of the morph summed over all the morph tokens that comprise the source textthe cost of the codebook is simply the length in bits needed to represent each morph separately as a string of characters summed over the morphs in the codebookthe length in characters of the morph mj is denoted by l and k is the number of bits needed to code a character for p we use the ml estimate ie the token count of mi divided by the total count of morph tokensthe online search algorithm works by incrementally suggesting changes that could improve the cost functioneach time a new word token is read from the input different ways of segmenting it into morphs are evaluated and the one with minimum cost is selectedrecursive segmentationthe search for the optimal morph segmentation proceeds recursivelyfirst the word as a whole is considered to be a morph and added to the codebooknext every possible split of the word into two parts is evaluatedthe algorithm selects the split that yields the minimum total costin case of no split the processing of the word is finished and the next word is read from inputotherwise the search for a split is performed recursively on the two segmentsthe order of splits can be represented as a binary tree for each word where the leafs represent the morphs making up the word and the tree structure describes the ordering of the splitsduring model search an overall hierarchical data 
structure is used for keeping track of the current segmentation of every word type encountered so farlet us assume that we have seen seven instances of linjaauton and two instances of autonkuljettajallakaan figure 1 then shows a possible structure used for representing the segmentations of the dataeach chunk is provided with an occurrence count of the chunk in the data set and the split location in this chunka zero split location denotes a leaf node ie a morphthe occurrence counts flow down through the hierachical structure so that the count of a child always equals the sum of the counts of its parentsthe occurrence counts of the leaf nodes are used for computing the relative frequencies of the morphsto find out the morph sequence that a word consists of we look up the chunk that is identical to the word and trace the split indices recursively until we reach the leafs which are the morphsnote that the hierarchical structure is used only during model search it is not part of the final model and accordingly no cost is associated with any other nodes than the leaf nodesadding and removing morphsadding new morphs to the codebook increases the codebook costconsequently a new word token will tend to be split into morphs already listed in the codebook which may lead to local optimato better escape local optima each time a new word token is encounfigure 1 hierarchical structure of the segmentation of the words linjaauton and autonkuljettajallakaanthe boxes represent chunksboxes with bold text are morphs and are part of the codebookthe numbers above each box are the split location and the occurrence count of the chunk tered it is resegmented whether or not this word has been observed beforeif the word has been observed we first remove the chunk and decrease the counts of all its childrenchunks with zero count are removed next we increase the count of the observed word chunk by one and reinsert it as an unsplit chunkfinally we apply the recursive splitting to the chunk which may lead to a new different segmentation of the worddreamingdue to the online learning as the number of processed words increases the quality of the set of morphs in the codebook gradually improvesconsequently words encountered in the beginning of the input data and not observed since may have a suboptimal segmentation in the new model since at some point more suitable morphs have emerged in the codebookwe have therefore introduced a dreaming stage at regular intervals the system stops reading words from the input and instead iterates over the words already encountered in random orderthese words are resegmented and thus compressed further if possibledreaming continues for a limited time or until no considerable decrease in the total cost can be observedfigure 2 shows the development of the average cost per word as a function of the increasing amount of source textfigure 2 development of the average word cost when processing newspaper textdreaming ie the reprocessing of the words encountered so far takes place five times which can be seen as sudden drops on the curvein this case we use as cost function the likelihood of the data ie pthus the model cost is not includedthis corresponds to maximumlikelihood learningthe cost is then where the summation is over all morph tokens in the source dataas before for p we use the ml estimate ie the token count of mi divided by the total count of morph tokensin this case we utilize batch learning where an emlike algorithm is used for optimizing the modelmoreover splitting is not recursive but 
proceeds linearlynote that the possibility of introducing a random segmentation at step is the only thing that allows for the addition of new morphsin fact without this step the algorithm seems to get seriously stuck in suboptimal solutionsrejection criteria rare morphsreject the segmentation of a word if the segmentation contains a morph that was used in only one word type in the previous iterationthis is motivated by the fact that extremely rare morphs are often incorrect sequences of oneletter morphsreject the segmentation if it contains two or more oneletter morphs in a sequencefor instance accept the segmentation halua n but reject the segmentation halu a n long sequences of oneletter morphs are usually a sign of a very bad local optimum that may even get worse in future iterations in case too much probability mass is transferred onto these short morphs33nevertheless for finnish there do exist some oneletter morphemes that can occur in a sequencehowever these morphemes can be thought of as a group that belongs together egwe wish to evaluate the method quantitatively from the following perspectives correspondence with linguistic morphemes efficiency of compression of the data and computational efficiencythe efficiency of compression can be evaluated as the total description length of the corpus and the codebook the computational efficiency of the algorithm can be estimated from the running time and memory consumption of the programhowever the linguistic evaluation is in general not so straightforwardif a corpus with marked morpheme boundaries is available the linguistic evaluation can be computed as the precision and recall of the segmentationunfortunately we did not have such data sets at our disposal and for finnish such do not even existin addition it is not always clear exactly where the morpheme boundary should be placedseveral alternatives maybe possible cfengl hope d vs hop ed instead we utilized an existing tool for providing a morphological analysis although not a segmentation of words based on the twolevel morphology of koskenniemi the analyzer is a finitestate transducer that reads a word form as input and outputs the base form of the word together with grammatical tagssample analyses are shown in figure 3the tag set consists of tags corresponding to morphological affixes and other tags for example partofspeech tagswe preprocessed the analyses by removing other tags than those corresponding to affixes and further split compound base forms into their constituentsas a result we obtained for each word a sequence of labels that corresponds well to a linguistic morphemic analysis of the worda label can often be considered to correspond to a single word segment and the labels appear in the order of the segmentsthe following step consists in retrieving the segmentation produced by one of the unsupervised segmentation algorithms and trying to align this segand finnish word formsthe finnish words are auton puutaloja and tehnyt the tags are a act adv cmp gen n pcp2 pl ptv sg v and mentation with the desired morphemic label sequence a good segmentation algorithm will produce morphs that align gracefully with the correct morphemic labels preferably producing a onetoone mappinga onetomany mapping from morphs to labels is also acceptable when a morph forms a common entity such as the suffix ja in puutaloja which contains both the plural and partitive elementby contrast a manytoone mapping from morphs to a label is a sign of excessive splitting eg t alo for talo with their respective 
correct morphemic analyseswe assume that the segmentation algorithm has split the word bigger into the morphs bigg er hours into hour s and puutaloja into puu t alo jaalignment procedurewe align the morph sequence with the morphemic label sequence using dynamic programming namely viterbi alignment to find the best sequence of mappings between morphs and morphemic labelseach possible pair of morphmorphemic label has a distance associated with itfor each segmented word the algorithm searches for the alignment that minimizes the total alignment distance for the wordthe distance d for a pair of morph m and label l is given by where cml is the number of word tokens in which the morph m has been aligned with the label l and cm is the number of word tokens that contain the morph m in their segmentationthe distance measure can be thought of as the negative logarithm of a conditional probability pthis indicates the probability that a morph m is a realisation of a morpheme represented by the label l put another way if the unsupervised segmentation algorithm discovers morphs that are allomorphs of real morphemes a particular allomorph will ideally always be aligned with the same morphemic label which leads to a high probability p and a short distance d4in contrast if the segmentation algorithm does not discover meaningful morphs each of the segments will be aligned with a number of different morphemic labels throughout the corpus and as a consequence the probabilities will be low and the distances highwe then utilize the them algorithm for iteratively improving the alignmentthe initial alignment that is used for computing initial distance values is obtained through a string matching procedure string matching is efficient for aligning the stem of the word with the base form the suffix morphs that do not match well with the base form labels will end up aligned somehow with the morphological tags comparison of methodsin order to compare two segmentation algorithms the segmentation of each is aligned with the linguistic morpheme labels and the total distance of the alignment is computedshorter total distance indicates better segmentationhowever one should note that the distance measure used favors long morphsif a particular segmentation algorithm does not split one single word of the corpus the total distance can be zeroin such a situation the single morph that a word is composed of is aligned with all morphemic labels of the wordthe morph m ie the word is unique which means that all probabilities p are equal to one eg the morph puutaloja is always aligned with the labels puu talo pl ptv and no other labels which yields the probabilities pthese values are used when the test set is alignedthe better segmentation algorithm is the one that yields a better alignment distance for the test setfor morphlabel pairs that were never observed in the training set a maximum distance value is assigneda good segmentation algorithm will find segments that are good building blocks of entirely new word forms and thus the maximum distance values will occur only rarelywe compared the two proposed methods as well as goldsmiths program linguistica5 on both finnish and english corporathe finnish corpus consisted of newspaper text from csc6a morphosyntactic analysis of the text was performed using the conexor fdg parser7all characters were converted to lower case and words containing other characters than a through z and the scandinavian letters a a and o were removedother than morphemic tags were removed from the morphological 
analyses of the wordsthe remaining tags correspond to inflectional affixes and cliticsunfortunately the parser does not distinguish derivational affixesthe first 100 000 word tokens were used as training data and the following 100 000 word tokens were used as test datathe test data contained 34 821 word typesthe english corpus consisted of mainly newspaper text from the brown corpus8a morphological analysis of the words was performed using the lingsoft engtwol analyzer9in case of multiple alternative morphological analyses the shortest analysis was selectedall characters were converted to lower case and words containing other characters than a through z an apostrophe or a hyphen were removedother than morphemic tags were removed from the morphological analyses of the wordsthe remaining tags correspond to inflectional or derivational affixesa set of 100 000 word tokens from the corpus sections press reportage and press editorial were used as training dataa separate set of 100 000 word tokens from the sections press editorial press reviews religion and skills hobbies were used as test datathe test data contained 12 053 word typestest results for the three methods and the two languages are shown in table 2we observe different tendencies for finnish and englishfor finnish there is a correlation between the compression of the corpus and the linguistic generalization capacity to new word formsthe recursive splitting with the mdl cost function is clearly superior to the sequential splitting with ml cost which in turn is superior to linguisticathe recursive mdl method is best in terms of data compression it produces the smallest morph lexicon and the codebook naturally occupies a small part of the total costit is best also in terms of the linguistic measure the total alignment distance on test datalinguistica on the other hand employs a more restricted segmentation which leads to a larger codebook and to the fact that the codebook occupies a large part of the total mdl costthis also appears to lead to a poor generalization ability to new word formsthe linguistic alignment distance is the highest and so is the percentage of aligned morphmorphemic label pairs that were never observed in the training seton the other hand linguistica is the fastest program10also for english the recursive mdl method achieves the best alignment but here linguistica achieves nearly the same resultthe rate of compression follows the same pattern as for finnish in that linguistica produces a much larger morph lexicon than the methods presented in this paperin spite of this fact the percentage of unseen morphmorphemic label pairs is about the same for all three methodsthis suggests that in a morphologically poor language such as english a restrictive segmentation method such as linguistica can compensate for new word forms that it does not recognize at all with old familiar words that it gets just rightin contrast the methods presented in this paper produce a morph lexicon that is smaller and able to generalize better to new word forms but has somewhat lower accuracy for already observed word formsvisual inspection of a sample of wordsin an attempt to analyze the segmentations more thoroughly we randomly picked 1000 different words from the finnish test setthe total number of occurrences of these words constitute about 25 of the whole setwe inspected the segmentation of each word visually and classified it into one of three categories correct and complete segmentation correct but incomplete segmentation incorrect 
segmentation the results of the inspection for each of the three segmentation methods are shown in table 3the recursive mdl method performs best and segments about half of the words correctlythe sequential ml method comes second and linguistica third with a share of 43 correctly segmented wordswhen considering the incomplete and incorrect segmentations the methods behave differentlythe recursive mdl method leaves very common word forms unsplit and often produces excessive splitting for rare mentation and mdl cost sequential segmentation and ml cost and linguistica the total mdl cost measures the compression of the corpushowever the cost is computed according to equation which favors the recursive mdl methodthe final number of morphs in the codebook is a measure of the size of the morph vocabularythe relative codebook cost gives the share of the total mdl cost that goes into coding the codebookthe alignment distance is the total distance computed over the sequence of morphmorphemic label pairs in the test datathe unseen aligned pairs is the percentage of all aligned morphlabel pairs in the test set that were never observed in the training setthis gives an indication of the generalization capacity of the method to new word forms not allow representation of contextual dependencies ie that some morphs appear only in particular contexts moreover languages have rules regarding the ordering of stems and affixes however the current model has no way of representing such contextual dependencieswordsthe sequential ml method is more prone to excessive splitting even for words that are not rarelinguistica on the other hand employs a more conservative splitting strategy but makes incorrect segmentations for many common word formsthe behaviour of the methods is illustrated by example segmentations in table 4often the recursive mdl method produces complete and correct segmentationshowever both it and the sequential ml method can produce excessive splitting as is shown for the latter eg affecti on at e in contrast linguistica refrains from splitting words when they should be split eg the finnish compound words in the tableregarding the model there is always room for improvementin particular the current model does in the experiments the online method with the mdl cost function and recursive splitting appeared most successful especially for finnish whereas for english the compared methods were rather equal in performancethis is likely to be partially due to the model structure of the presented methods which is especially suitable for languages such as finnishhowever there is still room for considerable improvement in the model structure especially regarding the representation of contextual dependenciesconsidering the two examined model optimization methods the recursive mdl method performed consistently somewhat betterwhether this is due to the cost function or the splitting strategy cannot be deduced based on these experimentsin the future we intend to extend the latter method to utilize an mdllike cost functiontable 4 some english and finnish word segmentations produced by the three methodsthe finnish words are elainlaakari elainmuseo elainpuisto and elaintarha the suffixes lle n on and sta are linguistically correct
W02-0603
unsupervised discovery of morphemeswe present two methods for unsupervised segmentation of words into morphemelike unitsthe model utilized is especially suited for languages with a rich morphology such as finnishthe first method is based on the minimum description length principle and works onlinein the second method maximum likelihood optimization is usedthe quality of the segmentations is measured using an evaluation method that compares the segmentations produced to an existing morphological analysisexperiments on both finnish and english corpora show that the presented methods perform well compared to a current stateoftheart systemour method is based on jointly minimizing the size of the morph codebook and the encoded size of all the word forms using the minimum description length mdl cost function
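The MDL cost function of the morpheme paper above is only described in prose here (the formula itself appears to have been lost in extraction). Based on that description, the total cost is presumably of the form (notation mine, a hedged reconstruction rather than the authors' exact equation):

    C(M, D) = -\sum_{i=1}^{n} \log P(m_i) + \sum_{m_j \in M} k \cdot l(m_j)

where P(m_i) is the maximum-likelihood estimate (the token count of m_i divided by the total number of morph tokens), l(m_j) is the length in characters of morph m_j, and k is the number of bits needed to code one character. The maximum-likelihood cost of the second method presumably keeps only the first term, -\sum_i \log P(m_i), dropping the codebook cost.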
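A minimal runnable sketch of the recursive splitting search with a cost of that shape, in Python. The bits-per-character constant, the helper names and the toy word list are my own assumptions for illustration, not values or code from the paper:

    import math

    K_BITS_PER_CHAR = 5.0  # assumption: roughly 5 bits to code one character

    def total_cost(counts, k=K_BITS_PER_CHAR):
        # MDL cost: negative log-likelihood of all morph tokens plus the cost
        # of spelling out each distinct morph in the codebook
        n = sum(counts.values())
        data_cost = -sum(c * math.log(c / n) for c in counts.values())
        codebook_cost = sum(k * len(m) for m in counts)
        return data_cost + codebook_cost

    def _inc(d, m):
        d[m] = d.get(m, 0) + 1

    def _dec(d, m):
        d[m] -= 1
        if d[m] == 0:
            del d[m]

    def segment(word, counts, k=K_BITS_PER_CHAR):
        # try the word as a single morph, then every two-way split; recurse on
        # the halves of whichever split (if any) lowers the total cost
        _inc(counts, word)
        best_cost, best_i = total_cost(counts, k), None
        for i in range(1, len(word)):
            trial = dict(counts)
            _dec(trial, word)
            _inc(trial, word[:i])
            _inc(trial, word[i:])
            cost = total_cost(trial, k)
            if cost < best_cost:
                best_cost, best_i = cost, i
        if best_i is None:
            return [word]
        _dec(counts, word)
        return segment(word[:best_i], counts, k) + segment(word[best_i:], counts, k)

    # toy online run: each new word token is segmented against the morphs seen so far
    counts = {}
    for w in ["talo", "talon", "talossa", "auto", "auton", "autossa"]:
        print(w, segment(w, counts))

On this toy input the later words are split into a shared stem plus suffix morphs (for example talo + ssa), which is the qualitative behaviour the paper describes; the dreaming stage and the hierarchical chunk structure are omitted to keep the sketch short.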
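The morph-to-morphemic-label alignment distance used in the evaluation is likewise described only in words. Since it is said to behave as the negative logarithm of a conditional probability of the label given the morph, it is presumably

    d(m, l) = -\log \frac{c_{m,l}}{c_m} = -\log P(l \mid m)

with c_{m,l} the number of word tokens in which morph m has been aligned with label l and c_m the number of word tokens containing morph m in their segmentation; this is a reconstruction from the surrounding definitions, not a quoted formula.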
building a sense tagged corpus with open mind word expert open mind word expert is an implemented active learning system for collecting word sense tagging from the general public over the web it is available at httpteachcomputersorg we expect the system to yield a large volume of highquality training data at a much lower cost than the traditional method of hiring lexicographers we thus propose a senseval3 lexical sample activity where the training data is collected via open mind word expert if successful the collection process can be extended to create the definitive corpus of word sense information most of the efforts in the word sense disambiguation field have concentrated on supervised learning algorithmsthese methods usually achieve the best performance at the cost of low recallthe main weakness of these methods is the lack of widely available semantically tagged corpora and the strong dependence of disambiguation accuracy on the size of the training corpusthe tagging process is usually done by trained lexicographers and consequently is quite expensive limiting the size of such corpora to a handful of tagged textsthis paper introduces open mind word expert a webbased system that aims at creating large sense tagged corpora with the help of web usersthe system has an active learning component used for selecting the most difficult examples which are then presented to the human taggerswe expect that the system will yield more training data of comparable quality and at a significantly lower cost than the traditional method of hiring lexicographersopen mind word expert is a newly born project that follows the open mind initiative the basic idea behind open mind is to use the information and knowledge that may be collected from the existing millions of web users to the end of creating more intelligent softwarethis idea has been used in open mind common sense which acquires commonsense knowledge from peoplea knowledge base of about 400000 facts has been built by learning facts from 8000 web users over a one year period if open mind word expert experiences a similar learning rate we expect to shortly obtain a corpus that exceeds the size of all previously tagged dataduring the first fifty days of activity we collected about 26000 tagged examples without significant efforts for publicizing the sitewe expect this rate to gradually increase as the site becomes more widely known and receives more trafficthe availability of large amounts of semantically tagged data is crucial for creating successful wsd systemsyet as of today only few sense tagged corpora are publicly availableone of the first large scale hand tagging efforts is reported in where a subset of the brown corpus was tagged with wordnet sensesthe corpus includes a total of 234136 tagged word occurrences out of which 186575 are polysemousthere are 88058 noun occurrences of which 70214 are polysemousthe next significant hand tagging task was reported in where 2476 usages of interest were manually assigned with sense tags from the longman dictionary of contemporary english this corpus was used in various experiments with classification accuracies ranging from 75 to 90 depending on the algorithm and features employedthe high accuracy of the lexas system is due in part to the use of large corporafor this system 192800 word occurrences have been manually tagged with senses from wordnetthe set of tagged words consists of the 191 most frequently occurring nouns and verbsthe authors mention that approximately one manyear of effort was spent in 
tagging the data setlately the senseval competitions provide a good environment for the development of supervised wsd systems making freely available large amounts of sense tagged data for about 100 wordsduring senseval1 data for 35 words was made available adding up to about 20000 examples tagged with respect to the hector dictionarythe size of the tagged corpus increased with senseval2 when 13000 additional examples were released for 73 polysemous wordsthis time the semantic annotations were performed with respect to wordnetadditionally mentions the hector corpus which comprises about 300 word types with 3001000 tagged instances for each word selected from a 17 million word corpussense tagged corpora have thus been central to accurate wsd systemsestimations made in indicated that a high accuracy domain independent system for wsd would probably need a corpus of about 32 million sense tagged wordsat a throughput of one word per minute this would require about 27 manyears of human annotation effortwith open mind word expert we aim at creating a very large sense tagged corpus by making use of the incredible resource of knowledge constituted by the millions of web users combined with techniques for active learningopen mind word expert is a webbased interface where users can tag words with their wordnet sensestagging is organized by wordthat is for each ambiguous word for which we want to build a sense tagged corpus users are presented with a set of natural language sentences that include an instance of the ambiguous wordinitially example sentences are extracted from a large textual corpusif other training data is not available a number of these sentences are presented to the users for tagging in stage 1next this tagged collection is used as training data and active learning is used to identify in the remaining corpus the examples that are hard to tagthese are the examples that are presented to the users for tagging in stage 2for all tagging users are asked to select the sense they find to be the most appropriate in the given sentence from a dropdown list that contains all wordnet senses plus two additional choices unclear and none of the abovethe results of any automatic classification or the classification submitted by other users are not presented so as to not bias the contributors decisionsbased on early feedback from both researchers and contributors a future version of open mind word expert may allow contributors to specify more than one sense for any worda prototype of the system has been implemented and is available at httpwwwteachcomputersorgfigure 1 shows a screen shot from the system interface illustrating the screen presented to users when tagging the noun childthe starting corpus we use is formed by a mix of three different sources of data namely the penn treebank corpus the los angeles times collection as provided during trec conferences1 and open mind common sense2 a collection of about 400000 commonsense assertions in english as contributed by volunteers over the weba mix of several sources each covering a different spectrum of usage is used to increase the coverage of word senses and writing styleswhile the first two sources are well known to the nlp community the open mind common sense constitutes a fairly new textual corpusit consists mostly of simple single sentencesthese sentences tend to be explanations and assertions similar to glosses of a dictionary but phrased in a more common language and with many sentences per sensefor example the collection includes such 
assertions as keys are used to unlock doors and pressing a typewriter key makes a letterwe believe these sentences may be a relatively clean source of keywords that can aid in disambiguationfor details on the data and how it has been collected see to minimize the amount of human annotation effort needed to build a tagged corpus for a given ambiguous word open mind word expert includes an active learning component that has the role of selecting for annotation only those examples that are the most informativeaccording to there are two main types of active learningthe first one uses memberships queries in which the learner constructs examples and asks a user to label themin natural language processing tasks this approach is not always applicable since it is hard and not always possible to construct meaningful unlabeled examples for traininginstead a second type of active learning can be applied to these tasks which is selective samplingin this case several classifiers examine the unlabeled data and identify only those examples that are the most informative that is the examples where a certain level of disagreement is measured among the classifierswe use a simplified form of active learning with selective sampling where the instances to be tagged are selected as those instances where there is a disagreement between the labels assigned by two different classifiersthe two classifiers are trained on a relatively small corpus of tagged data which is formed either with senseval training examples in the case of senseval words or examples obtained with the open mind word expert system itself when no other training data is availablethe first classifier is a semantic tagger with active feature selection this system is one of the top ranked systems in the english lexical sample task at senseval2the system consists of an instance based learning algorithm improved with a scheme for automatic feature selectionit relies on the fact that different sets of features have different effects depending on the ambiguous word consideredrather than creating a general learning model for all polysemous words stafs builds a separate feature space for each individual wordthe features are selected from a pool of eighteen different features that have been previously acknowledged as good indicators of word sense including part of speech of the ambiguous word itself surrounding words and their parts of speech keywords in context noun before and after verb before and after and othersan iterative forward search algorithm identifies at each step the feature that leads to the highest crossvalidation precision computed on the training datamore details on this system can be found in the second classifier is a constraintbased language tagger the system treats every training example as a set of soft constraints on the sense of the word of interestwordnet glosses hyponyms hyponym glosses and other wordnet data is also used to create soft constraintscurrently only keywords in context type of constraint is implemented with weights accounting for the distance from the target wordthe tagging is performed by finding the sense that minimizes the violation of constraints in the instance being taggedcobalt generates confidences in its tagging of a given instance based on how much the constraints were satisfied and violated for that instanceboth taggers use wordnet 17 dictionary glosses and relationsthe performance of the two systems and their level of agreement were evaluated on the senseval noun data setthe two systems agreed in their 
classification decision in 5496 of the casesthis low agreement level is a good indication that the two approaches are fairly orthogonal and therefore we may hope for high disambiguation precision on the agreement setindeed the tagging accuracy measured on the set where both cobalt and stafs assign the same label is 825 a figure that is close to the 855 interannotator agreement measured for the senseval2 nouns table 1 lists the precision for the agreement and disagreement sets of the two taggersthe low precision on the instances in the disagreement set justifies referring to these as hard to tagin open mind word expert these are the instances that are presented to the users for tagging in the active learning stagecollecting from the general public holds the promise of providing much data at low costit also makes attending to two aspects of data collection more important ensuring contribution quality and making the contribution process engaging to the contributorswe have several steps already implemented and have additional steps we propose to ensure qualityfirst redundant tagging is collected for each itemopen mind word expert currently uses the following rules in presenting items to volunteer contributors two tags per itemonce an item has two tags associated with it it is not presented for further taggingone tag per item per contributorwe allow contributors to submit tagging either anonymously or having logged inanonymous contributors are not shown any items already tagged by contributors from the same ip addresslogged in contributors are not shown items they have already taggedsecond inaccurate sessions will be discardedthis can be accomplished in two ways roughly by checking agreement and precision using redundancy of tags collected for each item any given session will be checked for agreement with tagging of the same items collected outside of this sessionif necessary the precision of a given contributor with respect to a preexisting gold standard can be estimated directly by presenting the contributor with examples from the gold standardthis will be implemented if there are indications of need for this in the pilot it will help screen out contributors who for example always select the first sense in all automatic assessment of the quality of tagging seems possible and based on the experience of prior volunteer contribution projects the rate of maliciously misleading or incorrect contributions is surprisingly lowadditionally the tagging quality will be estimated by comparing the agreement level among web contributors with the agreement level that was already measured in previous sense tagging projectsan analysis of the semantic annotation task performed by novice taggers as part of the semcor project revealed an agreement of about 825 among novice taggers and 752 among novice taggers and lexicographersmoreover since we plan to use paid trained taggers to create a separate test corpus for each of the words tagged with open mind word expert these same paid taggers could also validate a small percentage of the training data for which no gold standard existswe believe that making the contribution process as engaging and as gamelike for the contributors as possible is crucial to collecting a large volume of datawith that goal open mind word expert tracks for each contributor the number of items tagged for each topicwhen tagging items a contributor is shown the number of items she has tagged and the record number of items tagged by a single userif the contributor sets a record it is 
recognized with a congratulatory message on the contribution screen and the user is placed in the hall of fame for the sitealso the user can always access a realtime graph summarizing by topic their contribution versus the current record for that topicinterestingly it seems that relatively simple word games can enjoy tremendous user acceptancefor example wordzap a game that pits players against each other or against a computer to be the first to make seven words from several presented letters has been downloaded by well over a million users and the reviewers describe the game as addictiveif sense tagging can enjoy a fraction of such popularity very large tagged corpora will be generatedadditionally nlp instructors can use the site as an aid in teaching lexical semanticsan instructor can create an activity code and then for users who have opted in as participants of that activity access the amount tagged by each participant and the percentage agreement of the tagging of each contributor who opted in for this activityhence instructors can assign open mind word expert tagging as part of a homework assignment or a testalso assuming there is a test set of already tagged examples for a given ambiguous word we may add the capability of showing the increase in disambiguation precision on the test set as it results from the samples that a user is currently taggingthe open mind word expert system will be used to build large sense tagged corpora for some of the most frequent ambiguous words in englishthe tagging will be collected over the web from volunteer contributorswe propose to organize a task in senseval3 where systems will disambiguate words using the corpus created with this systemwe will initially select a set of 100 nouns and collect for each of them tagged samples where is the number of senses of the nounit is worth mentioning that unlike previous senseval evaluations where multiword expressions were considered as possible senses for an constituent ambiguous word we filter these expressions apriori with an automatic tool for collocation extractiontherefore the examples we collect refer only to single ambiguous words and hence we expect a lower intertagger agreement rate and lower wsd tagging precision when only single words are used since usually multiword expressions are not ambiguous and they constitute some of the easy cases when doing sense taggingthese initial set of tagged examples will then be used to train the two classifiers described in section 32 and annotate an additional set of examplesfrom these the users will be presented only with those examples where there is a disagreement between the labels assigned by the two classifiersthe final corpus for each ambiguous word will be created with the original set of tagged examples plus the examples selected by the active learning component sense tagged by userswords will be selected based on their frequencies as computed on semcoronce the tagging process of the initial set of 100 words is completed additional nouns will be incrementally added to the open mind word expert interfaceas we go along words with other parts of speech will be considered as wellto enable comparison with senseval2 the set of words will also include the 29 nouns used in the senseval2 lexical sample tasksthis would allow us to assess how much the collected data helps on the senseval2 taskas shown in section 33 redundant tags will be collected for each item and overall quality will be assessedmoreover starting with the initial set of examples labeled for each word 
we will create confusion matrices that will indicate the similarity between word senses and help us create the sense mappings for the coarse grained evaluationsone of the next steps we plan to take is to replace the two tags per item scheme with the tag until at least two tags agree scheme proposed and used during the senseval2 tagging additionally the set of meanings that constitute the possible choices for a certain ambiguous example will be enriched with groups of similar meanings which will be determined either based on some apriori provided sense mappings or based on the confusion matrices mentioned abovefor each word with sense tagged data created with open mind word expert a test corpus will be built by trained human taggers starting with examples extracted from the corpus mentioned in section 31this process will be set up independently of the open mind word expert web interfacethe test corpus will be released during senseval3open mind word expert pursues the potential of creating a large tagged corpuswsd can also benefit in other ways from the open mind approachwe are considering using a autoascgencor type of approach to generate sense tagged data with a bootstrapping algorithm web contributors can help this process by creating the initial set of seeds and exercising control over the quality of the automatically generated seedswe would like to thank the open mind word expert contributors who are making all this work possiblewe are also grateful to adam kilgarriff for valuable suggestions and interesting discussions to randall davis and to the anonymous reviewers for useful comments on an earlier version of this paper and to all the open mind word expert users who have emailed us with their feedback and suggestions helping us improve this activity
W02-0817
building a sense tagged corpus with open mind word expertopen mind word expert is an implemented active learning system for collecting word sense tagging from the general public over the webit is available at httpteachcomputersorgwe expect the system to yield a large volume of highquality training data at a much lower cost than the traditional method of hiring lexicographerswe thus propose a senseval3 lexical sample activity where the training data is collected via open mind word expertif successful the collection process can be extended to create the definitive corpus of word sense informationfinally in an effort related to the wikipedia collection process we implemented the open mind word expert system for collecting sense annotations from volunteer contributors over the webwe presented a proposal that turns to web users to produce sensetagged corpora
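The selective-sampling step of the open mind word expert paper above (present to humans only the instances on which two classifiers disagree) can be sketched in a few lines of Python. The two toy taggers below are placeholders of my own, not the STAFS or CoBaLT systems used in the paper:

    def select_hard_instances(pool, tagger_a, tagger_b):
        """Return the untagged instances the two word-sense taggers label differently."""
        return [x for x in pool if tagger_a.predict(x) != tagger_b.predict(x)]

    class KeywordTagger:
        # toy rule: a couple of keywords signal the 'cord' sense of "line"
        def predict(self, sentence):
            return "cord" if ("fishing" in sentence or "telephone" in sentence) else "text"

    class DefaultTagger:
        # toy rule: always predict the most frequent sense
        def predict(self, sentence):
            return "text"

    pool = ["a line of the poem", "he cast his fishing line", "the telephone line was dead"]
    print(select_hard_instances(pool, KeywordTagger(), DefaultTagger()))
    # -> the two 'cord' sentences, which would be queued for human tagging

Everything the two classifiers agree on is accepted without human effort, which is what makes the 82.5 percent accuracy on the agreement set reported above relevant.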
learning a translation lexicon from monolingual corpora this paper presents work on the task of constructing a wordlevel translation lexicon purely from unrelated monolingual corpora we combine various clues such as cognates similar context preservation of word similarity and word frequency experimental results for the construction of a germanenglish noun lexicon are reported noun translation accuracy of 39 scored against a parallel test corpus could be achieved recently there has been a surge in research in machine translation that is based on empirical methodsthe seminal work by brown et al 1990 at ibm on the candide system laid the foundation for much of the current work in statistical machine translation some of this work has been reimplemented and is freely available for research purposes alonaizan et al 1999roughly speaking smt divides the task of translation into two steps a wordlevel translation model and a model for word reordering during the translation processthe statistical models are trained on parallel corpora large amounts of text in one language along with their translation in anothervarious parallel texts have recently become available mostly from government sources such as parliament proceedings or law texts still for most language pairs parallel texts are hard to come bythis is clearly the case for lowdensity languages such as tamil swahili or tetunfurthermore texts derived from parliament speeches may not be appropriate for a particular targeted domainspecific parallel texts can be constructed by hand for the purpose of training an smt system but this is a very costly endeavoron the other hand the digital revolution and the widespread use of the world wide web have proliferated vast amounts of monolingual corporapublishing text in one language is a much more natural human activity than producing parallel textsto illustrate this point the world wide web alone contains currently over two billion pages a number that is still growing exponentiallyaccording to google2 the word directory occurs 61 million times empathy 383000 times and reflex 787000 timesin the hansard each of these words occurs only oncethe objective of this research to build a translation lexicon solely from monolingual corporaspecifically we want to automatically generate a onetoone mapping of german and english nounswe are testing our mappings against a bilingual lexicon of 9206 german and 10645 english nounsthe two monolingual corpora should be in a fairly comparable domainfor our experiments we use the 19901992 wall street journal corpus on the english side and the 19951996 german news wire corpus on the german sideboth corpora are news sources in the general sensehowever they span different time periods and have a different orientation the world street journal covers mostly business news the german news wire mostly german politicsfor experiments on training probabilistic translation lexicons from parallel corpora and similar tasks on the same test corpus refer to our earlier work koehn and knight 2000 2001this section will describe clues that enable us to find translations of words of the two monolingual corporawe will examine each clue separatelythe following clues are considered more frequent than flower as its translation regierung is more frequent than blumewe will now look in detail how these clues may contribute to building a germanenglish translation lexicondue to cultural exchange a large number of words that originate in one language are adopted by othersrecently this phenomenon can be seen 
with words such as internet or aidsthese terms may be adopted verbatim or changed by wellestablished rulesfor instance immigration has the portuguese translation immigracao as many words ending in tion have translations with the same spelling except for the ending changed to caowe examined the german words in our lexicon and tried to find english words that have the exact same spellingsurprisingly we could count a total of 976 such wordswhen checking them against a benchmark lexicon we found these mappings to be 88 correctthe correctness of word mappings acquired in this fashion depends highly on word lengththis is illustrated in table 1 while identical 3letter words are only translations of each other 60 of the time this is true for 98 of 10letter wordsclearly for shorter words the accidental existence of an identically spelled word in the other language word is much higherthis includes words such as fee ton art and tag spelled words are in fact translations of each other the accuracy of this assumption depends highly on the length of the words knowing this allows us to restrict the word length to be able to increase the accuracy of the collected word pairsfor instance by relying only on words at least of length 6 we could collect 622 word pairs with 96 accuracyin our experiments however we included all the words pairsas already mentioned there are some wellestablished transformation rules for the adoption of words from a foreign languagefor german to english this includes replacing the letters k and z by c and changing the ending tat by tyboth these rules can be observed in the word pair elektrizitat and electricityby using these two rules we can gather 363 additional word pairs of which 330 or 91 are in fact translations of each otherthe combined total of 1339 word pairs are separated and form the seed for some of the following stepswhen words are adopted into another language their spelling might change slightly in a manner that can not be simply generalized in a ruleobserve for instance website and webseitethis is even more the case for words that can be traced back to common language roots such as friend and freund or president and prasidentstill these words often called cognates maintain a very similar spellingthis can be defined as differing in very few lettersthis measurement can be formalized as the number of letters common in sequence between the two words divided by the length of the longer wordthe example word pair friend and freund shares 5 letters and both words have length 6 hence there spelling similarity is 56 or 083this measurement is called longest common subsequence ratio melamed 1995in related work string edit distance has been used mann and yarowski 2001with this computational means at hand we can now measure the spelling similarity between every german and english word and sort possible word pairs accordinglyby going through this list starting at the top we can collect new word pairswe do this is in a greedy fashion once a word is assigned to a word pair we do not look for another matchtable 2 gives the top 24 generated word pairs by this algorithm ing words with most similar spelling in a greedy fashionthe applied measurement of spelling similarity does not take into account that certain letter changes are less harmful than otherstiedemann 1999 explores the automatic construction of a string similarity measure that learns which letter changes occur more likely between cognates of two languagesthis measure is trained however on parallel sentencealigned text which is 
not available hereobviously the vast majority of word pairs can not be collected this way since their spelling shows no resemblance at allfor instance spiegel and mirror share only one vowel which is rather accidentalif our monolingual corpora are comparable we can assume a word that occurs in a certain context should have a translation that occurs in a similar contextcontext as we understand it here is defined by the frequencies of context words in surrounding positionsthis local context has to be translated into the other language and we can search the word with the most similar contextthis idea has already been investigated in earlier workrapp 1995 1999 proposes to collect counts over words occurring in a four word window around the target wordfor each occurrence of a target word counts are collected over how often certain context words occur in the two positions directly ahead of the target word and the two following positionsthe counts are collected separately for each position and then entered into in a context vector with an dimension for each context word in each positionfinally the raw counts are normalized so that for each of the four word positions the vector values add up to onevector comparison is done by adding all absolute differences of all componentsfung and yee 1998 propose a similar approach they count how often another word occurs in the same sentence as the target wordthe counts are then normalized by a using the tfidf method which is often used in information retrieval jones 1979the need for translating the context poses a chickenandegg problem if we already have a translation lexicon we can translate the context vectorsbut we can only construct a translation lexicon with this approach if we are already able to translate the context vectorstheoretically it is possible to use these methods to build a translation lexicon from scratch rapp 1995the number of possible mappings has complexity o and the computing cost of each mapping has quadratic complexity ofor a large number of words n at least more than 10000 maybe more than 100000 the combined complexity becomes prohibitively expensivebecause of this both rapp and fung focus on expanding an existing large lexicon to add a few novel termsclearly a seed lexicon to bootstrap these methods is neededfortunately we have outlined in section 21 how such a seed lexicon can be obtained by finding words spelled identically in both languageswe can then construct context vectors that contain information about how a new unmapped word cooccurs with the seed wordsthis vector can be translated into the other language since we already know the translations of the seed wordsfinally we can look for the best matching context vector in the target language and decide upon the corresponding word to construct a word mappingagain as in section 22 we have to compute all possible word or context vector matcheswe collect then the best word matches in a greedy fashiontable 3 displays the top 15 generated word pairs by this algorithmthe context vectors are constructed in the way proposed by rapp 1999 with the difference that we collect counts over a four noun window not a four word window by dropping all intermediate wordsintuitively it is obvious that pairs of words that are similar in one language should have translations that are similar in the other languagefor instance wednesday is similar to thursday as mittwoch is similar to donnerstagor dog is similar to cat in english as hund is similar to katze in germanthe challenge is now to come up with 
a quantifiable measurement of word similarityone strategy is to define two words as similar if they occur in a similar contextclearly this is the case for wednesday and thursday as well as for dog and catexactly this similarity measurement is used in the work by diab and finch 2000their approach to constructing and comparing context vectors differs significantly from methods discussed in the previous sectionfor each word in the lexicon the context vector consists of cooccurrence counts in respect to 150 socalled peripheral tokens basically the most frequent wordsthese counts are collected for each position in a 4word window around the word in focusthis results in a 600dimensional vectorinstead of comparing these cooccurrence counts directly the spearman rank order correlation is applied for each position the tokens are compared in frequency and the frequency count is replaced by the frequency rank the most frequent token count is replaced by 1 the least frequent by n 150the similarity of two context vectors a and b is then defined by3 the result of all this is a matrix with similarity scores between all german words and second one with similarity scores between all english wordssuch matrices could also be constructed using the definitions of context we reviewed in the previous sectionthe important point here is that we have generated a similarity matrix which we will use now to find new translation word pairsagain as in the previous section 23 we as3in the given formula we fixed two mistakes of the original presentation diab and finch 2000 the square of the differences is used and the denominator contains the additional factor 4 since essentially 4 150word vectors are compared sume that we will already have a seed lexiconfor a new word we can look up its similarity scores to the seed words thus creating a similarity vectorsuch a vector can be translated into the other language recall that dimensions of the vector are the similarity scores to seed words for which we already have translationsthe translated vector can be compared to other vectors in the second languageas before we search greedily for the best matching similarity vectors and add the corresponding words to the lexiconfinally another simple clue is the observation that in comparable corpora the same concepts should be used with similar frequencieseven if the most frequent word in the german corpus is not necessarily the translation of the most frequent english word it should also be very frequenttable 4 illustrates the situation with our corporait contains the top 10 german and english words together with the frequency ranks of their best translationsfor both languages 4 of the 10 words have translations that also rank in the top 10clearly simply aligning the nth frequent german word with the nth frequent english word is not a viable strategyin our case this is additionally hampered by the different orientation of the news sourcesthe frequent financial terms in the english wsj corpus are rather rare in the german corpusfor most words especially for more comparable corpora there is a considerable correlation between the frequency of a word and its translationour frequency measurement is defined as ratio of the word frequencies normalized by the corpus sizesthis section provides more detail on the experiments we have carried out to test the methods just outlined quent german and english words and their translationswe are trying to build a onetoone germanenglish translation lexicon for the use in a machine translation systemto 
evaluate this performance we use two different measurements firstly we record how many correct wordpairs we have constructedthis is done by checking the generated wordpairs against an existing bilingual lexicon4 in essence we try to recreate this lexicon which contains 9206 distinct german and 10645 distinct english nouns and 19782 lexicon entriesfor a machine translation system it is often more important to get more frequently used words right than obscure onesthus our second evaluation measurement tests the word translations proposed by the acquired lexicon against the actual wordlevel translations in a 5000 sentence aligned parallel corpus5 the starting point to extending the lexicon is the seed lexicon of identically spelled words as described in section 21it consists of 1339 entries of which are correct according to the existing bilingual lexicondue to computational constraints6 we focus on the additional mapping of only 1000 german and english wordsthese 1000 words are chosen from the 1000 most frequent lexicon entries in the dictionary without duplications of wordsthis frequency is defined by the sum of two word frequencies of the words in the entry as found in the monolingual corporawe did not collect statistics of the actual use of lexical entries in say a parallel corpusin a different experimental setup we also simply tried to match the 1000 most frequent german words with the 1000 most frequent english wordsthe results do not differ significantlyeach of the four clues described in the sections 22 to 25 provide a matching score between a german and an english wordthe likelihood of these two words being actual translations of each other should correlate to these scoresthere are many ways one could search for the best set of lexicon entries based on these scoreswe could perform an exhaustive search construct all possible mappings and find the highest combined score of all entriessince there are o possible mappings a brute force approach to this is practically impossiblewe therefore employed a greedy search first we search for the highest score for any word pairwe add this word pair to the lexicon and drop word pairs that include either the german and english word from further searchagain we search for the highest score and add the corresponding word pair drop these words from further search and so onthis is done iteratively until all words are used uptables 2 and 3 illustrate this process for the spelling and context similarity clues when applied separatelythe results are summarized in table 5recall that for each word that we are trying to map to the other language a thousand possible target words exist but only one is correctthe baseline for this task choosing words at random results on average in only 1 correct mapping in the entire lexicona perfect lexicon of course contains 1000 correct entriesthe starting point for the corpus score is the 158 that are already achieved with the seed lexicon from section 21in an experiment where we identified the best lexical entries using a very large parallel corpus we could achieve 89 accuracy on this test corpus many correct lexicon entries where added and how well the resulting translation lexicon performs compared to the actual wordlevel translations in a parallel corpus for all experiments the starting point was the seed lexicon of 1339 identical spelled words described in section 21 which achieve 158 corpus scoretaken alone both the context and spelling clues learn over a hundred lexicon entries correctlythe similarity and frequency 
clues however seem to be too imprecise to pinpoint the search to the correct translationsa closer look of the spelling and context scores reveals that while the spelling clue allows to learn more correct lexicon entries the context clue does better with the more frequently used lexicon entries as found in the test corpus combining different clues is quite simple we can simply add up the matching scoresthe scores can be weightedinitially we simply weighted all clues equallywe then changed the weights to see if we can obtain better resultswe found that there is generally a broad range of weights that result in similar performancewhen using the spelling clue in combination with others we found it useful to define a cutoffif two words agree in 30 of their letters this is generally as bad as if they do not agree in any the agreements are purely coincidentaltherefore we counted all spelling scores below 03 as 03combining the context and the spelling clues yields a significantly better result than using each clue by itselfa total of 185 correct lexical entries are learned with a corpus score of 386adding in the other scores however does not seem to be beneficial only adding the frequency clue to the spelling clue provides some improvementin all other cases these scores are not helpfulbesides this linear combination of scores from the different clues more sophisticated methods may be possible koehn 2002we have attempted to learn a onetoone translation lexicon purely from unrelated monolingual corporausing identically spelled words proved to be a good starting pointbeyond this we examined four different cluestwo of them matching similar spelled words and words with the same context helped us to learn a significant number of additional correct lexical entriesour experiments have been restricted to nounsverbs adjectives adverbs and other part of speech may be tackled in a similar waythey might also provide useful context information that is beneficial to building a noun lexiconthese methods may be also useful given a different starting point for efforts in building machine translation systems some small parallel text should be availablefrom these some highquality lexical entries can be learned but there will always be many words that are missingthese may be learned using the described methods
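as a concrete illustration of the clue combination and greedy search described above, the following python sketch combines a spelling score and a context score by a weighted sum (clipping spelling scores below 0.3, as in the experiments) and then greedily builds a one-to-one lexicon. the example words, the particular scores and the equal weights are invented for illustration and are not taken from the paper.

# minimal sketch: combine clue scores, then greedily pick best word pairs.
def combine_scores(spelling, context, weights=(1.0, 1.0)):
    """weighted sum of clue scores; spelling scores below 0.3 are counted as 0.3."""
    return weights[0] * max(spelling, 0.3) + weights[1] * context

def greedy_match(score, german_words, english_words):
    """repeatedly take the highest-scoring remaining (german, english) pair.
    sorting all candidate pairs once and scanning them is equivalent to the
    iterative highest-score-first search described in the text."""
    pairs = []
    remaining_de, remaining_en = set(german_words), set(english_words)
    candidates = sorted(
        ((score[(de, en)], de, en) for de in german_words for en in english_words),
        reverse=True,
    )
    for s, de, en in candidates:
        if de in remaining_de and en in remaining_en:
            pairs.append((de, en, s))
            remaining_de.remove(de)
            remaining_en.remove(en)
    return pairs

if __name__ == "__main__":
    german, english = ["haus", "hund"], ["house", "dog"]
    # hypothetical clue scores for each candidate pair
    spelling = {("haus", "house"): 0.8, ("haus", "dog"): 0.0,
                ("hund", "house"): 0.2, ("hund", "dog"): 0.25}
    context = {("haus", "house"): 0.7, ("haus", "dog"): 0.1,
               ("hund", "house"): 0.2, ("hund", "dog"): 0.6}
    score = {p: combine_scores(spelling[p], context[p]) for p in spelling}
    print(greedy_match(score, german, english))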
W02-0902
learning a translation lexicon from monolingual corporathis paper presents work on the task of constructing a wordlevel translation lexicon purely from unrelated monolingual corporawe combine various clues such as cognates similar context preservation of word similarity and word frequencyexperimental results for the construction of a germanenglish noun lexicon are reportednoun translation accuracy of 39 scored against a parallel test corpus could be achievedwe automatically induce the initial seed bilingual dictionary by using identical spelling features such as cognates and similar contexts
improvements in automatic thesaurus extraction. the use of semantic resources is common in modern nlp systems, but methods to extract lexical semantics have only recently begun to perform well enough for practical use. we evaluate existing and new similarity metrics for thesaurus extraction, and experiment with the tradeoff between extraction performance and efficiency. we propose an approximation algorithm based on canonical attributes and coarse and finegrained matching that reduces the time complexity and execution time of thesaurus extraction with only a marginal performance penalty. thesauri have traditionally been used in information retrieval tasks to expand words in queries with synonymous terms. since the development of wordnet and large electronic thesauri, information from semantic resources is regularly leveraged to solve nlp problems. these tasks include collocation discovery, smoothing and model estimation, and text classification. unfortunately thesauri are expensive and timeconsuming to create manually, and tend to suffer from problems of bias, inconsistency and limited coverage. in addition, thesaurus compilers cannot keep up with constantly evolving language use, and cannot afford to build new thesauri for the many subdomains that nlp techniques are being applied to. there is a clear need for methods to extract thesauri automatically, or tools that assist in the manual creation and updating of these semantic resources. much of the existing work on thesaurus extraction and word clustering is based on the observation that related terms will appear in similar contexts. these systems differ primarily in their definition of context and the way they calculate similarity from the contexts each term appears in. most systems extract cooccurrence and syntactic information from the words surrounding the target term, which is then converted into a vectorspace representation of the contexts that each target term appears in. other systems take the whole document as the context and consider term cooccurrence at the document level. once these contexts have been defined, these systems then use clustering or nearest neighbour methods to find similar terms. alternatively, some systems are based on the observation that related terms appear together in particular contexts. these systems extract related terms directly by recognising linguistic patterns which link synonyms and hyponyms. our previous work has evaluated thesaurus extraction performance and efficiency using several different context models. in this paper we evaluate some existing similarity metrics, and propose and motivate a new metric which outperforms the existing metrics. we also present an approximation algorithm that bounds the time complexity of pairwise thesaurus extraction. this results in a significant reduction in runtime with only a marginal performance penalty in our experiments. vectorspace thesaurus extraction systems can be separated into two components. the first component extracts the contexts from raw text and compiles them into a statistical description of the contexts each potential thesaurus term appears in. some systems define the context as a window of words surrounding each thesaurus term. many systems extract grammatical relations using either a broad coverage parser or shallow statistical tools. our experiments use a shallow relation extractor based on . we define a context relation instance as a tuple (w, r, w′), where w is the thesaurus term, which occurs in some grammatical relation r with another word w′ in the sentence. we refer to the tuple (r, w′) as an attribute of w. for example the tuple indicates that the
term dog was the direct object of the verb walkour relation extractor begins with a naive bayes pos tagger and chunkerafter the raw text has been tagged and chunked noun phrases separated by prepositions and conjunctions are concatenated and the relation extracting algorithm is run over each sentencethis consists of four passes over the sentence associating each noun with the modifiers and verbs from the syntactic contexts they appear in the relation tuple is then converted to root form using the sussex morphological analyser and the pos tags are removedthe relations for each term are collected together and counted producing a context vector of attributes and 2005 89 1836 74 42 15 their frequencies in the corpusfigure 1 shows some example attributes for ideathe second system component performs nearestneighbour or cluster analysis to determine which terms are similar based on their context vectorsboth methods require a function that calculates the similarity between context vectorsfor experimental analysis we have decomposed this function into measure and weight functionsthe measure function calculates the similarity between two weighted context vectors and the weight function calculates a weight from the raw frequency information for each context relationthe primary experiments in this paper evaluate the performance of various existing and new measure and weight functions which are described in the next sectionthe simplest algorithm for thesaurus extraction is nearestneighbour comparison which involves pairwise vector comparison of the target with every extracted termgiven n terms and up to m attributes for each term the asymptotic time complexity of nearestneighbour thesaurus extraction is othis is very expensive with even a moderate vocabulary and small attribute vectorsthe number of terms can be reduced by introducing a minimum cutoff that ignores potential synonyms with a frequency less than the cutoff which for our experiments wasearly experiments in thesaurus extraction suffered from the limited size of available corpora but more recent experiments have used much larger corpora with greater success for these experiments we ran our relation extractor over the british national corpus consisting of 114 million words in 62 million sentencesthe pos tagging and chunking took 159 minutes and the relation extraction took an addiwe describe the functions evaluated in these experiments using an extension of the asterisk notation used by lin where an asterisk indicates a set ranging over all existing values of that variablefor example the set of attributes of the term w is for convenience we further extend the notation for weighted attribute vectorsa subscripted asterisk indicates that the variables are bound together for weight functions we use similar notation table 1 defines the measure functions evaluated in these experimentsthe simplest measure functions use the attribute set model from ir and are taken from manning and schutze pp299when these are used with weighted attributes if the weight is greater than zero then it is considered in the setother measures such as lin and jaccard have previously been used for thesaurus extraction finally we have generalised some set measures using similar reasoning to grefenstette alternative generalisations are marked with a daggerthese experiments also cover a range of weight functions as defined in table 2the weight functions lin98a lin98b and gref94 are taken from existing systems our proposed weight functions are motivated by our intuition that 
highly predictive attributes are strong collocations with their termsthus we have implemented many of the statistics described in the collocations chapter of manning and schutze including the ttest x2test likelihood ratio and mutual informationsome functions have an extra log2 1 factor to promote the influence of higher frequency attributesfor the purposes of evaluation we selected 70 singleword noun terms for thesaurus extractionto avoid sample bias the words were randomly selected from wordnet such that they covered a range of values for the following word properties frequency penn treebank and bnc frequencies number of senses wordnet and macquarie senses specificity depth in the wordnet hierarchy concreteness distribution across wordnet subtreestable 3 lists some example terms with frequency and frequency rank data from the ptb bnc and reuters as well as the number of senses in wordnet and macquarie and their maximum and minimum depth in the wordnet hierarchyfor each term we extracted a thesaurus entry with 200 potential synonyms and their similarity scoresthe simplest method of evaluation is direct comparison of the extracted thesaurus with a manuallycreated gold standard however on small corpora rare direct matches provide limited information for evaluation and thesaurus coverage is a problemour evaluation uses a combination of three electronic thesauri the macquarie rogets and moby thesaurirogets and macquarie are topic ordered and the moby thesaurus is head orderedas the extracted thesauri do not distinguish between senses we transform rogets and macquarie into head ordered format by conflating the sense sets containing each termfor the 70 terms we create a gold standard from the union of the synonyms from the three thesauriwith this gold standard in place it is possible to use precision and recall measures to evaluate the quality of the extracted thesaurusto help overcome the problems of direct comparisons we use several measures of system performance direct matches inverse rank and precision of the top n synonyms for n 1 5 and 10invr is the sum of the inverse rank of each matching synonym eg matching synonyms at ranks 3 5 and 28 give an inverse rank score of1 5 128 and with at most 200 synonyms the maximum invr score is 5878precision of the top n is the percentage of matching synonyms in the top n extracted synonymsthere are a total of 23207 synonyms for the 70 terms in the gold standardeach measure is averaged over the extracted synonym lists for all 70 thesaurus termsfor computational practicality we assume that the performance behaviour of measure and weight functions are independent of each othertherefore we have evaluated the weight functions using the jaccard measure and evaluated the measure functions using the ttest weight because they produced the best results in our previous experimentstable 4 presents the results of evaluating the measure functionsthe best performance across all measures was shared by jaccard and dice which produced identical results for the 70 wordsdice is easier to compute and is thus the preferred measure functiontable 5 presents the results of evaluating the weight functionshere ttest significantly outperformed the other weight functions which supports our intuition that good context descriptors are also strong collocates of the termsurprisingly the other collocation discovery functions did not perform as well even though ttest is not the most favoured for collocation discovery because of its behaviour at low frequency countsone difficulty with 
weight functions involving logarithms or differences is that they can be negativethe results in table 6 show that weight functions that are not bounded below by zero do not perform as well on thesaurus extractionhowever unbounded weights do produce interesting and unexpected results they tend to return misspellings of the term and synonyms abbreviations and lower frequency synonymsfor instance ttest returned co co and plc for company but they do not appear in the synonyms extracted with ttestthe unbounded weights also extracted more hyponyms such as corporation names for company including kodak and exxonfinally unbounded weights tended to promote the rankings of synonyms from minority senses because the frequent senses are demoted by negative weightsfor example ttest returned writings painting fieldwork essay and masterpiece as the best synonyms for work whereas ttest returned study research job activity and lifeintroducing a minimum cutoff that ignores low frequency potential synonyms can eliminate many unnecessary comparisonsfigure 2 presents both the performance of the system using direct match evaluation and execution times for increasing cutoffsthis test was performed using jaccard and the ttest and lin98a weight functionsthe first feature of note is that as we increase the minimum cutoff to 30 the direct match results improve for ttest which is probably a result of the ttest weakness on low frequency countsinitially the execution time is rapidly reduced by small increments of the minimum cutoffthis is because zipfs law applies to relations and so by small increments of the cutoff we eliminate many terms from the tail of the distributionthere are only 29737 terms when the cutoff is 30 88926 terms when the cutoff is 5 and 246067 without a cutoff and because the extraction algorithm is o this results in significant efficiency gainssince extracting only 70 thesaurus terms takes about 43 minutes with a minimum cutoff of 5 the efficiencyperformance tradeoff is particularly important from the perspective of implementing a practical extraction systemeven with a minimum cutoff of 30 as a reasonable compromise between speed and accuracy extracting a thesaurus for 70 terms takes approximately 20 minutesif we want to extract a complete thesaurus for 29737 terms left after the cutoff has been applied it would take approximately one full week of processinggiven that the size of the training corpus could be much larger which would increase both number of attributes for each term and the total number of terms above the minimum cutoff this is not nearly fast enoughthe problem is that the time complexity of thesaurus extraction is not practically scalable to significantly larger corporaalthough the minimum cutoff helps by reducing n to a reasonably small value it does not constrain m in any wayin fact using a cutoff increases the average value of m across the terms because it removes low frequency terms with few attributesfor instance the frequent company appears in 11360 grammatical relations with a total frequency of 69240 occurrences whereas the infrequent pants appears in only 401 relations with a total frequency of 655 occurrencesthe problem is that for every comparison the algorithm must examine the length of both attribute vectorsgrefenstette uses bit signatures to test for shared attributes but because of the high frequency of the most common attributes this does not skip many comparisonsour system keeps track of the sum of the remaining vector which is a significant optimisation but comes at 
the cost of increased representation sizehowever what is needed is some algorithmic reduction that bounds the number of full o vector comparisons performedone way of bounding the complexity is to perform an approximate comparison firstif the approximation returns a positive result then the algorithm performs the full comparisonwe can do this by introducing another much shorter vector of canonical attributes with a bounded length k if our approximate comparison returns at most p positive results for each term then the time complexity becomes o which since k is constant is oso as long as we find an approximation function and vector such that p n the system will run much faster and be much more scalable in m the number of attributeshowever p n implies that we are discarding a very large number of potential matches and so there will be a performance penaltythis tradeoff is governed by the number of the canonical attributes and how representative they are of the full attribute vector and thus the term itselfit is also dependent on the functions used to compare the canonical attribute vectorsthe canonical vector must contain attributes that best describe the thesaurus term in a bounded number of entriesthe obvious first choice is the most strongly weighted attributes from the full vectorfigure 3 shows some of the most strongly weighted attributes for pants with their frequencies and weightshowever these attributes although strongly correlated with pants are in fact too specific and idiomatic to be a good summary because there are very few other words with similar canonical attributesfor example only appears with two other terms in the entire corpusthe heuristic is so aggressive that too few positive approximate matches resultto alleviate this problem we filter the attributes so that only strongly weighted subject directobj and indirectobj relations are included in the canonical vectorsthis is because in general they constrain the terms more and partake in fewer idiomatic collocations with the termsso the general principle is the most descriptive verb relations constrain the search for possible synonyms and the other modifiers provide finer grain distinctions used to rank possible synonymsfigure 4 shows the 5 canonical attributes for pantsthis canonical vector is a better general description of the term pants since similar terms are likely to appear as the direct object of wear even though it still contains the idiomatic attributes and one final difficulty this example shows is that attributes like are not informativewe know this because appears with 8769 different terms which means the algorithm may perform a large number of unnecessary full comparisons since could be a canonical attribute for many termsto avoid this problem we apply a maximum cutoff on the number of terms the attribute appears withwith limited experimentation we have found that ttestlog is the best weight function for selecting canonical attributesthis may be because the extra log2 1 factor encodes the desired bias towards relatively frequent canonical attributesif a canonical attribute is shared by the two terms then our algorithm performs the full comparisonfigure 5 shows system performance and speed as canonical vector size is increased with the maximum cutoff at 4000 8000 and 10000as an example with a maximum cutoff of 10000 and a canonical vector size of 70 the total direct score of 1841 represents a 39 performance penalty over full extraction for an 89 reduction in execution timetable 7 presents the example term results 
using the techniques we have described jaccard measure and ttest weight functions minimum cutoff of 30 and approximation algorithm with canonical vector size of 100 with ttestlog weightingthe big columns show the previous measure results if we returned 10000 synonyms and max gives the results for a comparison of the gold standard against itselfin these experiments we have proposed new measure and weight functions that as our evaluation has shown significantly outperform existing similarity functionsthe list of measure and weight functions we compared against is not complete and we hope to add other functions to provide a general framework for thesaurus extraction experimentationwe would also like to expand our evaluation to include direct methods used by others and using the extracted thesaurus in nlp taskswe have also investigated the speedperformance tradeoff using frequency cutoffsthis has lead to the proposal of a new approximate comparison algorithm based on canonical attributes and a process of coarse and finegrained comparisonsthis approximation algorithm is dramatically faster than simple pairwise comparison with only a small performance penalty which means that complete thesaurus extraction on large corpora is now feasiblefurther the canonical vector parameters allow for control of the speedperformance tradeoffthese experiments show that largescale thesaurus extraction is practical and although results are not yet comparable with manuallyconstructed thesauri may now be accurate enough to be useful for some nlp taskswe would like to thank stephen clark caroline sporleder tara murphy and the anonymous reviewers for their comments on drafts of this paperthis research is supported by commonwealth and sydney university travelling scholarships
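the sketch below illustrates the kind of weighted vector comparison and coarse/fine two-stage matching described above. context vectors map attributes (relation, word′) to weights, which are assumed to be precomputed (for example by a t-test style weight function). the canonical filter here simply takes the k most strongly weighted attributes, whereas the paper additionally restricts canonical attributes to strongly weighted verb relations, weights them with ttest-log and applies a maximum cutoff, so this is only an approximation of the method; all example terms, attributes and values are invented.

def weighted_jaccard(a, b):
    # generalised jaccard over weighted attribute vectors: sum of minima / sum of maxima
    attrs = set(a) | set(b)
    num = sum(min(a.get(x, 0.0), b.get(x, 0.0)) for x in attrs)
    den = sum(max(a.get(x, 0.0), b.get(x, 0.0)) for x in attrs)
    return num / den if den else 0.0

def weighted_dice(a, b):
    # one plausible weighted generalisation of dice
    num = 2.0 * sum(min(a[x], b[x]) for x in set(a) & set(b))
    den = sum(a.values()) + sum(b.values())
    return num / den if den else 0.0

def canonical(vector, k):
    """coarse-grained description: the k most strongly weighted attributes."""
    return set(sorted(vector, key=vector.get, reverse=True)[:k])

def similar_terms(target, vectors, k=2):
    """perform the full comparison only for terms sharing a canonical attribute."""
    target_canon = canonical(vectors[target], k)
    scores = {}
    for term, vec in vectors.items():
        if term != target and target_canon & canonical(vec, k):
            scores[term] = weighted_jaccard(vectors[target], vec)
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    vectors = {
        "dog": {("dobj", "walk"): 2.0, ("amod", "loyal"): 1.5, ("dobj", "feed"): 1.0},
        "cat": {("dobj", "walk"): 1.4, ("dobj", "feed"): 1.2, ("amod", "furry"): 1.0},
        "idea": {("amod", "good"): 2.0, ("dobj", "have"): 1.8},
    }
    print(similar_terms("dog", vectors))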
W02-0908
improvements in automatic thesaurus extractionthe use of semantic resources is common in modern nlp systems but methods to extract lexical semantics have only recently begun to perform well enough for practical usewe evaluate existing and new similarity metrics for thesaurus extraction and experiment with the tradeoff between extraction performance and efficiencywe propose an approximation algorithm based on canonical attributes and coarse and finegrained matching that reduces the time complexity and execution time of thesaurus extraction with only a marginal performance penaltywe show that synonymy extraction for lexical semantic resources using distributional similarity produces continuing gains in accuracy as the volume of input data increaseswe demonstrate that dramatically increasing the quantity of text used to extract contexts significantly improves synonym qualitywe find the jaccard measure and the ttest weight to have the best performance in our comparison of distance measures
discriminative training methods for hidden markov models theory and experiments with perceptron algorithms we describe new algorithms for training tagging models, as an alternative to maximumentropy models or conditional random fields. the algorithms rely on viterbi decoding of training examples, combined with simple additive updates. we describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problems. we give experimental results on partofspeech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximumentropy tagger. maximumentropy models are justifiably a very popular choice for tagging problems in natural language processing, for example see for their use on partofspeech tagging and for their use on a faq segmentation task. me models have the advantage of being quite flexible in the features that can be incorporated in the model. however, recent theoretical and experimental results in have highlighted problems with the parameter estimation method for me models. in response to these problems they describe alternative parameter estimation methods based on conditional markov random fields, and give experimental results suggesting that crfs can perform significantly better than me models. in this paper we describe parameter estimation algorithms which are natural alternatives to crfs. the algorithms are based on the perceptron algorithm and the voted or averaged versions of the perceptron described in . these algorithms have been shown by to be competitive with modern learning algorithms such as support vector machines; however they have previously been applied mainly to classification tasks, and it is not entirely clear how the algorithms can be carried across to nlp tasks such as tagging or parsing. this paper describes variants of the perceptron algorithm for tagging problems. the algorithms rely on viterbi decoding of training examples, combined with simple additive updates. we describe theory justifying the algorithm through a modification of the proof of convergence of the perceptron algorithm for classification problems. we give experimental results on partofspeech tagging and base noun phrase chunking, in both cases showing improvements over results for a maximumentropy tagger. although we concentrate on tagging problems in this paper, the theoretical framework and algorithm described in section 3 of this paper should be applicable to a wide variety of models where viterbistyle algorithms can be used for decoding; examples are probabilistic contextfree grammars or me models for parsing. see for other applications of the voted perceptron to nlp problems (footnote: the theorems in section 3 and the proofs in section 5 apply directly to the work in these other papers). 2.1 hmm taggers. in this section, as a motivating example, we describe a special case of the algorithm in this paper: the algorithm applied to a trigram tagger. in a trigram hmm tagger, each trigram of tags and each tagword pair have associated parameters. we write the parameter associated with a trigram $\langle x, y, z \rangle$ as $\alpha_{x,y,z}$ and the parameter associated with a tagword pair $(t, w)$ as $\alpha_{t,w}$. a common approach is to take the parameters to be estimates of conditional probabilities: $\alpha_{x,y,z} = \log P(z \mid x, y)$ and $\alpha_{t,w} = \log P(w \mid t)$.
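as an illustration of this common approach, the sketch below sets the trigram and tagword parameters to log relative-frequency estimates computed from a small tagged corpus. smoothing and unseen events are ignored, so this is illustrative only and not the estimation used in the paper's baseline taggers.

# estimate alpha_{x,y,z} = log p(z | x, y) and alpha_{t,w} = log p(w | t)
# by relative frequency from a list of [(word, tag), ...] sentences.
import math
from collections import defaultdict

def estimate_parameters(tagged_sentences):
    trigram, bigram = defaultdict(int), defaultdict(int)
    tag_word, tag = defaultdict(int), defaultdict(int)
    for sent in tagged_sentences:
        tags = ["*", "*"] + [t for _, t in sent]  # "*" marks the sentence start
        for i in range(2, len(tags)):
            trigram[(tags[i - 2], tags[i - 1], tags[i])] += 1
            bigram[(tags[i - 2], tags[i - 1])] += 1
        for w, t in sent:
            tag_word[(t, w)] += 1
            tag[t] += 1
    alpha_trigram = {k: math.log(v / bigram[k[:2]]) for k, v in trigram.items()}
    alpha_tag_word = {k: math.log(v / tag[k[0]]) for k, v in tag_word.items()}
    return alpha_trigram, alpha_tag_word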
for convenience we will use $w_{1:n}$ as shorthand for a sequence of words $w_1, w_2, \ldots, w_n$, and $t_{1:n}$ as shorthand for a tag sequence $t_1, t_2, \ldots, t_n$. in a trigram tagger, the score for a tagged sequence $t_{1:n}$ paired with a word sequence $w_{1:n}$ is $\sum_{i=1}^{n} \alpha_{t_{i-2}, t_{i-1}, t_i} + \sum_{i=1}^{n} \alpha_{t_i, w_i}$.
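the following sketch shows a training loop in the spirit of the abstract above: decode each training sentence with the current parameters and, when the highest-scoring tag sequence differs from the gold one, add 1 to the parameters of the gold sequence's features and subtract 1 from those of the predicted sequence. this is the basic, unaveraged perceptron variant rather than the voted/averaged version the paper advocates, and the decoder enumerates all tag sequences instead of using viterbi, so it is only practical for the tiny invented example shown.

from collections import defaultdict
from itertools import product

def features(words, tags):
    """trigram and tag-word features of a tagged sentence."""
    padded = ["*", "*"] + list(tags)
    feats = [("tri", padded[i - 2], padded[i - 1], padded[i]) for i in range(2, len(padded))]
    feats += [("tw", t, w) for w, t in zip(words, tags)]
    return feats

def score(alpha, words, tags):
    return sum(alpha[f] for f in features(words, tags))

def decode(alpha, words, tagset):
    # brute-force stand-in for viterbi decoding
    return max(product(tagset, repeat=len(words)), key=lambda tags: score(alpha, words, tags))

def train(data, tagset, epochs=5):
    alpha = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(alpha, words, tagset)
            if list(pred) != list(gold):
                for f in features(words, gold):
                    alpha[f] += 1.0
                for f in features(words, pred):
                    alpha[f] -= 1.0
    return alpha

if __name__ == "__main__":
    data = [(["the", "dog", "barks"], ["D", "N", "V"]),
            (["the", "cat", "sleeps"], ["D", "N", "V"])]
    alpha = train(data, tagset=["D", "N", "V"])
    print(decode(alpha, ["the", "dog", "sleeps"], ["D", "N", "V"]))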
W02-1001
discriminative training methods for hidden markov models theory and experiments with perceptron algorithmswe describe new algorithms for training tagging models as an alternative to maximumentropy models or conditional random fields the algorithms rely on viterbi decoding of training examples combined with simple additive updateswe describe theory justifying the algorithms through a modification of the proof of convergence of the perceptron algorithm for classification problemswe give experimental results on partofspeech tagging and base noun phrase chunking in both cases showing improvements over results for a maximumentropy taggerwe describe how the voted perceptron can be used to train maximumentropy style taggers and also give a discussion of the theory behind the perceptron algorithm applied to ranking tasksvoted perceptron training attempts to minimize the difference between the global feature vector for a training instance and the same feature vector for the bestscoring labeling of that instance according to the current model
an empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation in this paper we evaluate a variety of knowledge sources and supervised learning algorithms for word sense disambiguation on senseval2 and senseval1 data our knowledge sources include the partofspeech of neighboring words single words in the surrounding context local collocations and syntactic relations the learning algorithms evaluated include support vector machines naive bayes adaboost and decision tree algorithms we present empirical results showing the relative contribution of the component knowledge sources and the different learning algorithms in particular using all of these knowledge sources and svm achieves accuracy higher than the best official scores on both senseval2 and senseval1 test data natural language is inherently ambiguousa word can have multiple meanings given an occurrence of a word in a natural language text the task of word sense disambiguation is to determine the correct sense of in that contextwsd is a fundamental problem of natural language processingfor example effective wsd is crucial for high quality machine translationone could envisage building a wsd system using handcrafted rules or knowledge obtained from linguistssuch an approach would be highly laborintensive with questionable scalabilityanother approach involves the use of dictionary or thesaurus to perform wsdin this paper we focus on a corpusbased supervised learning approachin this approach to disambiguate a word we first collect training texts in which instances of occureach occurrence of is manually tagged with the correct sensewe then train a wsd classifier based on these sample texts such that the trained classifier is able to assign the sense of in a new contexttwo wsd evaluation exercises senseval1 and senseval2 were conducted in 1998 and 2001 respectivelythe lexical sample task in these two sensevals focuses on evaluating wsd systems in disambiguating a subset of nouns verbs and adjectives for which manually sensetagged training data have been collectedin this paper we conduct a systematic evaluation of the various knowledge sources and supervised learning algorithms on the english lexical sample data sets of both sensevalsthere is a large body of prior research on wsddue to space constraints we will only highlight prior research efforts that have investigated contribution of various knowledge sources or relative performance of different learning algorithmsearly research efforts on comparing different learning algorithms tend to base their comparison on only one word or at most a dozen wordsng compared two learning algorithms knearest neighbor and naive bayes on the dso corpus escudero et al evaluated knearest neighbor naive bayes winnowbased and lazyboosting algorithms on the dso corpusthe recent work of pedersen and zavrel et al evaluated a variety of learning algorithms on the senseval1 data sethowever all of these research efforts concentrate only on evaluating different learning algorithms without systematically considering their interaction with knowledge sourcesng and lee reported the relative contribution of different knowledge sources but on only one word intereststevenson and wilks investigated the interaction of knowledge sources such as partofspeech dictionary definition subject codes etc on wsdhowever they do not evaluate their method on a common benchmark data set and there is no exploration on the interaction of knowledge sources with different learning algorithmsparticipating 
systems at senseval1 and senseval2 tend to report accuracy using a particular set of knowledge sources and some particular learning algorithm without investigating the effect of varying knowledge sources and learning algorithmsin senseval2 the various duluth systems attempted to investigate whether features or learning algorithms are more importanthowever relative contribution of knowledge sources was not reported and only two main types of algorithms were testedin contrast in this paper we systematically vary both knowledge sources and learning algorithms and investigate the interaction between themwe also base our evaluation on both senseval2 and senseval1 official test data sets and compare with the official scores of participating systemsto disambiguate a word occurrence we consider four knowledge sources listed beloweach training context of generates one training feature vectorwe use 7 features to encode this knowledge source where is the pos of theth token to the left of and is the pos of a token can be a word or a punctuation symbol and each of these neighboring tokens must be in the same sentence as we use a sentence segmentation program and a pos tagger to segment the tokens surrounding into sentences and assign pos tags to these tokensfor example to disambiguate the word bars in the postagged sentence reidnnp sawvbd meprp lookingvbg atin thedt ironnn barsnns the pos feature vector is where denotes for this knowledge source we consider all single words in the surrounding context of and these words can be in a different sentence from for each training or test example the senseval data sets provide up to a few sentences as the surrounding contextin the results reported in this paper we consider all words in the provided contextspecifically all tokens in the surrounding context of are converted to lower case and replaced by their morphological root formstokens present in a list of stop words or tokens that do not contain at least an alphabet character are removedall remaining tokens from all training contexts provided for are gatheredeach remaining token contributes one featurein a training example the feature corresponding to is set to 1 iff the context of in that training example containswe attempted a simple feature selection method to investigate if a learning algorithm performs better with or without feature selectionthe feature selection method employed has one parameter a feature is selected if occurs in some sense of or more times in the training datathis parameter is also used by we have tried and in the results reported in this paper the pos tag of a null tokenfor example if is the word bars and the set of selected unigrams is chocolate iron beer the feature vector for the sentence reid saw me looking at the iron bars is 0 1 0 a local collocation refers to the ordered sequence of tokens in the local narrow context of offsets and denote the starting and ending position of the sequence where a negative offset refers to a token to its left for example let be the word bars in the sentence reid saw me looking at the iron bars where denotes a null tokenlike pos a collocation does not cross sentence boundaryto represent this knowledge source of local collocations we extracted 11 features corresponding to the following collocations and this set of 11 features is the union of the collocation features used in ng and lee and ng to extract the feature values of the collocation feature we first collect all possible collocation strings corresponding to in all training contexts of unlike 
the case for surrounding words we do not remove stop words numbers or punctuation symbolseach collocation string is a possible feature valuefeature value selection using analogous to that used to select surrounding words can be optionally appliedif a training context of has collocation and is a selected feature value then the feature of has value otherwise it has the value denoting the null stringnote that each collocation is represented by one feature that can have many possible feature values whereas each distinct surrounding word is represented by one feature that takes binary values for example if is the word bars and suppose the set of selected collocations for is a chocolate the wine the iron then the feature value for collocation in the sentence reid saw me looking at the iron bars is the ironwe first parse the sentence containing with a statistical parser the constituent tree structure generated by charniaks parser is then converted into a dependency tree in which every word points to a parent headwordfor example in the sentence reid saw me looking at the iron bars the word reid points to the parent headword sawsimilarly the word me also points to the parent headword sawwe use different types of syntactic relations depending on the pos of if is a noun we use four features its parent headword the pos of the voice of and the relative position of from if is a verb we use six features the nearest word to the left of such that is the parent headword of the nearest word to the right of such that is the parent headword of the pos of the pos of the pos of and the voice of if is an adjective we use two features its parent headword and the pos of we also investigated the effect of feature selection on syntacticrelation features that are words some examples are shown in table 1each pos noun verb or adjective is illustrated by one examplefor each example shows and its pos shows the sentence where occurs and shows the feature vector corresponding to syntactic relationswe evaluated four supervised learning algorithms support vector machines adaboost with decision stumps naive bayes and decision trees all the experimental results reported in this paper are obtained using the implementation of these algorithms in weka all learning parameters use the default values in weka unless otherwise statedthe svm performs optimization to find a hyperplane with the largest margin that separates training examples into two classesa test example is classified depending on the side of the hyperplane it lies ininput features can be mapped into high dimensional space before performing the optimization and classificationa kernel function can be used to reduce the computational cost of training and testing in high dimensional spaceif the training examples are nonseparable a regularization parameter can be used to control the tradeoff between achieving a large margin and a low training errorin wekas implementation of svm each nominal feature with possible values is converted into binary featuresif a nominal feature takes the th feature value then the th binary feature is set to 1 and all the other binary features are set to 0we tried higher order polynomial kernels but they gave poorer resultsour reported results in this paper used the linear kerneladaboost is a method of training an ensemble of weak learners such that the performance of the whole ensemble is higher than its constituentsthe basic idea of boosting is to give more weights to misclassified training examples forcing the new classifier to concentrate on 
these hardtoclassify examplesa test example is classified by a weighted vote of all trained classifierswe use the decision stump as the weak learner in adaboostweka implements adaboostm1we used 100 iterations in adaboost as it gives higher accuracy than the default number of iterations in weka the naive bayes classifier assumes the features are independent given the classduring classification it chooses the class with the highest posterior probabilitythe default setting uses laplace smoothingthe decision tree algorithm partitions the training examples using the feature with the highest information gainit repeats this process recursively for each partition until all examples in each partition belong to one classa test example is classified by traversing the learned decision treeweka implements quinlans c45 decision tree algorithm with pruning by defaultin the senseval2 english lexical sample task participating systems are required to disambiguate 73 words that have their pos predeterminedthere are 8611 training instances and 4328 test instances tagged with wordnet sensesour evaluation is based on all the official training and test data of senseval2for senseval1 we used the 36 trainable words for our evaluationthere are 13845 training instances1 for these trainable words and 7446 test instancesfor senseval1 4 trainable words belong to the indeterminate category ie the pos is not providedfor these words we first used a pos tagger to determine the correct posfor a word that may occur in phrasal word form we train a separate classifier for each phrasal word formduring testing if appears in a phrasal word form the classifier for that phrasal word form is usedotherwise the classifier for is usedwe ran the different learning algorithms using various knowledge sourcestable 2 shows each algorithm evaluated and official scores of the top 3 participating systems of senseval2 and senseval1 the accuracy figures for the different combinations of knowledge sources and learning algorithms for the senseval2 data setthe nine columns correspond to using only pos of neighboring words using only single words in the surrounding context with feature selection same as but without feature selection using only local collocations with feature selection same as but without feature selection using only syntactic relations with feature selection on words same as but without feature selection combining all four knowledge sources with feature selection combining all four knowledge sources without feature selectionsvm is only capable of handling binary class problemsthe usual practice to deal with multiclass problems is to build one binary classifier per output class the original adaboost naive bayes and decision tree algoalgorithm is significantly better correspond to the pvalue and respectively or means our rithms can already handle multiclass problems and we denote runs using the original adb nb and dt algorithms as normal in table 2 and table 3accuracy for each word task can be measured by recall or precision defined by no of test instances correctly labeled no of test instances in word task no of test instances correctly labeled no of test instances output in word task recall is very close to precision for the top senseval participating systemsin this paper our reported results are based on the official finegrained scoring methodto compute an average recall figure over a set of words we can either adopt microaveraging or macroaveraging defined by total no of test instances correctly labeled mi total no of test 
instances in all word tasks that is microaveraging treats each test instance equally so that a word task with many test instances will dominate the microaveraged recallon the other hand macroaveraging treats each word task equallyas shown in table 2 and table 3 the best microaveraged recall for senseval2 is 654 obtained by combining all knowledge sources and using svm as the learning algorithmin table 4 we tabulate the best microaveraged recall for each learning algorithm broken down according to nouns verbs adjectives indeterminates and all wordswe also tabulate analogous figures for the top three participating systems for both sensevalsthe top three systems for senseval2 are jhu smuls and kunlp the top three systems for senseval1 are hopkins etspu and tilburg as shown in table 4 svm with all four knowledge sources achieves accuracy higher than the best official scores of both sensevalswe also conducted paired t test to see if one system is significantly better than anotherthe t statistic of the difference between each pair of recall figures is computed giving rise to a p valuea large p value indicates that the two systems are not significantly different from each otherthe comparison between our learning algorithms and the top three participating systems is given in table 5note that we can only compare macroaveraged recall for senseval1 systems since the sense of each individual test instance output by the senseval1 participating systems is not availablethe comparison indicates that our svm system is better than the best official senseval2 and senseval1 systems at the level of significance 005note that we are able to obtain stateoftheart results using a single learning algorithm without resorting to combining multiple learning algorithmsseveral top senseval2 participating systems have attempted the combination of classifiers using different learning algorithmsin senseval2 jhu used a combination of various learning algorithms with various knowledge sources such as surrounding words local collocations syntactic relations and morphological informationsmuls used a knearest neighbor algorithm with features such as keywords collocations pos and name entitieskunlp used classification information model an entropybased learning algorithm with local topical and bigram contexts and their posin senseval1 hopkins used hierarchical decision lists with features similar to those used by jhu in senseval2 etspu used a naive bayes classifier with topical and local words and their pos tilburg used a knearest neighbor algorithm with features similar to those used by tilburg also used dictionary examples as additional training databased on our experimental results there appears to be no single universally best knowledge sourceinstead knowledge sources and learning algorithms interact and influence each otherfor example local collocations contribute the most for svm while partsofspeech contribute the most for nbnb even outperforms svm if only pos is usedin addition different learning algorithms benefit differently from feature selectionsvm performs best without feature selection whereas nb performs best with some feature selection we will investigate the effect of more elaborate feature selection schemes on the performance of different learning algorithms for wsd in future workalso using the combination of four knowledge sources gives better performance than using any single individual knowledge source for most algorithmson the senseval2 test set svm achieves 654 648 618 and 605 as knowledge sources are removed 
one at a timebefore concluding we note that the senseval2 participating system umdsst also used svm with surrounding words and local collocations as featureshowever they reported recall of only 568in contrast our implementation of svm using the two knowledge sources of surrounding words and local collocations achieves recall of 618following the description in our own reimplementation of umdsst gives a recall of 586 close to their reported figure of 568the performance drop from 618 may be due to the different collocations used in the two systems
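to make the first two knowledge sources concrete, the sketch below extracts the pos-of-neighbouring-words features and local collocation strings for a target occurrence, following the "iron bars" example above. the pos tags are supplied by hand here, and the real system additionally performs sentence segmentation, feature-value selection and the other knowledge sources, so this is only an illustrative fragment.

def pos_features(tags, idx, window=3):
    """pos of the target token and of up to `window` neighbours on each side."""
    feats = {}
    for offset in range(-window, window + 1):
        j = idx + offset
        feats["P%+d" % offset] = tags[j] if 0 <= j < len(tags) else "<null>"
    return feats

def collocation(tokens, idx, i, j):
    """ordered token string from offset i to offset j around the target (0 = target)."""
    parts = []
    for offset in range(i, j + 1):
        k = idx + offset
        parts.append(tokens[k].lower() if 0 <= k < len(tokens) else "<null>")
    return " ".join(parts)

if __name__ == "__main__":
    tokens = ["Reid", "saw", "me", "looking", "at", "the", "iron", "bars"]
    tags = ["NNP", "VBD", "PRP", "VBG", "IN", "DT", "NN", "NNS"]
    idx = tokens.index("bars")
    print(pos_features(tags, idx))
    print(collocation(tokens, idx, -2, -1))  # "the iron"
    print(collocation(tokens, idx, -1, 1))   # "iron bars <null>"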
W02-1006
an empirical evaluation of knowledge sources and learning algorithms for word sense disambiguationin this paper we evaluate a variety of knowledge sources and supervised learning algorithms for word sense disambiguation on senseval2 and senseval1 dataour knowledge sources include the partofspeech of neighboring words single words in the surrounding context local collocations and syntactic relationsthe learning algorithms evaluated include support vector machines naive bayes adaboost and decision tree algorithmswe present empirical results showing the relative contribution of the component knowledge sources and the different learning algorithmsin particular using all of these knowledge sources and svm achieves accuracy higher than the best official scores on both senseval2 and senseval1 test dataour feature set consists of the following four types local context ngrams of nearby words global context from all the words in the given context partsofspeech ngrams of nearby words and syntactic information obtained from parser output
thumbs up sentiment classification using machine learning techniques we consider the problem of classifying documents not by topic but by overall sentiment eg determining whether a review is positive or negative using movie reviews as data we find that standard machine learning techniques definitively outperform humanproduced baselines however the three machine learning methods we employed do not perform as well on sentiment classification as on traditional topicbased categorization we conclude by examining factors that make the sentiment classification problem more challenging today very large amounts of information are available in online documentsas part of the effort to better organize this information for users researchers have been actively investigating the problem of automatic text categorizationthe bulk of such work has focused on topical categorization attempting to sort documents according to their subject matter however recent years have seen rapid growth in online discussion groups and review sites where a crucial characteristic of the posted articles is their sentiment or overall opinion towards the subject matter for example whether a product review is positive or negativelabeling these articles with their sentiment would provide succinct summaries to readers indeed these labels are part of the appeal and valueadd of such sites as wwwrottentomatoescom which both labels movie reviews that do not contain explicit rating indicators and normalizes the different rating schemes that individual reviewers usesentiment classification would also be helpful in business intelligence applications and recommender systems tatemura where user input and feedback could be quickly summarized indeed in general freeform survey responses given in natural language format could be processed using sentiment categorizationmoreover there are also potential applications to message filtering for example one might be able to use sentiment information to recognize and discard flamesin this paper we examine the effectiveness of applying machine learning techniques to the sentiment classification problema challenging aspect of this problem that seems to distinguish it from traditional topicbased classification is that while topics are often identifiable by keywords alone sentiment can be expressed in a more subtle mannerfor example the sentence how could anyone sit through this movie contains no single word that is obviously negativethus sentiment seems to require more understanding than the usual topicbased classificationso apart from presenting our results obtained via machine learning techniques we also analyze the problem to gain a better understanding of how difficult it isthis section briefly surveys previous work on nontopicbased text categorizationone area of research concentrates on classifying documents according to their source or source style with statisticallydetected stylistic variation serving as an important cueexamples include author publisher nativelanguage background and brow another more related area of research is that of determining the genre of texts subjective genres such as editorial are often one of the possible categories other work explicitly attempts to find features indicating that subjective language is being used but while techniques for genre categorization and subjectivity detection can help us recognize documents that express an opinion they do not address our specific classification task of determining what that opinion actually ismost previous research on sentimentbased 
classification has been at least partially knowledgebasedsome of this work focuses on classifying the semantic orientation of individual words or phrases using linguistic heuristics or a preselected set of seed words past work on sentimentbased categorization of entire documents has often involved either the use of models inspired by cognitive linguistics or the manual or semimanual construction of discriminantword lexicons interestingly our baseline experiments described in section 4 show that humans may not always have the best intuition for choosing discriminating wordsturneys work on classification of reviews is perhaps the closest to ours2 he applied a specific unsupervised learning technique based on the mutual information between document phrases and the words excellent and poor where the mutual information is computed using statistics gathered by a search enginein contrast we utilize several completely priorknowledgefree supervised machine learning methods with the goal of understanding the inherent difficulty of the taskfor our experiments we chose to work with movie reviewsthis domain is experimentally convenient because there are large online collections of such reviews and because reviewers often summarize their overall sentiment with a machineextractable rating indicator such as a number of stars hence we did not need to handlabel the data for supervised learning or evaluation purposeswe also note that turney found movie reviews to be the most 2indeed although our choice of title was completely independent of his our selections were eerily similar difficult of several domains for sentiment classification reporting an accuracy of 6583 on a 120document set but we stress that the machine learning methods and features we use are not specific to movie reviews and should be easily applicable to other domains as long as sufficient training data existsour data source was the internet movie database archive of the recartsmoviesreviews newsgroup3 we selected only reviews where the author rating was expressed either with stars or some numerical value ratings were automatically extracted and converted into one of three categories positive negative or neutralfor the work described in this paper we concentrated only on discriminating between positive and negative sentimentto avoid domination of the corpus by a small number of prolific reviewers we imposed a limit of fewer than 20 reviews per author per sentiment category yielding a corpus of 752 negative and 1301 positive reviews with a total of 144 reviewers representedthis dataset will be available online at httpwwwcscornelledupeoplepabomoviereviewdata intuitions seem to differ as to the difficulty of the sentiment detection probleman expert on using machine learning for text categorization predicted relatively low performance for automatic methodson the other hand it seems that distinguishing positive from negative reviews is relatively easy for humans especially in comparison to the standard text categorization problem where topics can be closely relatedone might also suspect that there are certain words people tend to use to express strong sentiments so that it might suffice to simply produce a list of such words by introspection and rely on them alone to classify the textsto test this latter hypothesis we asked two graduate students in computer science to choose good indicator words for positive and negative sentiments in movie reviewstheir selections shown in figure 1 seem intuitively plausiblewe then converted their responses into 
simple decision procedures that essentially count the number of the proposed positive and negative words in a given documentwe applied these procedures to uniformlydistributed data so that the randomchoice baseline result would be 50as shown in figure 1 the accuracy percentage of documents classified correctly for the humanbased classifiers were 58 and 64 respectively4 note that the tie rates percentage of documents where the two sentiments were rated equally likely are quite highs while the tie rates suggest that the brevity of the humanproduced lists is a factor in the relatively poor performance results it is not the case that size alone necessarily limits accuracybased on a very preliminary examination of frequency counts in the entire corpus plus introspection we created a list of seven positive and seven negative words shown in figure 2as that figure indicates using these words raised the accuracy to 69also although this third list is of comparable length to the other two it has a much lower tie rate of 16we further observe that some of the items in this third list such as or still would probably not have been proposed as possible candidates merely through introspection although upon reflection one sees their merit we conclude from these preliminary experiments that it is worthwhile to explore corpusbased techniques rather than relying on prior intuitions to select good indicator features and to perform sentiment classification in generalthese experiments also provide us with baselines for experimental comparison in particular the third baseline of 69 might actually be considered somewhat difficult to beat since it was achieved by examination of the test data our aim in this work was to examine whether it suffices to treat sentiment classification simply as a special case of topicbased categorization or whether special sentimentcategorization methods need to be developedwe experimented with three standard algorithms naive bayes classification maximum entropy classification and support vector machinesthe philosophies behind these three algorithms are quite different but each has been shown to be effective in previous text categorization studiesto implement these machine learning algorithms on our document data we used the following standard bagoffeatures frameworklet f1 fmj be a predefined set of m features that can appear in a document examples include the word still or the bigram really stinkslet ni be the number of times fi occurs in document d then each document d is represented by the document vector d n2 nmone approach to text classification is to assign to a given document d the class c arg maxc pwe derive the naive bayes classifier by first observing that by bayes rule where p plays no role in selecting cto estimate the term p naive bayes decomposes it by assuming the fis are conditionally independent given our training method consists of relativefrequency estimation of p and p using addone smoothingdespite its simplicity and the fact that its conditional independence assumption clearly does not hold in realworld situations naive bayesbased text categorization still tends to perform surprisingly well indeed domingos and pazzani show that naive bayes is optimal for certain problem classes with highly dependent featureson the other hand more sophisticated algorithms might yield better results we examine two such algorithms nextmaximum entropy classification is an alternative technique which has proven effective in a number of natural language processing applications nigam et al 
show that it sometimes but not always outperforms naive bayes at standard text classificationits estimate of p takes the following exponential form where z is a normalization functionfic is a featureclass function for feature fi and class c defined as follows6 class c the parameter values are set so as to maximize the entropy of the induced distribution subject to the constraint that the expected values of the featureclass functions with respect to the model are equal to their expected values with respect to the training data the underlying philosophy is that we should choose the model making the fewest assumptions about the data while still remaining consistent with it which makes intuitive sensewe use ten iterations of the improved iterative scaling algorithm for parameter training together with a gaussian prior to prevent overfitting support vector machines have been shown to be highly effective at traditional text categorization generally outperforming naive bayes they are largemargin rather than probabilistic classifiers in contrast to naive bayes and maxentin the twocategory case the basic idea behind the training procedure is to find a hyperplane represented by vector w that not only separates the document vectors in one class from those in the other but for which the separation or margin is as large as possiblethis search corresponds to a constrained optimization problem letting cj e 11 11 be the correct class of document dj the solution can be written as where the js are obtained by solving a dual opti mization problemthose dj such that j is greater for instance a particular featureclass function might fire if and only if the bigram still hate appears and the documents sentiment is hypothesized to be negative7 importantly unlike naive bayes maxent makes no assumptions about the relationships between features and so might potentially perform better when conditional independence assumptions are not metthe ics are featureweight parameters inspection of the definition of pme shows that a large ic means that fi is considered a strong indicator for than zero are called support vectors since they are the only document vectors contributing to wclassification of test instances consists simply of determining which side of ws hyperplane they fall onwe used joachims svmlight package8 for training and testing with all parameters set to their default values after first lengthnormalizing the document vectors as is standard we used documents from the moviereview corpus described in section 3to create a data set with uniform class distribution we randomly selected 700 positivesentiment and 700 negativesentiment documentswe then divided this data into three equalsized folds maintaining balanced class distributions in each foldall results reported below as well as the baseline results from section 4 are the average threefold crossvalidation results on this data to prepare the documents we automatically removed the rating indicators and extracted the textual information from the original html document format treating punctuation as separate lexical itemsno stemming or stoplists were usedone unconventional step we took was to attempt to model the potentially important contextual effect of negation clearly good and not very good indicate opposite sentiment orientationsadapting a technique of das and chen we added the tag not to every word between a negation word and the first punctuation mark following the negation wordfor this study we focused on features based on unigrams and bigramsbecause training 
maxent is expensive in the number of features we limited consideration to the 16165 unigrams appearing at least four times in our 1400document corpus and the 16165 bigrams occurring most often in the same data note that we did not add negation tags to the bigrams since we consider bigrams to be an orthogonal way to incorporate contextinitial unigram results the classification accuracies resulting from using only unigrams as features are shown in line of figure 3as a whole the machine learning algorithms clearly surpass the randomchoice baseline of 50they also handily beat our two humanselectedunigram baselines of 58 and 64 and furthermore perform well in comparison to the 69 baseline achieved via limited access to the testdata statistics although the improvement in the case of svms is not so largeon the other hand in topicbased classification all three classifiers have been reported to use bagofunigram features to achieve accuracies of 90 and above for particular categories 9 and such results are for settings with more than two classesthis provides suggestive evidence that sentiment categorization is more difficult than topic classification which corresponds to the intuitions of the text categorization expert mentioned above10 nonetheless we still wanted to investigate ways to improve our sentiment categorization results these experiments are reported belowfeature frequency vs presence recall that we represent each document d by a featurecount vector nhowever the definition of the 9joachims used stemming and stoplists in some of their experiments nigam et al like us did not10we could not perform the natural experiment of attempting topicbased categorization on our data because the only obvious topics would be the film being reviewed unfortunately in our data the maximum number of reviews per movie is 27 too small for meaningful resultsmaxent featureclass functions fic only reflects the presence or absence of a feature rather than directly incorporating feature frequencyin order to investigate whether reliance on frequency information could account for the higher accuracies of naive bayes and svms we binarized the document vectors setting ni to 1 if and only feature fi appears in d and reran naive bayes and svmlight on these new vectors11 as can be seen from line of figure 3 better performance is achieved by accounting only for feature presence not feature frequencyinterestingly this is in direct opposition to the observations of mccallum and nigam with respect to naive bayes topic classificationwe speculate that this indicates a difference between sentiment and topic categorization perhaps due to topic being conveyed mostly by particular content words that tend to be repeated but this remains to be verifiedin any event as a result of this finding we did not incorporate frequency information into naive bayes and svms in any of the following experimentsbigrams in addition to looking specifically for negation words in the context of a word we also studied the use of bigrams to capture more context in generalnote that bigrams and unigrams are surely not conditionally independent meaning that the feature set they comprise violates naive bayes conditionalindependence assumptions on the other hand recall that this does not imply that naive bayes will necessarily do poorly line of the results table shows that bigram information does not improve performance beyond that of unigram presence although adding in the bigrams does not seriously impact the results even for naive bayesthis would not rule 
out the possibility that bigram presence is as equally useful a feature as unigram presence in fact pedersen found that bigrams alone can be effective features for word sense disambiguationhowever comparing line to line shows that relying just on bigrams causes accuracy to decline by as much as 58 percentage pointshence if context is in fact important as our intuitions suggest bigrams are not effective at capturing it in our setting11alternatively we could have tried integrating frequency information into maxenthowever featureclass functions are traditionally defined as binary hence explicitly incorporating frequencies would require different functions for each count making training impracticalbut cfparts of speech we also experimented with appending pos tags to every word via oliver masons qtag program12 this serves as a crude form of word sense disambiguation for example it would distinguish the different usages of love in i love this movie versus this is a love story however the effect of this information seems to be a wash as depicted in line of figure 3 the accuracy improves slightly for naive bayes but declines for svms and the performance of maxent is unchangedsince adjectives have been a focus of previous work in sentiment detection 13 we looked at the performance of using adjectives aloneintuitively we might expect that adjectives carry a great deal of information regarding a documents sentiment indeed the humanproduced lists from section 4 contain almost no other parts of speechyet the results shown in line of figure 3 are relatively poor the 2633 adjectives provide less useful information than unigram presenceindeed line shows that simply using the 2633 most frequent unigrams is a better choice yielding performance comparable to that of using all 16165 this may imply that applying explicit featureselection algorithms on unigrams could improve performanceposition an additional intuition we had was that the position of a word in the text might make a difference movie reviews in particular might begin with an overall sentiment statement proceed with a plot discussion and conclude by summarizing the authors viewsas a rough approximation to determining this kind of structure we tagged each word according to whether it appeared in the first quarter last quarter or middle half of the document14the results did not differ greatly from using unigrams alone but more refined notions of position might be more successfulthe results produced via machine learning techniques are quite good in comparison to the humangenerated baselines discussed in section 4in terms of relative performance naive bayes tends to do the worst and svms tend to do the best although the differences are not very largeon the other hand we were not able to achieve accuracies on the sentiment classification problem comparable to those reported for standard topicbased categorization despite the several different types of features we triedunigram presence information turned out to be the most effective in fact none of the alternative features we employed provided consistently better performance once unigram presence was incorporatedinterestingly though the superiority of presence information in comparison to frequency information in our setting contradicts previous observations made in topicclassification work what accounts for these two differences difficulty and types of information proving useful between topic and sentiment classification and how might we improve the latterto answer these questions we examined the data 
furtheras it turns out a common phenomenon in the documents was a kind of thwarted expectations narrative where the author sets up a deliberate contrast to earlier discussion for example this film should be brilliantit sounds like a great plot the actors are first grade and the supporting cast is good as well and stallone is attempting to deliver a good performancehowever it cannot hold up or i hate the spice girls3 things the author hates about them why i saw this movie is a really really really long story but i did and one would think i would despise every minute of itbutokay i am really ashamed of it but i enjoyed iti mean i admit it is a really awful movie the ninth floor of hellthe plot is such a mess that it is terriblebut i loved it 15 in these examples a human would easily detect the true sentiment of the review but bagoffeatures classifiers would presumably find these instances difficult since there are many words indicative of the opposite sentiment to that of the entire reviewfundamentally it seems that some form of discourse analysis is necessary or at least some way of determining the focus of each sentence so that one can decide when the author is talking about the film itself makes a similar point noting that for reviews the whole is not necessarily the sum of the partsfurthermore it seems likely that this thwartedexpectations rhetorical device will appear in many types of texts devoted to expressing an overall opinion about some topichence we believe that an important next step is the identification of features indicating whether sentences are ontopic we look forward to addressing this challenge in future workwe thank joshua goodman thorsten joachims jon kleinberg vikas krishna john lafferty jussi myllymaki phoebe sengers richard tong peter turney and the anonymous reviewers for many valuable comments and helpful suggestions and hubie chen and tony faradjian for participating in our baseline experimentsportions of this work were done while the first author was visiting ibm almadenthis paper is based upon work supported in part by the national science foundation under itri am grant iis0081334any opinions findings and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the national science foundation
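To make the classification setup above concrete, the following is a minimal sketch of a naive Bayes classifier over binary unigram-presence features with add-one smoothing, in the spirit of the study described above. It is not the authors' implementation; the helper names, the toy training documents, and the exact form of the smoothing denominator are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Relative-frequency estimates of P(c) and P(f|c) with add-one smoothing,
    computed over binary presence features (each unigram counted at most once
    per document)."""
    vocab = set()
    class_counts = Counter(labels)
    feat_counts = defaultdict(Counter)   # class -> feature -> number of docs containing it
    for doc, c in zip(docs, labels):
        present = set(doc)
        vocab |= present
        feat_counts[c].update(present)
    priors = {c: n / len(docs) for c, n in class_counts.items()}
    cond = {}
    for c in class_counts:
        denom = sum(feat_counts[c].values()) + len(vocab)   # add-one smoothing (assumed form)
        cond[c] = {f: (feat_counts[c][f] + 1) / denom for f in vocab}
        cond[c]["<unseen>"] = 1 / denom
    return priors, cond

def classify_nb(doc, priors, cond):
    """argmax_c P(c) * prod over features present in doc of P(f|c), in log space."""
    best_class, best_score = None, float("-inf")
    for c in priors:
        score = math.log(priors[c])
        for f in set(doc):
            score += math.log(cond[c].get(f, cond[c]["<unseen>"]))
        if score > best_score:
            best_class, best_score = c, score
    return best_class

docs = [["great", "film"], ["great", "acting"], ["boring", "plot"], ["awful", "film"]]
labels = ["pos", "pos", "neg", "neg"]
priors, cond = train_nb(docs, labels)
print(classify_nb(["boring", "film"], priors, cond))   # -> 'neg'
```

Binarizing each document to feature presence rather than raw counts mirrors the finding reported above that presence information outperforms frequency information for sentiment classification.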
W02-1011
thumbs up? sentiment classification using machine learning techniques. we consider the problem of classifying documents not by topic but by overall sentiment, eg determining whether a review is positive or negative. using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. however, the three machine learning methods we employed do not perform as well on sentiment classification as on traditional topic-based categorization. we conclude by examining factors that make the sentiment classification problem more challenging. we collect reviews from a movie database and rate them as positive, negative or neutral based on the rating given by the reviewer. we suggest that term-based models perform better than the frequency-based alternatives.
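One concrete preprocessing step from the study summarized above is the negation tagging adapted from Das and Chen: every word between a negation word and the first punctuation mark that follows it receives a NOT_ prefix. The sketch below is an illustrative reconstruction; the negation-word list and the token format are assumptions, not the authors' actual code.

```python
# Hypothetical negation-word list and punctuation set; the paper does not
# enumerate the exact sets it used.
NEGATION_WORDS = {"not", "no", "never", "isn't", "didn't", "can't"}
PUNCTUATION = {".", ",", "!", "?", ";", ":"}

def add_negation_tags(tokens):
    """Prefix NOT_ to every token between a negation word and the first
    punctuation mark that follows it."""
    tagged, in_scope = [], False
    for tok in tokens:
        if tok in PUNCTUATION:
            in_scope = False
            tagged.append(tok)
        elif in_scope:
            tagged.append("NOT_" + tok)
        else:
            tagged.append(tok)
            if tok.lower() in NEGATION_WORDS:
                in_scope = True
    return tagged

print(add_negation_tags("i did not like this movie , but the acting was good .".split()))
# ['i', 'did', 'not', 'NOT_like', 'NOT_this', 'NOT_movie', ',',
#  'but', 'the', 'acting', 'was', 'good', '.']
```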
a phrasebased joint probability model for statistical machine translation we present a joint probability model for statistical machine translation which automatically learns word and phrase equivalents from bilingual corpora translations produced with parameters estimated using the joint model are more accurate than translations produced using ibm model 4 1 motivation most of the noisychannelbased models used in statistical machine translation are conditional probability models in the noisychannel framework each source sentence e in a parallel corpus is assumed to generate a target sentence f by means of a stochastic process whose parameters are estimated using traditional them techniques the generative model explains how source words are mapped into target words and how target words are reordered to yield wellformed target sentences a variety of methods are used to account for the reordering stage wordbased templatebased and syntaxbased to name just a few although these models use different generative processes to explain how translated words are reordered in a target language at the lexical level they are quite similar all these models assume that source words are into target individual words may contain a nonexistent element called null we suspect that mt researchers have so far chosen to automatically learn translation lexicons defined only over words for primarily pragmatic reasons large scale bilingual corpora with vocabularies in the range of hundreds of thousands yield very large translation lexicons tuning the probabilities associated with these large lexicons is a difficult enough task to deter one from trying to scale up to learning phrasebased lexicons unfortunately trading space requirements and efficiency for explanatory power often yields nonintuitive results consider for example the parallel corpus of three sentence pairs shown in figure 1 intuitively if we allow any source words to be aligned to any target words the best alignment that we can come up with is the one in figure 1c sentence pair offers strong evidence that b c in language s means the same thing as x in language t on the basis of this evidence we expect the system to also learn from sentence pair that a in language s means the same thing as y in language t unfortunately if one works with translation models that do not allow target words to be aligned to more than one source word as it is the case in the ibm models it is impossible to learn that the phrase b c in language s means the same thing as word x in language t the ibm model 4 for example converges to the word alignments shown in figure 1b and learns the probabilities shown in figure since in the ibm model one cannot link a target word to more than a source word the training procedure train the ibm4 model we used giza ibm4 ttable ibm4 intuitive joint joint ttable p 1 p 1 p 098 p 002 s1 a b c t1 x y s2 b c s1 a b c t1 x y s2 b c s1 a b c t1 x y s2 b c p 032 p 034 p 001 p 033 corresponding conditional table t2 x t2 x t2 x p 1 p 1 p 1 p 1 s3 b s3 b s3 b t3 z t3 z t3 z a b c d e figure 1 alignments and probability distributions in ibm model 4 and our joint phrasebased model yields unintuitive translation probabilities and p and low probability to p in this paper we describe a translation model that assumes that lexical correspondences can be established not only at the word level but at the phrase level as well in constrast with many previous approaches our model does not try to capture how source sentences can be mapped into target sentences but rather how 
source and target sentences can be generated simultaneously in other words in the style of melamed we estimate a joint probability model that can be easily marginalized in order to yield conditional probability models for both sourcetotarget and targettosource machine translation applications the main difference between our work and that of melamed is that we learn joint probability models of translation equivalence not only between words but also between phrases and we show that these models can be used not only for the extraction of bilingual lexicons but also for the automatic translation of unseen sentences in the rest of the paper we first describe our model and explain how it can be implementedtrained we briefly describe a decoding algorithm that works in conjunction with our model and evaluate the performance of a translation system that uses the jointprobability model we end with a discussion of the strengths and weaknesses of our model as compared to other models proposed in the literature 2 a phrasebased joint probability model 21 model 1 in developing our joint probability model we started out with a very simple generative story we assume that each sentence pair in our corpus is generated by the following stochastic process 1 generate a bag of concepts 2 for each concept generate a pair of phrases according to the distribution contain at least one word 3 order the phrases generated in each language so as to create two linear sequences of phrases these sequences correspond to the sentence pairs in a bilingual corpus for simplicity we initially assume that the bag of concepts and the ordering of the generated phrases are modeled by uniform distributions we do not assume that is a hidden variable that generates pair but rather that under these assumptions it follows that the probability of generating a sentence pair using concepts given by the product of all phrasetophrase translation probabilities yield bags of phrases that can be ordered linearly so as to obtain the sentences e and f for example the sentence pair a b c x y can be generated using two concepts and or one concept because in both cases the phrases in each language can be arranged in a sequence that would yield the original sentence pair however the same sentence pair cannot be generated using the concepts and because the sequence x y cannot be recreated from the two phrases y and y similarly the pair cannot be generated using concepts and because the sequence a b c cannot be created by catenating the phrases a c and b we say that a set of concepts can be linearized into a sentence pair if e and f can be obtained by permuting the phrasesandthat characterize concepts we denote this property using the predicate under this model the probability of a given sentence pair can then be obtained by summing up over all possible ways of generating bags of concepts that can linearized to 22 model 2 although model 1 is fairly unsophisticated we have found that it produces in practice fairly good alignments however this model is clearly unsuited for translating unseen sentences as it imposes no constraints on the ordering of the phrases associated with a given concept in order to account for this we modify slightly the generative process in model 1 so as to account for distortions the generative story of model 2 is this 1 generate a bag of concepts 2 initialize e and f to empty sequences 3 randomly take a concept and generate a pair of phrases according to the distribution whereandeach contain at least one word remove then from 4 
append phraseat the end of f letbe the start position ofin f 5 insert phraseat positionin e provided that no other phrase occupies any of the positions betweenand wheregives length of the phrase we hence create the alignment between the two phrasesand with probability is a positionbased distortion distribution 6 repeat steps 3 to 5 untilis empty in model 2 the probability to generate a sentence pair is given by formula where the position of wordof phrasein sen f and denotes the position in tence e of the center of mass of phrase model 2 implements an absolute positionbased distortion model in the style of ibm model 3 we have tried many types of distortion models we eventually settled for the model discussed here because it produces better translations during decoding since the number of factors involved in computing the probability of an alignment does not vary with the size of the target phrases into which source phrases are translated this model is not predisposed to produce translations that are shorter than the source sentences given as input 3 training training the models described in section 2 is computationally challenging since there is an exponential number of alignments that can generate a sentence pair it is clear that we cannot apply the 1 determine highfrequency ngrams in the bilingual corpus 2 initialize the tdistribution table 3 apply them training on the viterbi alignments while using smoothing 4 generate conditional model probabilities figure 2 training algorithm for the phrasebased joint probability model them training algorithm exhaustively to estimate the parameters of our model we apply the algorithm in figure 2 whose steps are motivated and described below 31 determine highfrequency ngrams in e and f if one assumes from the outset that any phrases can be generated from a cept one would need a supercomputer in order to store in the memory a table that models the distribution since we do not have access to computers with unlimited memory we initially learn t distribution entries only for the phrases that occur often in the corpus and for unigrams then through smoothing we learn t distribution entries for the phrases that occur rarely as well in order to be considered in step 2 of the algorithm a phrase has to occur at least five times in the corpus 32 initialize the tdistribution table before the them training procedure starts one has no idea what wordphrase pairs are likely to share the same meaning in other words all alignments that can generate a sentence pair can be assumed to have the same probability under these conditions the evidence that a sentence pair contributes to fact that are generated by the same cept is given by the number of alignments that can be built between that have a concept that is linked to phrasein sentence e and phrase sentence f divided by the total number of alignments that can be built between the two sentences both these numbers can be easily approximated given a sentence e ofwords there are ways in which thewords can be partitioned into setsconcepts where is the ling number of second kind there are also ways in which the words a sentence f can be partitioned into nonempty sets given that any words in e can be mapped to any words in f it follows that there are alignments that can be built between two sentences of lengthsand respectively when a concept generates two phrases of lengthand respectively there are only and words left to link hence the absence of any other information the probability that phrasesandare generated by the same 
concept is given by formula note that the fractional counts returned by equation are only an approximation of the t distribution that we are interested in because the stirling numbers of the second kind do not impose any restriction on the words that are associated with a given concept be consecutive however since formula overestimates the numerator and denominator equally the approximation works well in practice in the second step of the algorithm we apply equation to collect fractional counts for all unigram and highfrequency ngram pairs in the cartesian product defined over the phrases in each sentence pair in a corpus we sum over all these tcounts and we normalize to obtain an initial joint distribution this step amounts to running the them algorithm for one step over all possible alignments in the corpus 33 them training on viterbi alignments given a nonuniform t distribution phrasetophrase alignments have different weights and there are no other tricks one can apply to collect fractional counts over all possible alignments in polynomial time starting with step 3 of the algorithm in figure 2 for each sentence pair in a corpus we greedily produce an initial alignment by linking together phrases so as to create concepts that have high t probabilities we then hillclimb towards the viterbi alignment of highest probability by breaking and merging concepts swapping words between concepts and moving words across concepts we compute the probabilities associated with all the alignments we generate during the hillclimbing process and collect t counts over all concepts in these alignments we apply this viterbibased them training procedure for a few iterations the first iterations estimate the alignment probabilities using model 1 the rest of the iterations estimate the alignment probabilities using model 2 during training we apply smoothing so we can associate nonzero values to phrasepairs that do not occur often in the corpus 34 derivation of conditional probability model at the end of the training procedure we take marginals on the joint probability distributionsand this yields conditional probability distributions and which we use for decoding 35 discussion when we run the training procedure in figure 2 on the corpus in figure 1 after four model 1 iterations we obtain the alignments in figure 1d and the joint and conditional probability distributions shown in figure 1e at prima facie the viterbi alignment for the first sentence pair appears incorrect because we as humans have a natural tendency to build alignments between the smallest phrases possible however note that the choice made by our model is quite reasonable after all in the absence of additional information the model can either assume that a and y mean the same thing or that phrases a b c and x y mean the same thing the model chose to give more weight to the second hypothesis while preserving some probability mass for the first one also note that although the joint distribution puts the second hypothesis at an advantage the conditional distribution does not the conditional distribution in figure 1e is consistent with our intuitions that tell us that it is reasonable both to translate a b c into x y as well as a into y the conditional distribution mirrors perfectly our intuitions 4 decoding for decoding we have implemented a greedy procedure similar to that proposed by germann et al given a foreign sentence f we first produce a gloss of it by selecting phrases inthat the probability we then tively hillclimb by modifying e and the 
alignment between e and f so as to maximize the formula we hillclimb by modifying an existing alignmenttranslation through a set of operations that modify locally the aligmenttranslation built until a given time these operations replace the english side of an alignment with phrases of different probabilities merge and break existing concepts and swap words across concepts the probability p is computed using a simple trigram language model that was trained using the cmu language modeling toolkit the language model is estimated at the word level figure 3 shows the steps taken by our decoder in order to find the translation of sentence je vais me arrˆeter la each intermediate translation in figure 3 is preceded by its probability and succeded by the operation that changes it to yield a translation of higher probability 5 evaluation to evaluate our system we trained both giza and our joint probability model on a frenchenglish parallel corpus of 100000 sentence pairs from the hansard corpus the sentences in the corpus were at most 20 words long the english side had a total of 1073480 words the french side had a total of 1177143 words we translated 500 unseen sentences which were uniformly distributed across lengths 6 8 10 15 and 20 for each group of 100 sentences we manually determined the number of sentences translated perfectly by the ibm model decoder of germann et and the decoder that uses the joint prob model percent perfect translations ibm bleu score sentence length sentence length 6 8 10 15 20 average 6 8 10 15 20 average ibm 36 26 35 11 2 22 02076 02040 02414 02248 02011 02158 phrasebased 43 37 33 19 6 28 02574 02181 02435 02407 02028 02325 table 1 comparison of ibm and phrasebased joint probability models on a translation task je vais me arreter la je vais me arreter la 946e08 i am going to stop there figure 3 example of phrasebased greedy decoding ability model we also evaluated the translations automatically using the ibmbleu metric the results in table 1 show that the phrasedbased translation model proposed in this paper significantly outperforms ibm model 4 on both the subjective and objective metrics 6 discussion 61 limitations the main shortcoming of the phrasebased model in this paper concerns the size of the ttable and the cost of the training procedure we currently apply to keep the memory requirements manageable we arbitrarily restricted the system to learning phrase translations of at most six words on each side also the swap break and merge operations used during the viterbi training are computationally expensive we are currently investigating the applicability of dynamic programming techniques to increase the speed of the training procedure clearly there are language pairs for which it would be helpful to allow concepts to be realized as noncontiguous phrases the english word not for example is often translated into two french words ne and pas but ne and pas almost never occur in adjacent positions in french texts at the outset of this work we attempted to develop a translation model that enables concepts to be mapped into noncontiguous phrases but we were not able to scale and train it on large amounts of data the model described in this paper cannot learn that the english word not corresponds to the french words ne and pas however our model learns to deal with negation by memorizing longer phrase translation equivalents such as and 62 comparison with other work a number of researchers have already gone beyond wordlevel translations in various mt settings for example 
melamed uses wordlevel alignments in order to learn translations of noncompositional compounds och and ney learn phrasetophrase mappings involving word classes which they call templates and exploit them in a statistical machine translation system and marcu extracts phrase translations from automatically aligned corpora and uses them in conjunction with a wordforword statistical translation system however none of these approaches learn simultaneously the translation of phrasestemplates and the translation of words as a consequence there is a chance that the learning procedure will not discover phraselevel patterns that occur often in the data [figure 3: example of phrase-based greedy decoding -- intermediate translations of "je vais me arreter la", each preceded by its probability and followed by the operation that yields a translation of higher probability] in our approach phrases are not treated differently from individual words and as a consequence the likelihood of the them algorithm converging to a better local maximum is increased working with phrase translations that are learned independent of a translation model can also affect the decoder performance for example in our previous work we have used a statistical translation memory of phrases in conjunction with a statistical translation model the phrases in the translation memory were automatically extracted from the viterbi alignments produced by giza and reused in decoding the decoder described in starts from a gloss that uses the translations in the translation memory and then tries to improve on the gloss translation by modifying it incrementally in the style described in section 4 however because the decoder hillclimbs on a wordforword translation model probability it often discards good phrasal translations in favour of wordforword translations of higher probability the decoder in section 4 does not have this problem because it hillclimbs on translation model probabilities in which phrases play a crucial role acknowledgments this work was supported by darpaito grant n660010019814 and by nsfsttr grant 0128379
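As a concrete illustration of Model 1 described above, the following sketch computes the joint probability of a toy sentence pair by brute force: it sums the product of phrase-translation probabilities over every pair of contiguous segmentations of the two sentences and every pairing of their phrases. The phrase-table values are made up, and real training obviously cannot enumerate alignments this way; the sketch only clarifies what the model sums over.

```python
from itertools import permutations

def segmentations(words):
    """All ways to split a word sequence into contiguous, non-empty phrases."""
    if not words:
        yield []
        return
    for i in range(1, len(words) + 1):
        head = tuple(words[:i])
        for rest in segmentations(words[i:]):
            yield [head] + rest

def model1_joint_prob(e_words, f_words, t):
    """Brute-force P(E, F) under Model 1: sum over all bags of concepts
    (phrase pairs) whose phrases can be ordered to yield E and F.
    t maps (e_phrase, f_phrase) tuples to probabilities."""
    total = 0.0
    for e_seg in segmentations(e_words):
        for f_seg in segmentations(f_words):
            if len(e_seg) != len(f_seg):
                continue
            # Model 1 ignores ordering, so any bijection between the
            # source phrases and the target phrases is allowed.
            for perm in permutations(f_seg):
                p = 1.0
                for e_phr, f_phr in zip(e_seg, perm):
                    p *= t.get((e_phr, f_phr), 0.0)
                total += p
    return total

# Toy phrase table in the spirit of the a/b/c vs x/y example (made-up numbers).
t = {(("a",), ("y",)): 0.2,
     (("b", "c"), ("x",)): 0.4,
     (("a", "b", "c"), ("x", "y")): 0.4}
print(model1_joint_prob(["a", "b", "c"], ["x", "y"], t))   # 0.2*0.4 + 0.4 = 0.48
```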
W02-1018
a phrase-based joint probability model for statistical machine translation. we present a joint probability model for statistical machine translation which automatically learns word and phrase equivalents from bilingual corpora. translations produced with parameters estimated using the joint model are more accurate than translations produced using ibm model 4. we propose a joint probability model which searches the phrase alignment space, simultaneously learning translation lexicons for words and phrases, without consideration of potentially suboptimal word alignments and heuristics for phrase extraction.
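The text above notes that the joint phrase table can be marginalized into conditional tables for decoding in either translation direction. A minimal sketch of that step is given below; the joint distribution is made up for illustration and is not taken from the paper.

```python
from collections import defaultdict

def conditionalize(joint):
    """Derive t(f_phrase | e_phrase) and t(e_phrase | f_phrase) from a joint
    phrase table t(e_phrase, f_phrase) by dividing by the marginals."""
    e_marg, f_marg = defaultdict(float), defaultdict(float)
    for (e, f), p in joint.items():
        e_marg[e] += p
        f_marg[f] += p
    t_f_given_e = {(e, f): p / e_marg[e] for (e, f), p in joint.items()}
    t_e_given_f = {(e, f): p / f_marg[f] for (e, f), p in joint.items()}
    return t_f_given_e, t_e_given_f

# Made-up joint distribution over a few phrase pairs.
joint = {(("a",), ("y",)): 0.3,
         (("a",), ("w",)): 0.1,
         (("b", "c"), ("x",)): 0.4,
         (("b",), ("z",)): 0.2}
t_fe, t_ef = conditionalize(joint)
print(t_fe[(("a",), ("y",))])   # ~0.75: "a" maps to "y" three times out of four
print(t_ef[(("a",), ("y",))])   # 1.0: "y" only ever pairs with "a" here
```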
generation of word graphs in statistical machine translation graphs an efficient interface between continous speech recognition and language understanding international conference on acoustics and signal processing 2 pages 119122 minneapolis mn april stefan ortmanns hermann ney and xavier aubert 1997 a word graph algorithm for large vocabcontinuous speech recognition and language january christoph tillmann and hermann ney 2000 word reordering and dpbased search in statistical matranslation in 00 the 18th int whose probability is below this value multiplied with a threshold will not be regarded for further expansionhistogram pruning means that all but the m best hypotheses are pruned for a fixed m for finding the most likely partial hypotheses first all hypotheses with the same set of covered source sentence positions are comparedafter threshold and histogram pruning have been applied we also compare all hypotheses with the same number of covered source sentence positions and apply both pruning types againthose hypotheses that survive the pruning are called the active hypothesesthe word graph structure and the results presented here can easily be transferred to other search algorithms such as a searchit is widely accepted in the community that a significant improvement in translation quality will come from more sophisticated translation and language modelsfor example a language model that goes beyond mgram dependencies could be used but this would be difficult to integrate into the search processas a step towards the solution of this problem we determine not only the single best sentence hypothesis but also other complete sentences that the search algorithm found but that were judged worsewe can then apply rescoring with a refined model to those hypothesesone efficient way to store the different alternatives is a word graphword graphs have been successfully applied in speech recognition for the search process and as an interface to other systems and propose the use of word graphs for natural language generationin this paper we are going to present a concept for the generation of word graphs in a machine translation systemduring search we keep a bookkeeping treeit is not necessary to keep all the information that we need for the expansion of hypotheses during search in this structure thus we store only the following after the search has finished ie when all source sentence positions have been translated we trace back the best sentence in the bookkeeping treeto generate the n best hypotheses after search it is not sufficient to simply trace back the complete hypotheses with the highest probabilities in the bookkeeping because those hypotheses have been recombinedthus many hypotheses with a high probability have not been storedto overcome this problem we enhance the bookkeeping concept and generate a word graph as described in section 33if we want to generate a word graph we have to store both alternatives in the bookkeeping when two hypotheses are recombinedthus an entry in the bookkeeping structure may have several backpointers to different preceding entriesthe bookkeeping structure is no longer a tree but a network where the source is the bookkeeping entry with zero covered source sentence positions and the sink is a node accounting for complete hypotheses this leads us to the concept of word graph nodes and edges containing the following information the probabilities according to the different models the language model and the translation submodels the backpointer to the preceding 
bookkeeping entryafter the pruning in beam search all hypotheses that are no longer active do not have to be kept in the bookkeeping structurethus we can perform garbage collection and remove all those bookkeeping entries that cannot be reached from the backpointers of the active hypothesesthis reduces the size of the bookkeeping structure significantlyan example of a word graph can be seen in figure 3to keep the presentation simple we chose an example without reordering of sentence positionsthe words on the edges are the produced target words and the bitvectors in the nodes show the covered source sentence positionsif an edge is labeled with two words this means that the first english word has no equivalence in the source sentence like just and have in figure 3the reference translation what did you say is contained in the graph but it has a slightly lower probability than the sentence what do you say which is then chosen by the single best searchthe recombination of hypotheses can be seen in the nodes with two or more incoming edges those hypotheses have been recombined because they were indistinguishable by translation and language model stateto study the effect of the word graph size on the translation quality we produce a conservatively large word graphthen we apply word graph pruning with a threshold t 1 and study the change of graph error rate the pruning is based on the beam search concept also used in the single best search we determine the probability of the best sentence hypothesis in the word graphall hypotheses in the graph which probability is lower than this maximum probability multiplied with the pruning threshold are discardedif the pruning threshold t is zero the word graph is not pruned at all and if t 1 we retain only the sentence with maximum probabilityin single best search a standard trigram language model is usedsearch with a bigram language model is much faster but it yields a lower translation qualitytherefore we apply a twopass approach as it was widely used in speech recognition in the past this method combines both advantages in the following way a word graph is constructed using a bigram language model and is then rescored with a trigram language modelthe rescoring algorithm is based on dynamic programming a description can be found in the results of the comparison of the onepass and the twopass search are given in section 5we use a search for finding the n best sentences in a word graph starting in the root of the graph we successively expand the sentence hypothesesthe probability of the partial hypothesis is obtained by multiplying the probabilities of the edges expanded for this sentenceas rest cost estimation we use the probabilities determined in a backward pass as follows for each node in the graph we calculate the probability of a best path from this node to the goal node ie the highest probability for completing a partial hypothesisthis rest cost estimation is perfect because it takes the exact probability as heuristic ie the probability of the partial hypothesis multiplied with the rest cost estimation yields the actual probability of the complete hypothesisthus the n best hypothesis are extracted from the graph without additional overhead of finding sentences with a lower probabilityof course the hypotheses must not be recombined during this searchwe have to keep every partial hypothesis in the priority queue in order to determine the n best sentencesotherwise we might lose one of them by recombinationthe graph error rate is computed by determining 
that sentence in the word graph that has the minimum levenstein distance to a given referencethus it is a lower bound for the word error rate and gives a measurement of what can be achieved by rescoring with more complex modelsthe calculation of the graph error rate is performed by a dynamic programming based algorithmits space complexity is the number of graph nodes times the length of the reference translationin our experiments we varied the word graph pruning threshold in order to obtain word graphs of different densities ie different numbers of hypothesesthe word graph density is computed as the total number of word graph edges divided by the number of reference sentence words analogously to the word graph density in speech recognitionthe effect of pruning on the graph error rate is shown in table 3the value of the pruning threshold is given as the negative logarithm of the probabilitythus t 0 refers to pruning everything but the best hypothesisfigure 4 shows the change in graph error rate in relation to the average graph densitywe see that for graph densities up to 200 the graph error rate significantly changes if the graph is enlargedthe saturation point of the ger lies at 13 and is reached for an average graph density about 1000 which relates to a pruning threshold of 20we have presented a concept for constructing word graphs for statistical machine translation by extending the single best search algorithmexperiments have shown that the graph error rate significantly decreases for rising word graph densitiesthe quality of the hypotheses contained in a word graph is better than of those in an nbest listthis indicates that word graph rescoring can yield a significant gain in translation qualityfor the future we plan the application of refined translation and language models for rescoring on word graphs
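The n-best extraction just described (exact rest costs from a backward pass over the graph, followed by A* expansion without recombination) is compact enough to sketch directly. The following is a minimal illustration, not the authors' implementation: the edge representation (tuples of source node, target node, word, log-probability) and all function names are assumptions made for the example, and log-probabilities are summed in place of multiplied probabilities.

```python
import heapq
import itertools
from collections import defaultdict

def n_best_sentences(edges, source, sink, n=10):
    """Extract the n best sentences from a word graph (a DAG).

    edges: iterable of (src, dst, word, logprob) tuples; a path from
    `source` to `sink` spells out one target sentence, scored by the sum
    of its edge log-probabilities.  (Hypothetical format for this sketch.)

    A backward pass first computes, for every node, the score of the best
    completion to the sink.  Using this exact rest cost, the A* expansion
    pops complete hypotheses in order of their true score, and no
    recombination is performed, so distinct sentences are never merged.
    """
    out_edges = defaultdict(list)
    nodes = {source, sink}
    for src, dst, word, lp in edges:
        out_edges[src].append((dst, word, lp))
        nodes.update((src, dst))

    # Backward pass: best log-probability of completing a path to the sink.
    best_to_sink = {sink: 0.0}
    def completion(node):
        if node not in best_to_sink:
            best_to_sink[node] = max(
                (lp + completion(dst) for dst, _, lp in out_edges[node]),
                default=float("-inf"))
        return best_to_sink[node]
    for node in nodes:
        completion(node)

    # A* expansion: priority is the partial score plus the exact rest cost,
    # i.e. the true score of the best full sentence through this prefix.
    counter = itertools.count()            # tie-breaker for the heap
    heap = [(-completion(source), next(counter), 0.0, source, [])]
    results = []
    while heap and len(results) < n:
        _, _, score, node, words = heapq.heappop(heap)
        if node == sink:
            results.append((score, words))
            continue
        for dst, word, lp in out_edges[node]:
            priority = -(score + lp + completion(dst))
            heapq.heappush(heap, (priority, next(counter),
                                  score + lp, dst, words + [word]))
    return results
```

In the setting described above, the edge weights would come from the translation and language model scores kept with the bookkeeping entries, and the resulting n-best list is what gets rescored with a refined model.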
W02-1021
generation of word graphs in statistical machine translation. we generate word graphs for a bottom-top search with the ibm constraints. a word graph is a weighted directed acyclic graph in which each node represents a partial translation hypothesis and each edge is labelled with a word of the target sentence and is weighted according to the scores given by the model.
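As a companion sketch, the graph error rate computation reported above (the minimum Levenshtein distance between the reference and any path through the word graph, computed by dynamic programming with space proportional to the number of graph nodes times the reference length) could look as follows. The edge-tuple format and function names are again assumptions for illustration; multi-word edge labels, which the paper allows, would simply be split into a chain of single-word edges first.

```python
from collections import defaultdict, deque

def min_graph_edit_distance(edges, source, sink, reference):
    """Minimum word edit distance between a reference sentence and any
    path through a word graph.

    edges: (src, dst, word) triples of a DAG word graph (assumed format).
    reference: list of reference words.
    Corpus-level GER would sum these distances over all sentences and
    divide by the total number of reference words.  One table of
    len(reference)+1 entries is kept per node, matching the space
    complexity mentioned in the paper.
    """
    out_edges = defaultdict(list)
    indegree = defaultdict(int)
    nodes = {source, sink}
    for src, dst, word in edges:
        out_edges[src].append((dst, word))
        indegree[dst] += 1
        nodes.update((src, dst))

    INF = float("inf")
    R = len(reference)
    # dist[v][j] = min edit distance between any partial path source->v
    # and the first j reference words.
    dist = {v: [INF] * (R + 1) for v in nodes}
    dist[source] = list(range(R + 1))  # only deletions possible so far

    # Process nodes in topological order (Kahn's algorithm) so every
    # predecessor is final before a node is expanded.
    queue = deque(v for v in nodes if indegree[v] == 0)
    while queue:
        u = queue.popleft()
        # Reference-word deletions at node u itself.
        for j in range(1, R + 1):
            dist[u][j] = min(dist[u][j], dist[u][j - 1] + 1)
        for v, word in out_edges[u]:
            row = dist[v]
            # Insertion: emit `word` without consuming a reference word.
            for j in range(R + 1):
                row[j] = min(row[j], dist[u][j] + 1)
            # Match / substitution against reference word j.
            for j in range(1, R + 1):
                cost = 0 if word == reference[j - 1] else 1
                row[j] = min(row[j], dist[u][j - 1] + cost)
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    return dist[sink][R]
```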
a bootstrapping method for learning semantic lexicons using extraction pattern contexts this paper describes a bootstrapping algorithm called basilisk that learns highquality semantic lexicons for multiple categories basilisk begins with an unannotated corpus and seed words for each semantic category which are then bootstrapped to learn new words for each category basilisk hypothesizes the semantic class of a word based on collective information over a large body of extraction pattern contexts we evaluate basilisk on six semantic categories the semantic lexicons produced by basilisk have higher precision than those produced by previous techniques with several categories showing substantial improvement in recent years several algorithms have been developed to acquire semantic lexicons automatically or semiautomatically using corpusbased techniquesfor our purposes the term semantic lexicon will refer to a dictionary of words labeled with semantic classes semantic class information has proven to be useful for many natural language processing tasks including information extraction anaphora resolution question answering and prepositional phrase attachment although some semantic dictionaries do exist these resources often do not contain the specialized vocabulary and jargon that is needed for specific domainseven for relatively general texts such as the wall street journal or terrorism articles roark and charniak reported that 3 of every 5 terms generated by their semantic lexicon learner were not present in wordnetthese results suggest that automatic semantic lexicon acquisition could be used to enhance existing resources such as wordnet or to produce semantic lexicons for specialized domainswe have developed a weakly supervised bootstrapping algorithm called basilisk that automatically generates semantic lexiconsbasilisk hypothesizes the semantic class of a word by gathering collective evidence about semantic associations from extraction pattern contextsbasilisk also learns multiple semantic classes simultaneously which helps constrain the bootstrapping processfirst we present basilisks bootstrapping algorithm and explain how it differs from previous work on semantic lexicon inductionsecond we present empirical results showing that basilisk outperforms a previous algorithmthird we explore the idea of learning multiple semantic categories simultaneously by adding this capability to basilisk as well as another bootstrapping algorithmfinally we present results showing that learning multiple semantic categories simultaneously improves performancebasilisk is a weakly supervised bootstrapping algorithm that automatically generates semantic lexiconsfigure 1 shows the highlevel view of basilisks bootstrapping processthe input to basilisk is an unannotated text corpus and a few manually defined seed words for each semantic categorybefore bootstrapping begins we run an extraction pattern learner over the corpus which generates patterns to extract every noun phrase in the corpusthe bootstrapping process begins by selecting a subset of the extraction patterns that tend to extract the seed wordswe call this the pattern poolthe nouns extracted by these patterns become candidates for the lexicon and are placed in a candidate word poolbasilisk scores each candidate word by gathering all patterns that extract it and measuring how strongly those contexts are associated with words that belong to the semantic categorythe five best candidate words are added to the lexicon and the process starts over againin this 
section we describe basilisks bootstrapping algorithm in more detail and discuss related workthe input to basilisk is a text corpus and a set of seed wordswe generated seed words by sorting the words in the corpus by frequency and manually identifying the 10 most frequent nouns that belong to each categorythese seed words form the initial semantic lexiconin this section we describe the learning process for a single semantic categoryin section 3 we will explain how the process is adapted to handle multiple categories simultaneouslyto identify new lexicon entries basilisk relies on extraction patterns to provide contextual evidence that a word belongs to a semantic classas our representation for extraction patterns we used the autoslog system autoslogs extraction patterns represent linguistic expressions that extract a noun phrase in one of three syntactic roles subject direct object or prepositional phrase objectfor example three patterns that would extract people are was arrested murdered and collaborated with extraction patterns represent linguistic contexts that often reveal the meaning of a word by virtue of syntax and lexical semanticsextraction patterns are typically designed to capture role relationshipsfor example consider the verb robbed when it occurs in the active voicethe subject of robbed identifies the perpetrator while the direct object of robbed identifies the victim or targetbefore bootstrapping begins we run autoslog exhaustively over the corpus to generate an extraction generate all extraction patterns in the corpus and record their extractions pattern for every noun phrase that appearsthe patterns are then applied to the corpus and all of their extracted noun phrases are recordedfigure 2 shows the bootstrapping process that follows which we explain in the following sectionsthe first step in the bootstrapping process is to score the extraction patterns based on their tendency to extract known category membersall words that are currently defined in the semantic lexicon are considered to be category membersbasilisk scores each pattern using the rlogf metric that has been used for extraction pattern learning the score for each pattern is computed as where fi is the number of category members extracted by patterni and ni is the total number of nouns extracted by patterniintuitively the rlogf metric is a weighted conditional probability a pattern receives a high score if a high percentage of its extractions are category members or if a moderate percentage of its extractions are category members and it extracts a lot of themthe top n extraction patterns are put into a pattern poolbasilisk uses a value of n20 for the first iteration which allows a variety of patterns to be considered yet is small enough that all of the patterns are strongly associated with the category1 the purpose of the pattern pool is to narrow down the field of candidates for the lexiconbasilisk collects all noun phrases extracted by patterns in the pattern pool and puts the head noun of each np into the candidate word poolonly these nouns are considered for addition to the lexiconas the bootstrapping progresses using the same value n20 causes the candidate pool to become stagnantfor example let us assume that basilisk performs perfectly adding only valid category words to the lexiconafter some number of iterations all of the valid category members extracted by the top 20 patterns will have been added to the lexicon leaving only noncategory words left to considerfor this reason the pattern pool needs to be 
infused with new patterns so that more nouns become available for considerationto achieve this effect we increment the value of n by one after each bootstrapping iterationthis ensures that there is always at least one new pattern contributing words to the candidate word pool on each successive iterationthe next step is to score the candidate wordsfor each word basilisk collects every pattern that extracted the wordall extraction patterns are used during this step not just the patterns in the pattern poolinitially we used a scoring function that computes the average number of category members extracted by the patternsthe formula is where pi is the number of patterns that extract wordi and fj is the number of distinct category members extracted by pattern ja word receives a high score if it is extracted by patterns that also have a tendency to extract known category membersas an example suppose the word peru is in the candidate word pool as a possible locationbasilisk finds all patterns that extract peru and computes the average number of known locations extracted by those patternslet us assume that the three patterns shown below extract peru and that the underlined words are known locationsperu would receive a score of 3 23intuitively this means that patterns that extract peru also extract on average 23 known location wordsunfortunately this scoring function has a problemthe average can be heavily skewed by one pattern that extracts a large number of category membersfor example suppose word w is extracted by 10 patterns 9 which do not extract any category members but the tenth extracts 50 category membersthe average number of category members extracted by these patterns will be 5this is misleading because the only evidence linking word w with the semantic category is a single highfrequency extraction pattern to alleviate this problem we modified the scoring function to compute the average logarithm of the number of category members extracted by each patternthe logarithm reduces the influence of any single patternwe will refer to this scoring metric as the avglog function which is defined belowsince log2 0 we add one to each frequency count so that patterns which extract a single category member contribute a positive valueusing this scoring metric all words in the candidate word pool are scored and the top five words are added to the semantic lexiconthe pattern pool and the candidate word pool are then emptied and the bootstrapping process starts over againseveral weakly supervised learning algorithms have previously been developed to generate semantic lexicons from text corporariloff and shepherd developed a bootstrapping algorithm that exploits lexical cooccurrence statistics and roark and charniak refined this algorithm to focus more explicitly on certain syntactic structureshale ge and charniak devised a technique to learn the gender of wordscaraballo and hearst created techniques to learn hypernymhyponym relationshipsnone of these previous algorithms used extraction patterns or similar contexts to infer semantic class associationsseveral learning algorithms have also been developed for named entity recognition used contextual information of a different sort than we dofurthermore our research aims to learn general nouns rather than proper nouns so many of the features commonly used to great advantage for named entity recognition are not applicable to our taskthe algorithm most closely related to basilisk is metabootstrapping which also uses extraction pattern contexts for semantic 
lexicon inductionmetabootstrapping identifies a single extraction pattern that is highly correlated with a semantic category and then assumes that all of its extracted noun phrases belong to the same categoryhowever this assumption is often violated which allows incorrect terms to enter the lexiconriloff and jones acknowledged this issue and used a second level of bootstrapping to alleviate this problemwhile metabootstrapping trusts individual extraction patterns to make unilateral decisions basilisk gathers collective evidence from a large set of extraction patternsas we will demonstrate in section 22 basilisks approach produces better results than metabootstrapping and is also considerably more efficient because it uses only a single bootstrapping loop however metabootstrapping produces categoryspecific extraction patterns in addition to a semantic lexicon while basilisk focuses exclusively on semantic lexicon inductionto evaluate basilisks performance we ran experiments with the muc4 corpus which contains 1700 texts associated with terrorismwe used basilisk to learn semantic lexicons for six semantic categories building event human location time and weaponbefore we ran these experiments one of the authors manually labeled every head noun in the corpus that was found by an extraction patternthese manual annotations were the gold standardtable 1 shows the breakdown of semantic categories for the head nounsthese numbers represent a baseline an algorithm that randomly selects words would be expected to get accuracies consistent with these numbersthree semantic lexicon learners have previously been evaluated on the muc4 corpus and of these metabootstrapping achieved the best resultsso we implemented the metabootstrapping algorithm ourselves to directly compare its performance with that of basiliska difference between the original implementation and ours is that our version learns individual nouns instead of noun phraseswe believe that learning individual nouns is a more conservative approach because noun phrases often overlap consequently our metabootstrapping results differ from those reported in figure 3 shows the results for basilisk and metabootstrapping we ran both algorithms for 200 iterations so that 1000 words were added to the lexicon the x axis shows the number of words learned and the y axis shows how many were correctthe y axes have different ranges because some categories are more prolific than othersbasilisk outperforms metabootstrapping for every category often substantiallyfor the human and location categories basilisk learned hundreds of words with accuracies in the 8089 range through much of the bootstrappingit is worth noting that basilisks performance held up well on the human and location categories even at the end achieving 795 accuracy for humans and 532 accuracy for locationswe also explored the idea of bootstrapping multiple semantic classes simultaneouslyour hypothesis was that errors of confusion2 between semantic categories can be lessened by using information about multiple categoriesthis hypothesis makes sense only if a word cannot belong to more than one semantic classin general this is not true because words are often polysemousbut within a limited domain a word usually has a dominant word sensetherefore we make a one sense per domain assumption are represented by the solid black area in category cs territorythe hypothesized words in the growing lexicon are represented by a shaded areathe goal of the bootstrapping algorithm is to expand the area of 
hypothesized words so that it exactly matches the categorys true territoryif the shaded area expands beyond the categorys true territory then incorrect words have been added to the lexiconin figure 4 category c has claimed a significant number of words that belong to categories b and e when generating a lexicon for one category at a time these confusion errors are impossible to detect because the learner has no knowledge of the other categoriesfigure 5 shows the same search space when lexicons are generated for six categories simultaneouslyif the lexicons cannot overlap then we constrain the ability of a category to overstep its boundscategory c is stopped when it begins to encroach upon the territories of categories b and e because words in those areas have already been claimedthe easiest way to take advantage of multiple categories is to add simple conflict resolution that enforces the one sense per domain constraintif more than one category tries to claim a word then we use conflict resolution to decide which category should winwe incorporated a simple conflict resolution procedure into basilisk as well as the metabootstrapping algorithmfor both algorithms the conflict resolution procedure works as follows if a word is hypothesized for category a but has already been assigned to category b during a previous iteration then the category a hypothesis is discarded if a word is hypothesized for both category a and category b during the same iteration then it to the one sense per discourse observation that a word belongs to a single semantic category within a limited domainall of our experiments involve the muc4 terrorism domain and corpus for which this assumption seems appropriatefigure 4 shows one way of viewing the task of semantic lexicon inductionthe set of all words in the corpus is visualized as a search spaceeach category owns a certain territory within the space representing the words that are true members of that categorynot all territories are the same size since some categories have more members than others e are is assigned to the category for which it receives the highest scorein section 34 we will present empirical results showing how this simple conflict resolution scheme affects performancesimple conflict resolution helps the algorithm recognize when it has encroached on another categorys territory but it does not actively steer the bootstrapping in a more promising directiona more intelligent way to handle multiple categories is to incorporate knowledge about other categories directly into the scoring functionwe modified basilisks scoring function to prefer words that have strong evidence for one category but little or no evidence for competing categorieseach word wi in the candidate word pool receives a score for category ca based on the following formula where avglog is the candidate scoring function used previously by basilisk and the max function returns the maximum avglog value over all competing categoriesfor example the score for each candidate location word will be its avglog score for the location category minus its maximum avglog score for all other categoriesa word is ranked highly only if it has a high score for the targeted category and there is little evidence that it belongs to a different categorythis has the effect of steering the bootstrapping process away from ambiguous parts of the search spacewe will use the abbreviation 1cat to indicate that only one semantic category was bootstrapped and mcat to indicate that multiple semantic categories were 
simultaneously bootstrappedfigure 6 compares the performance of basiliskmcat with conflict resolution against basilisk1cat most categories show small performance gains with the building location and weapon categories benefitting the mosthowever the improvement usually does not kick in until many bootstrapping iterations have passedthis phenomenon is consistent with the visualization of the search space in figure 5since the seed words for each category are not generally located near each other in the search space the bootstrapping process is unaffected by conflict resolution until the categories begin to encroach on each others territories1learning multiple categories improves the performance of metabootstrapping dramatically for most categorieswe were surprised that the improvement for metabootstrapping was much we also measured the recall of basilisks lexicons after 1000 words had been learned based on the gold standard data shown in table 1the recall results range from 4060 which indicates that a good percentage of the category words are being found although there are clearly more category words lurking in the corpusbasilisks bootstrapping algorithm exploits two ideas collective evidence from extraction patterns can be used to infer semantic category associations and learning multiple semantic categories simultaneously can help constrain the bootstrapping processthe accuracy achieved by basilisk is substantially higher than that of previous techniques for semantic lexicon induction on the muc4 corpus and empirical results show that both of basilisks ideas contribute to its performancewe also demonbuilding theatre store cathedral temple palace penitentiary academy houses school mansions event ambush assassination uprisings sabotage takeover incursion kidnappings clash shootout human boys snipers detainees commandoes extremists deserter narcoterrorists demonstrators cronies missionaries location suburb soyapango capital oslo regions cities neighborhoods quito corregimiento time afternoon evening decade hour march weeks saturday eve anniversary wednesday weapon cannon grenade launchers firebomb carbomb rifle pistol machineguns firearms strated that learning multiple semantic categories simultaneously improves the metabootstrapping algorithm which suggests that this is a general observation which may improve other bootstrapping algorithms as wellthis research was supported by the national science foundation under award iri9704240
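The scoring functions above are simple enough to state directly in code. The sketch below, under assumed data structures (a mapping from each extraction pattern to the set of head nouns it extracts, and the reverse mapping from each noun to the patterns that extract it), implements the RlogF pattern score, the AvgLog candidate score, and one single-category bootstrapping cycle with the growing pattern pool (20 + iteration) and five words added per cycle; it is an illustration of the published formulas, not the original system.

```python
import math

def rlogf(extractions, lexicon):
    """RlogF score for one extraction pattern: (F/N) * log2(F), where F is
    the number of known category members among its extractions and N the
    total number of nouns it extracts."""
    f = len(extractions & lexicon)
    n = len(extractions)
    if f == 0 or n == 0:
        return float("-inf")   # no evidence for the category
    return (f / n) * math.log2(f)

def avglog(word, patterns_for_word, pattern_extractions, lexicon):
    """AvgLog score for a candidate word: mean of log2(F_j + 1) over every
    pattern j that extracts the word, where F_j counts the distinct known
    category members that pattern j extracts."""
    pats = patterns_for_word[word]
    if not pats:
        return 0.0
    total = sum(math.log2(len(pattern_extractions[p] & lexicon) + 1)
                for p in pats)
    return total / len(pats)

def basilisk_iteration(pattern_extractions, patterns_for_word, lexicon,
                       iteration, words_per_iter=5):
    """One single-category bootstrapping cycle: rank patterns by RlogF,
    keep the top (20 + iteration) as the pattern pool, pool the nouns they
    extract as candidates, score candidates with AvgLog and add the best
    five to the lexicon."""
    pool_size = 20 + iteration
    ranked = sorted(pattern_extractions,
                    key=lambda p: rlogf(pattern_extractions[p], lexicon),
                    reverse=True)
    pattern_pool = ranked[:pool_size]
    candidates = set().union(
        *(pattern_extractions[p] for p in pattern_pool)) - lexicon
    scored = sorted(candidates,
                    key=lambda w: avglog(w, patterns_for_word,
                                         pattern_extractions, lexicon),
                    reverse=True)
    lexicon |= set(scored[:words_per_iter])
    return lexicon
```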
W02-1028
a bootstrapping method for learning semantic lexicons using extraction pattern contexts. this paper describes a bootstrapping algorithm called basilisk that learns high-quality semantic lexicons for multiple categories. basilisk begins with an unannotated corpus and seed words for each semantic category which are then bootstrapped to learn new words for each category. basilisk hypothesizes the semantic class of a word based on collective information over a large body of extraction pattern contexts. we evaluate basilisk on six semantic categories. the semantic lexicons produced by basilisk have higher precision than those produced by previous techniques with several categories showing substantial improvement. we learn multiple semantic categories simultaneously relying on the assumption that a word cannot belong to more than one semantic category.
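The multi-category extension lends itself to an equally small sketch. Assuming that avglog_fn(word, category) returns the single-category AvgLog value, the diff scoring and the one-sense-per-domain conflict resolution described above might be written as follows; the function and argument names are invented for the example.

```python
def multicat_score(word, category, categories, avglog_fn):
    """Diff scoring for simultaneous bootstrapping: AvgLog for the target
    category minus the maximum AvgLog over all competing categories, so a
    word ranks highly only when evidence points to one category."""
    own = avglog_fn(word, category)
    rivals = [avglog_fn(word, c) for c in categories if c != category]
    return own - (max(rivals) if rivals else 0.0)

def resolve_conflicts(hypotheses, assigned):
    """One-sense-per-domain conflict resolution.  hypotheses is a list of
    (word, category, score) triples proposed in the current iteration;
    assigned maps words claimed in earlier iterations to their category.
    A word already claimed earlier keeps its old category; a word proposed
    by several categories in the same iteration goes to the highest scorer."""
    best = {}
    for word, category, score in hypotheses:
        if word in assigned:
            continue                      # claimed in a previous iteration
        if word not in best or score > best[word][1]:
            best[word] = (category, score)
    return {word: cat for word, (cat, _) in best.items()}
```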
phrasal cohesion and statistical machine translation there has been much interest in using phrasal movement to improve statistical machine translation we explore how well phrases cohere across two languages specifically english and french and examine the particular conditions under which they do not we demonstrate that while there are cases where coherence is poor there are many regularities which can be exploited by a statistical machine translation system we also compare three variant syntactic representations to determine which one has the best properties with respect to cohesion statistical machine translation seeks to develop mathematical models of the translation process whose parameters can be automatically estimated from a parallel corpusthe first work in smt done at ibm developed a noisychannel model factoring the translation process into two portions the translation model and the language modelthe translation model captures the translation of source language words into the target language and the reordering of those wordsthe language model ranks the outputs of the translation model by how well they adhere to the syntactic constraints of the target language1 the prime deficiency of the ibm model is the reordering componenteven in the most complex of though usually a simple word ngram model is used for the language model the five ibm models the reordering operation pays little attention to context and none at all to higherlevel syntactic structuresmany attempts have been made to remedy this by incorporating syntactic information into translation modelsthese have taken several different forms but all share the basic assumption that phrases in one language tend to stay together during translation and thus the wordreordering operation can move entire phrases rather than moving each word independently states that during their work on noun phrase bracketing they found a strong cohesion among noun phrases even when comparing english to czech a relatively free word order languageother than this there is little in the smt literature to validate the coherence assumptionseveral studies have reported alignment or translation performance for syntactically augmented translation models and these results have been promisinghowever without a focused study of the behavior of phrases across languages we cannot know how far these models can take us and what specific pitfalls they facethe particulars of cohesion will clearly depend upon the pair of languages being comparedintuitively we expect that while french and spanish will have a high degree of cohesion french and japanese may notit is also clear that if the cohesion between two closely related languages is not high enough to be useful then there is no hope for these methods when applied to distantly related languagesfor this reason we have examined phrasal cohesion for french and english two languages which are fairly close syntactically but have enough differences to be interestingan alignment is a mapping between the words in a string in one language and the translations of those words in a string in another languagegiven an english string and a french string an alignment a can be represented by each is a set of indices into where indicates that word in the french sentence is aligned with word in the english sentence indicates that english word has no corresponding french wordgiven an alignment and an english phrase covering words the span is a pair where the first element is and the second element is thus the span includes all words between 
the two extrema of the alignment whether or not they too are part of the translationif phrases cohere perfectly across languages the span of one phrase will never overlap the span of anotherif two spans do overlap we call this a crossingfigure 1 shows an example of an english parse along with the alignment between the english and french words the english word not is aligned to the two french words ne and pas and thus has a span of 13the main english verb change is aligned to the french modifie and has a span of 22the two spans overlap and thus there is a crossingthis definition is asymmetric however we only pursue translation direction since that is the one for which we have parsed datato calculate spans we need aligned pairs of english and french sentences along with parses for the english sentencesour aligned data comes from a corpus described in which contains 500 sentence pairs randomly selected from the canadian hansard corpus and manually alignedthe alignments are of two types sure and possible s alignments are those which are unambiguous while p alignments are those which are less certainp alignments often appear when a phrase in one language translates as a unit into a phrase in the other language but can also be the result of genuine ambiguitywhen two annotators disagree the union of the p alignments produced by each annotator is recorded as the p alignment in the corpuswhen an s alignment exists there will always also exist a p alignment such that p s the english sentences were parsed using a stateoftheart statistical parser trained on the university of pennsylvania treebank je invoque le reglement since p alignments often align phrasal translations the number of crossings when p alignments are used will be artificially inflatedfor example in figure 2 note that every pair of english and french words under the verb phrase is alignedthis will generate five crossings one each between the pairs vbppp innp np pp nndt and innp however what is really happening is that the whole verb phrase is first being moved without crossing anything else and then being translated as a unitfor our purposes we want to count this example as producing zero crossingsto accomplish this we defined a simple heuristic to detect phrasal translations so we can filter them if desiredafter calculating the french spans from the english parses and alignment information we counted crossings for all pairs of child constituents in each constituent in the sentence maintaining separate counts for those involving the head constituent of the phrase and for crossings involving modifiers onlywe did this while varying conditions along two axes alignment type and phrasal translation filteringrecalling the two different types of alignments s and p we examined three different conditions s alignments only p alignments only or s alignments where present falling back to p alignments for each of these conditions we counted crossings both with and without using the phrasal translation filterfor a given alignment type ss pp let if phrases and cross each other and otherwiselet if the phrasal translation filter is turned offif the filter is on modifier constituents and child constituents and for a particular alignment type the number of head crossings and modifier crossings can be calculated recursivelytable 1 shows the average number of crossings per sentencethe table is split into two sections one for results when the phrasal filter was used and one for when it was notalignment type refers to whether we used s p or s p as the 
alignment datathe head crossings line shows the results when comparing the span of the head constituent of a phrase with the spans of its modifier constituents and modifier crossings refers to the case where we compare the spans of pairs of modifiersthe phrasal translations line shows the average number of phrasal translations detected per sentencefor s alignments the results are quite promising with an average of only 0236 head crossings per sentence and an even smaller average for modifier crossings however these results are overly optimistic since often many words in a sentence will not have an s alignment at all such as coming in and before in following example the full report will be coming in before the fall le rapport complet sera depose de ici le automne prochain when we use p alignments for these unaligned words we get a more meaningful resultboth types of crossings are much more frequent and then for a given phrase with head constituent if and are part of a phrasal translation in alignment otherwise phrasal translation filtering has a much larger effect phrasal translations account for almost half of all crossings on averagethis effect is even more pronounced in the case where we use p alignments onlythis reinforces the importance of phrasal translation in the development of any translation systemeven after filtering the number of crossings in the s p case is quite largethis is discouraging however there are reasons why this result should be looked on as more of an upper bound than anything precisefor one thing there are cases of phrasal translation which our heuristic fails to recognize an example of which is shown in figure 3the alignment of explorer with this and matter seems to indicate that the intention of the annotator was to align the phrase work this matter out as a unit to de explorer la questionhowever possibly due to an error during the coding of the alignment work and out align with de while this and matter do notthis causes the phrasal translation heuristic to fail resulting in a crossing where there should be nonealso due to the annotation guidelines p alignments are not as consistent as would be idealrecall that in cases of annotator disagreement the p alignment is taken to be the union of the p alignments of both annotatorsthus it is possible for the p alignment to contain two mutually conflicting alignmentsthese composite alignments will likely generate crossings even where the alignments of each individual annotator would notwhile reflecting genuine ambiguity an smt system would likely pursue only one of the alternatives and only a portion of the crossings would come into playour results show a significantly larger number of head crossings than modifier crossingsone possibility is that this is due to most phrases having a head and modifier pair to test while many do not have multiple modifiers and therefore there are fewer opportunities for modifier crossingsthus it is informative to examine how many potential crossings actually turn out to be crossingstable 2 provides this result in the form of the percentage of crossing tests which result in detection of a crossingto calculate this we kept totals for the number of head and modifier crossing tests performed as well as the number of phrasal translations detected note that when the phrasal translation filter is turned on these totals differ for each of the different alignment types the percentages are calculated after summing over all sentencesin the corpus there are still many more crossings in the s p and p 
alignments than in the s alignmentsthe s alignment has 158 head crossings while the s p and p alignments have 3216 and 3547 respectively with similar relative percentages for modifier crossingsalso as before half to twothirds of crossings in the s p and p alignments are due to phrasal translationsmore interestingly we see that modifier crossings remain significantly less prevalent than head crossings and that this is true uniformly across all parameter settingsthis indicates that heads are more intimately involved with their modifiers than modifiers are with each other and therefore are more likely to be involved in semiphrasal constructionssince it is clear that crossings are too prevalent to ignore it is informative to try to understand exactly what constructions give rise to themto that end we examined by hand all of the head crossings produced using the s alignments with phrasal filteringtable 3 shows the results of this analysisthe first thing to note is that by far most of the crossings do not reflect lack of phrasal cohesion between the two languagesinstead they are caused either by errors in the syntactic analysis or the fact that translation as done by humans is a much richer process than just replication of the source sentence in another languagesentences are reworded clauses are reordered and sometimes human translators even make mistakeserrors in syntactic analysis consist mostly of attachment errorsrewording and reordering accounted for a large number of crossings as wellin most of the cases of rewording or relorsque nous avons prepare le budget nous avons pris cela en consideration ordering a more parallel translation would also be validthus while it would be difficult for a statistical model to learn from these examples there is nothing to preclude production of a valid translation from a system using phrasal movement in the reordering phasethe rewording and reordering examples were so varied that we were unable to find any regularities which might be exploited by a translation modelamong the cases which do result from language differences the most common is the ne pas construction fifteen percent of the 86 total crossings are due to this constructionbecause ne pas wraps around the verb it will always result in a crossinghowever the types of syntactic structures which are present in cases of negation are rather restrictedof the 47 total distinct syntactic structures which resulted in crossings only three of them involved negationin addition the crossings associated with these particular structures were unambiguously caused by negation next most common is the case where the english contains a modal verb which is aligned with the main verb in the frenchin the example in figure 6 will be is aligned to sera and because of the constituent structure of the english parse there is a crossingas with negation this type of crossing is quite regular resulting uniquely from only two different syntactic structuresmany of the causes listed above are related to verb phrasesin particular some of the adverbrelated crossings and all of the modalrelated crossings are artifacts of the nested verb phrase structure of our parserthis nesting usually does not provide any extra information beyond what could be gleaned from word ordertherefore we surmised that flattening verb phrases would eliminate some types of crossings without reducing the utility of the parsethe flattening operation consists of identifying all nested verb phrases and splicing the children of the nested phrase into the parent 
phrase in its placethis procedure is applied recursively until there are no nested verb phrasesan example is shown in figure 8crossings can be calculated as beforeadverbs are a third common because as they typically follow the verb in french while preceding it in englishfigure 7 shows an example where the span of simplement overlaps with the span of the verb phrase beginning with tells unlike negation and modals this case is far less regularit arises from six different syntactic constructions and two of those constructions are implicated in other types of crossings as wellflattening reduces the number of potential head crossings while increasing the number of potential modifier crossingstherefore we would expect to see a comparable change to the number of crossings measured and this is exactly what we find as shown in tables 4 and 5for example for s p alignments the average number of head crossings decreases from 2772 to 2252 while the average number of modifier crossings increases from 0516 to 086we see similar behavior when we look at the percentage of crossings per chance for the same alignment type the percentage of head crossings decreases from 1861 to 1512 while the percentage of modifier crossings increases from 847 to 1059one thing to note however is that the total number of crossings of both types detected in the corpus decreases as compared to the baseline and thus the benefits to head crossings outweigh the detriments to modifier crossingsour intuitions about the cohesion of syntactic structures follow from the notion that translation as a meaningpreserving operation preserves the dependencies between words and that syntactic structures encode these dependenciestherefore dependency structures should cohere as well as or better than their corresponding syntactic structuresto examine the validity of this we extracted dependency structures from the parse trees and calculated crossings for themfigure 9 shows a parse tree and its corresponding dependency structurethe procedure for counting modifier crossings in a dependency structure is identical to the procedure for parse treesfor head crossings the only difference is that rather than comparing spans of two siblings we compare the spans of a child and its parentagain focusing on the s p alignment case we see that the average number of head crossings continues to decrease compared to the previous case and that the average number of modifier crossings continues to increase this time however the percentages for both types of crossings decrease relative to the case of flattened verb phrases the percentage of modifier crossings is still higher than in the base case overall however the dependency representation has the best cohesion properties ernmentwe would like to thank franz och for providing us with the manually annotated data used in these experimentswe have examined the issue of phrasal cohesion between english and french and discovered that while there is less cohesion than we might desire there is still a large amount of regularity in the constructions where breakdowns occurthis reassures us that reordering words by phrasal movement is a reasonable strategymany of the initially daunting number of crossings were due to nonlinguistic reasons such as rewording during translation or errors in syntactic analysisamong the rest there are a small number of syntactic constructions which result in the majority of the crossings examined in our analysisone practical result of this skewed distribution is that one could hope to discover the 
major problem areas for a new language pair by manually aligning a small number of sentencesthis information could be used to filter a training corpus to remove sentences which would cause problems in training the translation model or for identifying areas to focus on when working to improve the model itselfwe are interested in examining different language pairs as the opportunity ariseswe have also examined the differences in cohesion between treebankstyle parse trees trees with flattened verb phrases and dependency structuresour results indicate that the highest degree of cohesion is present in dependency structurestherefore in an smt system which is using some type of phrasal movement during reordering dependency structures should produce better results than raw parse treesin the future we plan to explore this hypothesis in an actual translation systemthe work reported here was supported in part by the defense advanced research projects agency under contract number n6600100c8008the views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies either expressed or implied of the defense advanced research projects agency or the united states gov
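The span and crossing computation at the heart of this study is straightforward to reproduce. The sketch below assumes a hypothetical tree encoding ((label, index of the head child, list of children), with English word positions as integer leaves) and an alignment given as a mapping from English word index to a set of French word indices; it counts head and modifier crossings as defined above but omits the phrasal-translation filter for brevity.

```python
from itertools import combinations

def leaves(node):
    """English word positions covered by a subtree (ints are leaves)."""
    if isinstance(node, int):
        return [node]
    _, _, children = node
    return [i for child in children for i in leaves(child)]

def phrase_span(indices, alignment):
    """Span of an English phrase: the minimum and maximum French positions
    aligned to any word in the phrase, or None if it is entirely unaligned."""
    french = [f for i in indices for f in alignment.get(i, set())]
    return (min(french), max(french)) if french else None

def overlaps(a, b):
    """Two spans cross, in the paper's sense, when they overlap at all."""
    return a is not None and b is not None and a[0] <= b[1] and b[0] <= a[1]

def count_crossings(node, alignment):
    """Recursively count (head_crossings, modifier_crossings) in a parse
    tree given a word alignment.  Pairs involving the head child count as
    head crossings; pairs of modifiers count as modifier crossings."""
    if isinstance(node, int):
        return 0, 0
    _, head_pos, children = node
    spans = [phrase_span(leaves(c), alignment) for c in children]
    head, mod = 0, 0
    for i, j in combinations(range(len(children)), 2):
        if overlaps(spans[i], spans[j]):
            if head_pos in (i, j):
                head += 1
            else:
                mod += 1
    for child in children:
        h, m = count_crossings(child, alignment)
        head += h
        mod += m
    return head, mod
```

For the dependency-structure variant, the same overlap test is applied between each child's span and its parent's span rather than between siblings, as described above.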
W02-1039
phrasal cohesion and statistical machine translation. there has been much interest in using phrasal movement to improve statistical machine translation. we explore how well phrases cohere across two languages specifically english and french and examine the particular conditions under which they do not. we demonstrate that while there are cases where coherence is poor there are many regularities which can be exploited by a statistical machine translation system. we also compare three variant syntactic representations to determine which one has the best properties with respect to cohesion. we measure phrasal cohesion in gold standard alignments by counting crossings. we compare treebank-parser-style analyses a variant with flattened vps and dependency structures.
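The flattened-verb-phrase variant compared above is a one-pass tree transformation. A minimal sketch, assuming a simplified (label, children) tuple tree with string leaves rather than the parser's actual output format:

```python
def flatten_vps(node, vp_label="VP"):
    """Recursively splice the children of a nested VP into its parent VP,
    so that no VP directly dominates another VP.  Nodes are
    (label, [children]); leaves are plain strings.  Children are flattened
    bottom-up first, so one splice per level suffices."""
    if isinstance(node, str):
        return node
    label, children = node
    children = [flatten_vps(c, vp_label) for c in children]
    if label == vp_label:
        spliced = []
        for child in children:
            if not isinstance(child, str) and child[0] == vp_label:
                spliced.extend(child[1])   # splice the nested VP's children in place
            else:
                spliced.append(child)
        children = spliced
    return (label, children)
```

Applied to an auxiliary chain such as a modal plus main verb, the nested VPs collapse so that the auxiliary and the verb become sisters under a single VP, which is the flattening compared against raw parse trees and dependency structures above.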
efficient deep processing of japanese we present a broad coverage japanese grammar written in the hpsg formalism with mrs semantics the grammar is created for use in real world applications such that robustness and performance issues play an important role it is connected to a pos tagging and word segmentation tool this grammar is being developed in a multilingual context requiring mrs structures that are easily comparable across languages natural language processing technology has recently reached a point where applications that rely on deep linguistic processing are becoming feasiblesuch applications require natural language understanding or at least an approximation thereofthis in turn requires rich and highly precise information as the output of a parsehowever if the technology is to meet the demands of realworld applications this must not come at the cost of robustnessrobustness requires not only wide coverage by the grammar but also large and extensible lexica as well as interfaces to preprocessing systems for named entity recognition nonlinguistic structures such as addresses etcfurthermore applications built on deep nlp technology should be extensible to multiple languagesthis requires flexible yet welldefined output structures that can be adapted to grammars of many different languagesfinally for use in realworld applications nlp systems meeting the above desiderata must also be efficientin this paper we describe the development of a broad coverage grammar for japanese that is used in an automatic email response applicationthe grammar is based on work done in the verbmobil project on machine translation of spoken dialogues in the domain of travel planningit has since been greatly extended to accommodate written japanese and new domainsthe grammar is couched in the theoretical framework of headdriven phrase structure grammar with semantic representations in minimal recursion semantics hpsg is well suited to the task of multilingual development of broad coverage grammars it is flexible enough and has a rich theoretical literature from which to draw analyzes and inspirationthe characteristic type hierarchy of hpsg also facilitates the development of grammars that are easy to extendmrs is a flat semantic formalism that works well with typed feature structures and is flexible in that it provides structures that are underspecified for scopal informationthese structures give compact representations of ambiguities that are often irrelevant to the task at handhpsg and mrs have the further advantage that there are practical and useful opensource tools for writing testing and efficiently processing grammars written in these formalismsthe tools we are using in this project include the lkb system for grammar development incr tsdb for testing the grammar and tracking changes and pet a very efficient hpsg parser for processingwe also use the chasen tokenizer and pos tagger while couched within the same general framework our approach differs from that of kanayama et al the work described there achieves impressive coverage with an underspecified grammar consisting of a small number of lexical entries lexical types associated with parts of speech and six underspecified grammar rulesin contrast our grammar is much larger in terms of the number of lexical entries the number of grammar rules and the constraints on both1 and takes correspondingly more effort to bring up to that level of coveragethe higher level of detail allows us to output precise semantic representations as well as to use syntactic 
semantic and lexical information to reduce ambiguity and rank parsesthe fundamental notion of an bpsg is the signa sign is a complex feature structure representing information of different linguistic levels of a phrase or lexical itemthe attributevalue matrix of a sign in the japanese bpsg is quite similar to a sign in the lingo english resource grammar with information about the orthographical realization of the lexical sign in phon syntactic and semantic information in synsem information about the lexical status in lex nonlocal information in nonloc head information that goes up the tree in head and information about subcategorization in subcatthe grammar implementation is based on a system of typesthere are 900 lexical types that define the syntactic semantic and pragmatic properties of the japanese words and 188 types that define the properties of phrases and lexical rulesthe grammar includes 50 lexical rules for inflectional and derivational morphology and 47 phrase structure rulesthe lexicon contains 5100 stem entriesas the grammar is developed for use in applications it treats a wide range of 1 we do also make use of generic lexical entries for certain parts of speech as a means of extending our lexiconsee section 3 below basic constructions of japaneseonly some of these phenomena can be described herethe structure of subcat is different from the erg subcat structurethis is due to differences in subcategorization between japanese and englisha fundamental difference is the fact that in japanese verbal arguments are frequently omittedfor example arguments that refer to the speaker addressee and other arguments that can be inferred from context are often omitted in spoken languageadditionally optional verbal arguments can scrambleon the other hand some arguments are not only obligatory but must also be realized adjacent to the selecting headto account for this our subcategorization contains the attributes sat and valthe sat value encodes whether a verbal argument is already saturated optional or adjacentval contains the agreement information for the argumentwhen an argument is realized its sat value on the mother node is specified as sat and its synsem is unified with its val value on the subcategorizing headthe val value on the mother is noneadjacency must be checked in every rule that combines heads and arguments or adjunctsthis is the principle of adjacency stated as follows in a headed phrase the subcatsat value on the nonhead daughter must not contain any adjacent argumentsin a headcomplement structure the subcatsat value of the head daughter must not contain any adjacent arguments besides the nonhead daughterin a headadjunct structure the subcatsat value of the head daughter must not contain any adjacent argumentsjapanese verb stems combine with endings that provide information about honorification tense aspect voice and modeinflectional rules for the different types of stems prepare the verb stems for combination with the verbal endingsfor example the verb stem yomu must be inflected to yon to combine with the past tense ending damorphological features constrain the combination of stem and endingin the above example the inflectional rule changes the mu character to the n character and assigns the value ndmorph to the morphological feature rmorphbindtypethe ending da selects for a verbal stem with this valueendings can be combined with other endings as in saseraremashita but not arbitrarily sasemashirareta sasetamashirare saseta raremashita this is accounted for with two kinds of 
rules which realize mutually selected elementsin the combination of stem and ending the verb stem selects for the verbal ending via the head feature specin the case of the combination of two verbal endings the first ending selects for the second one via the head feature markin both cases the right element subcategorizes for the left one via subcatvalsprusing this mechanism it is possible to control the sequence of verbal endings verb stems select verbal endings via spec and take no spr derivational morphemes select tense endings or other derivational morphemes via mark and subcategorize for verb stems andor verb endings via spr and tense endings take verb stems or endings as spr and take no mark or spec a special treatment is needed for japanese verbal noun light verb constructionsin these cases a word that combines the qualities of a noun with those of a verb occurs in a construction with a verb that has only marginal semantic informationthe syntactic semantic and pragmatic information on the complex is a combination of the information of the twoconsider example 1the verbal noun benkyou contains subcategorization information as well as semantic information the light verb shita supplies tense information pragmatic information can be supplied by both parts of the construction as in the formal form obenkyou shimashitathe rule that licenses this type of combination is the vnlightrule a subtype of the headmarkerrule study dopast omeone has studiedjapanese auxiliaries combine with verbs and provide either aspectual or perspective information or information about honorificationin a verbauxiliary construction the information about subcategorization is a combination of the subcat information of verb and auxiliary depending on the type of auxiliarythe rule responsible for the information combination in these cases is the headspecifierrulewe have three basic types of auxiliariesthe first type is aspect auxiliariesthese are treated as raising verbs and include such elements as iru and aru as can be seen in example 2the other two classes of auxiliaries provide information about perspective or the point of view from which a situation is being describedboth classes of auxiliaries add a ni marked argument to the argument structure of the whole predicatethe classes differ in how they relate their arguments to the arguments of the verbone class are treated as subject control verbsthe other class establishes a control relation between the nimarked argument and the embedded subjectwatashi ga sensei ni hon wo i nom teacher dat book acc katte moratta buy getpast the teacher bought me a bookthe careful treatment of japanese particles is essential because they are the most frequently occurring words and have various central functions in the grammarit is difficult because one particle can fulfill more than one function and they can cooccur but not arbitrarilythe japanese grammar thus contains a type hierarchy of 44 types for particlessee siegel for a more detailed description of relevant phenomena and solutionsnumber names such as sen kyuu hyaku juu 1910 constitute a notable exception to the general headfinal pattern of japanese phraseswe found smith headmedial analysis of english number names to be directly applicable to the japanese system as well this analysis was easily incorporated into the grammar despite the oddity of head positioning because the type hierarchy of hpsg is well suited to express the partial generalizations that permeate natural languageon the other hand number names in japanese contrast 
sharply with number names in english in that they are rarely used without a numeral classifierthe grammar provides for true numeral classifiers like hon ko and hiki as well as formatives like en yen and do degree which combine with number names just like numeral classifiers do but never serve as numeral classifiers for other nounsin addition there are a few nonbranching rules that allow bare number names to surface as numeral classifier phrases with specific semantic constraintsspoken language and email correspondence both encode references to the social relation of the dialogue partnersutterances can express social distance between addressee and speaker and third personshonorifics can even express respect towards inanimatespragmatic information is treated in the context layer of the complex signshonorific information is given in the contextbackground and linked to addressee and speaker anchorsthe expression of empathy or ingroup vs outgroup is quite prevalent in japaneseone means of expressing empathy is the perspective auxiliaries discussed abovefor example two auxiliaries meaning roughly give contrast in where they place the empathyin the case of ageru it is with the giverin the case of kureru it is with the recipientwe model this within the sign by positing a feature empathy within context and linking it to the relevant arguments indicesin the multilingual context in which this grammar has been developed a high premium is placed on parallel and consistent semantic representations between grammars for different languagesensuring this parallelism enables the reuse of the same downstream technology no matter which language is used as inputintegrating mrs representations parallel to those used in the erg into the japanese grammar took approximately 3 monthsof course semantic work is ongoing as every new construction treated needs to be given a suitable semantic representationfor the most part semantic representations developed for english were straightforwardly applicable to japanesethis section provides a brief overview of those cases where the japanese constructions we encountered led to innovations in the semantic representations andor the correspondence between syntactic and semantic structuresdue to space limitations we discuss these analyses in general terms and omit technical details2l nominalization and verbal nouns nominalization is of course attested in english and across languageshowever it is much more prevalent in japanese than in english primarily because of verbal nounsas noted in section 13 above a verbal noun like benkyou tudy can appear in syntactic contexts requiring nouns or in combination with a light verb in contexts requiring verbsone possible analysis would provide two separate lexical entries one with nominal and one with verbal semanticshowever this would not only be redundant but would also contradict the intuition that even in its nominal use the arguments of benkyou are still presentnihongo no benkyou wo hajimerujapanese gen study acc begin omeone begins the study of japanesein order to capture this intuition we opted for an analysis that essentially treats verbal nouns as underlyingly verbalthe nominal uses are produced by a lexical rule which nominalizes the verbal nounsthe semantic effect of this rule is to provide a nominal relation which introduces a variable which can in turn be bound by quantifiersthe nominal relation subordinates the original verbal relation supplied by the verbal nounthe rule is lexical as we have not yet found any cases where the verb 
arguments are clearly filled by phrases in the syntaxif they do appear it is with genitive marking in order to reduce ambiguity we leave the relationship between these genitive marked nps and the nominalized verbal noun underspecifiedthere is nothing in the syntax to disambiguate these cases and we find that they are better left to downstream processing where there may be access to world knowledgeas noted in section15 the internal syntax of number names is surprisingly parallel between english and japanese but their external syntax differs dramaticallyenglish number names can appear directly as modifiers of nps and are treated semantically as adjectives in the ergjapanese number names can only modify nouns in combination with numeral classifiersin addition numeral classifier phrases can appear in np positions finally some numeralclassifierlike elements do not serve the modifier function but can only head phrases that fill np positionsthis constellation of facts required the following innovations a representation of numbers that does not treat them as adjectives a representation of the semantic contribution of numeral classifiers and a set of rules for promoting numeral classifier phrases to nps that contribute the appropriate nominal semantics the primary issue in the analysis of relative clauses and adjectives is the possibility of extreme ambiguity due to several intersecting factors japanese has rampant prodrop and does not have any relative pronounsin addition a head noun modified by a relative clause need not correspond to any gap in the relative clause as shown by examples like the following head nom better become book a book that makes one smarter therefore if we were to posit an attributive adjective noun construction we would have systematic ambiguities for nps like akai hon ambiguities which could never be resolved based on information in the sentenceinstead we have opted for a relative clause analysis of any adjective noun combination in which the adjective could potentially be used predicativelyfurthermore because of gapless relative clauses like the one cited above we have opted for a nonextraction analysis of relative clauses2 nonetheless the wellformedness constraints on mrs representations require that there be 2 there is in fact some linguistic evidence for extraction in some relative clauses in japanese however we saw no practical need to allow for this possibility in our grammar and particularly not one that would justify the increase in ambiguitythere is also evidence that some adjectives are true attributives and cannot be used predicatively these are handled by a separate adjective noun rule restricted to just these cases some relationship between the head noun and the relative clausewe picked the topic relation for this purpose the topic relation is introduced into the semantics by the relative clause ruleas with main clause topics we rely on downstream anaphora resolution to refine the relationshipfor the most part semantic representations and the syntaxsemantic interface already worked out in the erg were directly applicable to the japanese grammarin those cases where japanese presented problems not yet encountered in english it was fairly straightforward to work out suitable mrs representations and means of building them upboth of these points illustrate the crosslinguistic validity and practical utility of mrs representationsas japanese written text does not have word segmentation a preprocessing system is requiredwe integrated chasen a tool that provides word 
segmentation as well as pos tags and morphological information such as verbal inflectionas the lexical coverage of chasen is higher than that of the hpsg lexicon default partofspeech entries are inserted into the lexiconthese are triggered by the partofspeech information given by chasen if there is no existing entry in the lexiconthese specific default entries assign a type to the word that contains features typical to its partofspeechit is therefore possible to restrict the lexicon to those cases where the lexical information contains more than the typical information for a certain partofspeechthis default mechanism is often used for different kinds of names and ordinary nouns but also for adverbs interjections and verbal nouns 3 the chasen lexicon is extended with a domainspecific lexicon containing among others names in the domain of bankingfor verbs and adjectives chasen gives information about stems and inflection that is used in a similar waythe inflection type is translated to an hpsg typethese types interact with the inflectional rules in the grammar such that the default entries are inflected just as known words would bein addition to the preprocessing done by chasen an additional preprocessing tool recognizes numbers date expressions addresses email addresses urls telephone numbers and currency expressionsthe output of the preprocessing tool replaces these expressions in the string with placeholdersthe placeholders are parsed by the grammar using special placeholder lexical entriesthe grammar is aimed at working with realworld data rather than at experimenting with linguistic examplestherefore robustness and performance issues play an important rolewhile grammar development is carried out in the lkb processing is done with the highly efficient pet parser figures 1 and 2 show the performance of pet parsing of handmade and real data respectivelyone characteristic of realworld data is the variety of punctuation marks that occur and the potential for ambiguity that they bringin our grammar certain punctuation marks are given lexical entries and processed by grammar rulestake for example quotation marksignoring them leads to a significant loss of structural information omeone said push the buttonquot the formative to is actually ambiguous between a complementizer and a conjunctionsince the phrase before to is a complete sentence this string is ambiguous if one ignores the quotation markswith the quotation marks however only the complementizer to is possiblegiven the high degree of ambiguity inherent in broadcoverage grammars we have found it extremely useful to parse punctuation rather than ignore itthe domains we have been working on contain many date and number expressionswhile a shallow tool recognizes general structures the grammar contains rules and types to process thesephenomena occurring in semispontaneous language such as interjections contracted verb forms ate it all up fragmentary sentences and np fragments must be covered as well as the ordinary complete sentences found in more carefully edited textour grammar includes types lexical entries and grammar rules for dealing with such phenomenaperhaps the most important performance issue for broad coverage grammars is ambiguityat one point in the development of this grammar the average number of readings doubled in two months of workwe currently have two strategies for addressing this problem first we include a mechanism into the grammar rules that chooses leftbranching rules in cases of compounds genitive modification and 
conjuncts as we do not have enough lexicalsemantic information represented to choose the right dependencies in these cases4 secondly we use a mechanism for handcoding reading preferences among rules and lexical entries4consider for example genitive modification the semantic relationship between modifier and modifiee is dependent on their semantic properties toukyou no kaigi the meeting in tokyo watashi no hon my bookmore lexicalsemantic information is needed to choose the correct parse in more complex structures such as in watashi no toukyou no imooto my sister in tokyorestrictions like headcomplement preferred to headadjunct are quite obviousothers require domainspecific mechanisms that shall be subject of further workstochastic disambiguation methods being developed for the erg by the redwoods project at stanford university should be applicable to this grammar as wellthe grammar currently covers 934 of constructed examples for the banking domain and 782 of realistic email correspondence data concerning requests for documentsduring three months of work the coverage in the banking domain increased 4849the coverage of the document request data increased 5143 in the following two weekswe applied the grammar to unseen data in one of the covered domains namely the faq site of a japanese bankthe coverage was 61912 of the parses output were associated with all wellformed mrssthat means that we could get correct mrss in 5561 of all sentenceswe described a broad coverage japanese grammar based on hpsg theoryit encodes syntactic semantic and pragmatic informationthe grammar system is connected to a morphological analysis system and uses default entries for words unknown to the hpsg lexiconsome basic constructions of the japanese grammar were describedas the grammar is aimed at working in applications with realworld data performance and robustness issues are importantthe grammar is being developed in a multilingual context where much value is placed on parallel and consistent semantic representationsthe development of this grammar constitutes an important test of the crosslinguistic validity of the mrs formalismthe evaluation shows that the grammar is at a stage where domain adaptation is possible in a reasonable amount of timethus it is a powerful resource for linguistic applications for japanesein future work this grammar could be further adapted to another domain such as the edr newspaper corpus as each new domain is approached we anticipate that the adaptation will become easier as resources from earlier domains are reusedinitial evaluation of the grammar on new domains and the growth curve of grammar coverage should bear this out
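The default-entry mechanism described above, in which a word missing from the HPSG lexicon receives a generic lexical type triggered by the part-of-speech tag that ChaSen assigns to it, can be sketched roughly as follows. This is a minimal illustration only: the tag names, type names and the `Lexicon` class are invented for exposition and do not correspond to the grammar's actual TDL types or to ChaSen's real tag set.

```python
# Hypothetical sketch of the default-entry mechanism: if a word is absent
# from the hand-built HPSG lexicon, fall back to a generic lexical type
# selected by the part-of-speech tag that the morphological analyser
# (standing in for ChaSen here) assigns to it.

# Illustrative mapping from (invented) POS tags to (invented) default types.
DEFAULT_TYPES = {
    "noun-common": "ordinary-noun-lex",
    "noun-proper": "name-lex",
    "noun-verbal": "verbal-noun-lex",
    "adverb": "adverb-lex",
    "interjection": "interjection-lex",
}

class Lexicon:
    def __init__(self, hand_built_entries):
        # hand_built_entries: {word: lexical type} for words whose behaviour
        # goes beyond what the POS tag alone predicts.
        self.entries = dict(hand_built_entries)

    def lookup(self, word, pos_tag):
        """Return a lexical type: a hand-built entry if one exists,
        otherwise the default type triggered by the POS tag."""
        if word in self.entries:
            return self.entries[word]
        try:
            return DEFAULT_TYPES[pos_tag]
        except KeyError:
            raise ValueError(f"no default entry for POS tag {pos_tag!r}")

if __name__ == "__main__":
    lex = Lexicon({"benkyou": "verbal-noun-lex-with-special-info"})
    print(lex.lookup("benkyou", "noun-verbal"))   # hand-built entry wins
    print(lex.lookup("ginkou", "noun-common"))    # unknown word -> default type
```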
W02-1210
Efficient deep processing of Japanese. We present a broad-coverage Japanese grammar written in the HPSG formalism with MRS semantics. The grammar is created for use in real-world applications, such that robustness and performance issues play an important role. It is connected to a POS tagging and word segmentation tool. This grammar is being developed in a multilingual context requiring MRS structures that are easily comparable across languages. Our hand-crafted Japanese HPSG grammar, JACY, provides semantic information as well as linguistically motivated analysis of complex constructions.
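The shallow preprocessing step described in the grammar text above, which replaces dates, numbers, URLs, email addresses and currency expressions with placeholders that the grammar then parses via special placeholder lexical entries, might be approximated as in the following sketch. The regular expressions and placeholder names are invented and far cruder than a production tool; they only illustrate the replace-and-record idea.

```python
import re

# Hypothetical sketch of the shallow preprocessing tool: recognised spans are
# replaced by placeholder tokens, and a table records what each placeholder
# stood for so downstream components can recover the original expression.
PATTERNS = [
    (re.compile(r"https?://\S+"), "URL_PH"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "EMAIL_PH"),
    (re.compile(r"\b\d+(?:,\d{3})*(?:\.\d+)?\s*(?:yen|en|円)\b"), "CURRENCY_PH"),
    (re.compile(r"\b\d+\b"), "NUMBER_PH"),
]

def preprocess(text):
    """Return the text with recognised spans replaced by placeholders,
    plus a table recording what each placeholder replaced."""
    table = []
    for pattern, placeholder in PATTERNS:
        def _sub(match, placeholder=placeholder):
            table.append((placeholder, match.group(0)))
            return placeholder
        text = pattern.sub(_sub, text)
    return text, table

if __name__ == "__main__":
    s = "Send 10000 yen to info@example.com via https://bank.example.jp"
    print(preprocess(s))
```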
the grammar matrix an opensource starterkit for the rapid development of crosslinguistically consistent broadcoverage precision grammars the grammar matrix is an opensource starterkit for the development of broad by using a type hierarchy to represent crosslinguistic generalizations and providing compatibility with other opensource tools for grammar engineering evaluation parsing and generation it facilitates not only quick startup but also rapid growth towards the wide coverage necessary for robust natural language processing and the precision parses and semantic representations necessary for natural language understanding the past decade has seen the development of widecoverage implemented grammars representing deep linguistic analysis of several languages in several frameworks including headdriven phrase structure grammar lexicalfunctional grammar and lexicalized tree adjoining grammar in hpsg the most extensive grammars are those of english german and japanese despite being couched in the same general framework and in some cases being written in the same formalism and consequently being compatible with the same parsing and generation software these grammars were developed more or less independently of each otherthey each represent between 5 and 15 person years of research efforts and comprise 3570000 lines of codeunfortunately most of that research is undocumented and the accumulated analyses best practices for grammar engineering and tricks of the trade are only available through painstaking inspection of the grammars andor consultation with their authorsthis lack of documentation holds across frameworks with certain notable exceptions including alshawi muller and butt king nino segond grammars which have been under development for many years tend to be very difficult to mine for information as they contain layers upon layers of interacting analyses and decisions made in light of various intermediate stages of the grammaras a result when embarking on the creation of a new grammar for another language it seems almost easier to start from scratch than to try to model it on an existing grammarthis is unfortunatebeing able to leverage the knowledge and infrastructure embedded in existing grammars would greatly accelerate the process of developing new onesat the same time these grammars represent an untapped resource for the bottomup exploration of language universalsas part of the lingo consortiums multilingual grammar engineering effort we are developing a grammar matrix or starterkit distilling the wisdom of existing grammars and codifying and documenting it in a form that can be used as the basis for new grammarsin the following sections we outline the inventory of a first preliminary version of the grammar matrix discuss the interaction of basic construction types and semantic composition in unification grammars by means of a detailed example and consider extensions to the core inventory that we foresee and an evaluation methodology for the matrix properwe have produced a preliminary version of the grammar matrix relying heavily on the lingo projects english resource grammar and to a lesser extent on the japanese grammar developed jointly between dfki saarbrucken and yy technologies this early version of the matrix comprises the following comincluded with the matrix are configuration and parameter files for the lkb grammar engineering environment although small this preliminary version of the matrix already reflects the main goals of the project consistent with other work in hpsg 
semantic representations and in particular the syntaxsemantics interface are developed in detail the types of the matrix are each representations of generalizations across linguistic objects and across languages and the richness of the matrix and the incorporation of files which connect it with the lkb allow for extremely quick startup as the matrix is applied to new languagessince february 2002 this preliminary version of the matrix has been in use at two norwegian universities one working towards a broadcoverage reference implementation of norwegian the otherfor the time beingfocused on specific aspects of clause structure and lexical description in the first experiment with the matrix at ntnu basic norwegian sentences were parsing and producing reasonable semantics within two hours of downloading the matrix fileslinguistic coverage should scale up quickly since the foundation supplied by the matrix is designed not only to provide a quick start but also to support longterm development of broadcoverage grammarsboth initiatives have confirmed the utility of the matrix starter kit and already have contributed to a series of discussions on crosslingual hpsg design aspects specifically in the areas of argument structure representations in the lexicon and basic assumptions about constituent structure the user groups have suggested refinements and extensions of the basic inventory and it is expected that general solutions as they are identified jointly will propagate into the existing grammars tooas an example of the level of detail involved in the grammar matrix in this section we consider the analysis of intersective and scopal modificationthe matrix is built to give minimal recursion semantics representationsthe two english examples in exemplify the difference between intersective and scopal modification1 the mrss for are given in and the mrss are ordered tuples consisting of a top handle an instance or event variable a bag of elementary predications and a bag of scope constraints in a wellformed mrs the handles can be 1these examples also differ in that probably is a prehead modifier while on a spaceship is a posthead modifierthis wordorder distinction crosscuts the semantic distinction and our focus is on the latter so we will not consider the wordorder aspects of these examples here identified in one or more ways respecting the scope constraints such that the dependencies between the eps form a treefor a detailed description of mrs see the works cited abovehere we will focus on the difference between the intersective modifier on and the scopal modifier probablyin the ep contributed by on shares its handle with the ep contributed by the verb it is modifying as such the two will always have the same scope no quantifier can intervenefurther the second argument of the onrel is the event variable of the studyrelthe first argument e is the event variable of the onrel and the third argument z is the instance variable of the spaceshiprelin the ep contributed by the scopal modifier probably has its own handle which is not shared by anythingfurthermore it takes a handle rather than the event variable of the studyrel as its argument h8 is equal modulo quantifiers to the handle of the studyrel and h7 is equal modulo quantifiers to the argument of the prpstnrel the prpstnrel is the ep representing the illocutionary force of the whole expressionthis means that quantifiers associated with the nps keanu and kung fu can scope inside or outside probablywhile the details of modifier placement which parts of 
speech can modify which kinds of phrases etc differ across languages we believe that all languages display a distinction between scopal and intersective modificationaccordingly the types necessary for describing these two kinds of modification are included in the matrixthe types isectmodphrase and scopalmodphrase encode the information necessary to build up in a compositional manner the modifier portions of the mrss in and these types are embedded in the type hierarchy of the matrixthrough their supertype headmodphrsimple they inherit information common to many types of phrases including the basic feature geometry head feature and nonlocal feature passing and semantic compositionalitythese types also have subtypes in the matrix specifying the two wordorder possibilities giving a total of four subtypes2 the most important difference between these types is in the treatment of the handle of the head daughters semantics to distinguish intersective and scopal modificationin isectmodphrase the top handles of the head and nonhead daughters are identified this allows for mrss like where the eps contributed by the head and the modifier take the same scopethe type scopalmodphrase bears no such constraintthis allows for mrss like where the modifiers semantic contribution takes the handle of the heads semantics as its argument so that the modifier outscopes the headin both types of modifier phrase a constraint inherited from the supertype ensures that the handle of the modifier is also the handle of the whole phrasethe constraints on the local value inside the modifiers mod value regulate which lexical items can appear in which kind of phraseintersective modifiers specify lexically that they are mod and scopal modifiers specify lexically that they are mod 3 these constraints exemplify the kind of information that will be developed in the lexical hierarchy of the matrixit is characteristic of broadcoverage grammars that every particular analysis interacts with many other analysesmodularization is an ongoing concern both for maintainability of individual grammars and for providing the right level of abstraction in the matrixfor the same reasons we have only been able to touch on the highlights of the semantic analysis of modification here but hope that this quick tour will suffice to illustrate the extent of the jumpstart the matrix can give in the development of new grammarsthe initial version of the matrix while sufficient to support some useful grammar work will require substantial further development on several fronts including lexical representation syntactic generalization sociolinguistic variation processing issues and evaluationthis first version drew most heavily from the implementation of the english grammar with some further insights drawn from the grammar of japaneseextensions to the matrix will be based on careful study of existing implemented grammars for other languages notably german spanish and japanese as well as feedback from those using the first version of the matrixfor lexical representation one of the most urgent needs is to provide a languageindependent type hierarchy for the lexicon at least for major parts of speech establishing the mechanisms used for linking syntactic subcategorization to semantic predicateargument structurelexical rules provide a second mechanism for expressing generalizations within the lexicon and offer ready opportunities for crosslinguistic abstractions for both inflectional and derivational regularitieswork is also progressing on establishing a standard 
relational database for storing information for the lexical entries themselves improving both scalability and clarity compared to the current simple text file representationformbased tools will be provided both for constructing lexical entries and for viewing the contents of the lexiconthe primary focus of work on syntactic generalization in the matrix is to support more freedom in word order for both complements and modifiersthe first step will be a relatively conservative extension along the lines of netter allowing the grammar writer more control over how a head combines with complements of different types and their interleaving with modifier phrasesother areas of immediate crosslinguistic interest include the hierarchy of head types control phenomena clitics auxiliary verbs nounnoun compounds and more generally phenomena that involve the wordphrase distinction such as noun incorporationa study of the existing grammars for english german japanese and spanish reveals a high degree of languagespecificity for several of these phenomena but also suggests promise of reusable abstractionsseveral kinds of sociolinguistic variation require extensions to the matrix including grammaticized aspects of pragmatics such as politeness and empathy as well as dialect and register alternationsthe grammar of japanese provides a starting point for representations of both empathy and politenessimplementations of familiar vs formal verb forms in german and spanish provide further instances of politeness to help build the crosslinguistic abstractionsextensions for dialect variation will build on some exploratory work in adapting the english grammar to support american british and australian regionalisms both lexical and syntactic while restricting dialect mixture in generation and associated spurious ambiguity in parsingwhile the development of the matrix will be built largely on the lkb platform support will also be needed for using the emerging grammars on other processing platforms and for linking to other packages for preprocessing the linguistic inputseveral other platforms exist which can efficiently parse text using the existing grammars including the pet system developed in c at saarland university and the dfki the page system developed in lisp at the dfki the lilfes system developed at tokyo university and a parallel processing system developed in objective c at delft university as part of the matrix package sample configuration files and documentation will be provided for at least some of these additional platformsexisting preprocessing packages can also significantly reduce the effort required to develop a new grammar particularly for coping with the morphologysyntax interfacefor example the chasen package for segmenting japanese input into words and morphemes has been linked to at least the lkb and pet systemssupport for connecting implementations of languagespecific preprocessing packages of this kind will be preserved and extended as the matrix developslikewise configuration files are included to support generation at least within the lkb provided that the grammar conforms to certain assumptions about semantic representation using the minimal recursion semantics frameworkfinally a methodology is under development for constructing and using test suites organized around a typology of linguistic phenomena using the implementation platform of the incr tsdbo profiling package these test suites will enable better communication about current coverage of a given grammar built using the matrix and serve 
as the basis for identifying additional phenomena that need to be addressed crosslinguistically within the matrixof course the development of the typology of phenomena is itself a major undertaking for which a systematic crosslinguistic approach will be needed a discussion of which is outside the scope of this reportbut the intent is to seed this classification scheme with a set of relatively coarsegrained phenomenon classes drawn from the existing grammars then refine the typology as it is applied to these and new grammars built using the matrixone important part of the matrix package will be a library of phenomenonbased analyses drawn from the existing grammars and over time from users of the matrix to provide working examples of how the matrix can be applied and extendedeach case study will be a set of grammar files simplified for relevance along with documentation of the analysis and a test suite of sample sentences which define the range of data covered by the analysisthis library too will be organized around the typology of phenomena introduced above but will also make explicit reference to language families since both similarities and differences among related languages will be of interest in these case studiesexamples to be included in the first release of this library include numeral classifiers in japanese subject pro drop in spanish partialvp fronting in german and verb diathesis in norwegianthe matrix itself is not a grammar but a collection of generalizations across grammarsas such it cannot be tested directly on corpora from particular languages and we must find other means of evaluationwe envision overall evaluation of the matrix based on case studies of its performance in helping grammar engineers quickly start new grammars and in helping them scale those grammars upevaluation in detail will based on automatable deletionsubstitution metrics ie tools that determine which types from the matrix get used as is which get used with modifications and which get ignored in various matrixderived grammarsfurthermore if the matrix evolves to include defeasible constraints these tools will check which constraints get overridden and whether the value chosen is indeed common enough to be motivated as a default valuethis evaluation in detail should be paired with feedback from the grammar engineers to determine why changes were madethe main goal of evaluation is of course to improve the matrix over timethis raises the question of how to propagate changes in the matrix to grammars based on earlier versionsthe following three strategies seem promising segregate changes that are important to sync to develop a methodology for communicating changes in the matrix their motivation and their implementation to the user community and develop tools for semiautomating resynching of existing grammars to upgrades of the matrixthese tools could use the type hierarchy to predict where conflicts are likely to arise and bring these to the engineers attention possibly inspired by the approach under development at csli for the dynamic maintenance of the lingo redwoods treebank finally while initial development of the matrix has been and will continue to be highly centralized we hope to provide support for proposed matrix improvements from the user communityuser feedback will already come in the form of case studies for the library as discussed in section 5 above but also potentially in proposals for modification of the matrix drawing on experiences in grammar developmentin order to provide users with some 
crosslinguistic context in which to develop and evaluate such proposals themselves we intend to provide some sample matrixderived grammars and corresponding testsuites with the matrixa user could thus make a proposed change to the matrix run the testsuites for several languages using the supplied grammars which draw from that changed matrix and use incr tsdbo to determine which phenomena have been affected by the changeit is clear that full automation of this evaluation process will be difficult but at least some classes of changes to the matrix will permit this kind of quick crosslinguistic feedback to users with only a modest amount of additional infrastructurethis project carries linguistic computational and practical interestthe linguistic interest lies in the hpsg communitys general bottomup approach to language universals which involves aiming for good coverage of a variety of languages first and leaving the task of what they have in common for laternow that we have implementations with fairly extensive coverage for a somewhat typologically diverse set of languages it is a good time to take the next step in this program working to extract and generalize what is similar across these existing widecoverage grammarsmoreover the central role of types in the representation of linguistic generalizations enables the kind of underspecification which is useful for expressing what is common among related languages while allowing for the further specialization which necessarily distinguishes one language from anotherthe computational interest is threefoldfirst there is the question of what formal devices the grammar matrix will requireshould it include defaultswhat about domain union the selection and deployment of formal devices should be informed by ongoing research on processing schemes and here the crosslinguistic perspective can be particularly helpfulwhere there are several equivalent analyses of the same linguistic phenomena the choice of analysis can have processing implications that are not necessarily apparent in a single grammarsecond having a set ofwidecoverage hpsgs with fairly standardized fundamentals could prove interesting for research on stochastic processing and disambiguation especially if the languages differ in gross typological features such as word orderfinally there are also computational issues involved in how the grammar matrix would evolve over time as it is used in new grammarsthe matrix enables the developer of a grammar for a new language to get a quick start on producing a system that parses and generates with nontrivial semantics while also building the foundation for a widecoverage grammar of the languagebut the matrix itself may well change in parallel with the development of the grammar for a particular language so appropriate mechanisms must be developed to support the merging of enhancements to boththere is also practical industrial benefit to this projectcompanies that are consumers of these grammars benefit when grammars of multiple languages work with the same parsing and generation algorithms and produce standardized semantic representations derived from a rich linguistically motivated syntaxsemantics interfacemore importantly the grammar matrix will help to remove one of the primary remaining obstacles to commercial deployment of grammars of this type and indeed of the commercial use of deep linguistic analysis the immense cost of developing the resourcesince the grammar matrix draws on prior research and existing grammars it necessarily reflects 
contributions from many peoplerob malouf jeff smith john beavers and kathryn campbellkibler have contributed to the lingo erg melanie siegel is the original developer for the japanese grammartim baldwin ann copestake ivan sag tom wasow and other members of the lingo laboratory at csli have had a great deal of influence on the design of the grammatical analyses and corresponding mrs representationswarmest thanks to lars hellan and his colleagues at ntnu and jan tore lønning and his students at oslo university for their cooperation patience and tolerance
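The contrast between intersective and scopal modification drawn above can be made concrete with a small sketch of the two MRS fragments for "study on a spaceship" and "probably studies". The encoding below is a deliberately simplified, hypothetical rendering in Python — flat dictionaries rather than typed feature structures — but it preserves the two properties the text highlights: the intersective modifier shares the head's handle and takes its event variable, while the scopal modifier has its own handle and takes a handle argument constrained to the head's label.

```python
# Minimal, simplified rendering of the two MRS fragments discussed above.
# Elementary predications (EPs) are dicts with a handle ("lbl") and arguments;
# the flat encoding is invented for exposition, not the Matrix's own notation.

def intersective_mrs():
    # "... study on a spaceship": the modifier's EP shares its handle with the
    # verb's EP (same scope) and takes the verb's event variable as ARG2.
    study = {"pred": "study_rel", "lbl": "h4", "arg0": "e2"}
    on    = {"pred": "on_rel",    "lbl": "h4", "arg1": "e5",
             "arg2": "e2", "arg3": "x6"}             # x6 = the spaceship
    return [study, on]

def scopal_mrs():
    # "... probably studies": the modifier's EP has its own handle and takes a
    # handle argument equated (modulo quantifiers) with the verb's label.
    study    = {"pred": "study_rel",    "lbl": "h9", "arg0": "e2"}
    probably = {"pred": "probably_rel", "lbl": "h5", "arg1": "h8"}
    qeq      = {"harg": "h8", "larg": "h9"}          # h8 =q h9 scope constraint
    return [study, probably], [qeq]

def same_scope(eps, pred_a, pred_b):
    """True if the two named EPs carry the same label (intersective pattern)."""
    lbl = {ep["pred"]: ep["lbl"] for ep in eps}
    return lbl[pred_a] == lbl[pred_b]

if __name__ == "__main__":
    print(same_scope(intersective_mrs(), "study_rel", "on_rel"))   # True
    eps, hcons = scopal_mrs()
    print(same_scope(eps, "study_rel", "probably_rel"))            # False
```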
W02-1502
The Grammar Matrix: an open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. The Grammar Matrix is an open-source starter-kit for the development of broad-coverage HPSGs. By using a type hierarchy to represent cross-linguistic generalizations and providing compatibility with other open-source tools for grammar engineering, evaluation, parsing and generation, it facilitates not only quick start-up but also rapid growth towards the wide coverage necessary for robust natural language processing and the precision parses and semantic representations necessary for natural language understanding. Our LinGO Grammar Matrix project is both a repository of reusable linguistic knowledge and a method of delivering this knowledge to a user in the form of an extensible precision implemented grammar.
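The fragment of the Matrix type hierarchy discussed above — head-mod-phrase-simple with intersective and scopal subtypes, each further split by modifier position — can be illustrated with a toy inheritance lattice. The constraint encoding and the subtype names with -pre/-post suffixes are assumptions made for exposition; the real Matrix states these constraints in TDL.

```python
# Toy rendering of the fragment of the Matrix type hierarchy discussed above.
# Constraint names and the -pre/-post subtype names are invented placeholders.
HIERARCHY = {
    "head-mod-phrase-simple": {"parent": None,
        "constraints": {"phrase-handle": "modifier-handle"}},
    "isect-mod-phrase": {"parent": "head-mod-phrase-simple",
        "constraints": {"head-handle": "modifier-handle"}},   # same scope
    "scopal-mod-phrase": {"parent": "head-mod-phrase-simple",
        "constraints": {"modifier-arg": "head-handle"}},      # takes handle arg
    "isect-mod-phrase-pre":   {"parent": "isect-mod-phrase",  "constraints": {}},
    "isect-mod-phrase-post":  {"parent": "isect-mod-phrase",  "constraints": {}},
    "scopal-mod-phrase-pre":  {"parent": "scopal-mod-phrase", "constraints": {}},
    "scopal-mod-phrase-post": {"parent": "scopal-mod-phrase", "constraints": {}},
}

def inherited_constraints(type_name):
    """Collect constraints from the type and all of its supertypes."""
    constraints = {}
    while type_name is not None:
        node = HIERARCHY[type_name]
        # supertype constraints are merged in first, so a subtype could refine them
        constraints = {**node["constraints"], **constraints}
        type_name = node["parent"]
    return constraints

if __name__ == "__main__":
    print(inherited_constraints("isect-mod-phrase-post"))
    print(inherited_constraints("scopal-mod-phrase-pre"))
```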
the parallel grammar project we report on the parallel grammar project which uses the xle parser and grammar development platform for six languages english largescale grammar development platforms are expensive and time consuming to produceas such a desideratum for the platforms is a broad utilization scopea grammar development platform should be able to be used to write grammars for a wide variety of languages and a broad range of purposesin this paper we report on the parallel grammar project which uses the xle parser and grammar development platform for six languages english french german japanese norwegian and urduall of the grammars use the lexicalfunctional grammar formalism which produces cstructures and fstructures as the syntactic analysislfg assumes a version of chomskys universal grammar hypothesis namely that all languages are structured by similar underlying principleswithin lfg fstructures are meant to encode a language universal level of analysis allowing for crosslinguistic parallelism at this level of abstractionalthough the construction of cstructures is governed by general wellformedness principles this level of analysis encodes language particular differences in linear word order surface morphological vs syntactic structures and constituencythe pargram project aims to test the lfg formalism for its universality and coverage limitations and to see how far parallelism can be maintained across languageswhere possible the analyses produced by the grammars for similar constructions in each language are parallelthis has the computational advantage that the grammars can be used in similar applications and that machine translation can be simplifiedthe results of the project to date are encouragingdespite differences between the languages involved and the aims and backgrounds of the project groups the pargram grammars achieve a high level of parallelismthis parallelism applies to the syntactic analyses produced as well as to grammar development itself the sharing of templates and feature declarations the utilization of common techniques and the transfer of knowledge and technology from one grammar to anotherthe ability to bundle grammar writing techniques such as templates into transferable technology means that new grammars can be bootstrapped in a relatively short amount of timethere are a number of other largescale grammar projects in existence which we mention briefly herethe lsgram project funded by the eucommission under lre was concerned with the development of grammatical resources for nine european languages danish dutch english french german greek italian portuguese and spanishthe project started in january 1994 and ended in july 1996development of grammatical resources was carried out in the framework of the advanced language engineering platform the coverage of the grammars implemented in lsgram was however much smaller than the coverage of the english or german grammar in pargraman effort which is closer in spirit to pargram is the implemention of grammar development platforms for hpsgin the verbmobil project hpsg grammars for english german and japanese were developed on two platforms lkb and pagethe page system developed and maintained in the language technology lab of the german national research center on artificial intelligence dfki gmbh is an advanced nlp core engine that facilitates the development of grammatical and lexical resources building on typed feature logicsto evaluate the hpsg platforms and to compare their merits with those of xle and the pargram 
projects one would have to organize a special workshop particularly as the hpsg grammars in verbmobil were written for spoken language characterized by short utterances whereas the lfg grammars were developed for parsing technical manuals andor newspaper textsthere are some indications that the german and english grammars in pargram exceed the hpsg grammars in coverage on the german hpsg grammarthis paper is organized as followswe first provide a history of the projectthen we discuss how parallelism is maintained in the projectfinally we provide a summary and discussionthe pargram project began in 1994 with three languages english french and germanthe grammar writers worked closely together to solidify the grammatical analyses and conventionsin addition as xle was still in development its abilities grew as the size of the grammars and their needs grewafter the initial stage of the project more languages were addedbecause japanese is typologically very different from the initial three european languages of the project it represented a challenging casedespite this typological challenge the japanese grammar has achieved broad coverage and high performance within a year and a halfthe south asian language urdu also provides a widely spoken typologically distinct languagealthough it is of indoeuropean origin it shares many characteristics with japanese such as verbfinality relatively free word order complex predicates and the ability to drop any argument norwegian assumes a typological middle position between german and english sharing different properties with each of themboth the urdu and the norwegian grammars are still relatively smalleach grammar project has different goals and each site employs grammar writers with different backgrounds and skillsthe english german and japanese projects have pursued the goal of having broad coverage industrial grammarsthe norwegian and urdu grammars are smaller scale but are experimenting with incorporating different kinds of information into the grammarthe norwegian grammar includes a semantic projection their analyses produce not only c and fstructures but also semantic structuresthe urdu grammar has implemented a level of argument structure and is testing various theoretical linguistic ideashowever even when the grammars are used for different purposes and have different additional features they have maintained their basic parallelism in analysis and have profited from the shared grammar writing techniques and technologytable shows the size of the grammarsthe first figure is the number of lefthand side categories in phrasestructure rules which compile into a collection of finitestate machines with the listed number of states and arcsmaintaining parallelism in grammars being developed at different sites on typologically distinct languages by grammar writers from different linguistic traditions has proven successfulat project meetings held twice a year analyses of sample sentences are compared and any differences are discussed the goal is to determine whether the differences are justified or whether the analyses should be changed to maintain parallelismin addition all of the fstructure features and their values are compared this not only ensures that trivial differences in naming conventions do not arise but also gives an overview of the constructions each language covers and how they are analyzedall changes are implemented before the next project meetingeach meeting also involves discussion of constructions whose analysis has not yet been settled on eg 
the analysis of partitives or proper namesif an analysis is agreed upon all the grammars implement it if only a tentative analysis is found one grammar implements it and reports on its successfor extremely complicated or fundamental issues eg how to represent predicate alternations subcommittees examine the issue and report on it at the next meetingthe discussion of such issues may be reopened at successive meetings until a concensus is reachedeven within a given linguistic formalism lfg for pargram there is usually more than one way to analyze a constructionmoreover the same theoretical analysis may have different possible implementations in xlethese solutions often differ in efficiency or conceptual simplicity and one of the tasks within the pargram project is to make design decisions which favor one theoretical analysis and concomitant implementation over anotherwhenever possible the pargram grammars choose the same analysis and the same technical solution for equivalent constructionsthis was done for example with imperativesimperatives are always assigned a null pronominal subject within the fstructure and a feature indicating that they are imperatives as in another example of this type comes from the analysis of specifiersspecifiers include many different types of information and hence can be analyzed in a number of waysin the pargram analysis the cstructure analysis is left relatively free according to language particular needs and slightly varying theoretical assumptionsfor instance the norwegian grammar unlike the other grammars implements the principles in concerning the relationship between an x based cstructure and the fstructurethis allows norwegian specifiers to be analyzed as functional heads of dps etc whereas they are constituents of nps in the other grammarshowever at the level of fstructure this information is part of a complex spec feature in all the grammarsthus parallelism is maintained at the level of fstructure even across different theoretical preferencesan example is shown in for norwegian and english in which the spec consists of a quant and a poss a alle mine hester all my horses all my horses interrogatives provide an interesting example because they differ significantly in the cstructures of the languages but have the same basic fstructurethis contrast can be seen between the german example in and the urdu one in in german the interrogative word is in first position with the finite verb second english and norwegian pattern like germanin urdu the verb is usually in final position but the interrogative can appear in a number of positions including following the verb despite these differences in word order and hence in cstructure the fstructures are parallel with the interrogative being in a focusint and the sentence having an interrogative stmttype as in in the project grammars many basic constructions are of this typehowever as we will see in the next section there are times when parallelism is not possible and not desirableeven in these cases though the grammars which can be parallel are so three of the languages might have one analysis while three have anotherparallelism is not maintained at the cost of misrepresenting the languagethis is reflected by the fact that the cstructures are not parallel because word order varies widely from language to language although there are naming conventions for the nodesinstead the bulk of the parallelism is in the fstructurehowever even in the fstructure situations arise in which what seems to be the same construction in 
different languages do not have the same analysisan example of this is predicate adjectives as in it top red it is red in english the copular verb is considered the syntactic head of the clause with the pronoun being the subject and the predicate adjective being an xcomphowever in japanese the adjective is the mainpredicate with the pronoun being the subjectas such these receive the nonparallel analyses seen in for japanese and for englishanother situation that arises is when a feature or construction is syntactically encoded in one language but not anotherin such cases the information is only encoded in the languages that need itthe equivalence captured by parallel analyses is not for example translational equivalencerather parallelism involves equivalence with respect to grammatical properties eg construction typesone consequence of this is that a typologically consistent use of grammatical terms embodied in the feature names is enforcedfor example even though there is a tradition for referring to the distinction between the pronouns he and she as a gender distinction in english this is a different distinction from the one called gender in languages like german french urdu and norwegian where gender refers to nominal agreement classesparallelism leads to the situation where the feature gend occurs in german french urdu and norwegian but not in english and japanesethat is parallelism does not mean finding the same features in all languages but rather using the same features in the same way in all languages to the extent that they are justified therea french example of grammatical gender is shown in note that determiner adjective and participle agreement is dependent on the gender of the nounthe fstructure for the nouns crayon and plume are as in with an overt gend feature ale petit crayon est casse them littlem pencilm is brokenm the little pencil is broken bla petite plume est cassee thef littlef penf is brokenf the little pen is broken fstructures for the equivalent words in english and japanese will not have a gend featurea similar example comes from japanese discourse particlesit is wellknown that japanese has syntactic encodings for information such as honorificationthe verb in the japanese sentence encodes information that the subject is respected while the verb in shows politeness from the writer to the reader of the sentencethe fstructures for the verbs in are as in a final example comes from english progressives as in in order to distinguish these two forms the english grammar uses a prog feature within the tenseaspect system shows the fstructure for however this distinction is not found in the other languagesfor example is used to express both and in german a er weinte he cried he cried as seen in the german fstructure is left underspecified for prog because there is no syntactic reflex of itif such a feature were posited rampant ambiguity would be introduced for all past tense forms in germaninstead the semantics will determine whether such forms are progressiveanother type of situation arises when one language provides evidence for a certain feature space or type of analysis that is neither explicitly mirrored nor explicitly contradicted by another languagein theoretical linguistics it is commonly acknowledged that what one language codes overtly may be harder to detect for another languagethis situation has arisen in the pargram projectcase features fall under this topicgerman japanese and urdu mark nps with overt case morphologyin comparison english french and norwegian make 
relatively little use of case except as part of the pronominal systemnevertheless the fstructure analyses for all the languages contain a case feature in the specification of noun phrasesthis overspecification of information expresses deeper linguistic generalizations and keeps the fstructural analyses as parallel as possiblein addition the features can be put to use for the isolated phenomena in which they do play a rolefor example english does not mark animacy grammatically in most situationshowever providing a anim feature to known animates such as peoples names and pronouns allows the grammar to encode information that is relevant for interpretationconsider the relative pronoun who in a the girlanim whoanim left b the boxanim whoanim left the relative pronoun has a anim feature that is assigned to the noun it modifies by the relative clause rulesas such a noun modified by a relative clause headed by who is interpreted as animatein the case of canonical inanimates as in this will result in a pragmatically odd interpretation which is encoded in the fstructureteasing apart these different phenomena crosslinguistically poses a challenge that the pargram members are continually engaged inas such we have developed several methods to help maintain parallelismthe parallelism among the grammars is maintained in a number of waysmost of the work is done during two weeklong project meetings held each yearthree main activities occur during these meetings comparison of sample fstructures comparison of features and their values and discussions of new or problematic constructionsa month before each meeting the host site chooses around fifteen sentences whose analysis is to be compared at the meetingthese can be a random selection or be thematic eg all dealing with predicatives or with interrogativesthe sentences are then parsed by each grammar and the output is comparedfor the more recent grammars this may mean adding the relevant rules to the grammars resulting in growth of the grammar for the older grammars this may mean updating a construction that has not been examined in many yearsanother approach that was taken at the beginning of the project was to have a common corpus of about 1000 sentences that all of the grammars were to parsefor the english french and german grammars this was an aligned tractor manualthe corpus sentences were used for the initial fstructure comparisonshaving a common corpus ensured that the grammars would have roughly the same coveragefor example they all parsed declarative and imperative sentenceshowever the nature of the corpus can leave major gaps in coverage in this case the manual contained no interrogativesthe xle platform requires that a grammar declare all the features it uses and their possible valuespart of the urdu feature table is shown in as seen in for quant attributes which take other attributes as their values must also be declaredan example of such a feature was seen in for spec which takes quant and poss features among others as its values prontype pers poss null proper date location name title psem locational directional ptype sem nosem quantform the feature declarations of all of the languages are compared feature by feature to ensure parallelismthe most obvious use of this is to ensure that the grammars encode the same features in the same wayfor example at a basic level one feature declaration might have specified gen for gender while the others had chosen the name gend this divergence in naming is regularizedmore interesting cases arise when one 
language uses a feature and another does not for analyzing the same phenomenawhen this is noticed via the featuretable comparison it is determined why one grammar needs the feature and the other does not and thus it may be possible to eliminate the feature in one grammar or to add it to anotheron a deeper level the feature comparison is useful for conducting a survey of what constructions each grammar has and how they are implementedfor example if a language does not have an adegree feature the question will arise as to whether the grammar analyzes comparative and superlative adjectivesif they do not then they should be added and should use the adegree feature if they do then the question arises as to why they do not have this feature as part of their analysisfinally there is the discussion of problematic constructionsthese may be constructions that already have analyses which had been agreed upon in the past but which are not working properly now that more data has been consideredmore frequently they are new constructions that one of the grammars is considering addingpossible analyses for the construction are discussed and then one of the grammars will incorporate the analysis to see whether it worksif the analysis works then the other grammars will incorporate the analysisconstructions that have been discussed in past pargram meetings include predicative adjectives quantifiers partitives and cleftseven if not all of the languages have the construction in question as was the case with clefts the grammar writers for that language may have interesting ideas on how to analyze itthese group discussions have proven particularly useful in extending grammar coverage in a parallel fashiononce a consensus is reached it is the responsibility of each grammar to make sure that its analyses match the new standardas such after each meeting the grammar writers will rename features change analyses and implement new constructions into their grammarsmost of the basic work has now been accomplishedhowever as the grammars expand coverage more constructions need to be integrated into the grammars and these constructions tend to be ones for which there is no standard analysis in the linguistic literature so differences can easily arise in these areasthe experiences of the pargram grammar writers has shown that the parallelism of analysis and implementation in the pargram project aids further grammar development effortsmany of the basic decisions about analyses and formalism have already been made in the projectthus the grammar writer for a new language can use existing technology to bootstrap a grammar for the new language and can parse equivalent constructions in the existing languages to see how to analyze a constructionthis allows the grammar writer to focus on more difficult constructions not yet encountered in the existing grammarsconsider first the japanese grammar which was started in the beginning of 2001at the initial stage the work of grammar development involved implementing the basic constructions already analyzed in the other grammarsit was found that the grammar writing techniques and guidelines to maintain parallelism shared in the pargram project could be efficiently applied to the japanese grammarduring the next stage lfg rules needed for grammatical issues specific to japanese have been gradually incorporated and at the same time the biannual pargram meetings have helped significantly to keep the grammars parallelgiven this system in a year and a half using two grammar writers the japanese 
grammar has attained coverage of 99 for 500 sentences of a copier manual and 95 for 10000 sentences of an ecrm corpusnext consider the norwegian grammar which joined the pargram group in 1999 and also emphasized slightly different goals from the other groupsrather than prioritizing large textual coverage from the outset the norwegian group gave priority to the development of a core grammar covering all major construction types in a principled way based on the proposals in and the inclusion of a semantic projection in addition to the fstructurein addition time was spent on improving existing lexical resources and adapting them to the xle formatroughly two manyears has been spent on the grammar itselfthe pargram cooperation on parallelism has ensured that the derived fstructures are interesting in a multilingual context and the grammar will now serve as a basis for grammar development in other closely related scandinavian languagesthus the pargram project has shown that it is possible to use a single grammar development platform and a unified methodology of grammar writing to develop largescale grammars for typologically different languagesthe grammars analyses show a large degree of parallelism despite being developed at different sitesthis is achieved by intensive meetings twice a yearthe parallelism can be exploited in applications using the grammars the fewer the differences the simpler a multilingual application can be on a machinetranslation prototype using pargram
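The feature-by-feature comparison of declarations described above, which surfaces naming divergences such as GEN versus GEND and features that one grammar encodes but another does not, is easy to mechanize in outline. The feature tables below are tiny invented extracts, not the projects' real declarations, and the diff shown is only a sketch of the idea.

```python
# Illustrative sketch of the feature-table comparison: each grammar declares
# its features and possible values, and a pairwise diff exposes features that
# only one language encodes (e.g. GEND absent in English, PROG absent in
# German) as well as shared features with diverging value sets.
FEATURES = {
    "german":  {"GEND": {"masc", "fem", "neut"},
                "CASE": {"nom", "acc", "dat", "gen"},
                "STMT-TYPE": {"decl", "int", "imp"}},
    "english": {"CASE": {"nom", "obl"},
                "PROG": {"+", "-"},
                "STMT-TYPE": {"decl", "int", "imp"}},
}

def compare(grammar_a, grammar_b):
    a, b = FEATURES[grammar_a], FEATURES[grammar_b]
    only_a = sorted(set(a) - set(b))
    only_b = sorted(set(b) - set(a))
    value_mismatch = sorted(f for f in set(a) & set(b) if a[f] != b[f])
    return {"only_in_" + grammar_a: only_a,
            "only_in_" + grammar_b: only_b,
            "shared_but_different_values": value_mismatch}

if __name__ == "__main__":
    from pprint import pprint
    pprint(compare("german", "english"))
```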
W02-1503
The Parallel Grammar Project. We report on the Parallel Grammar project, which uses the XLE parser and grammar development platform for six languages: English, French, German, Japanese, Norwegian and Urdu. The ParGram English LFG is a hand-crafted broad-coverage grammar developed with the XLE platform.
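As a final illustration of the parallelism at f-structure level discussed above, the sketch below renders German and Urdu interrogative f-structures as nested dictionaries: the c-structures differ, but both place the question word under FOCUS-INT and mark STMT-TYPE as interrogative. The particular predicates and words are stand-ins chosen for illustration, not the project's published examples.

```python
# Hypothetical, simplified f-structures for a German and an Urdu
# interrogative.  Word order (and hence c-structure) differs across the two
# languages, but the f-structural attributes are parallel.
german_fstr = {
    "PRED": "sehen<SUBJ,OBJ>",
    "FOCUS-INT": {"PRED": "was"},      # 'what', clause-initial in German
    "SUBJ": {"PRED": "pro"},
    "STMT-TYPE": "interrogative",
}

urdu_fstr = {
    "PRED": "dekhnaa<SUBJ,OBJ>",
    "FOCUS-INT": {"PRED": "kyaa"},     # 'what', may follow the verb in Urdu
    "SUBJ": {"PRED": "pro"},
    "STMT-TYPE": "interrogative",
}

def parallel(f1, f2, attributes):
    """Check that the two f-structures agree on which attributes are present
    (the values, e.g. language-specific PREDs, are allowed to differ)."""
    return all((attr in f1) == (attr in f2) for attr in attributes)

if __name__ == "__main__":
    print(parallel(german_fstr, urdu_fstr,
                   ["FOCUS-INT", "STMT-TYPE", "SUBJ", "GEND"]))  # True
```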
japanese dependency analysis using cascaded chunking in this paper we propose a new statistical japanese dependency parser using a cascaded chunking model conventional japanese statistical dependency parsers are mainly based on a probabilistic model which is not always efficient or scalable we propose a new method that is simple and efficient since it parses a sentence deterministically only deciding whether the current segment modifies the segment on its immediate right hand side experiments using the kyoto university corpus show that the method outperforms previous systems as well as improves the parsing and training efficiency dependency analysis has been recognized as a basic process in japanese sentence analysis and a number of studies have been proposedjapanese dependency structure is usually defined in terms of the relationship between phrasal units called bunsetsu segments most of the previous statistical approaches for japanese dependency analysis are based on a probabilistic model consisting of the following two stepsfirst they estimate modification probabilities in other words how probable one segment tends to modify anothersecond the optimal combination of dependencies is searched from the all candidates dependenciessuch a probabilistic model is not always efficient since it needs to calculate the probabilities for all possible dependencies and creates n2 training examples per sentencein addition the probabilistic model assumes that each pairs of dependency structure is independentin this paper we propose a new japanese dependency parser which is more efficient and simpler than the probabilistic model yet performs better in training and testing on the kyoto university corpusthe method parses a sentence deterministically only deciding whether the current segment modifies segment on its immediate right hand sidemoreover it does not assume the independence constraint between dependenciesthis section describes the general formulation of the probabilistic model for parsing which has been applied to japanese statistical dependency analysisfirst of all we define a sentence as a sequence of segments b and its syntactic structure as a sequence of dependency patterns d dep dep where dep j means that the segment bi depends on segment bjin this framework we assume that the dependency sequence d satisfies the following two constraintsstatistical dependency analysis is defined as a searching problem for the dependency pattern d that maximizes the conditional probability p of the input sequence under the abovementioned constraintsif we assume that the dependency probabilities are mutually independent p can be rewritten as modifies bj fzj is an n dimensional feature vector that represents various kinds of linguistic features related to the segments bz and bjwe obtain dbest argmaxd p taking into all the combination of these probabilitiesgenerally the optimal solution dbest can be identified by using bottomup parsing algorithm such as cyk algorithmthe problem in the dependency structure analysis is how to estimate the dependency probabilities accuratelya number of statistical and machine learning approaches such as maximum likelihood estimation decision trees maximum entropy models and support vector machines have been applied to estimate these probabilitiesin order to apply a machine learning algorithm to dependency analysis we have to prepare the positive and negative examplesusually in a probabilistic model all possible pairs of segments that are in a dependency relation are used as positive 
examples and two segments that appear in a sentence but are not in a dependency relation are used as negative examplesthus a total of n2 training examples must be produced per sentencein the probabilistic model we have to estimate the probabilities of each dependency relationhowever some machine learning algorithms such as svms cannot estimate these probabilities directlykudo and matsumoto used the sigmoid function to obtain pseudo probabilities in svmshowever there is no theoretical endorsement for this heuristicsmoreover the probabilistic model is not good in its scalability since it usually requires a total of n2 training examples per sentenceit will be hard to combine the probabilistic model with some machine learning algorithms such as svms which require a polynomial computational cost on the number of given training examplesin this paper we introduce a new method for japanese dependency analysis which does not require the probabilities of dependencies and parses a sentence deterministicallythe proposed method can be combined with any type of machine learning algorithm that has classification abilitythe original idea of our method stems from the cascaded chucking method which has been applied in english parsing let us introduce the basic framework of the cascaded chunking parsing method we apply this cascaded chunking parsing technique to japanese dependency analysissince japanese is a headfinal language and the chunking can be regarded as the creation of a dependency between two segments we can simplify the process of japanese dependency analysis as follows figure 1 shows an example of the parsing process with the cascaded chunking modelthe input for the model is the linguistic features related to the modifier and modifiee and the output from the model is either of the tags in training the model simulates the parsing algorithm by consulting the correct answer from the training annotated corpusduring the training positive and negative examples are collectedin testing the model consults the trained system and parses the input with the cascaded chunking algorithmwe think this proposed cascaded chunking model has the following advantages compared with the traditional probabilistic modelsif we use the cyk algorithm the probabilistic model requires o parsing time on the other hand the cascaded chunking model requires o in the worst case when all segments modify the rightmost segmentthe actual parsing time is usually lower than o since most of segments modify segment on its immediate right hand sidefurthermore in the cascaded chunking model the training examples are extracted using the parsing algorithm itselfthe training examples required for the cascaded chunking model is much smaller than that for the probabilistic modelthe model reduces the training cost significantly and enables training using larger amounts of annotated corpus no assumption on the independence between dependency relations the probabilistic model assumes that dependency relations are independenthowever there are some cases in which one cannot parse a sentence correctly with this assumptionfor example coordinate structures cannot be always parsed with the independence constraintthe cascaded chunking model parses and estimates relations simultaneouslythis means that one can use all dependency relations which have narrower scope than that of the current focusing relation being considered as feature setswe describe the details in the next sectionthe cascaded chunking model can be combined with any machine learning 
algorithm that works as a binary classifier since the cascaded chunking model parses a sentence deterministically only deciding whether or not the current segment modifies the segment on its immediate right hand sideprobabilities of dependencies are not always necessary for the cascaded chunking modellinguistic features that are supposed to be effective in japanese dependency analysis are head words and their partsofspeech tags functional words and inflection forms of the words that appear at the end of segments distance between two segments existence of punctuation marksas those are solely defined by the pair of segments we refer to them as the static featuresjapanese dependency relations are heavily constrained by such static features since the inflection forms and postpositional particles constrain the dependency relationhowever when a sentence is long and there are more than one possible dependency static features by themselves cannot determine the correct dependencyto cope with this problem kudo and matsumoto introduced a new type of features called dynamic features which are created dynamically during the parsing processfor example if some relation is determined this modification relation may have some influence on other dependency relationtherefore once a segment has been determined to modify another segment such information is kept in both of the segments and is added to them as a new featurespecifically we take the following three types of dynamic features in our experimentshe her warm heart be moved athe segments which modify the current candidate modifiee bthe segments which modify the current candidate modifier c the segment which is modified by the current candidate modifieealthough any kind of machine learning algorithm can be applied to the cascaded chunking model we use support vector machines for our experiments because of their stateoftheart performance and generalization abilitysvm is a binary linear classifier trained from the samples each of which belongs either to positive or negative class as follows where xi is a feature vector of the ith sample represented by an n dimensional vector and yi is the class or negative class label of the ith samplesvms find the optimal separating hyperplane based on the maximal margin strategythe margin can be seen as the distance between the critical examples and the separating hyperplanewe omit the details here the maximal margin strategy can be realized by the following optimization problemfurthermore svms have the potential to carry out nonlinear classificationsthough we leave the details to the optimization problem can be rewritten into a dual form where all feature vectors appear as their dot productsby simply substituting every dot product of xi and xj in dual form with a kernel function k svms can handle nonlinear hypothesesamong many kinds of kernel functions available we will focus on the dth polynomial kernel k d use of dth polynomial kernel functions allows us to build an optimal separating hyperplane which takes into account all combinations of features up to dwe used the following two annotated corpora for our experimentsthis data set consists of the kyoto university text corpus version 20 we used 7958 sentences from the articles on january 1st to january 7th as training examples and 1246 sentences from the articles on january 9th as the test datathis data set was used in and in order to investigate the scalability of the cascaded chunking model we prepared larger data setwe used all 38383 sentences of the kyoto university 
text corpus version 30the training and test data were generated by a twofold cross validationthe feature sets used in our experiments are shown in table 1the static features are basically taken from uchimotos list head word is the rightmost content word in the segmentfunctional word is set as follows fw the rightmost functional word if there is a functional word in the segment fw the rightmost inflection form if there is a predicate in the segment fw same as the hw otherwisethe static features include the information on existence of brackets question marks and punctuation marks etcbesides there are features that show the relative relation of two segments such as distance and existence of brackets quotation marks and punctuation marks between themfor a segment x and its dynamic feature y we set the functional representation feature of x based on the fw of x as follows fr lexical form of xfw if pos of xfw is particle adverb adnominal or conjunction fr inflectional form ofxfw ifxfw has an inflectional form fr the pos tag ofxfw otherwisefor a segment x and its dynamic feature c we set pos tag and possubcategory of the hw of xall our experiments are carried out on alphasever 8400 for training and linux for testingwe used a third degree polynomial kernel function which is exactly the same setting in performance on the test data is measured using dependency accuracy and sentence accuracydependency accuracy is the percentage of correct dependencies out of all dependency relationssentence accuracy is the percentage of sentences in which all dependencies are determined correctlythe results for the new cascaded chunking model as well as for the previous probabilistic model based on svms are summarized in table 2we cannot employ the experiments for the probabilistic model using large dataset since the data size is too large for our current svms learning program to terminate in a realistic time periodeven though the number of training examples used for the cascaded chunking model is less than a quarter of that for the probabilistic model and the used feature set is the same dependency accuracy and sentence accuracy are improved using the cascaded chunking model the time required for training and parsing are significantly reduced by applying the cascaded chunking model as can be seen table 2 the cascaded chunking model is more accurate efficient and scalable than the probabilistic modelit is difficult to apply the probabilistic model to the large data set since it takes no less than 336 hours to carry out the experiments even with the standard data set and svms require quadratic or more computational cost on the number of training examplesfor the first impression it may seems natural that higher accuracy is achieved with the probabilistic model since all candidate dependency relations are used as training exampleshowever the experimental results show that the cascaded chunking model performs betterhere we list what the most significant contributions are and how well the cascaded chunking model behaves compared with the probabilistic modelthe probabilistic model is trained with all candidate pairs of segments in the training corpusthe problem of this training is that exceptional dependency relations may be used as training examplesfor example suppose a segment which appears to right hand side of the correct modifiee and has a similar content word the pair with this segment becomes a negative examplehowever this is negative because there is a better and correct candidate at a different point in the 
sentencetherefore this may not be a true negative example meaning that this can be positive in other sentencesin addition if a segment is not modified by a modifier because of cross dependency constraints but has a similar content word with correct modifiee this relation also becomes an exceptionactually we cannot ignore these exceptions since most segments modify a segment on its immediate right hand sideby using all candidates of dependency relation as the training examples we have committed to a number of exceptions which are hard to be trained uponlooking in particular on a powerful heuristics for dependency structure analysis a segment tends to modify a nearer segment if possible it will be most important to train whether the current segment modifies the segment on its immediate right hand sidethe cascaded chunking model is designed along with this heuristics and can remove the exceptional relations which has less potential to improve performancefigure 3 shows the relationship between the size of the training data and the parsing accuracythis figure also shows the accuracy with and without the dynamic featuresgenerally the results with the dynamic feature set is better than the results without itthe dynamic features constantly outperform static features when the size of the training data is largein most cases the improvements is considerabletable 3 summarizes the performance without some dynamic featuresfrom these results we can conclude that all dynamic features are effective in improving the performancetable 4 summarizes recent results on japanese dependency analysisuchimoto et al report that using the kyoto university corpus for their training and testing they achieve around 8793 accuracy by building statistical model based on the maximum entropy frameworkthey extend the original probabilistic model which learns only two class modify and not modify to the one that learns three classes between modify and beyondtheir model can also avoid the influence of the exceptional dependency relationsusing same training and test data we can achieve accuracy of 8929the difference is considerablekanayama et al use an hpsgbased japanese grammar to restrict the candidate dependenciestheir model uses at most three candidates restricted by the grammar as features the nearest the second nearest and the farthest from the modifierthus their model can take longer context into account and disambiguate complex dependency relationshowever the features are still static and dynamic features are not used in their modelwe cannot directly compare their model with ours because they use a different corpus edr corpus which is ten times as large as the corpus we usednevertheless they reported an accuracy 8855 which is worse than our modelharuno et al report that using the edr corpus for their training and testing they achieve around 8503 accuracy with decision tree and boostingalthough decision tree can take combinations of features as svms it easily overfits on its ownto avoid overfitting decision tree is usually used as an weak learner for boostingcombining boosting technique with decision tree the performance may be improvedhowever haruno et al report that the performance with decision tree falls down when they added lexical entries with lower frequencies as features even using boostingwe think that decision tree requires a careful feature selection for achieving higher accuracywe presented a new japanese dependency parser using a cascaded chunking model which achieves 9046 accuracy using the kyoto university 
corpusour model parses a sentence deterministically only deciding whether the current segment modifies the segment on its immediate right hand sideour model outperforms the previous probabilistic model with respect to accuracy and efficiencyin addition we showed that dynamic features significantly contribute to improve the performance
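a minimal sketch of the deterministic cascaded chunking loop described in the entry above, assuming a generic binary classifier (classify) in place of the paper's svm; extract_features is a placeholder, and the fallback attachment used when no pair is tagged d in a pass is an assumption added here only to guarantee termination, not part of the original description

def parse(segments, classify, extract_features):
    """Return heads[i] = index of the segment that segment i depends on.

    Japanese is head-final, so every segment except the last depends on some
    segment to its right; each pass only asks whether a segment modifies the
    segment on its immediate right in the *reduced* sequence.
    """
    n = len(segments)
    heads = [None] * n            # heads[i] holds the original index of i's head
    alive = list(range(n))        # original indices of segments still unattached

    while len(alive) > 1:
        decided = []
        for pos in range(len(alive) - 1):
            i, j = alive[pos], alive[pos + 1]
            feats = extract_features(segments, heads, i, j)   # static + dynamic features
            if classify(feats):           # tag "D": segment i modifies segment j
                heads[i] = j
                decided.append(pos)
        if not decided:                   # assumption: force progress if the classifier
            heads[alive[-2]] = alive[-1]  # predicts no dependency in a whole pass
            decided.append(len(alive) - 2)
        # remove attached modifiers; their heads stay around for later passes
        alive = [idx for pos, idx in enumerate(alive) if pos not in decided]

    return heads                  # the single remaining segment is the sentence root

running the same loop with the gold heads standing in for classify yields the training pairs, which is why the number of examples stays far below the n2 segment pairs needed by the pairwise probabilistic model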
W02-2016
japanese dependency analysis using cascaded chunkingin this paper we propose a new statistical japanese dependency parser using a cascaded chunking modelconventional japanese statistical dependency parsers are mainly based on a probabilistic model which is not always efficient or scalablewe propose a new method that is simple and efficient since it parses a sentence deterministically only deciding whether the current segment modifies the segment on its immediate right hand sideexperiments using the kyoto university corpus show that the method outperforms previous systems as well as improves the parsing and training efficiencyour cascaded chunking model does not require the probabilities of dependencies and parses a sentence deterministically
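the kernel and decision function referred to in the preceding entry (the dth polynomial kernel, used with d = 3 in the experiments) do not survive in the extracted text; the standard forms, assumed here to match the paper's usage, are

f(\mathbf{x}) \;=\; \operatorname{sgn}\!\Big(\sum_{i=1}^{L} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b\Big),
\qquad
K(\mathbf{x}_i, \mathbf{x}_j) \;=\; \big(\mathbf{x}_i \cdot \mathbf{x}_j + 1\big)^{d}

with d = 3 the classifier implicitly uses all combinations of up to three of the basic features, which is what the entry means by taking feature combinations into account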
a comparison of algorithms for maximum entropy parameter estimation conditional maximum entropy models provide a general purpose machine learning technique which has been successfully applied to fields as diverse as computer vision and econometrics and which is used for a wide variety of classification problems in natural language processing however the flexibility of me models is not without cost while parameter estimation for me models is conceptually straightforward in practice me models for typical natural language tasks are very large and may well contain many thousands of free parameters in this paper we consider a number of algorithms for estimating the parameters of me models including iterative scaling gradient ascent conjugate gradient and variable metric methods surprisingly the standardly used iterative scaling algorithms perform quite poorly in comparison to the others and for all of the test problems a limitedmemory variable metric algorithm outperformed the other choices maximum entropy models variously known as loglinear gibbs exponential and multinomial logit models provide a general purpose machine learning technique for classification and prediction which has been successfully applied to fields as diverse as computer vision and econometricsin natural language processing recent years have seen me techniques used for sentence boundary detection part of speech tagging parse selection and ambiguity resolution and stochastic attributevalue grammars to name just a few applications a leading advantage of me models is their flexibility they allow stochastic rule systems to be augmented with additional syntactic semantic and pragmatic featureshowever the richness of the representations is not without costeven modest me models can require considerable computational resources and very large quantities of annotated training data in order to accurately estimate the models parameterswhile parameter estimation for me models is conceptually straightforward in practice me models for typical natural language tasks are usually quite large and frequently contain hundreds of thousands of free parametersestimation of such large models is not only expensive but also due to sparsely distributed features sensitive to roundoff errorsthus highly efficient accurate scalable methods are required for estimating the parameters of practical modelsin this paper we consider a number of algorithms for estimating the parameters of me models including generalized iterative scaling and improved iterative scaling as well as general purpose optimization techniques such as gradient ascent conjugate gradient and variable metric methodssurprisingly the widely used iterative scaling algorithms perform quite poorly and for all of the test problems a limited memory variable metric algorithm outperformed the other choicessuppose we are given a probability distribution p over a set of events x which are characterized by a d dimensional feature vector function f x rdin addition we have also a set of contexts w and a function y which partitions the members of xin the case of a stochastic contextfree grammar for example x might be the set of possible trees the feature vectors might represent the number of times each rule applied in the derivation of each tree w might be the set of possible strings of words and y the set of trees whose yield is w w a conditional maximum entropy model qθ for p has the parametric form is the inner product of the parameter vector and a feature vectorgiven the parametric form of an me model 
in fitting an me model to a collection of training data entails finding values for the parameter vector θ which minimize the kullbackleibler divergence between the model q0 and the empirical distribution p ratio of epf to eqf with the restriction that j fj c for each event x in the training data we can adapt gis to estimate the model parameters θ rather than the model probabilities q yielding the update rule the step size and thus the rate of convergence depends on the constant c the larger the value of c the smaller the step sizein case not all rows of the training data sum to a constant the addition of a correction feature effectively slows convergence to match the most difficult caseto avoid this slowed convergence and the need for a correction feature della pietra et al propose an improved iterative scaling algorithm whose update rule is the solution to the equation the gradient of the log likelihood function or the epf wx pqfexpδ vector of its first derivatives with respect to the parameter θ is since the likelihood function is concave over the parameter space it has a global maximum where the gradient is zerounfortunately simply setting g 0 and solving for θ does not yield a closed form solution so we proceed iterativelyat each step we adjust an estimate of the parameters θ to a new estimate θ based on the divergence between the estimated probability distribution q and the empirical distribution p we continue until successive improvements fail to yield a sufficiently large decrease in the divergencewhile all parameter estimation algorithms we will consider take the same general form the method for computing the updates δ at each search step differs substantiallyas we shall see this difference can have a dramatic impact on the number of updates required to reach convergenceone popular method for iteratively refining the model parameters is generalized iterative scaling due to darroch and ratcliff an extension of iterative proportional fitting gis scales the probability distribution q by a factor proportional to the where m is the sum of the feature values for an event x in the training datathis is a polynomial in exp and the solution can be found straightforwardly using for example the newtonraphson methoditerative scaling algorithms have a long tradition in statistics and are still widely used for analysis of contingency tablestheir primary strength is that on each iteration they only require computation of the expected values eqthey do not depend on evaluation of the gradient of the loglikelihood function which depending on the distribution could be prohibitively expensivein the case of me models however the vector of expected values required by iterative scaling essentially is the gradient g thus it makes sense to consider methods which use the gradient directlythe most obvious way of making explicit use of the gradient is by cauchys method or the method of steepest ascentthe gradient of a function is a vector which points in the direction in which the functions value increases most rapidlysince our goal is to maximize the loglikelihood function a natural strategy is to shift our current estimate of the parameters in the direction of the gradient via the update rule where the step size α is chosen to maximize l δfinding the optimal step size is itself an optimization problem though only in one dimension and in practice only an approximate solution is required to guarantee global convergencesince the loglikelihood function is concave the method of steepest ascent is guaranteed to 
find the global maximumhowever while the steps taken on each iteration are in a very narrow sense locally optimal the global convergence rate of steepest ascent is very pooreach new search direction is orthogonal to the previous directionthis leads to a characteristic zigzag ascent with convergence slowing as the maximum is approachedone way of looking at the problem with steepest ascent is that it considers the same search directions many timeswe would prefer an algorithm which considered each possible search direction only once in each iteration taking a step of exactly the right length in a direction orthogonal to all previous search directionsthis intuition underlies conjugate gradient methods which choose a search direction which is a linear combination of the steepest ascent direction and the previous search directionthe step size is selected by an approximate line search as in the steepest ascent methodseveral nonlinear conjugate gradient methods such as the fletcherreeves and the polakribierepositive algorithms have been proposedwhile theoretically equivalent they use slighly different update rules and thus show different numeric propertiesanother way of looking at the problem with steepest ascent is that while it takes into account the gradient of the loglikelihood function it fails to take into account its curvature or the gradient of the gradientthe usefulness of the curvature is made clear if we consider a secondorder taylor series approximation of l where h is hessian matrix of the loglikelihood function the d d matrix of its second partial derivatives with respect to θif we set the derivative of to zero and solve for δ we get the update rule for newtons method newtons method converges very quickly but it requires the computation of the inverse of the hessian matrix on each iterationwhile the loglikelihood function for me models in is twice differentiable for large scale problems the evaluation of the hessian matrix is computationally impractical and newtons method is not competitive with iterative scaling or first order methodsvariable metric or quasinewton methods avoid explicit evaluation of the hessian by building up an approximation of it using successive evaluations of the gradientthat is we replace h1 in with a local approximation of the inverse hessian b with b a symmatric positive definite matrix which satisfies the equation where y g gvariable metric methods also show excellent convergence properties and can be much more efficient than using true newton updates but for large scale problems with hundreds of thousands of parameters even storing the approximate hessian is prohibitively expensivefor such cases we can apply limited memory variable metric methods which implicitly approximate the hessian matrix in the vicinity of the current estimate of θ using the previous m values of y and δsince in practical applications values of m between 3 and 10 suffice this can offer a substantial savings in storage requirements over variable metric methods while still giving favorable convergence properties1the performance of optimization algorithms is highly dependent on the specific properties of the problem to be solvedworstcase analysis typically pace constraints preclude a more detailed discussion of these methods herefor algorithmic details and theoretical analysis of first and second order methods see eg nocedal or nocedal and wright does not reflect the actual behavior on actual problemstherefore in order to evaluate the performance of the optimization techniques sketched 
in previous section when applied to the problem of parameter estimation we need to compare the performance of actual implementations on realistic data sets minka offers a comparison of iterative scaling with other algorithms for parameter estimation in logistic regression a problem similar to the one considered here but it is difficult to transfer minkas results to me modelsfor one he evaluates the algorithms with randomly generated training datahowever the performance and accuracy of optimization algorithms can be sensitive to the specific numerical properties of the function being optimized results based on random data may or may not carry over to more realistic problemsand the test problems minka considers are relatively small as we have seen though algorithms which perform well for small and medium scale problems may not always be applicable to problems with many thousands of dimensionsas a basis for the implementation we have used petsc a software library designed to ease development of programs which solve large systems of partial differential equations petsc offers data structures and routines for parallel and sequential storage manipulation and visualization of very large sparse matricesfor any of the estimation techniques the most expensive operation is computing the probability distribution q and the expectations eqf for each iterationin order to make use of the facilities provided by petsc we can store the training data as a matrix f with rows corresponding to events and columns to featuresthen given a parameter vector θ the unnormalized probabilities q0 are the matrixvector product and the feature expectations are the transposed matrixvector product by expressing these computations as matrixvector operations we can take advantage of the high performance sparse matrix primitives of petscfor the comparison we implemented both generalized and improved iterative scaling in c using the primitives provided by petscfor the other optimization techniques we used tao a library layered on top of the foundation of petsc for solving nonlinear optimization problems tao offers the building blocks for writing optimization programs as well as highquality implementations of standard optimization algorithms before turning to the results of the comparison two additional points need to be madefirst in order to assure a consistent comparison we need to use the same stopping rule for each algorithmfor these experiments we judged that convergence was reached when the relative change in the loglikelihood between iterations fell below a predetermined thresholdthat is each run was stopped when where the relative tolerance ε 107for any particular application this may or may not be an appropriate stopping rule but is only used here for purposes of comparisonfinally it should be noted that in the current implementation we have not applied any of the possible optimizations that appear in the literature to speed up normalization of the probability distribution qthese improvements take advantage of a models structure to simplify the evaluation of the denominator in the particular data sets examined here are unstructured and such optimizations are unlikely to give any improvementhowever when these optimizations are appropriate they will give a proportional speedup to all of the algorithmsthus the use of such optimizations is independent of the choice of parameter estimation methodto compare the algorithms described in 2 we applied the implementation outlined in the previous section to four training data sets 
drawn from the domain of natural language processingthe rules and lex datasets are examples of stochastic attribute value grammars one with a small set of scfglike features and with a very large set of finegrained lexical features the summary dataset is part of a sentence extraction task and the shallow dataset is drawn from a text chunking application these datasets vary widely in their size and composition and are representative of the kinds of datasets typically encountered in applying me models to nlp classification tasksthe results of applying each of the parameter estimation algorithms to each of the datasets is summarized in table 2for each run we report the kl divergence between the fitted model and the training data at convergence the prediction accuracy of fitted model on a heldout test set the number of iterations required the number of loglikelihood and gradient evaluations required and the total elapsed time 2 there are a few things to observe about these resultsfirst while iis converges in fewer steps the gis it takes substantially more timeat least for this implementation the additional bookkeeping overhead required by iis more than cancels any improvements in speed offered by accelerated convergencethis may be a misleading conclusion however since a more finely tuned implementation of iis may well take much less time per iteration than the one used for these experimentshowever even if each iteration of iis could be made as fast as an iteration of gis the benefits of iis over gis would in these cases be quite modestsecond note that for three of the four datasets the kl divergence at convergence is roughly the same for all of the algorithmsfor the summary dataset however they differ by up to two orders of magnitudethis is an indication that the convergence test in is sensitive to the rate of convergence and thus to the choice of algorithmany degree of precision desired could be reached by any of the algorithms with the appropriate value of εhowever gis say would require many more iterations than reported in table 2 to reach the precision achieved by the limited memory variable metric algorithmthird the prediction accuracy is in most cases more or less the same for all of the algorithmssome variability is to be expectedall of the data sets being considered here are badly illconditioned and many different models will yield the same likelihoodin a few cases however the prediction accuracy differs more substantiallyfor the two savg data sets gis has a small advantage over the other methodsmore dramatically both iterative scaling methods perform very poorly on the shallow datasetin this case the training data is very sparsemany features are nearly pseudominimal in the sense of johnson et al and so receive weights approaching smoothing the reference probabilities would likely improve the results for all of the methods and reduce the observed differenceshowever this does suggest that gradientbased methods are robust to certain problems with the training datafinally the most significant lesson to be drawn from these results is that with the exception of steepest ascent gradientbased methods outperform iterative scaling by a wide margin for almost all the datasets as measured by both number of function evaluations and by the total elapsed timeand in each case the limited memory variable metric algorithm performs substantially better than any of the competing methodsin this paper we have described experiments comparing the performance of a number of different algorithms for estimating 
the parameters of a conditional me modelthe results show that variants of iterative scaling the algorithms which are most widely used in the literature perform quite poorly when compared to general function optimization algorithms such as conjugate gradient and variable metric methodsand more specifically for the nlp classification tasks considered the limited memory variable metric algorithm of benson and more outperforms the other choices by a substantial marginthis conclusion has obvious consequences for the fieldme modeling is a commonly used machine learning technique and the application of improved parameter estimation algorithms will it practical to construct larger more complex modelsand since the parameters of individual models can be estimated quite quickly this will further open up the possibility for more sophisticated model and feature selection techniques which compare large numbers of alternative model specificationsthis suggests that more comprehensive experiments to compare the convergence rate and accuracy of various algorithms on a wider range of problems is called forin addition there is a larger lesson to be drawn from these resultswe typically think of computational linguistics as being primarily a symbolic disciplinehowever statistical natural language processing involves nontrivial numeric computationsas these results show natural language processing can take great advantage of the algorithms and software libraries developed by and for more quantitatively oriented engineering and computational sciencesthe research of dr malouf has been made possible by a fellowship of the royal netherlands academy of arts and sciences and by the nwo pionier project algorithms for linguistic processingthanks also to stephen clark andreas eisele detlef prescher miles osborne and gertjan van noord for helpful comments and test data
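the equations of the preceding entry (the model's parametric form, the gis and iis updates, the gradient, the newton and variable-metric steps, and the stopping rule) were lost in extraction; the standard forms they correspond to are reconstructed below from the surrounding prose as a reading aid, not copied from the paper

q_\theta(x \mid w) \;=\; \frac{\exp\!\big(\theta \cdot f(x)\big)}{\sum_{x' \in Y(w)} \exp\!\big(\theta \cdot f(x')\big)} \quad \text{(parametric form)}

g(\theta) \;=\; E_{\tilde p}[f] - E_{q_\theta}[f] \quad \text{(gradient of the log-likelihood)}

\delta_j \;=\; \frac{1}{C}\,\log\frac{E_{\tilde p}[f_j]}{E_{q_\theta}[f_j]}, \qquad C = \max_x \sum_j f_j(x) \quad \text{(GIS update)}

\sum_{w,x} \tilde p(w)\, q_\theta(x \mid w)\, f_j(x)\, e^{\delta_j M(x)} \;=\; E_{\tilde p}[f_j], \qquad M(x) = \sum_j f_j(x) \quad \text{(IIS update equation)}

\theta^{(k+1)} \;=\; \theta^{(k)} + \alpha\, g(\theta^{(k)}) \quad \text{(steepest ascent)}

\theta^{(k+1)} \;=\; \theta^{(k)} - H^{-1} g(\theta^{(k)}) \quad \text{(Newton step; variable-metric methods replace } H^{-1} \text{ by an approximation } B \text{ satisfying } B\,y^{(k)} = \delta^{(k)}\text{)}

\frac{\big|L(\theta^{(k)}) - L(\theta^{(k-1)})\big|}{\big|L(\theta^{(k)})\big|} \;<\; \varepsilon, \qquad \varepsilon = 10^{-7} \quad \text{(stopping rule, generic relative-change form)}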
W02-2018
a comparison of algorithms for maximum entropy parameter estimationconditional maximum entropy models provide a general purpose machine learning technique which has been successfully applied to fields as diverse as computer vision and econometrics and which is used for a wide variety of classification problems in natural language processinghowever the flexibility of me models is not without costwhile parameter estimation for me models is conceptually straightforward in practice me models for typical natural language tasks are very large and may well contain many thousands of free parametersin this paper we consider a number of algorithms for estimating the parameters of me models including iterative scaling gradient ascent conjugate gradient and variable metric methodssurprisingly the standardly used iterative scaling algorithms perform quite poorly in comparison to the others and for all of the test problems a limitedmemory variable metric algorithm outperformed the other choiceswe introduce the opensource toolkit for advanced discriminative model which uses a limitedmemory variable metric
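a minimal sketch, in the spirit of the preceding entry's conclusion, of fitting a conditional maxent model with a limited-memory variable-metric (l-bfgs) optimizer; it follows the matrix formulation described there (rows = events, columns = features, events grouped by context) but uses numpy/scipy rather than petsc/tao, and the toy arrays, names and the choice of scipy.optimize are assumptions of this illustration, not the paper's implementation

import numpy as np
from scipy.optimize import minimize

# F[i, j]      = value of feature j on event i (rows = events, cols = features)
# ctx[i]       = id of the context the event belongs to (events with the same id compete)
# p_tilde[i]   = empirical joint probability of event i
# ctx_weight[c] = empirical probability of context c (marginal of p_tilde)

def neg_log_likelihood_and_grad(theta, F, ctx, p_tilde, n_ctx, ctx_weight):
    scores = F @ theta                               # unnormalised log-probabilities
    m = np.full(n_ctx, -np.inf)                      # log-sum-exp per context for stability
    np.maximum.at(m, ctx, scores)
    exps = np.exp(scores - m[ctx])
    Z = np.zeros(n_ctx)
    np.add.at(Z, ctx, exps)
    logZ = m + np.log(Z)
    q = ctx_weight[ctx] * exps / Z[ctx]              # p_tilde(w) * q_theta(x | w)
    nll = -(p_tilde @ (scores - logZ[ctx]))          # negative conditional log-likelihood
    grad = F.T @ q - F.T @ p_tilde                   # E_q[f] - E_p[f]
    return nll, grad

# toy example: 4 events in 2 contexts, 3 features (all numbers are made up)
F = np.array([[1., 0., 1.],
              [0., 1., 0.],
              [1., 1., 0.],
              [0., 0., 1.]])
ctx = np.array([0, 0, 1, 1])
p_tilde = np.array([0.4, 0.1, 0.3, 0.2])
ctx_weight = np.array([0.5, 0.5])

res = minimize(neg_log_likelihood_and_grad, np.zeros(F.shape[1]),
               args=(F, ctx, p_tilde, 2, ctx_weight),
               jac=True, method="L-BFGS-B", tol=1e-7)
print(res.x)                                         # fitted parameter vector theta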
introduction to the conll2002 shared task languageindependent named entity recognition named entities are phrases that contain the names of persons organizations locations times and quantitiesexample per wolff currently a journalist in loc argentina played with per del bosque in the final years of the seventies in org real madrid this sentence contains four named entities wol and del bosque are persons argentina is a location and real madrid is a organizationthe shared task of conll2002 concerns languageindependent named entity recognitionwe will concentrate on four types of named entities persons locations organizations and names of miscellaneous entities that do not belong to the previous three groupsthe participants of the shared task have been offered training and test data for two european languages spanish and dutchthey have used the data for developing a namedentity recognition system that includes a machine learning componentthe organizers of the shared task were especially interested in approaches that make use of additional nonannotated data for improving their performancethe conll2002 named entity data consists of six files covering two languages spanish and dutcheach of the languages has a training file a development file and a test filethe learning methods will be trained with the training datathe development data can be used for tuning the parameters of the learning methodswhen the best parameters are found the method can be trained on the training data and tested on the test datathe results of the different learning methods on the test sets will be compared in the evaluation of the shared taskthe split between development data and test data has been chosen to avoid that systems are being tuned to the test dataall data files contain one word per line with empty lines representing sentence boundariesadditionally each line contains a tag which states whether the word is inside a named entity or notthe tag also encodes the type of named entityhere is a part of the example sentence words tagged with o are outside of named entitiesthe bxxx tag is used for the first word in a named entity of type xxx and ixxx is used for all other words in named entities of type xxxthe data contains entities of four types persons organizations locations and miscellaneous names the tagging scheme is a variant of the iob scheme originally put forward by ramshaw and marcus we assume that named entities are nonrecursive and nonoverlappingin case a named entity is embedded in another named entity usually only the top level entity will be markedthe spanish data is a collection of news wire articles made available by the spanish efe news agencythe articles are from may 2000the annotation was carried out by the talp research center2 of the technical university of catalonia and the center of language and computation of the university of barcelona and funded by the european commission through the namic project the data contains words and entity tags onlythe training development and test data files contain 273037 54837 and 53049 lines respectivelythe dutch data consist of four editions of the belgian newspaper quotde morgenquot of 2000 the data was annotated as a part of the atranos project4 at the university of antwerp in belgium europethe annotator has followed the mitre and saic guidelines for named entity recognition as well as possiblethe data consists of words entity tags and partofspeech tags which have been derived by a dutch partofspeech tagger additionally the article boundaries in the text have 
been marked explicitly with lines containing the tag docstartthe training development and test data files contain 218737 40656 and 74189 lines respectivelythe performance in this task is measured with f31 rate which is equal to precisionrecall with 31 precision is the percentage of named entities found by the learning system that are correctrecall is the percentage of named entities present in the corpus that are found by the systema named entity is correct only if it is an exact match of the corresponding entity in the data filetwelve systems have participated in this shared taskthe results for the test sets for spanish and dutch can be found in table 1a baseline rate was computed for both setsit was produced by a system which only identified entities which had a unique class in the training dataif a phrase was part of more than one entity the system would select the longest oneall systems that participated in the shared task have outperformed the baseline systemmcnamee and mayfield have applied support vector machines to the data of the shared tasktheir system used many binary features for representing words they have evaluated different parameter settings of the system and have selected a cascaded approach in which first entity boundaries were predicted and then entity classes black and vasilakopoulos have evaluated two approaches to the shared taskthe first was a transformationbased method which generated in rules in a single pass rather than in many passesthe second method was a decision tree methodthey found that the transformationbased method consistently outperformed the decision trees tsukamoto mitsuishi and sassano used a stacked adaboost classifier for finding named entitiesthey found that cascading classifiers helped improved performancetheir final system consisted of a cascade of five learners each of which performed 10000 boosting rounds malouf tested different models with the shared task data a statistical baseline model a hidden markov model and maximum entropy models with different featuresthe latter proved to perform bestthe maximum entropy models benefited from extra feature which encoded capitalization information positional information and information about the current word being part of a person name earlier in the texthowever incorporating a list of person names in the training process did not help jansche employed a firstorder markov model as a named entity recognizerhis system used two separate passes one for extracting entity boundaries and one for classifying entitieshe evaluated different features in both subprocessesthe categorization process was trained separately from the extraction process but that did not seem to have harmed overall performance patrick whitelaw and munro present slinerc a languageindependent named entity recognizerthe system uses tries as well as character ngrams for encoding wordinternal and contextual informationadditionally it relies on lists of entities which have been compiled from the training datathe overall system consists of six stages three regarding entity recognition and three for entity categorizationstages use the output of previous stages for obtaining an improved performance tjong kim sang has applied a memorybased learner to the data of the shared taskhe used a twostage processing strategy as well first identifying entities and then classifying themapart from the base classifier his system made use of three extra techniques for boosting performance cascading classifiers feature selection and system combinationeach of these 
techniques were shown to be useful burger henderson and morgan have evaluated three approaches to finding named entitiesthey started with a baseline system which consisted of an hmmbased phrase taggerthey gave the tagger access to a list of approximately 250000 named entities and the performance improvedafter this several smoothed word classes derived from the available data were incorporated into the training processthe system performed better with the derived word lists than with the external named entity lists cucerzan and yarowsky approached the shared task by using wordinternal and contextual information stored in characterbased triestheir system obtained good results by using partofspeech tag information and employing the one sense per discourse principlethe authors expect a performance increase when the system has access to external entity lists but have not presented the results of this in detail wu ngai carpuat larsen and yang have applied adaboostmh to the shared task data and compared the performance with that of a maximum entropybased named entity taggertheir system used lexical and partofspeech information contextual and wordinternal clues capitalization information knowledge about entity classes of previous occurrences of words and a small external list of named entity wordsthe boosting techniques operated on decision stumps decision trees of depth onethey outperformed the maximum entropybased named entity tagger florian employed three stacked learners for named entity recognition transformationbased learning for obtaining baselevel nontyped named entities snow for improving the quality of these entities and the forwardbackward algorithm for finding categories for the named entitiesthe combination of the three algorithms showed a substantially improved performance when compared with a single algorithm and an algorithm pair carreras marquez and padro have approached the shared task by using adaboost applied to fixeddepth decision treestheir system used many different input features contextual information wordinternal clues previous entity classes partofspeech tags and external word lists it processed the data in two stages first entity recognition and then classificationtheir system obtained the best results in this shared task for both the spanish and dutch test data sets we have described the conll2002 shared task languageindependent named entity recognitiontwelve different systems have been applied to data covering two western european languages spanish and dutcha boosted decision tree method obtained the best performance on both data sets tjong kim sang is supported by iwt stww as a researcher in the atranos project
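a small illustration of the exact-match scoring described in the entry above, assuming the b-xxx / i-xxx / o tagging variant exactly as it is described there; this is a simplified re-implementation for exposition, not the official evaluation script

def entities(tags):
    """Collect (type, start, end) spans from a B-XXX / I-XXX / O tag sequence,
    where B-XXX marks the first word of an entity and I-XXX the remaining words.
    Stray I- tags with no matching open entity are simply dropped here."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(tags + ["O"]):           # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O" or (etype and tag != "I-" + etype):
            if etype is not None:
                spans.add((etype, start, i))
                etype = None
        if tag.startswith("B-"):
            etype, start = tag[2:], i
    return spans

def f1(gold_tags, pred_tags):
    gold, pred = entities(gold_tags), entities(pred_tags)
    correct = len(gold & pred)                       # exact match of type and boundaries
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["B-PER", "I-PER", "O", "B-LOC", "O"]
pred = ["B-PER", "I-PER", "O", "B-ORG", "O"]
print(f1(gold, pred))                                # 0.5: one of the two entities is exact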
W02-2024
introduction to the conll2002 shared task languageindependent named entity recognitionwe describe the conll2002 shared task languageindependent named entity recognitionwe give background information on the data sets and the evaluation method present a general overview of the systems that have taken part in the task and discuss their performancewe focus on named entity recognition for spanish and dutch
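the evaluation measure rendered in the entry above as f31 rate equal to precisionrecall with 31 is the usual F score with beta equal to one; its standard definition, which the garbled passage almost certainly stands for, is

F_{\beta} \;=\; \frac{(\beta^{2}+1)\cdot \text{precision}\cdot \text{recall}}{\beta^{2}\cdot \text{precision} + \text{recall}},
\qquad
F_{\beta=1} \;=\; \frac{2\cdot \text{precision}\cdot \text{recall}}{\text{precision} + \text{recall}}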
inducing translation lexicons via diverse similarity measures and bridge languages [figure and table residue removed: plots of exact-match accuracy against number of test words covered comparing the combined model with ablations that drop levenshtein, context, date and rf/idf/burstiness similarities, with levenshtein only, with online dictionary scoring, and for bulgarian-czech with and without a retrained levenshtein and context model; a small table of per-measure weights (string, date, occurrence, local context, word distribution narrow/wide, idf, rf, burstiness); and candidate-ranking tables with columns rank, crib score, combined, string, date/local, wide-cosine, narrow-cosine, burstiness, rf; the caption states that the tables show the performance of individual similarity measures as well as their combined choice after model retraining, that correct translations are shown in bold, that the string-similarity-based orderings of the bridge candidates in many cases underperform individual non-string similarity measures and consistently underperform the weighted combination of all 8 similarity measures, and (in a partly garbled note) that the correct translation is ranked above a closely related competitor, illustrating the contribution of consensus modeling over this set of diverse similarity measures; a citation to mann and yarowsky 2001, multipath translation induction via bridge languages, also appears in the spilled region]
in mann and yarowsky 2001 the process of intrafamily translation was handled by weighted string distance models of cognate similarity with a probabilistic representation of common intrafamily orthographic transformationsthese models were iteratively reestimated using an expectationmaximization algorithm when intrafamily orthographic shifts are clear and systematic such models can be quite effective on their ownin practice the technique described suffers from the problem of faux amis false cognatesfor example serbianczech faux amis such as prazanprizen and prazanpazen can outrank the correct but orthographically less similar prazanprazdny causing the english bridge pathways to the correct english translations blank and empty to be scored below the incorrect translation paths to favor grace and patronagethis paper addresses the abovedescribed model deficiency by proposing developing and evaluating the use of 7 additional similarity models which successfully capture a set of complementary distributional behaviorsan algorithm combining them with weighted string distance significantly outperforms the previous bridge language approach on both englishserbian and englishgujarati test setsour goal was to learn translation lexicons using resources that are available on the internet at no monetary costno seed dictionary is required between english and the language of interest a sizeable dictionary between the bridge language and english is necessaryour work with serbian involved the use of a czechenglish
dictionary initially containing roughly 171k czechenglish pairs including 54k unique czech word types and 43k unique english typesthe hindienglish dictionary contained around 74k pairsthe serbiangujarati vocabularies we used were built by extracting all word types from the respective corpora then filtering out lowfrequency words and very short words the corpora used here are composed of news data the majority of which was downloaded from the internetthe english corpus contains 192m tokens serbian 12m gujarati 2menglish was lemmatized using a highquality lemmatization utility the serbian using minimally supervised morphological analysis as described in yarowsky and wicentowski gujarati was not lemmatizedwhere possible date labels were extracted for news storiesthis resulted in 1690 separate labeled days of news for serbian and 233 for gujaratifor each language task english news data was marked as originating either locally or nonwords with length 5 characters were excluded locally with respect to areas where the language is spoken in order to facilitate computation of datedistributional similarities across both strongly related sameregion news sources and a general worldwide aggregate news corpus
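a toy sketch of two of the similarity signals named in this entry, temporal (date-distribution) cosine similarity and an edit-distance score, together with a weighted combination of the two; the weights, the normalisation and the helper names are illustrative assumptions, and the paper's actual models (iteratively re-estimated weighted string distance, context, idf/rf, burstiness) are richer than this

import math
from collections import Counter

def date_cosine(dates_a, dates_b):
    """Cosine similarity between two words' date-of-occurrence distributions
    (lists of the day labels on which each word appeared in the news corpora)."""
    ca, cb = Counter(dates_a), Counter(dates_b)
    dot = sum(ca[d] * cb[d] for d in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def edit_similarity(a, b):
    """1 - normalised Levenshtein distance with uniform costs (the paper instead
    re-estimates per-character transformation costs with EM)."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1, cur[j - 1] + 1,
                         prev[j - 1] + (a[i - 1] != b[j - 1]))
        prev = cur
    return (1.0 - prev[n] / max(m, n)) if max(m, n) else 1.0

def combined_score(src, cand, dates_src, dates_cand, w_string=0.5, w_date=0.5):
    # consensus of individual measures; the weights here are arbitrary placeholders
    return w_string * edit_similarity(src, cand) + w_date * date_cosine(dates_src, dates_cand)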
W02-2026
inducing translation lexicons via diverse similarity measures and bridge languagesthis paper presents a method for inducing translation lexicons between two distant languages without the need for either parallel bilingual corpora or a direct bilingual seed dictionarythe algorithm successfully combines temporal occurrence similarity across dates in news corpora wide and local crosslanguage context similarity weighted levenshtein distance relative frequency and burstiness similarity measuresthese similarity measures are integrated with the bridge language concept under a robust method of classifier combination for both the slavic and northern indian language familieswe induce translation lexicons for languages without common parallel corpora using a bridge language that is related to the target languageswe create bagofwords context vectors around both the source and target language words and then project the source vectors into the target space via the current small translation dictionary
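a bare-bones sketch of the cross-language context comparison summarised just above: bag-of-words context counts for a source-language word are projected into the target vocabulary through a small seed or bridge-derived dictionary and compared to the target word's context counts by cosine; the window size, the lack of idf-style weighting and the dictionary format are assumptions of this sketch, not details taken from the paper

import math
from collections import Counter

def context_vector(corpus_sentences, word, window=3):
    """Bag-of-words counts of the words occurring within +/- window positions
    of `word` (a crude stand-in for the paper's wide and local contexts)."""
    vec = Counter()
    for sent in corpus_sentences:
        for i, tok in enumerate(sent):
            if tok == word:
                for ctx in sent[max(0, i - window): i] + sent[i + 1: i + 1 + window]:
                    vec[ctx] += 1
    return vec

def project(src_vec, seed_dict):
    """Map a source-language context vector into the target vocabulary via a small
    translation dictionary of the form {src_word: [translation, ...]}."""
    proj = Counter()
    for src_word, count in src_vec.items():
        for trans in seed_dict.get(src_word, []):
            proj[trans] += count
    return proj

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def context_similarity(src_corpus, tgt_corpus, src_word, tgt_word, seed_dict, window=3):
    src_vec = project(context_vector(src_corpus, src_word, window), seed_dict)
    tgt_vec = context_vector(tgt_corpus, tgt_word, window)
    return cosine(src_vec, tgt_vec)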
an evaluation exercise for word alignment [table residue removed: tables 2 and 3, short descriptions of the participating systems for englishfrench and romanianenglish (columns: system, limited or unlimited resources, description), covering bilingual bracketing baselines with english pos tagging and base np, their reverse directions and intersections, proalign (cohesion between source and target language, english parser, distributional similarity for english words), ralign (giza with recursive parallel segmentation), umd (ibm model 2 trained with 1/20 of the corpus, distortion 2, 4 iterations) and xrce (giza with english and french lemmatizers, or giza trained on 1/4 and 1/2 of the corpus)]
the task of word alignment consists of finding correspondences between words and phrases in parallel textsassuming a sentence aligned bilingual corpus in languages l1 and l2 the task of a word alignment system is to indicate which word token in the corpus of language l1 corresponds to which word token in the corpus of language l2as part of the hltnaacl 2003 workshop on building and using parallel texts data driven machine translation and beyond we organized a shared task on word alignment where participating teams were provided with training and test data consisting of sentence aligned parallel texts and were asked to provide automatically derived word alignments for all the words in the test setdata for two language pairs were provided englishfrench representing languages with rich resources and romanianenglish representing languages with scarce resources similar with the machine translation evaluation exercise organized by nist1 two subtasks were defined with teams being encouraged to participate in both subtasks use any resources in addition to those providedsuch resources had to be explicitly mentioned in the system descriptiontest data were released one week prior to the deadline for result submissionsparticipating teams were asked to produce word alignments following a common format as specified below and submit their output by a certain deadlineresults were returned to each team within three days of submissionthe word alignment result files had to include one line for each wordtoword alignmentadditionally lines in the result files had to follow the format specified in fig1while the sip and confidence fields overlap in their meaning the intent of having both fields available is to enable participating teams to draw their own line on what they consider to be a sure or probable alignmentboth these fields were optional with some standard values assigned by defaultconsider the following two aligned sentences where o confidence is a real number in the range this field is optional and by default confidence number of 1 was assumed aligned and counts towards the final evaluation figuresalternatively systems could also provide an sip marker andor a confidence score as shown in the following example with missing sip fields considered by default to be s and missing confidence scores considered by default 1the annotation
guide and illustrative word alignment examples were mostly drawn from the blinker annotation projectplease refer to for additional detailsfrench fixe moi ton salaire et je te le donnerai and he said from the english sentence has no corresponding translation in french and therefore all these words are aligned with the token id 0 18 1 0 18 2 0 18 3 0 18 4 0 since the words do not correspond one to one and yet the two phrases mean the same thing in the given context the phrases should be linked as wholes by linking each word in one to each word in anotherfor the example above this translates into 12 wordtoword alignmentsthe shared task included two different language pairs the alignment of words in englishfrench parallel texts and in romanianenglish parallel textsfor each language pair training data were provided to participantssystems relying only on these resources were considered part of the limited resources subtasksystems making use of any additional resources were classified under the unlimited resources categorytwo sets of training data were made available for pages containing potential parallel translations were manually identified next texts were automatically downloaded and sentence aligneda manual verification of the alignment was also performedthese data collection process resulted in a corpus of about 850000 romanian words and about 900000 english wordsall data were pretokenizedfor english and french we used a version of the tokenizers provided within the egypt toolkit2for romanian we used our own tokenizeridentical tokenization procedures were used for training trial and test datatwo sets of trial data were made available at the same time training data became availabletrial sets consisted of sentence aligned texts provided together with manually determined word alignmentsthe main purpose of these data was to enable participants to better understand the format required for the word alignment result filestrial sets consisted of 37 englishfrench and 17 romanianenglish aligned sentencesa total of 447 englishfrench aligned sentences and 248 romanianenglish aligned sentences were released one week prior to the deadlineparticipants were required to run their word alignment systems on these two sets and submit word alignmentsteams were allowed to submit an unlimited number of results sets for each language pairthe gold standard for the two language pair alignments were produced using slightly different alignment procedures which allowed us to study different schemes for producing gold standards for word aligned datafor englishfrench annotators where instructed to assign a sure or probable tag to each word alignment they producedthe intersection of the sure alignments produced by the two annotators led to the final sure aligned set while the reunion of the probable alignments led to the final probable aligned setthe sure alignment set is guaranteed to be a subset of the probable alignment setthe annotators did not produce any null alignmentsinstead we assigned null alignments as a default backup mechanism which forced each word to belong to at least one alignmentthe englishfrench aligned data were produced by franz och and hermann ney for romanianenglish annotators were instructed to assign an alignment to all words with specific instructions as to when to assign a null alignmentannotators were not asked to assign a sure or probable labelinstead we had an arbitration phase where a third annotator judged the cases where the first two annotators disagreedsince an interannotator agreement 
was reached for all word alignments the final resulting alignments were considered to be sure alignmentsevaluations were performed with respect to four different measuresthree of them precision recall and fmeasure represent traditional measures in information retrieval and were also frequently used in previous word alignment literaturethe fourth measure was originally introduced by and proposes the notion of quality of word alignmentgiven an alignment a and a gold standard alignment each such alignment set eventually consisting of two sets as ap and 9s 9p corresponding to sure and probable alignments the following measures are defined each word alignment submission was evaluated in terms of the above measuresmoreover we conducted two sets of evaluations for each submission nullalign where each word was enforced to belong to at least one alignment if a word did not belong to any alignment a null probable alignment was assigned by defaultthis set of evaluations pertains to full coverage word alignmentswe conducted therefore 14 evaluations for each submission file aer sureprobable precision sureprobable recall and sureprobable fmeasure with a different figure determined for nullalign and nonullalign alignmentsseven teams from around the world participated in the word alignment shared tasktable 1 lists the names of the participating systems the corresponding institutions and references to papers in this volume that provide detailed descriptions of the systems and additional analysis of their resultsall seven teams participated in the romanianenglish subtask and five teams participated in the englishfrench subtask3 there were no restrictions placed on the number of submissions each team could makethis resulted in a total of 27 submissions from the seven teams where 14 sets of results were submitted for the englishfrench subtask and 13 for the romanianenglish subtaskof the 27 total submissions there were 17 in the limited resources subtask and 10 in the unlimited resources subtasktables 2 and 3 show all of the submissions for each team in the two subtasks and provide a brief description of their approacheswhile each participating system was unique there were a few unifying themesfour teams had approaches that relied on an ibm model of statistical machine translation umd was a straightforward implementation of ibm model 2 bibr employed a boosting procedure in deriving an ibm model 1 lexicon ralign used ibm model 2 as a foundation for their recursive splitting procedure and xrce used ibm model 4 as a base for alignment with lemmatized text and bilingual lexiconstwo teams made use of syntactic structure in the text to be alignedproalign satisfies constraints derived from a dependency tree parse of the english sentence being alignedbibr also employs syntactic constraints that must be satisfiedhowever these come from parallel text that has been shallowly parsed via a method known as bilingual bracketingthree teams approached the shared task with baseline or prototype systemsfourday combines several intuitive baselines via a nearest neighbor classifier racai carries out a greedy alignment based on an automatically extracted dictionary of translations and umds implementation of ibm model 2 provides an experimental platform for their future work incorporating prior knowledge about cognatesall three of these systems were developed within a short period of time before and during the shared tasktables 4 and 5 list the results obtained by participating systems in the romanianenglish tasksimilarly results 
obtained during the englishfrench task are listed in tables 6 and 7for romanianenglish limited resources xrce systems seem to lead to the best resultsthese are systems that are based on giza with or without additional resources for unlimited resources proalignre1 has the best performancefor englishfrench ralignef1 has the best performance for limited resources while proalignef1 has again the largest number of top ranked figures for unlimited resourcesto make a crosslanguage comparison we paid particular attention to the evaluation of the sure alignments since these were collected in a similar fashion the results obtained for the englishfrench sure alignments are significantly higher than those for romanianenglish sure alignments similarly aer for englishfrench is clearly better than the aer for romanianenglish this difference in performance between the two data sets is not a surpriseas expected word alignment like many other nlp tasks highly benefits from large amounts of training dataincreased performance is therefore expected when larger training data sets are availablethe only evaluation set where romanianenglish data leads to better performance is the probable alignments setwe believe however that these figures are not directly comparable since the englishfrench probable alignments were obtained as a reunion of the alignments assigned by two different annotators while for the romanianenglish probable set two annotators had to reach an agreement interestingly in an overall evaluation the limited resources systems seem to lead to better results than those with unlimited resourcesout of 28 different evaluation figures 20 top ranked figures are provided by systems with limited resourcesthis suggests that perhaps using a large number of additional resources does not seem to improve a lot over the case when only parallel texts are employedranked results for all systems are plotted in figures 2 and 3in the graphs systems are ordered based on their aer scoressystem names are preceded by a marker to indicate the system type l stands for limited resources and you stands for unlimited resourcesa shared task on word alignment was organized as part of the hltnaacl 2003 workshop on building and using parallel textsin this paper we presented the task definition and resources involved and shortly described the participating systemsthe shared task included romanianenglish and englishfrench subtasks and drew the participation of seven teams from around the worldcomparative evaluations of results led to interesting insights regarding the impact on performance of various alignment algorithms large or small amounts of training data and type of resources availabledata and evaluation software used in this exercise are available online at httpwwwcsunteduradawptthere are many people who contributed greatly to making this word alignment evaluation task possiblewe are grateful to all the participants in the shared task for their hard work and involvement in this evaluation exercisewithout them all these comparative analyses of word alignment techniques would not be possiblewe are very thankful to franz och from isi and hermann ney from rwth aachen for kindly making their englishfrench word aligned data available to the workshop participants the hansards made available by ulrich germann from isi constituted invaluable data for the englishfrench shared taskwe would also like to thank the student volunteers from the department of english babesbolyai university clujnapoca romania who helped creating the 
romanianenglish word aligned datawe are also grateful to all the program committee members of the current workshop for their comments and suggestions which helped us improve the definition of this shared taskin particular we would like to thank dan melamed for suggesting the two different subtasks and michael carl and phil resnik for initiating interesting discussions regarding phrasebased evaluations
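The evaluation measures referred to in the paper above (precision, recall, F-measure, and the alignment error rate AER, computed against sure and probable gold links) are only named in this extracted text; their definitions did not survive extraction. The sketch below restates them in the standard Och-and-Ney-style formulation, where precision is taken against the probable links and recall against the sure links; the shared task additionally reported precision/recall/F separately for the sure and probable sets, so treat this as the common convention rather than the task's exact equations.

```python
def alignment_metrics(A, S, P):
    """A: system alignment links, S: sure gold links, P: probable gold links (S is a
    subset of P).  Links are (sentence_id, source_position, target_position) tuples."""
    a_s, a_p = len(A & S), len(A & P)
    precision = a_p / len(A) if A else 0.0          # against probable links
    recall = a_s / len(S) if S else 0.0             # against sure links
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    aer = 1.0 - (a_s + a_p) / (len(A) + len(S)) if (A or S) else 0.0  # alignment error rate
    return {"precision": precision, "recall": recall, "f": f_measure, "aer": aer}
```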
W03-0301
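For the English-French gold standard described in the paper above, the final sure set is the intersection of the two annotators' sure links and the final probable set is the union of their probable links. A one-function sketch of that merge, using the same link-tuple representation as the metric sketch above:

```python
def merge_annotations(sure_1, probable_1, sure_2, probable_2):
    """English-French gold-standard construction: sure links both annotators agree on,
    and the union of everything either annotator marked probable.  The sure set is then
    guaranteed to be a subset of the probable set, provided each annotator's sure links
    are contained in their own probable links."""
    sure = sure_1 & sure_2
    probable = probable_1 | probable_2
    return sure, probable
```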
An evaluation exercise for word alignment. This paper presents the task definition, resources, participating systems, and comparative results for the shared task on word alignment, which was organized as part of the HLT-NAACL 2003 workshop on Building and Using Parallel Texts. The shared task included Romanian-English and English-French subtasks and drew the participation of seven teams from around the world. We present a small dataset of 447 pairs of non-overlapping sentences which can be used to evaluate the performance of word-alignment systems.
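The exact submission line format of the word alignment shared task (Figure 1 of the paper above) is not reproduced in this extracted text. Judging from the quoted null-alignment examples ("18 1 0", "18 2 0", ...) and the described defaults, each line appears to carry a sentence id, a source-word position, a target-word position, and optional S/P and confidence fields; the parser below is a sketch under that assumption only.

```python
def parse_alignment_line(line):
    """Parse one submission line, assumed to be
    'sentence_id source_position target_position [S|P] [confidence]'.
    A missing S/P marker defaults to 'S' and a missing confidence to 1.0,
    mirroring the defaults described in the task definition."""
    fields = line.split()
    sent_id, src_pos, tgt_pos = (int(f) for f in fields[:3])
    marker, confidence = "S", 1.0
    for extra in fields[3:]:
        if extra in ("S", "P"):
            marker = extra
        else:
            confidence = float(extra)
    return sent_id, src_pos, tgt_pos, marker, confidence
```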
learning subjective nouns using extraction pattern bootstrapping we explore the idea of creating a subjectivity classifier that uses lists of subjective nouns learned by bootstrapping algorithms the goal of our research is to develop a system that can distinguish subjective sentences from objective sentences first we use two bootstrapping algorithms that exploit extraction patterns to learn sets of subjective nouns then we train a naive bayes classifier using the subjective nouns discourse features and subjectivity clues identified in prior research the bootstrapping algorithms learned over 1000 subjective nouns and the subjectivity classifier performed well achieving 77 recall with 81 precision many natural language processing applications could benefit from being able to distinguish between factual and subjective informationsubjective remarks come in a variety of forms including opinions rants allegations accusations suspicions and speculationideally information extraction systems should be able to distinguish between factual information and nonfactual information question answering systems should distinguish between factual and speculative answersmultiperspective question answering aims to present multiple answers to the user based upon speculation or opinions derived from different sourcesmultidocument summarization systems need to summarize different opinions and perspectivesspam filtering systems must recognize rants and emotional tirades among other thingsin general nearly any system that seeks to identify information could benefit from being able to separate factual and subjective informationsubjective language has been previously studied in fields such as linguistics literary theory psychology and content analysissome manuallydeveloped knowledge resources exist but there is no comprehensive dictionary of subjective languagemetabootstrapping and basilisk are bootstrapping algorithms that use automatically generated extraction patterns to identify words belonging to a semantic categorywe hypothesized that extraction patterns could also identify subjective wordsfor example the pattern expressed often extracts subjective nouns such as concern hope and supportfurthermore these bootstrapping algorithms require only a handful of seed words and unannotated texts for training no annotated data is needed at allin this paper we use the metabootstrapping and basilisk algorithms to learn lists of subjective nouns from a large collection of unannotated textsthen we train a subjectivity classifier on a small set of annotated data using the subjective nouns as features along with some other previously identified subjectivity featuresour experimental results show that the subjectivity classifier performs well and that the learned nouns improve upon previous stateoftheart subjectivity results in 2002 an annotation scheme was developed for a yous governmentsponsored project with a team of 10 researchers the scheme was inspired by work in linguistics and literary theory on subjectivity which focuses on how opinions emotions etc are expressed linguistically in context the scheme is more detailed and comprehensive than previous oneswe mention only those aspects of the annotation scheme relevant to this paperthe goal of the annotation scheme is to identify and characterize expressions of private states in a sentenceprivate state is a general covering term for opinions evaluations emotions and speculations for example in sentence the writer is expressing a negative evaluationsentence reflects the private 
state of western countriesmugabes use of overwhelmingly also reflects a private state his positive reaction to and characterization of his victoryour data consists of englishlanguage versions of foreign news documents from fbis the yous foreign broadcast information servicethe data is from a variety of publications and countriesthe annotated corpus used to train and test our subjectivity classifiers consists of 109 documents with a total of 2197 sentenceswe used a separate annotated tuning corpus of 33 documents with a total of 698 sentences to establish some experimental parameterseach document was annotated by one or both of two annotators a and t to allow us to measure interannotator agreement the annotators independently annotated the same 12 documents with a total of 178 sentenceswe began with a strict measure of agreement at the sentence level by first considering whether the annotator marked any privatestate expression of any strength anywhere in the sentenceif so the sentence should be subjectiveotherwise it is objectivetable 1 shows the contingency tablethe percentage agreement is 88 and the rc value is 071one would expect that there are clear cases of objective sentences clear cases of subjective sentences and borderline sentences in betweenthe agreement study supports thisin terms of our annotations we define a sentence as borderline if it has at least one privatestate expression identified by at least one annotator and all strength ratings of privatestate expressions are lowtable 2 shows the agreement results when such borderline sentences are removed the percentage agreement increases to 94 and the rc value increases to 087as expected the majority of disagreement cases involve lowstrength subjectivitythe annotators consistently agree about which are the clear cases of subjective sentencesthis leads us to define the goldstandard that we use in our experimentsa sentence is subjective if it contains at least one privatestate expression of medium or higher strengththe second class which we call objective consists of everything elsethus sentences with only mild traces of subjectivity are tossed into the objective category making the systems goal to find the clearly subjective sentencesin the last few years two bootstrapping algorithms have been developed to create semantic dictionaries by exploiting extraction patterns metabootstrapping and basilisk extraction patterns were originally developed for information extraction tasks they represent lexicosyntactic expressions that typically rely on shallow parsing and syntactic role assignmentfor example the pattern was hired would apply to sentences that contain the verb hired in the passive voicethe subject would be extracted as the hireemetabootstrapping and basilisk were designed to learn words that belong to a semantic category both algorithms begin with unannotated texts and seed words that represent a semantic categorya bootstrapping process looks for words that appear in the same extraction patterns as the seeds and hypothesizes that those words belong to the same semantic classthe principle behind this approach is that words of the same semantic class appear in similar pattern contextsfor example the phrases lived in and traveled to will cooccur with many noun phrases that represent locationsin our research we want to automatically identify words that are subjectivesubjective terms have many different semantic meanings but we believe that the same contextual principle applies to subjectivityin this section we briefly overview 
these bootstrapping algorithms and explain how we used them to generate lists of subjective nounsthe metabootstrapping process begins with a small set of seed words that represent a targeted semantic category and an unannotated corpusfirst metaboot automatically creates a set of extraction patterns for the corpus by applying and instantiating syntactic templatesthis process literally produces thousands of extraction patterns that collectively will extract every noun phrase in the corpusnext metaboot computes a score for each pattern based upon the number of seed words among its extractionsthe best pattern is saved and all of its extracted noun phrases are automatically labeled as the targeted semantic category2 metaboot then rescores the extraction patterns using the original seed words as well as the newly labeled words and the process repeatsthis procedure is called mutual bootstrappinga second level of bootstrapping makes the algorithm more robustwhen the mutual bootstrapping process is finished all nouns that were put into the semantic dictionary are reevaluatedeach noun is assigned a score based on how many different patterns extracted itonly the five best nouns are allowed to remain in the dictionarythe other entries are discarded and the mutual bootstrapping process starts over again using the revised semantic dictionarybasilisk is a more recent bootstrapping algorithm that also utilizes extraction patterns to create a semantic dictionarysimilarly basilisk begins with an unannotated text corpus and a small set of seed words for a semantic categorythe bootstrapping process involves three steps basilisk automatically generates a set of extraction patterns for the corpus and scores each pattern based upon the number of seed words among its extractionsthis step is identical to the first step of metabootstrappingbasilisk then puts the best patterns into a pattern pool all nouns3 extracted by a pattern in the pattern pool are put into a candidate word poolbasilisk scores each noun based upon the set of patterns that extracted it and their collective association with the seed words the top 10 nouns are labeled as the targeted semantic class and are added to the dictionarythe bootstrapping process then repeats using the original seeds and the newly labeled wordsthe main difference between basilisk and metabootstrapping is that basilisk scores each noun based on collective information gathered from all patterns that extracted itin contrast metabootstrapping identifies a single best pattern and assumes that everything it extracted belongs to the same semantic classthe second level of bootstrapping smoothes over some of the problems caused by this assumptionin comparative experiments basilisk outperformed metabootstrappingbut since our goal of learning subjective nouns is different from the original intent of the algorithms we tried them bothwe also suspected they might learn different words in which case using both algorithms could be worthwhilethe metabootstrapping and basilisk algorithms need seed words and an unannotated text corpus as inputsince we did not need annotated texts we created a much larger training corpus the bootstrapping corpus by gathering 950 new texts from the fbis source mentioned in section 22to find candidate seed words we automatically identified 850 nouns that were positively correlated with subjective sentences in another data sethowever it is crucial that the seed words occur frequently in our fbis texts or the bootstrapping process will not get off the groundso we 
searched for each of the 850 nouns in the bootstrapping corpus sorted them by frequency and manually selected 20 highfrequency words that we judged to be strongly subjectivetable 3 shows the 20 seed words used for both metabootstrapping and basiliskwe ran each bootstrapping algorithm for 400 iterations generating 5 words per iterationbasilisk generated 2000 nouns and metabootstrapping generated 1996 nouns4 table 4 shows some examples of extraction patterns that were discovered to be associated with subjective nounsmetabootstrapping and basilisk are semiautomatic lexicon generation tools because although the bootstrapping process is 100 automatic the resulting lexicons need to be reviewed by a human5 so we manually reviewed the 3996 words proposed by the algorithmsthis process is very fast it takes only a few seconds to classify each wordthe entire review process took approximately 34 hoursone author did this labeling this person did not look at or run tests on the experiment corpuswe classified the words as strongsubjective weaksubjective or objectiveobjective terms are not subjective at all strongsubjective terms have strong unambiguously subjective connotations such as bully or barbarianweaksubjective was used for three situations words that have weak subjective connotations such as aberration which implies something out of the ordinary but does not evoke a strong sense of judgement words that have multiple senses or uses where one is subjective but the other is notfor example the word plague can refer to a disease or an onslaught of something negative words that are objective by themselves but appear in idiomatic expressions that are subjectivefor example the word eyebrows was labeled weaksubjective because the expression raised eyebrows probably occurs more often in our corpus than literal references to eyebrowstable 5 shows examples of learned words that were classified as strongsubjective or weaksubjectiveonce the words had been manually classified we could go back and measure the effectiveness of the algorithmsthe graph in figure 1 tracks their accuracy as the bootstrapping progressedthe xaxis shows the number of words generated so farthe yaxis shows the percentage of those words that were manually classified as subjectiveas is typical of bootstrapping algorithms accuracy was high during the initial iterations but tapered off as the bootstrapping continuedafter 20 words both algorithms were 95 accurateafter 100 words basilisk was 75 accurate and metaboot was 81 accurateafter 1000 words accuracy dropped to about 28 for metaboot but basilisk was still performing reasonably well at 53although 53 accuracy is not high for a fully automatic process basilisk depends on a human to review the words so 53 accuracy means that the human is accepting every other word on averagethus the reviewers time was still being spent productively even after 1000 words had been hypothesizedtable 6 shows the size of the final lexicons created by the bootstrapping algorithmsthe first two columns show the number of subjective terms learned by basilisk and metabootstrappingbasilisk was more prolific generating 825 subjective terms compared to 522 for metabootstrappingthe third column shows the intersection between their word liststhere was substantial overlap but both algorithms produced many words that the other did notthe last column shows the results of merging their listsin total the bootstrapping algorithms produced 1052 subjective nounsto evaluate the subjective nouns we trained a naive bayes classifier 
using the nouns as featureswe also incorporated previously established subjectivity clues and added some new discourse featuresin this section we describe all the feature sets and present performance results for subjectivity classifiers trained on different combinations of these featuresthe threshold values and feature representations used in this section are the ones that produced the best results on our separate tuning corpuswe defined four features to represent the sets of subjective nouns produced by the bootstrapping algorithmsbastrong the set of strongsubjective nouns generated by basilisk baweak the set of weaksubjective nouns generated by basilisk mbstrong the set of strongsubjective nouns generated by metabootstrapping mbweak the set of weaksubjective nouns generated by metabootstrapping for each set we created a threevalued feature based on the presence of 0 1 or 2 words from that setwe used the nouns as feature sets rather than define a separate feature for each word so the classifier could generalize over the set to minimize sparse data problemswe will refer to these as the subjnoun featureswiebe bruce ohara developed a machine learning system to classify subjective sentenceswe experimented with the features that they used both to compare their results to ours and to see if we could benefit from their featureswe will refer to these as the wbo featureswbo includes a set of stems positively correlated with the subjective training examples and a set of stems positively correlated with the objective training examples we defined a threevalued feature for the presence of 0 1 or 2 members of subjstems in a sentence and likewise for objstemsfor our experiments subjstems includes stems that appear 7 times in the training set and for which the precision is 125 times the baseline word precision for that training set objstems contains the stems that appear 7 times and for which at least 50 of their occurrences in the training set are in objective sentenceswbo also includes a binary feature for each of the following the presence in the sentence of a pronoun an adjective a cardinal number a modal other than will and an adverb other than notwe also added manuallydeveloped features found by other researcherswe created 14 feature sets representing some classes from some framenet lemmas with frame element experiencer adjectives manually annotated for polarity and some subjectivity clues listed in we represented each set as a threevalued feature based on the presence of 0 1 or 2 members of the setwe will refer to these as the manual featureswe created discourse features to capture the density of clues in the text surrounding a sentencefirst we computed the average number of subjective clues and objective clues per sentence normalized by sentence lengththe subjective clues subjclues are all sets for which 3valued features were defined above the objective clues consist only of objstemsfor senavgclueratesubj to be the average of cluerate over all sentences s and similarly for avgcluerateobjnext we characterize the number of subjective and objective clues in the previous and next sentences as higherthanexpected lowerthanexpected or expected the value for clueratesubj is high if clueratesubj avgclueratesubj 13 low if clueratesubj avgsentlen13 low if length avgsentlen13 and medium otherwisewe conducted experiments to evaluate the performance of the feature sets both individually and in various combinationsunless otherwise noted all experiments involved training a naive bayes classifier using a 
particular set of featureswe evaluated each classifier using 25fold cross validation on the experiment corpus and used paired ttests to measure significance at the 95 confidence levelas our evaluation metrics we computed accuracy as the percentage of the systems classifications that match the goldstandard and precision and recall with respect to subjective sentences represents the common baseline of assigning every sentence to the most frequent classthe mostfrequent baseline achieves 59 accuracy because 59 of the sentences in the goldstandard are subjectiverow is a naive bayes classifier that uses the wbo features which performed well in prior research on sentencelevel subjectivity classification row shows a naive bayes classifier that uses unigram bagofwords features with one binary feature for the absence or presence in the sentence of each word that appeared during trainingpang et al reported that a similar experiment produced their best results on a related classification taskthe difference in accuracy between rows and is not statistically significant next we trained a naive bayes classifier using only the subjnoun featuresthis classifier achieved good precision but only moderate recall upon further inspection we discovered that the subjective nouns are good subjectivity indicators when they appear but not every subjective sentence contains one of themand relatively few sentences contain more than one making it difficult to recognize contextual effects we concluded that the appropriate way to benefit from the subjective nouns is to use them in tandem with other subjectivity cluestable 8 shows the results of naive bayes classifiers trained with different combinations of featuresthe accuracy differences between all pairs of experiments in table 8 are statistically significantrow uses only the wbo features row uses the wbo features as well as the subjnoun featuresthere is a synergy between these feature sets using both types of features achieves better performance than either one alonethe difference is mainly precision presumably because the classifier found more and better combinations of featuresin row we also added the manual and discourse featuresthe discourse features explicitly identify contexts in which multiple clues are foundthis classifier produced even better performance achieving 813 precision with 774 recallthe 761 accuracy result is significantly higher than the accuracy results for all of the other classifiers finally higher precision classification can be obtained by simply classifying a sentence as subjective if it contains any of the strongsubjective nounson our data this method produces 87 precision with 26 recallthis approach could support applications for which precision is paramountseveral types of research have involved documentlevel subjectivity classificationsome work identifies inflammatory texts or classifies reviews as positive or negative tongs system generates sentiment timelines tracking online discussions and creating graphs of positive and negative opinion messages over timeresearch in genre classification may include recognition of subjective genres such as editorials in contrast our work classifies individual sentences as does the research in sentencelevel subjectivity classification is useful because most documents contain a mix of subjective and objective sentencesfor example newspaper articles are typically thought to be relatively objective but reported that in their corpus 44 of sentences were subjectivesome previous work has focused explicitly on 
learning subjective words and phrases describes a method for identifying the semantic orientation of words for example that beautiful expresses positive sentimentsresearchers have focused on learning adjectives or adjectival phrases and verbs but no previous work has focused on learning nounsa unique aspect of our work is the use of bootstrapping methods that exploit extraction patterns used patterns representing partofspeech sequences recognized adjectival phrases and learned ngramsthe extraction patterns used in our research are linguistically richer patterns requiring shallow parsing and syntactic role assignmentin recent years several techniques have been developed for semantic lexicon creation semantic word learning is different from subjective word learning but we have shown that metabootstrapping and basilisk could be successfully applied to subjectivity learningperhaps some of these other methods could also be used to learn subjective wordsthis research produced interesting insights as well as performance resultsfirst we demonstrated that weakly supervised bootstrapping techniques can learn subjective terms from unannotated textssubjective features learned from unannotated documents can augment or enhance features learned from annotated training data using more traditional supervised learning techniquessecond basilisk and metabootstrapping proved to be useful for a different task than they were originally intendedby seeding the algorithms with subjective words the extraction patterns identified expressions that are associated with subjective nounsthis suggests that the bootstrapping algorithms should be able to learn not only general semantic categories but any category for which words appear in similar linguistic phrasesthird our best subjectivity classifier used a wide variety of featuressubjectivity is a complex linguistic phenomenon and our evidence suggests that reliable subjectivity classification requires a broad array of features
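The Basilisk-style bootstrapping loop described in the paper above can be summarised in a few lines. The sketch below stubs out pattern generation (it takes a precomputed map from extraction pattern to the noun phrases it extracts) and uses the RlogF pattern score and average-log candidate score from the published Basilisk work; the text here describes those scores only informally, so the exact formulas and the fixed pool size should be read as assumptions.

```python
from math import log2

def basilisk_style_bootstrap(seeds, extractions, iterations=400,
                             patterns_per_iter=20, words_per_iter=5):
    """`extractions` maps each extraction pattern to the set of noun phrases it
    extracts from the unannotated corpus.  Returns the learned (unreviewed) lexicon."""
    lexicon = set(seeds)
    for _ in range(iterations):
        # Score patterns by how many of their extractions are already in the lexicon:
        # RlogF = (F / N) * log2(F), F = extractions in the lexicon, N = all extractions.
        def rlogf(pattern):
            extracted = extractions[pattern]
            f = len(extracted & lexicon)
            return (f / len(extracted)) * log2(f) if f > 1 else 0.0
        pool = sorted(extractions, key=rlogf, reverse=True)[:patterns_per_iter]

        # Candidate nouns: extracted by some pooled pattern but not yet in the lexicon.
        candidates = set().union(*(extractions[p] for p in pool)) - lexicon
        if not candidates:
            break

        # Score each candidate by the collective association of all pooled patterns
        # that extract it with the current lexicon (average log).
        def avg_log(word):
            pats = [p for p in pool if word in extractions[p]]
            return sum(log2(len(extractions[p] & lexicon) + 1) for p in pats) / len(pats)
        lexicon.update(sorted(candidates, key=avg_log, reverse=True)[:words_per_iter])
    return lexicon - set(seeds)
```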
W03-0404
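The inter-annotator agreement figures in the subjectivity study above (88% agreement with 0.71, rising to 94% and 0.87 once borderline sentences are removed) read like raw percentage agreement plus a kappa statistic, although the statistic's name is garbled in this extract. A small sketch computing both from a 2x2 sentence-level contingency table, under the assumption that the statistic is Cohen's kappa:

```python
def agreement_and_kappa(both_subj, only_a_subj, only_b_subj, both_obj):
    """Percentage agreement and Cohen's kappa from a 2x2 contingency table of two
    annotators' sentence-level subjective/objective judgements."""
    n = both_subj + only_a_subj + only_b_subj + both_obj
    observed = (both_subj + both_obj) / n
    p_a = (both_subj + only_a_subj) / n   # annotator A's rate of "subjective"
    p_b = (both_subj + only_b_subj) / n   # annotator B's rate of "subjective"
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return observed, (observed - expected) / (1 - expected)
```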
Learning subjective nouns using extraction pattern bootstrapping. We explore the idea of creating a subjectivity classifier that uses lists of subjective nouns learned by bootstrapping algorithms. The goal of our research is to develop a system that can distinguish subjective sentences from objective sentences. First, we use two bootstrapping algorithms that exploit extraction patterns to learn sets of subjective nouns. Then we train a Naive Bayes classifier using the subjective nouns, discourse features, and subjectivity clues identified in prior research. The bootstrapping algorithms learned over 1000 subjective nouns, and the subjectivity classifier performed well, achieving 77% recall with 81% precision. We use manually derived pattern templates to extract subjective nouns by bootstrapping. We mine subjective nouns from unannotated texts with two bootstrapping algorithms that exploit lexico-syntactic extraction patterns and manually selected subjective seeds.
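A minimal sketch of how the learned noun lists feed the classifier described above: each noun set becomes a single three-valued feature (0, 1, or 2-or-more members present in the sentence), and a Naive Bayes model is trained over those features. The scikit-learn estimator is an illustrative stand-in, not the authors' implementation, and the full system also adds the WBO, manual, and discourse features omitted here.

```python
from sklearn.naive_bayes import MultinomialNB

def three_valued(tokens, noun_set):
    """0, 1, or 2 depending on whether the sentence contains zero, one, or
    two-or-more members of the noun set."""
    return min(sum(tok in noun_set for tok in tokens), 2)

def featurize(sentences, noun_sets):
    """One three-valued feature per set, e.g. BA-strong, BA-weak, MB-strong, MB-weak."""
    return [[three_valued(toks, s) for s in noun_sets] for toks in sentences]

def train_subjectivity_classifier(train_sentences, labels, noun_sets):
    """`labels`: 1 for subjective, 0 for objective gold-standard sentences."""
    return MultinomialNB().fit(featurize(train_sentences, noun_sets), labels)
```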
unsupervised personal name disambiguation this paper presents a set of algorithms for distinguishing personal names with multiple real referents in text based on little or no supervision the approach utilizes an unsupervised clustering technique over a rich feature space of biographic facts which are automatically extracted via a languageindependent bootstrapping process the induced clustering of named entities are then partitioned and linked to their real referents via the automatically extracted biographic data performance is evaluated based on both a test set of handlabeled multireferent personal names and via automatically generated pseudonames one open problem in natural language ambiguity resolution is the task of proper noun disambiguationwhile word senses and translation ambiguities may typically have 220 alternative meanings that must be resolved through context a personal name such as jim clark may potentially refer to hundreds or thousands of distinct individualseach different referent typically has some distinct contextual characteristicsthese characteristics can help distinguish resolve and trace the referents when the surface names appear in online documentsa search of google shows 76000 web pages mentioning jim clark of which the first 10 unique referents are in this paper we present a method for distinguishing the real world referent of a given name in contextapproaches to this problem include wacholder et al focusing on the variation of surface name for a given referent and smith and crane resolving geographic name ambiguitywe present preliminary evaluation on pseudonames conflations of multiple personal names constructed in the same way pseudowords are used for word sense disambiguation we then present corroborating evidence from real personal name polysemy to show that this technique works in practiceanother topic of recent interest is in producing biographical summaries from corpora along with disambiguation our system simultaneously collects biographic information the relevant biographical attributes are depicted along with a clustering which shows the distinct referents past work on this task has primarily approached personal name disambiguation using document context profiles or vectors which recognize and distinguish identical name instances based on partially indicative words in context such as computer or car in the clark casehowever in the specialized case of personal names there is more precise information availablein particular information extraction techniques can add high precision categorical information such as approximate agedateofbirth nationality and occupationthis categorical data can support or exclude a candidate namehreferent matches with higher confidence and greater pinpoint accuracy than via simple context vectorstyle features aloneanother major source of disambiguation information for proper nouns is the space of associated nameswhile these names could be used in a undifferentiated vectorbased bagofwords model further accuracy can be gained by extracting specific types of association such as familial relationships employment relationships and nationality as distinct from simple term cooccurrence in a windowthe jim clark married to vickie parkerclark is likely not the same jim clark married to patty clarkadditionally information about ones associates can help predict information about the person in questionsomeone who frequently associates with egyptians is likely to be egyptian or at the very least has a close connection to egyptone standard 
method for generating extraction patterns is simply to write them by handin this paper we have experimented with generating patterns automatically from datathis has the advantage of being more flexible portable and scalable and potentially having higher precision and recallit also has the advantage of being applicable to new languages for which no developer with sufficient knowledge of the language is available d web pages in the late 90s there was a substantial body of research on learning information extraction patterns from templates these techniques provide a way to bootstrap information extraction patterns from a set of example extractions or seed facts where a tuple with the filled roles for the desired pattern are givenfor the task of extracting biographical information each example would include the personal name and the biographic featurefor example training data for the pattern born in might be amadeus given this set of examples each method generates patterns differentlyin this paper we employ and extend the method described by ravichandran and hovy shown in figure 1for each seed fact pair for a given template then serve as extraction patterns for previously unknown fact pairs and their precision in fact extraction can be calculated with respect to a set of currently known factswe examined a subset of the available and desirable extracted informationwe learned patterns for birth year and occupation and handcoded patterns for birth location spouse birthday familial relationships collegiate affiliations and nationalityother potential patterns currently under investigation include employeremployee and place of residencewe adapted the information extraction pattern generation techniques described above to multiple languagesin particular the methodology proposed by ravichandran and hovy requires no parsing or other language specific resources so is an ideal candidate for multilingual usein this paper we conducted an initial test test of the viability of inducing these information extraction patterns across languagesto test we constructed a initial database of 5 people and their birthdays and used this to induce the english patternswe then increased the database to 50 people and birthdays and induced patterns for spanish presenting the results abovefigure 2 shows the top precision patterns extracted for english and for spanishit can be seen that the spanish patterns are of the same length with similar estimated precision as well as similar word and punctuation distribution as the english onesin fact the purely syntactic patterns look identicalthe only difference being that to generate equivalent spanish data a database of training examples an order of magnitude larger was requiredthis may be because for each database entry more pages were available on english websites than on spanish websitesthis section examines clustering of web pages which containing an ambiguous personal name the cluster method we employed is bottomup centroid agglomerative clusteringin this method each document is assigned a vector of automatically extracted featuresat each stage of the clustering the two most similar vectors are merged to produce a new cluster with a vector equal to the centroid of the vectors in the clusterthis step is repeated until all documents are clusteredto generate the vectors for each document we explored a variety of methods word weight weight adderley 530 0 snipes 516 0 coltrane 506 0 montreux 501 0 bitches 499 0 danson 497 0 hemp 497 0 mullally 495 0 porgy 494 0 remastered 492 0 actor 
350 240 1926 0 220 trumpeter 0 220 midland 0 139 table 3 the 10 words with highest mutual information with the document collection and all of extended feature words for davisharrelson pseudoname in our baseline models we used term vectors composed either of all words or of only proper nounsto assess similarity between vectors we utilized standard cosine similarity ab abwe experimentally determined that the use of proper nouns alone led to more pure clusteringas a result for the remainder of the experiments we used only proper nouns in the vectors except for those common words introduced by the various feature setsselective term weighting has been shown to be highly effective for information retrievalfor this study we investigated both the use of standard tfidf weighting and weighting based on the mutual information where given a document collection c from these we select words which appear more than a1 20 times in the collection and have a i greater than a2 10these words are to the documents feature vector with a weight equal to logthe next set of models use the features extracted using the methodology described in section 2biographical information such as birth year and occupation when found is quite useful in connecting documentsif a document connects a name with a birth year and another document connects the same name with the same birth year typically those two documents refer to the same personthese extracted features were used to categorically cluster documents in which they appearedbecause of their high degree of precision and specificity documents which contained similar extracted features are virtually guaranteed to have the same referentby clustering these documents first large high quality clusters formed which then then provided an anchor for the remaining pagesby examining the dendrogram in figure 3 it is clear that the clusters start with documents with matching features and then the other documents cluster around this corein addition to improving disambiguation performance these extracted features help distinguish the different clusters and provide information about the different peopleanother method for using these extracted features is to give higher weight to words which have ever been seen as filling a patternfor example if 1756 is extracted as a birth year from a syntacticbased pattern for the polysemous name then whenever 1756 is observed anywhere in context it is given a higher weighting and added to the document vector as a potential biographic featurein our experiments we did this only for words which appeared as values for a feature more than a threshold of 4 timesthen whenever the word was seen in a document it was given a weight equal to the log of the number of times the word was seen as an extracted featureideally the raw unsupervised clustering would yield a top level distinction between the different referentshowever this is rarely the casewith this type of agglomerative clustering the most similar pages are clustered first and outliers are assigned as stragglers at the top levels of the cluster treethis typically leads to a full clustering where the toplevel clusters are significantly less discriminative than those at the rootsin order to compensate for this effect we performed a type of tree refactoring which attempted to pick out and utilize seed clusters from within the entire clusteringin the refactoring the clustering is stopped before it runs to completion based on the percentage of documents clustered and the relative size of the clusters achievedat 
this intermediate stage relatively large and highprecision clusters are found these automaticallyinduced clusters are then used as seeds for the next stage where the unclustered documents are assigned to the seed with the closest distance measure an alternative to this form of cluster refactoring would be to initially cluster only pages with extracted featuresthis would yield a set of cluster seeds divided by features which could then be used for further clusteringhowever this method relies on having a number of pages with extracted features that overlap from each referentthis can only be figure 3 nnpfeateztfeatmi clustering visualization for davisharrelson pseudoname assured when the feature set is rich or a large document space is assumedto test these clustering methods we collected web pages by making requests to the google website for a set of target personal names there was no requirediscography solos ment that the web page be focused on that name nor was there a minimum number of name occurrencesas a result some pages clustered only mentioned the name in passing or in a specialized commercial context the pseudonames were created as followsthe retrieval results from two different randomlyselected people were taken and all references to either name replaced by a unique shared pseudonamethe resulting collection then consisted of documents which were ambiguous as to whom they talked aboutthe aim of the clustering was then to distinguish this artificially conflated pseudonamein addition a test set of four naturally occurring polysemous names containing an average of 60 instances each was manually annotated with distinguishing nameid numbers and used for a parallel evaluationthe experiments consist of two partsthe first output is the clustering visualizations whose utility can be judged by inspectionthe second is a quantitative analysis of the different methodologiesboth are conducted over test sets of pseudonames and naturally occurring ambiguitiesfigures 23 and 4 each have two subfiguresthe lefttop figure shows the extracted seed setsthe rightbottom figure shows the final clustering of the entire document collectionin each figure there are three columns of information before the dendrogramthe first column contains high weighted document content wordsthe second column contains the extracted features from the documentthe third column indicates the real referentthis is either the real name of the conflated pseudoname or a number indicating the referent this presentation allows a quick scan of the clustering to reveal correlationsin general the visualizations are informativeoccasionally the extractions errone time when the patterns themselves cannot be syntactically faulted comes in the case where woody harrelsons wife is extracted as demi moorethe information was extracted from the sentence architect woody harrelson and his wife realtor demi moore which appears as a plot description for the movie indefigure 4 nnpfeatextfeatmi clustering visualization of jim clark pages 1race car driver 4netscape founder amultiple referents cent proposalhere untangling of synecdoche is neededfor miles davis the incorrectly extracted birth years refer to record release dates which take the same surface form as birth years in some genresfigure 4 shows a clustering for a naturally occuring name ambiguity in particular that of web pages which refer to jim clarkthe set was constructed by retrieving 100 web pages and then labeling the pages with respect to their referentas can be seen the clusterings are highly 
coherentall of the relevant pages are included in the seed set and few inappropriate pages are addedthis type of clustering would be useful to someone searching for a specific individual named jim clarkonce the clustering had been performed a user could scan the output and identity the jim clark of interest based both on extracted features and key wordsfor automated pseudoname evaluation purposes we selected a set of 8 different people for conflation who we presumed had one vastly predominant sensewe selected these people giving room for historical figures figures from pop culture and modern media culture as well as ordinary peoplewe added people with similar backgrounds the full list was composed of these 8 individuals for each we submitted google queries and retrieved up to 1000 pages eachwe then took these hit returns and subsampled to a maximum of 100 pages per personthe person with the smallest representation was anna shusterman with 26 pageswe subsampled by taking the first 100 as ordered lexicallythis may have biased the results somewhat towards unreliable web pages since pages with numeric addresses tend to be newer and more transientwe evaluated two guanularities of feature extractionthe small feature set uses high precision rules to extract occupation birthday spouse birth location and schoolthe large feature set adds higher recall patterns for the previous relationships and as well as parentchild relationshipsas can be seen from the table the highest performing system combines proper nouns relevant words and the high precision extracted features the extended features do not give additional benefit to this combinationas can be seen from the table the large feature set yields better overall performance than the smaller feature setthis suggests that the increased coverage outweighs the introduced noisefor the feattfidf system accuracy at the twoclass disambiguation was above 80 for 25 out of the 28 pairswithout these pairs the average twoclass disambiguation performance over the remaining pairs is 90in two of the problematic cases the contexts of the names are easily confusable as the individuals share the same profession and many of the same keywordsmore complete biographic profiles and different clustering biases would be helpful in fully partitioning these caseshowever in practice these pseudoname pair situations may be more difficult than expected for naturally occurring name pairsin many occupations that are typically newsworthy there may be a tendency for individuals to avoid using identical names to minimize confusionwhen people with identical names do indeed share the same field one would expect a greater effort to providing disambiguating contextual features to distinguish themwe have made some preliminary investigations into selecting pages according to the number of mentions as opposed to by randomthe results have not been conclusive and continuing work is investigating the becausethe above results have utilized pseudoname test sets where high accuracy ground truth is automatically available in large quantities o examples per name to better distinguish model performancetable 6 shows the performance on the four o example handlabeled test sets for naturally occurring polysemous person namesgiven that this is an nary classification task for consistency with the above experiments the data were assigned to one of 3 clusters corresponding to the 2 automatically derived firstpass majority seed sets and the residual otheruse classification but evaluated strictly on performance 
for the two major senseswhile additional analyses could be accomplished on the residual sets this is difficult given their small size and lack of evidence on many singlemention web pagesthus the task of accurately partitioning the two most common uses and clustering the residual examples for visual exploration may be a natural and practical use for these classification and visualization technologiesin this paper we have presented a set of algorithms for finding the real referents for ambiguous personal names in text using unsupervised clustering and feature extraction methodsin particular we have shown how to learn and use automatically extracted biographic information to improve clustering results and have demonstrated this improvement by evaluating on pseudonameswe have presented initial results on learning these patterns to extract biographic information for multiple languages and intend to use these techniques for largescale multilingual polysemous name clusteringthe results presented here support the automatic clustering of polysemous personal name referents and visualization of these induced clusters and their motivating featuresthese distinct referents can be verified by inspection both of extracted features and of the high weighted terms for each documentthese clusterings may be useful in two waysfirst as a useful visualization tool themselves and second as seeds for disambiguating further entities
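The cosine similarity formula quoted in the clustering section of the paper above is garbled in this extract; in its usual form it is cos(a, b) = a·b / (|a| |b|). The sketch below applies it in the bottom-up centroid agglomerative clustering the paper describes, over sparse document term vectors; stopping early (a larger `stop_at`) yields the seed clusters used in the tree-refactoring step, after which the remaining documents are attached to the most similar seed centroid. Term weighting (TF-IDF or mutual information) is omitted for brevity.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """cos(a, b) = a.b / (|a| |b|) over sparse term-weight vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(vectors):
    total = Counter()
    for v in vectors:
        total.update(v)
    return Counter({term: weight / len(vectors) for term, weight in total.items()})

def agglomerative_cluster(doc_vectors, stop_at=1):
    """Bottom-up centroid clustering: repeatedly merge the two most similar clusters
    until `stop_at` clusters remain (stop_at > 1 stops early, yielding seed clusters)."""
    clusters = [[i] for i in range(len(doc_vectors))]    # clusters of document indices
    centroids = [Counter(v) for v in doc_vectors]
    while len(clusters) > stop_at:
        i, j = max(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                   key=lambda pair: cosine(centroids[pair[0]], centroids[pair[1]]))
        clusters[i] += clusters.pop(j)
        centroids.pop(j)
        centroids[i] = centroid([doc_vectors[d] for d in clusters[i]])
    return clusters, centroids

def assign_stragglers(clusters, centroids, doc_vectors, stragglers):
    """Second stage of the refactoring: each unclustered document joins the seed
    cluster whose centroid it is most similar to."""
    for d in stragglers:
        best = max(range(len(centroids)), key=lambda k: cosine(doc_vectors[d], centroids[k]))
        clusters[best].append(d)
    return clusters
```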
W03-0405
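The pseudoname evaluation described in the paper above conflates the retrieved pages of two different people under a single shared name and then checks how well the clustering recovers the two underlying referents. A simplified sketch follows (exact string replacement only, whereas the real data are full web pages; the scoring helper assumes the predicted labels use the same "A"/"B" label set as the gold annotations):

```python
import re

def build_pseudoname_set(pages_a, pages_b, name_a, name_b, pseudoname="Pseudo Name"):
    """Replace every mention of either real name with one shared pseudoname and keep
    the true referent of each page as the gold label."""
    docs, gold = [], []
    for text, name, label in ([(p, name_a, "A") for p in pages_a] +
                              [(p, name_b, "B") for p in pages_b]):
        docs.append(re.sub(re.escape(name), pseudoname, text, flags=re.IGNORECASE))
        gold.append(label)
    return docs, gold

def two_way_accuracy(predicted, gold):
    """Accuracy of a two-cluster partition against the gold referents, taking the better
    of the two possible cluster-to-referent mappings.  `predicted` must use the same
    'A'/'B' labels as `gold`."""
    matches = sum(p == g for p, g in zip(predicted, gold))
    return max(matches, len(gold) - matches) / len(gold)
```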
Unsupervised personal name disambiguation. This paper presents a set of algorithms for distinguishing personal names with multiple real referents in text, based on little or no supervision. The approach utilizes an unsupervised clustering technique over a rich feature space of biographic facts, which are automatically extracted via a language-independent bootstrapping process. The induced clusterings of named entities are then partitioned and linked to their real referents via the automatically extracted biographic data. Performance is evaluated based on both a test set of hand-labeled multi-referent personal names and via automatically generated pseudonames. We extract biographic facts such as date or place of birth, occupation, and relatives, among others, to help resolve ambiguous names of people.
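The biographic pattern induction described in the body of the paper above starts from seed (name, fact) pairs, finds text where both anchors co-occur, and keeps the intervening string as a candidate extraction pattern, later scored by its precision against the known facts. A very rough sketch of the harvesting step (surface-string matching only; the anchors, placeholders, and frequency threshold are illustrative assumptions):

```python
from collections import Counter

def harvest_patterns(sentences, seed_facts, min_count=2):
    """For each (name, value) seed fact, find sentences containing both anchors and
    record the text between them, with the anchors replaced by placeholders, as a
    candidate extraction pattern.  Patterns seen fewer than `min_count` times are
    dropped; a further step would estimate each pattern's extraction precision
    against the currently known facts."""
    counts = Counter()
    for name, value in seed_facts:
        for sent in sentences:
            if name in sent and value in sent:
                i, j = sent.index(name), sent.index(value)
                if i < j:
                    pattern = "<NAME>" + sent[i + len(name):j] + "<VALUE>"
                else:
                    pattern = "<VALUE>" + sent[j + len(value):i] + "<NAME>"
                counts[pattern] += 1
    return [p for p, c in counts.items() if c >= min_count]
```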
bootstrapping postaggers using unlabelled data this paper investigates booststrapping partofspeech taggers using cotraining in which two taggers are iteratively retrained on each others output since the output of the taggers is noisy there is a question of which newly labelled examples to add to the training set we investigate selecting examples by directly maximising tagger agreement on unlabelled data a method which has been theoretically and empirically motivated in the cotraining literature our results show that agreementbased cotraining can significantly improve tagging performance for small seed datasets further results show that this form of cotraining considerably outperforms selftraining however we find that simply retraining on all the newly labelled data can in some cases yield comparable results to agreementbased cotraining with only a fraction of the computational cost cotraining and several variants of cotraining have been applied to a number of nlp problems including word sense disambiguation named entity recognition noun phrase bracketing and statistical parsing in each case cotraining was used successfully to bootstrap a model from only a small amount of labelled data and a much larger pool of unlabelled dataprevious cotraining approaches have typically used the score assigned by the model as an indicator of the reliability of a newly labelled examplein this paper we take a different approach based on theoretical work by dasgupta et al and abney in which newly labelled training examples are selected using a greedy algorithm which explicitly maximises the pos taggers agreement on unlabelled datawe investigate whether cotraining based upon directly maximising agreement can be successfully applied to a pair of partofspeech taggers the markov model tnt tagger and the maximum entropy cc tagger there has been some previous work on boostrapping pos taggers and cucerzan and yarowsky but to our knowledge no previous work on cotraining pos taggersthe idea behind cotraining the pos taggers is very simple use output from the tnt tagger as additional labelled data for the maximum entropy tagger and vice versa in the hope that one tagger can learn useful information from the output of the othersince the output of both taggers is noisy there is a question of which newly labelled examples to add to the training setthe additional data should be accurate but also useful providing the tagger with new informationour work differs from the blum and mitchell formulation of cotraining by using two different learning algorithms rather than two independent feature sets our results show that when using very small amounts of manually labelled seed data and a much larger amount of unlabelled material agreementbased cotraining can significantly improve pos tagger accuracywe also show that simply retraining on all of the newly labelled data is surprisingly effective with performance depending on the amount of newly labelled data added at each iterationfor certain sizes of newly labelled data this simple approach is just as effective as the agreementbased methodwe also show that cotraining can still benefit both taggers when the performance of one tagger is initially much better than the otherwe have also investigated whether cotraining can improve the taggers already trained on large amounts of manually annotated datausing standard sections of the wsj penn treebank as seed data we have been unable to improve the performance of the taggers using selftraining or cotrainingmanually tagged data for english 
exists in large quantities which means that there is no need to create taggers from small amounts of labelled materialhowever our experiments are relevant for languages for which there is little or no annotated datawe only perform the experiments in english for convenienceour experiments can also be seen as a vehicle for exploring aspects of cotraininggiven two views of a classification task cotraining can be informally described as follows the intuition behind the algorithm is that each classifier is providing extra informative labelled data for the other classifierblum and mitchell derive paclike guarantees on learning by assuming that the two views are individually sufficient for classification and the two views are conditionally independent given the classcollins and singer present a variant of the blum and mitchell algorithm which directly maximises an objective function that is based on the level of agreement between the classifiers on unlabelled datadasgupta et al provide a theoretical basis for this approach by providing a paclike analysis using the same independence assumption adopted by blum and mitchellthey prove that the two classifiers have low generalisation error if they agree on unlabelled dataabney argues that the blum and mitchell independence assumption is very restrictive and typically violated in the data and so proposes a weaker independence assumption for which the dasgupta et al results still holdabney also presents a greedy algorithm that maximises agreement on unlabelled data which produces comparable results to collins and singer on their named entity classification taskgoldman and zhou show that if the newly labelled examples used for retraining are selected carefully cotraining can still be successful even when the views used by the classifiers do not satisfy the independence assumptionin remainder of the paper we present a practical method for cotraining pos taggers and investigate the extent to which example selection based on the work of dasgupta et al and abney can be effectivethe two pos taggers used in the experiments are tnt a publicly available markov model tagger and a reimplementation of the maximum entropy tagger mxpost the me tagger which we refer to as cc uses the same features as mxpost but is much faster for training and tagging fast training and tagging times are important for the experiments performed here since the bootstrapping process can require many tagging and training iterationsthe model used by tnt is a standard tagging markov model consisting of emission probabilities and transition probabilities based on trigrams of tagsit also deals with unknown words using a suffix analysis of the target word tnt is very fast for both training and taggingthe cc tagger differs in a number of ways from tntfirst it uses a conditional model of a tag sequence given a string rather than a joint modelsecond me models are used to define the conditional probabilities of a tag given some contextthe advantage of me models over the markov model used by tnt is that arbitrary features can easily be included in the context so as well as considering the target word and the previous two tags the me models also consider the words either side of the target word and for unknown and infrequent words various properties of the string of the target worda disadvantage is that the training times for me models are usually relatively slow especially with iterative scaling methods for alternative methodshere we use generalised iterative scaling but our implementation is much 
faster than ratnaparkhis publicly available taggerthe cc tagger trains in less than 7 minutes on the 1 million words of the penn treebank and tags slightly faster than tntsince the taggers share many common features one might think they are not different enough for effective cotraining to be possiblein fact both taggers are sufficiently different for cotraining to be effectivesection 4 shows that both taggers can benefit significantly from the information contained in the others outputthe performance of the taggers on section 00 of the wsj penn treebank is given in table 1 for different seed set sizes the seed data is taken from sections 221 of the treebankthe table shows that the performance of tnt is significantly better than the performance of cc when the size of the seed data is very smallthe cotraining framework uses labelled examples from one tagger as additional training data for the otherfor the purposes of this paper a labelled example is a tagged sentencewe chose complete sentences rather than smaller units because this simplifies the experiments and the publicly available version of tnt requires complete tagged sentences for trainingit is possible that cotraining with subsentential units might be more effective but we leave this as future workthe cotraining process is given in figure 1at each stage in the process there is a cache of unlabelled sentences which is labelled by each taggerthe cache size could be increased at each iteration which is a common practice in the cotraining literaturea subset of those sentences labelled by tnt is then added to the training data for cc and vice versablum and mitchell use the combined set of newly labelled examples for training each view but we follow goldman and zhou in using separate labelled setsin the remainder of this section we consider two possible methods for selecting a subsetthe cache is cleared after each iterationthere are various ways to select the labelled examples for each taggera typical approach is to select those examples assigned a high score by the relevant classifier under the assumption that these examples will be the most reliablea scorebased selection method is difficult to apply in our experiments however since tnt does not provide scores for tagged sentenceswe therefore tried two alternative selection methodsthe first is to simply add all of the cache labelled by one tagger to the training data of the otherwe refer to this method as naive cotrainingthe second more sophisticated method is to select that subset of the labelled cache which maximises the agreement of the two taggers on unlabelled datawe call this method agreementbased cotrainingfor a large cache the number ofpossible subsets makes exhaustive search intractable and so we randomly sample the subsetsthe pseudocode for the agreementbased selection method is given in figure 2the current tagger is the one being retrained while the other tagger is kept staticthe cotraining process uses the selection method for selecting sentences from the cache note that during the selection process we repeatedly sample from all possible subsets of the cache this is done by first randomly choosing the size of the subset and then randomly choosing sentences based on the sizethe number of subsets we consider is determined by the number of times the loop is traversed in figure 2if tnt is being trained on the output of cc then the most recent version of cc is used to measure agreement so we first attempt to improve one tagger then the other rather than both at the same timethe 
agreement rate of the taggers on unlabelled sentences is the pertoken agreement rate that is the number of times each word in the unlabelled set of sentences is assigned the same tag by both taggersfor the small seed set experiments the seed data was an arbitrarily chosen subset of sections 1019 of the wsj penn treebank the unlabelled training data was taken from 50 000 sentences of the 1994 wsj section of the north american news corpus and the unlabelled data used to measure agreement was around 10 000 sentences from sections 15 of the treebanksection 00 of the treebank was used to measure the accuracy of the taggersthe cache size was 500 sentencesfigure 3 shows the results for selftraining in which each tagger is simply retrained on its own labelled cache at each roundtnt does improve using selftraining from 814 to 822 but cc is unaffectedrerunning these experiments using a range of unlabelled training sets from a variety of sources showed similar behaviourtowards the end of the cotraining run more material is being selected for cc than tntthe experiments using a seed set size of 50 showed a similar trend but the difference between the two taggers was less markedby examining the subsets chosen from the labelled cache at each round we also observed that a large proportion of the cache was being selected for both taggersagreementbased cotraining for pos taggers is effective but computationally demandingthe previous two agreement maximisation experiments involved retraining each tagger 2 500 timesgiven this and the observation that maximisation generally has a preference for selecting a large proportion of the labelled cache we looked at naive cotraining simply retraining upon all available material at each roundtable 2 shows the naive cotraining results after 50 rounds of cotraining when varying the size of the cache50 manually labelled sentences were used as the seed materialtable 3 shows results for the same experiment but this time with a seed set of 500 manually labelled sentenceswe see that naive cotraining improves as the cache size increasesfor a large cache the performance levels for naive cotraining are very similar to those produced by our agreementbased cotraining methodafter 50 rounds of cotraining using 50 seed sentences the agreement rates for naive and agreementbased cotraining were very similar from an initial value of 73 to 97 agreementnaive cotraining is more efficient than agreementbased cotrainingfor the parameter settings used in the previous experiments agreementbased cotraining required the taggers to be retrained 10 to 100 times more often than naive cotrainingthere are advantages to agreementbased cotraining howeverfirst the agreementbased method dynamically selects the best sample at each stage which may not be the whole cachein particular when the agreement rate cannot be improved upon the selected sample can be rejectedfor naive cotraining new samples will always be added and so there is a possibility that the noise accumulated at later stages will start to degrade performance second for naive cotraining the optimal amount of data to be added at each round is a parameter that needs to be determined on held out data whereas the agreementbased method determines this automaticallywe also performed a number of experiments using much more unlabelled training material than beforeinstead of using 50 000 sentences from the 1994 wsj section of the north american news corpus we used 417 000 sentences and ran the experiments until the
unlabelled data had been exhaustedone experiment used naive cotraining with 50 seed sentences and a cache of size 500this led to an agreement rate of 99 with performance levels of 854 and 854 for tnt and cc respectively230 000 sentences had been processed and were used as training material by the taggersthe other experiment used our agreementbased cotraining approach the agreement rate was 98 with performance levels of 860 and 859 for both taggers124 000 sentences had been processed of which 30 000 labelled sentences were selected for training tnt and 44 000 labelled sentences were selected for training cccotraining using this much larger amount of unlabelled material did improve our previously mentioned results but not by a large marginit is interesting to consider what happens when one view is initially much more accurate than the other viewwe trained one of the taggers on much more labelled seed data than the other to see how this affects the cotraining processboth taggers were initialised with either 500 or 50 seed sentences and agreementbased cotraining was applied using a cache size of 500 sentencesthe results are shown in table 4cotraining continues to be effective even when the two taggers are imbalancedalso the final performance of the taggers is around the same value irrespective of the direction of the imbalancealthough bootstrapping from unlabelled data is particularly valuable when only small amounts of training material are available it is also interesting to see if selftraining or cotraining can improve state of the art pos taggersfor these experiments both cc and tnt were initially trained on sections 0018 of the wsj penn treebank and sections 1921 and 2224 were used as the development and test setsthe 19941996 wsj text from the nanc was used as unlabelled material to fill the cachethe cache size started out at 8000 sentences and increased by 10 in each round to match the increasing labelled training datain each round of selftraining or naive cotraining 10 of the cache was randomly selected and added to the labelled training datathe experiments ran for 40 roundsthe performance of the different training regimes is listed in table 5these results show no significant improvement using either selftraining or cotraining with very large seed datasetsselftraining shows only a slight improvement for cc1 while naive cotraining performance is always worsewe have shown that cotraining is an effective technique for bootstrapping pos taggers trained on small amounts of labelled datausing unlabelled data we are able to improve tnt from 813 to 860 whilst cc shows a much more dramatic improvement of 732 to 859our agreementbased cotraining results support the theoretical arguments of abney and dasgupta et al that directly maximising the agreement rates between the two taggers reduces generalisation errorexamination of the selected subsets showed a preference for a large proportion of the cachethis led us to propose a naive cotraining approach which significantly reduced the computational cost without a significant performance penaltywe also showed that naive cotraining was unable to improve the performance of the taggers when they had already been trained on large amounts of manually annotated datait is possible that agreementbased cotraining using more careful selection would result in an improvementwe leave these experiments to future work but note that there is a large computational cost associated with such experimentsthe performance of the bootstrapped taggers is still a long way behind 
a tagger trained on a large amount of manually annotated datathis finding is in accord with earlier work on bootstrapping taggers using them an interesting question would be to determine the minimum number of manually labelled examples that need to be used to seed the system before we can achieve comparable results as using all available manually labelled sentencesfor our experiments cotraining never led to a decrease in performance regardless of the number of iterationsthe opposite behaviour has been observed in other applications of cotraining whether this robustness is a property of the tagging problem or our approach is left for future workthis work has grown out of many fruitful discussions with the 2002 jhu summer workshop team that worked on weakly supervised bootstrapping of statistical parsersthe first author was supported by epsrc grant grm96889 and the second author by a commonwealth scholarship and a sydney university travelling scholarshipwe would like to thank the anonymous reviewers for their helpful comments and also iain rae for computer support
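The agreement-based selection procedure described above can be sketched compactly. The following Python fragment is a minimal illustration, not the authors' implementation: UnigramTagger is a hypothetical toy stand-in for TnT and the C&C tagger, exposing train(tagged_sentences) and tag(words); select_by_agreement follows the described procedure of repeatedly sampling a random subset of the cache (first a random size, then random sentences of that size), retraining the current tagger, and keeping the subset that most improves the per-token agreement with the static other tagger on held-out unlabelled data.

```python
import random
from collections import Counter, defaultdict

class UnigramTagger:
    """Toy stand-in for the TnT / C&C taggers (illustration only):
    tags each word with its most frequent training tag."""
    def __init__(self, default="NN"):
        self.default = default
        self.best = {}

    def train(self, tagged_sentences):
        counts = defaultdict(Counter)
        for sent in tagged_sentences:
            for word, tag in sent:
                counts[word][tag] += 1
        self.best = {w: c.most_common(1)[0][0] for w, c in counts.items()}

    def tag(self, words):
        return [self.best.get(w, self.default) for w in words]

def agreement_rate(tagger_a, tagger_b, unlabelled):
    """Per-token agreement rate: fraction of words in the unlabelled sentences
    that are assigned the same tag by both taggers."""
    same = total = 0
    for words in unlabelled:
        for ta, tb in zip(tagger_a.tag(words), tagger_b.tag(words)):
            same += ta == tb
            total += 1
    return same / total if total else 0.0

def select_by_agreement(current, other, cache, seed_data, agreement_data, n_samples=50):
    """Sample random subsets of the labelled cache (labelled by the other tagger)
    and keep the one that most improves the current tagger's agreement with the
    static other tagger; an empty subset is kept if agreement cannot be improved."""
    best_subset, best_score = [], agreement_rate(current, other, agreement_data)
    for _ in range(n_samples):
        size = random.randint(1, len(cache))          # random subset size
        subset = random.sample(cache, size)           # random sentences of that size
        current.train(seed_data + subset)
        score = agreement_rate(current, other, agreement_data)
        if score > best_score:
            best_subset, best_score = subset, score
    current.train(seed_data + best_subset)            # leave the tagger retrained on the winner
    return best_subset
```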
W03-0407
bootstrapping postaggers using unlabelled datathis paper investigates bootstrapping partofspeech taggers using cotraining in which two taggers are iteratively retrained on each others outputsince the output of the taggers is noisy there is a question of which newly labelled examples to add to the training setwe investigate selecting examples by directly maximising tagger agreement on unlabelled data a method which has been theoretically and empirically motivated in the cotraining literatureour results show that agreementbased cotraining can significantly improve tagging performance for small seed datasetsfurther results show that this form of cotraining considerably outperforms selftraininghowever we find that simply retraining on all the newly labelled data can in some cases yield comparable results to agreementbased cotraining with only a fraction of the computational costwe report positive results with little labeled training data but negative results when the amount of labeled training data increaseswe define selftraining as a procedure in which a tagger is retrained on its own labeled cache at each round
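The naive variant summarised above, in which each tagger is simply retrained on everything the other tagger labels, reduces to a very small loop. This is a sketch under the same hypothetical train(tagged_sentences)/tag(words) tagger interface assumed in the previous fragment; cache_sentences is the current cache of unlabelled sentences (lists of words), which is cleared after the round.

```python
def naive_cotraining_round(tagger_a, tagger_b, labelled_a, labelled_b, cache_sentences):
    """One round of naive co-training: each tagger labels the whole cache and the
    other tagger is retrained on all of that newly labelled material
    (separate labelled sets are kept for the two taggers)."""
    cache_a = [list(zip(words, tagger_a.tag(words))) for words in cache_sentences]
    cache_b = [list(zip(words, tagger_b.tag(words))) for words in cache_sentences]
    labelled_a = labelled_a + cache_b      # tagger A learns from B's output
    labelled_b = labelled_b + cache_a      # tagger B learns from A's output
    tagger_a.train(labelled_a)
    tagger_b.train(labelled_b)
    return labelled_a, labelled_b
```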
introduction to the conll2003 shared task languageindependent named entity recognition named entities are phrases that contain the names of persons organizations and locationsexample org youn official per ekeus heads for loc baghdad this sentence contains three named entities ekeus is a person youn is a organization and baghdad is a locationnamed entity recognition is an important task of information extraction systemsthere has been a lot of work on named entity recognition especially for english for an overviewthe message understanding conferences have offered developers the opportunity to evaluate systems for english on the same data in a competitionthey have also produced a scheme for entity annotation more recently there have been other system development competitions which dealt with different languages the shared task of conll2003 concerns languageindependent named entity recognitionwe will concentrate on four types of named entities persons locations organizations and names of miscellaneous entities that do not belong to the previous three groupsthe shared task of conll2002 dealt with named entity recognition for spanish and dutch the participants of the 2003 shared task have been offered training and test data for two other european languages english and germanthey have used the data for developing a namedentity recognition system that includes a machine learning componentthe shared task organizers were especially interested in approaches that made use of resources other than the supplied training data for example gazetteers and unannotated datain this section we discuss the sources of the data that were used in this shared task the preprocessing steps we have performed on the data the format of the data and the method that was used for evaluating the participating systemsthe conll2003 named entity data consists of eight files covering two languages english and german1for each of the languages there is a training file a development file a test file and a large file with unannotated datathe learning methods were trained with the training datathe development data could be used for tuning the parameters of the learning methodsthe challenge of this years shared task was to incorporate the unannotated data in the learning process in one way or anotherwhen the best parameters were found the method could be trained on the training data and tested on the test datathe results of the different learning methods on the test sets are compared in the evaluation of the shared taskthe split between development data and test data was chosen to avoid systems being tuned to the test datathe english data was taken from the reuters corpus2this corpus consists of reuters news stories between august 1996 and august 1997for the training and development set ten days worth of data were taken from the files representing the end of august 1996for the test set the texts were from december 1996the preprocessed raw data covers the month of september 1996the text for the german data was taken from the eci multilingual text corpus3this corpus consists of texts in many languagesthe portion of data that was used for this task was extracted from the german newspaper frankfurter rundshauall three of the training development and test sets were taken from articles written in one week at the end of august 1992the raw data were taken from the months of september to december 1992table 1 contains an overview of the sizes of the data filesthe unannotated data contain 17 million tokens and 14 million tokens the participants 
were given access to the corpus after some linguistic preprocessing had been done for all data a tokenizer partofspeech tagger and a chunker were applied to the raw datawe created two basic languagespecific tokenizers for this shared taskthe english data was tagged and chunked by the memorybased mbt tagger the german data was lemmatized tagged and chunked by the decision tree tagger treetagger named entity tagging of english and german training development and test data was done by hand at the university of antwerpmostly muc conventions were followed an extra named entity category called misc was added to denote all names which are not already in the other categoriesthis includes adjectives like italian and events like 1000 lakes rally making it a very diverse categoryall data files contain one word per line with empty lines representing sentence boundariesat the end of each line there is a tag which states whether the current word is inside a named entity or notthe tag also encodes the type of named entityhere is an example sentence each line contains four fields the word its partofspeech tag its chunk tag and its named entity tagwords tagged with o are outside of named entities and the ixxx tag is used for words inside a named entity of type xxxwhenever two entities of type xxx are immediately next to each other the first word of the second entity will be tagged bxxx in order to show that it starts another entitythe data contains entities of four types persons organizations locations and miscellaneous names this tagging scheme is the iob scheme originally put forward by ramshaw and marcus we assume that named entities are nonrecursive and nonoverlappingwhen a named entity is embedded in another named entity usually only the top level entity has been annotatedtable 2 contains an overview of the number of named entities in each data filethe performance in this task is measured with fβ1 rate table 3 main features used by the the sixteen systems that participated in the conll2003 shared task sorted by performance on the english test dataaff affix information bag bag of words cas global case information chu chunk tags doc global document information gaz gazetteers lex lexical features ort orthographic information pat orthographic patterns pos partofspeech tags pre previously predicted ne tags quo flag signing that the word is between quotes tri trigger words with β1 precision is the percentage of named entities found by the learning system that are correctrecall is the percentage of named entities present in the corpus that are found by the systema named entity is correct only if it is an exact match of the corresponding entity in the data filesixteen systems have participated in the conll2003 shared taskthey employed a wide variety of machine learning techniques as well as system combinationmost of the participants have attempted to use information other than the available training datathis information included gazetteers and unannotated data and there was one participant who used the output of externally trained named entity recognition systemsthe most frequently applied technique in the conll2003 shared task is the maximum entropy modelfive systems used this statistical learning methodthree systems used maximum entropy models in isolation two more systems used them in combination with other techniques maximum entropy models seem to be a good choice for this kind of task the top three results for english and the top two results for german were obtained by participants who employed them in 
one way or anotherhidden markov models were employed by four of the systems that took part in the shared task however they were always used in combination with other learning techniquesklein et al also applied the related conditional markov models for combining classifierslearning methods that were based on connectionist approaches were applied by four systemszhang and johnson used robust risk minimization which is a winnow techniqueflorian et al employed the same technique in a combination of learnersvoted perceptrons were applied to the shared task data by carreras et al and hammerton used a recurrent neural network for finding named entitiesother learning approaches were employed less frequentlytwo teams used adaboostmh and two other groups employed memorybased learning transformationbased learning support vector machines and conditional random fields were applied by one system eachcombination of different learning systems has proven to be a good method for obtaining excellent resultsfive participating groups have applied system combinationflorian et al tested different methods for combining the results of four systems and found that robust risk minimization worked bestklein et al employed a stacked learning system which contains hidden markov models maximum entropy models and conditional markov modelsmayfield et al stacked two learners and obtained better performancewu et al applied both stacking and voting to three learnersmunro et al employed both voting and bagging for combining classifiersthe choice of the learning approach is important for obtaining a good system for recognizing named entitieshowever in the conll2002 shared task we found out that choice of features is at least as importantan overview of some of the types of features chosen by the shared task participants can be found in table 3all participants used lexical features except for whitelaw and patrick who implemented a characterbased methodmost of the systems employed partofspeech tags and two of them have recomputed the english tags with better taggers othographic information affixes gazetteers and chunk information were also incorporated in most systems although one group reports that the available chunking information did not help other features were used less frequentlytable 3 does not reveal a single feature that would be ideal for named entity recognitioneleven of the sixteen participating teams have attempted to use information other than the training data that was supplied for this shared taskall included gazetteers in their systemsfour groups examined the usability of unannotated data either for extracting training instances or obtaining extra named entities for gazetteers a reasonable number of groups have also employed unannotated data for obtaining capitalization features for wordsone participating team has used externally trained named entity recognition systems for english as a part in a combined system table 4 shows the error reduction of the systems table 4 error reduction for the two development data sets when using extra information like gazetteers unannotated data or externally developed named entity recognizers the lines have been sorted by the sum of the reduction percentages for the two languages with extra information compared to while using only the available training datathe inclusion of extra named entity recognition systems seems to have worked well generally the systems that only used gazetteers seem to gain more than systems that have used unannotated data for other purposes than obtaining 
capitalization informationhowever the gain differences between the two approaches are most obvious for english for which better gazetteers are availablewith the exception of the result of zhang and johnson there is not much difference in the german results between the gains obtained by using gazetteers and those obtained by using unannotated dataa baseline rate was computed for the english and the german test setsit was produced by a system which only identified entities which had a unique class in the training dataif a phrase was part of more than one entity the system would select the longest oneall systems that participated in the shared task have outperformed the baseline systemfor all the f01 rates we have estimated significance boundaries by using bootstrap resampling from each output file of a system 250 random samples of sentences have been chosen and the distribution of the f01 rates in these samples is assumed to be the distribution of the performance of the systemwe assume that performance a is significantly different from performance b if a is not within the center 90 of the distribution of bthe performances of the sixteen systems on the two test data sets can be found in table 5for english the combined classifier of florian et al achieved the highest overall f01 ratehowever the difference between their performance and that of the maximum entropy approach of chieu and ng is not significantan important feature of the best system that other participants did not use was the inclusion of the output of two externally trained named entity recognizers in the combination processflorian et al have also obtained the highest f01 rate for the german datahere there is no significant difference between them and the systems of klein et al and zhang and johnson we have combined the results of the sixteen system in order to see if there was room for improvementwe converted the output of the systems to the same iob tagging representation and searched for the set of systems from which the best tags for the development data could be obtained with majority votingthe optimal set of systems was determined by performing a bidirectional hillclimbing search with beam size 9 starting from zero featuresa majority vote of five systems performed best on the english development dataanother combination of five systems obtained the best result for the german development datawe have performed a majority vote with these sets of systems on the related test sets and obtained f01 rates of 9030 for english and 7417 for german we have described the conll2003 shared task languageindependent named entity recognitionsixteen systems have processed english and german named entity datathe best performance for both languages has been obtained by a combined learning system that used maximum entropy models transformationbased learning hidden markov models as well as robust risk minimization apart from the training data this system also employed gazetteers and the output of two externally trained named entity recognizersthe performance of the system of chieu et al was not significantly different from the best performance for english and the method of klein et al and the approach of zhang and johnson were not significantly worse than the best result for germaneleven teams have incorporated information other than the training data in their systemfour of them have obtained error reductions of 15 or more for english and one has managed this for germanthe resources used by these systems gazetteers and externally trained named 
entity systems still require a lot of manual worksystems that employed unannotated data obtained performance gains around 5the search for an excellent method for taking advantage of the fast amount of available raw text remains opentjong kim sang is financed by iwt stww as a researcher in the atranos projectde meulder is supported by a bof grant supplied by the university of antwerp
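The IOB tagging scheme and exact-match phrase evaluation described above can be reproduced with a short script. The sketch below is an illustrative reimplementation, not the official shared-task scorer: it extracts entity spans from I-XXX / B-XXX / O tag sequences and computes phrase-level precision, recall and F with beta = 1, where an entity counts as correct only if it exactly matches a gold span.

```python
def extract_entities(tags):
    """Return a set of (start, end, type) spans from a sequence of IOB tags.
    A B-XXX tag, an O tag, or a change of entity type closes the current entity."""
    spans, start, etype = set(), None, None
    for i, tag in enumerate(list(tags) + ["O"]):      # sentinel flushes the last entity
        if tag == "O" or tag.startswith("B-") or (start is not None and tag[2:] != etype):
            if start is not None:
                spans.add((start, i, etype))
                start, etype = None, None
        if tag != "O" and (tag.startswith("B-") or start is None):
            start, etype = i, tag[2:]
    return spans

def f_beta1(gold_tags, predicted_tags):
    """Exact-match phrase precision, recall and F(beta=1)."""
    gold, pred = extract_entities(gold_tags), extract_entities(predicted_tags)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

gold = ["I-ORG", "O", "I-PER", "O", "O", "I-LOC"]
pred = ["I-ORG", "O", "I-PER", "I-PER", "O", "I-LOC"]
print(f_beta1(gold, pred))   # the two-token PER prediction is not an exact match of the gold span
```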
W03-0419
introduction to the conll2003 shared task languageindependent named entity recognitionwe describe the conll2003 shared task languageindependent named entity recognitionwe give background information on the data sets and the evaluation method present a general overview of the systems that have taken part in the task and discuss their performance
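The significance test used in the shared-task evaluation above (250 bootstrap samples of sentences, with two systems judged different when one score falls outside the centre 90% of the other's distribution) can be written down as follows. This is a minimal sketch: the scoring function is assumed to take a list of (gold_tags, predicted_tags) sentence pairs and return a single score, and the toy token-accuracy scorer in the demo is only a stand-in for a phrase-level F-scorer.

```python
import random

def bootstrap_distribution(score_fn, sentence_pairs, n_samples=250):
    """Resample sentences with replacement and collect the resulting scores."""
    scores = []
    for _ in range(n_samples):
        sample = [random.choice(sentence_pairs) for _ in sentence_pairs]
        scores.append(score_fn(sample))
    return sorted(scores)

def significantly_different(score_a, distribution_b, centre=0.90):
    """System A is significantly different from system B if A's score lies
    outside the centre 90% of B's bootstrap distribution."""
    k = len(distribution_b)
    lo = distribution_b[int(k * (1 - centre) / 2)]
    hi = distribution_b[int(k * (1 + centre) / 2) - 1]
    return not (lo <= score_a <= hi)

# toy demo with token accuracy standing in for the F-score
def token_accuracy(pairs):
    hits = sum(g == p for gold, pred in pairs for g, p in zip(gold, pred))
    total = sum(len(gold) for gold, _ in pairs)
    return hits / total

pairs = [(["I-PER", "O"], ["I-PER", "O"]), (["I-LOC"], ["O"]), (["O", "O"], ["O", "I-ORG"])]
dist = bootstrap_distribution(token_accuracy, pairs)
print(significantly_different(0.95, dist))
```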
language independent ner using a maximum entropy tagger entity recognition systems need to integrate a wide variety of information for optimal performance this paper demonstrates that a maximum entropy tagger can effectively encode such information and identify named entities with very high accuracy the tagger uses features which can be obtained for a variety of languages and works effectively not only for english but also for other languages such as german and dutch named entity recognition1 can be treated as a tagging problem where each word in a sentence is assigned a label indicating whether it is part of a named entity and the entity typethus methods used for part of speech tagging and chunking can also be used for nerthe papers from the conll2002 shared task which used such methods burger et al reported results significantly lower than the best system however zhou and su have reported state of the art results on the muc6 and muc7 data using a hmmbased taggerzhou and su used a wide variety of features which suggests that the relatively poor performance of the taggers used in conll2002 was largely due to the feature sets used rather than the machine learning methodwe demonstrate this to be the case by improving on the best dutch results from conll2002 using a maximum entropy taggerwe report reasonable precision and recall for the conll2003 english test data and an fscore of 684 for the conll2003 german test dataincorporating a diverse set of overlapping features in a hmmbased tagger is difficult and complicates the smoothing typically used for such taggersin contrast a me tagger can easily deal with diverse overlapping featureswe also use a gaussian prior on the parameters for effective smoothing over the large feature spacethe me tagger is based on ratnaparkhi s pos tagger and is described in curran and clark the tagger uses models of the form where y is the tag x is the context and the fi are the features with associated weights λithe probability of a tag sequence y1 yn given a sentence w1 wn is approximated as follows where xi is the context for word withe tagger uses beam search to find the most probable sequence given the sentencethe features are binary valued functions which pair a tag with various elements of the context for example generalised iterative scaling is used to estimate the values of the weightsthe tagger uses a gaussian prior over the weights which allows a large number of rare but informative features to be used without overfittingwe used three data sets the english and german data for the conll2003 shared task and the dutch data for the conll2002 shared task each word in the data sets is annotated with a named entity tag plus pos tag and the words in the german and english data also have a chunk tagour system does not currently exploit the chunk tagsthere are 4 types of entities to be recognised persons locations organisations and miscellaneous entities not belonging to the other three classesthe 2002 data uses the iob2 format in which a bxxx tag indicates the first word of an entity of type xxx and ixxx is used for subsequent words in an entity of type xxxthe tag o indicates words outside of a named entitythe 2003 data uses a variant of iob2 iob1 in which ixxx is used for all words in an entity including the first word unless the first word separates contiguous entities of the same type in which case bxxx is usedtable 1 lists the contextual predicates used in our baseline system which are based on those used in the curran and clark ccg supertaggerthe first set of 
features apply to rare words ie those which appear less than 5 times in the training datathe first two kinds of features encode prefixes and suffixes less than length 5 and the remaining rare word features encode other morphological characteristicsthese features are important for tagging unknown and rare wordsthe remaining features are the word pos tag and ne tag history features using a window size of 2note that the nei2nei1 feature is a composite feature of both the previous and previousprevious ne tagstable 2 lists the extra features used in our final systemthese features have been shown to be useful in other ner systemsthe additional orthographic features have proved useful in other systems for example carreras et al borthwick and zhou and su some of the rows in table 2 describe sets of contextual predicatesthe wi is only digits predicates apply to words consisting of all digitsthey encode the length of the digit string with separate predicates for lengths 14 and a single predicate for lengths greater than 4titlecase applies to words with an initial uppercase letter followed by all lowercase mixedcase applies to words with mixed lower and uppercase the length predicates encode the number of characters in the word from 1 to 15 with a single predicate for lengths greater than 15the next set of contextual predicates encode extra information about ne tags in the current contextthe memory ne tag predicate records the ne tag that was most recently assigned to the current wordthe use of beamsearch tagging means that tags can only be recorded from previous sentencesthis memory is cleared at the beginning of each documentthe unigram predicates encode the most probable tag for the next words in the windowthe unigram probabilities are relative frequencies obtained from the training datathis feature enables us to know something about the likely ne tag of the next word before reaching itmost systems use gazetteers to encode information about personal and organisation names locations and trigger wordsthere is considerable variation in the size of the gazetteers usedsome studies found that gazetteers did not improve performance whilst others gained significant improvement using gazetteers and triggers our system incorporates only english and dutch first name and last name gazetteers as shown in table 6these gazetteers are used for predicates applied to the current previous and next word in the windowcollins includes a number of interesting contextual predicates for nerone feature we have adapted encodes whether the current word is more frequently seen lowercase than uppercase in a large external corpusthis feature is useful for disambiguating beginning of sentence capitalisation and tagging sentences which are all capitalisedthe frequency counts have been obtained from 1 billion words of english newspaper text collected by curran and osborne collins also describes a mapping from words to word types which groups words with similar orthographic forms into classesthis involves mapping characters to classes and merging adjacent characters of the same typefor example moody becomes aa abc becomes aaa and 134505 becomes 000the classes are used to define unigram bigram and trigram contextual predicates over the windowwe have also defined additional composite features which are a combination of atomic features for example a feature which is active for midsentence titlecase words seen more frequently as lowercase than uppercase in a large external corpusthe baseline development results for english using the 
supertagger features only are given in table 3the full system results for the english development data are given in table 7clearly the additional features have a significant impact on both precision and recall scores across all entitieswe have found that the word type features are particularly useful as is the memory featurethe performance of the final system drops by 197 if these features are removedthe performance of the system if the gazetteer features are removed is given in table 4the sizes of our gazetteers are given in table 6we have experimented with removing the other contextual predicates but each time performance was reduced except for the nextnext unigram tag feature which was switched off for all final experimentsthe results for the dutch test data are given in table 5these improve upon the scores of the best performing system at conll2002 the final results for the english test data are given in table 7these are significantly lower than the results for the development datathe results for the german development and test sets are given in table 7for the german ner we removed the lowercase more frequent than uppercase featureapart from this change the system was identicalwe did not add any extra gazetteer information for germanour ner system demonstrates that using a large variety of features produces good performancethese features can be defined and extracted in a language independent manner as our results for german dutch and english showmaximum entropy models are an effective way of incorporating diverse and overlapping featuresour maximum entropy tagger employs gaussian smoothing which allows a large number of sparse but informative features to be used without overfittingusing a wider context window than 2 words may improve performance a reranking phase using global features may also improve performance we would like to thank jochen leidner for help collecting the gazetteersthis research was supported by a commonwealth scholarship and a sydney university travelling scholarship to the first author and epsrc grant grm96889
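Several of the contextual predicates described above are easy to sketch in code. The fragment below is an illustration only, with hypothetical feature names: it shows a Collins-style word-type mapping (characters mapped to classes with adjacent characters of the same class merged, e.g. Moody becomes Aa) together with a small subset of the orthographic, digit-length, affix and window predicates; frequent_words is an assumed set used to decide which words count as rare.

```python
def word_type(word):
    """Map characters to classes (A, a, 0, punctuation kept) and merge adjacent
    characters of the same class, e.g. 'Moody' -> 'Aa', '1,345.05' -> '0,0.0'."""
    mapped = []
    for ch in word:
        if ch.isupper():
            cls = "A"
        elif ch.islower():
            cls = "a"
        elif ch.isdigit():
            cls = "0"
        else:
            cls = ch
        if not mapped or mapped[-1] != cls:
            mapped.append(cls)
    return "".join(mapped)

def contextual_predicates(words, i, frequent_words=None):
    """Illustrative subset of contextual predicates for position i."""
    w = words[i]
    feats = {
        "w=" + w.lower(),
        "type=" + word_type(w),
        "prev_w=" + (words[i - 1].lower() if i > 0 else "<s>"),
        "next_w=" + (words[i + 1].lower() if i + 1 < len(words) else "</s>"),
    }
    if w.isdigit():                                   # digit-string length predicates
        feats.add("digits=" + (str(len(w)) if len(w) <= 4 else ">4"))
    if w[0].isupper() and w[1:].islower():
        feats.add("titlecase")
    elif any(c.isupper() for c in w) and any(c.islower() for c in w):
        feats.add("mixedcase")
    if frequent_words is not None and w not in frequent_words:
        for k in range(1, min(4, len(w)) + 1):        # affixes up to length 4 for rare words
            feats.add("pre=" + w[:k])
            feats.add("suf=" + w[-k:])
    return feats

print(sorted(contextual_predicates(["At", "Grace", "Road", "."], 1)))
```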
W03-0424
language independent ner using a maximum entropy taggernamed entity recognition systems need to integrate a wide variety of information for optimal performancethis paper demonstrates that a maximum entropy tagger can effectively encode such information and identify named entities with very high accuracythe tagger uses features which can be obtained for a variety of languages and works effectively not only for english but also for other languages such as german and dutchwe condition the label of a token at a particular position on the label of the most recent previous instance of that same token in a prior sentence of the same documentour named entity recogniser is run on postagged and chunked documents in the corpus to identify and extract named entities as potential topics
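The model form used by the tagger just described, p(y | x) = exp(sum_i lambda_i f_i(x, y)) / Z(x) with the tag-sequence probability approximated as a product of local distributions, can be illustrated numerically. This is a minimal sketch with made-up weights, and greedy left-to-right decoding stands in for the beam search used in the actual system; the predicate and tag names are illustrative.

```python
import math

def local_tag_distribution(active_predicates, weights, tagset):
    """p(y | x) = exp(sum_i lambda_i f_i(x, y)) / Z(x), with binary features that
    pair a contextual predicate with a candidate tag."""
    scores = {y: math.exp(sum(weights.get((p, y), 0.0) for p in active_predicates))
              for y in tagset}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

def greedy_sequence(sentence_predicates, weights, tagset):
    """Greedy stand-in for beam search: the sequence probability is approximated
    as the product of the local distributions p(y_i | x_i)."""
    tags, logp = [], 0.0
    for preds in sentence_predicates:
        dist = local_tag_distribution(preds, weights, tagset)
        best = max(dist, key=dist.get)
        tags.append(best)
        logp += math.log(dist[best])
    return tags, logp

# toy weights only, for illustration
weights = {("titlecase", "I-PER"): 1.5, ("w=smith", "I-PER"): 1.0,
           ("w=in", "O"): 2.0, ("w=london", "I-LOC"): 2.5}
tagset = ["O", "I-PER", "I-LOC", "I-ORG"]
sentence = [{"w=smith", "titlecase"}, {"w=lives"}, {"w=in"}, {"w=london", "titlecase"}]
print(greedy_sequence(sentence, weights, tagset))
```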
named entity recognition through classifier combination this paper presents a classifiercombination experimental framework for named entity recognition in which four diverse classifiers are combined under different conditions when no gazetteer or other additional training resources are used the combined system attains a performance of 916f on the english development data integrating name location and person gazetteers and named entity systems trained on additional more general data reduces the fmeasure error by a factor of 15 to 21 on the english data this paper investigates the combination of a set of diverse statistical named entity classifiers including a rulebased classifier the transformationbased learning classifier with the forwardbackward extension described in florian a hidden markov model classifier similar to the one described in bikel et al a robust risk minimization classifier based on a regularized winnow method and a maximum entropy classifier this particular set of classifiers is diverse across multiple dimensions making it suitable for combination decision arbitrary feature types while hmm is dependent on a prespecified backoff paththe remainder of the paper is organized as follows section 2 describes the features used by the classifiers section 3 briefly describes the algorithms used by each classifier and section 4 analyzes in detail the results obtained by each classifier and their combinationall algorithms described in this paper identify the named entities in the text by labeling each word with a tag corresponding to its position relative to a named entity whether it startscontinuesends a specific named entity or does not belong to any entityrrm maxent and fntbl treat the problem entirely as a tagging task while the hmm algorithm used here is constraining the transitions between the various phases similar to the method described in feature design and integration is of utmost importance in the overall classifier design a rich feature space is the key to good performanceoften high performing classifiers operating in an impoverished space are surpassed by a lower performing classifier when the latter has access to enhanced feature spaces in accordance with this observation the classifiers used in this research can access a diverse set of features when examining a word in context including in addition a ngrambased capitalization restoration algorithm has been applied on the sentences that appear in all caps2 for the english taskthis section describes only briefly the classifiers used in combination in section 4 a full description of the algorithms and their properties is beyond the scope of this paper the reader is instead referred to the original articlesthis classifier is described in detail in along with a comprehensive evaluation of its performance and therefore is not presented herethe maxent classifier computes the posterior class probability of an example by evaluating the normalized product of the weights active for the particular examplethe model weights are trained using the improved iterative scaling algorithm to avoid running in severe overtraining problems a feature cutoff of 4 is applied before the model weights are learnedat decoding time the best sequence of classifications is identified with the viterbi algorithmtransformationbased learning is an errordriven algorithm which has two major steps it starts by assigning some classification to each example and then automatically proposing evaluating and selecting the classification changes that maximally 
decrease the number of errorstbl has some attractive qualities that make it suitable for the languagerelated tasks it can automatically integrate heterogeneous types of knowledge without the need for explicit modeling it is errordriven and has an inherently dynamic behaviorthe particular setup in which fntbl is used in this work is described in florian in a first phase tbl is used to identify the entity boundaries followed by a sequence classification stage where the entities identified at the first step are classified using internal and external clues3the hmm classifier used in the experiments in section 4 follows the system description in and it performs sequence classification by assigning each word either one of the named entity types or the label notaname to represent quotnot a named entityquotthe states in the hmm are organized into regions one region for each type of named entity plus one for notanamewithin each of the regions a statistical bigram language model is used to compute the likelihood of words occurring within that region the transition probabilities are computed by deleted interpolation and the decoding is done through the viterbi algorithmthe particular implementation we used underperformed consistently all the other classifiers on german and is not includedthe results obtained by each individual classifier broken down by entity type are presented in table 1out of the four classifiers the maxent and rrm classifiers are the best performers followed by the modified fntbl classifier and the hmm classifierthe errorbased classifiers tend to obtain balanced precisionrecall numbers while the other two tend to be more precise at the expense of recallto facilitate comparison with other classifiers for this task most reported results 3 the method of retaining only the boundaries and reclassifying the entities was shown to improve the performance of 11 of the 12 systems participating in the conll2002 shared tasks in both languages are obtained by using features exclusively extracted from the training datain general given n classifiers one can interpret the classifier combination framework as combining probability distributions where ci is the classifier is classification output f is a combination functiona widely used combination scheme is through linear interpolation of the classifiers class probability distribution the weights ai encode the importance given to classifier i in combination for the context of word w and pi is an estimation of the probability that the correct classification is c given that the output of the classifier i on word w is cito estimate the parameters in equation the provided training data was split into 5 equal parts and each classifier was trained in a roundrobin fashion on 4 fifths of the data and applied on the remaining fifththis way the entire training data can be used to estimate the weight parameters ai and pi but at decoding time the individual classifier outputs ci are computed by using the entire training datatable 2 presents the combination results for different ways of estimating the interpolation parametersa simple combination method is the equal voting method where the parameters are computed as ai 1n and pi s where s is the kronecker operator each of the classifiers votes with equal weight for the class that is most likely under its model and the class receiving the largest number of votes winshowever this procedure may lead to ties where some classes receive the same number of votes one usually resorts to randomly selecting one of the tied 
candidates in this case table 2 presents the average results obtained by this method together with the variance obtained over 30 trialsto make the decision deterministically the weights associated with the classifiers can be chosen as ai pi in this method presented in table 2 as weighted voting better performing classifiers will have a higher impact in the final classificationin the voting methods each classifier gave its entire vote to one class its own outputhowever equation allows for classifiers to give partial credit to alternative classifications through the probability pi in our experiments this value is computed through 5fold crossvalidation on the training datathe space of possible choices for c w and ci is large enough to make the estimation unreliable so we use two approximations named model 1 and model 2 in table 2 pi pi and pi pi respectivelyon the development data the former estimation type obtains a lower performance than the latterin a last experiment using only features extracted from the training data we use the rrm method to compute the function f in equation allowing the system to select a good performing combination of featuresat training time the system was fed the output of each classifier on the crossclassified data the partofspeech and chunk boundary tagsat test time the system was fed the classifications of each system trained on the entire training data and the corresponding pos and chunk boundary tagsthe result obtained rivals the one obtained by model 2 both displaying a 17 reduction in fmeasure error4 indicating that maybe all sources of information have been explored and incorporatedthe rrm method is showing its combining power when additional information sources are usedspecifically the system was fed additional feature streams from a list of gazetteers and the output of two other named entity systems trained on 17m words annotated with 32 name categoriesthe rrm system alone obtains an fmeasure of 921 and can effectively integrate these information streams with the output of the four classifiers gazetteers and the two additional classifiers into obtaining 939 fmeasure as detailed in table 4 a 21 reduction in fmeasure errorin contrast combination model 2 obtains only a performance of 924 showing its limitations in combining diverse sources of informationgerman poses a completely different problem for named entity recognition the data is considerably sparsertable 3 shows the relative distribution of unknown words in the development and test corporawe note that the numbers are roughly twice as large for the development data in german as they are for englishsince the unknown words are classed by most classifiers this results in few data points to estimate classifier combinationsalso specifically for the german data traditional approaches which utilize capitalization do not work as well as in english because all nouns are capitalized in germanfor german in addition to the entity lists provided we also used a small gazetteer of names which was collected by browsing web pages in about two personhoursthe average classifier performance gain by using these features is about 15f for the testa data and about 6f for the testb datain conclusion we have shown results on a set of both wellestablished and novel classifier techniques which improve the overall performance when compared with the best performing classifier by 1721 on the english taskfor the german task the improvement yielded by classifier combination is smalleras a machine learning method the rrm algorithm 
seems especially suited to handle additional feature streams and therefore is a good candidate for classifier combination
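The voting and interpolation schemes described above can be sketched as follows. This is an illustration, not the authors' implementation: equal_vote gives each classifier one whole vote with ties broken at random, and interpolated_vote implements the linear-interpolation scheme in which the posterior for class c is the weighted sum over classifiers of P_i(c | classifier i's output); the lambdas and confusion distributions in the demo are made up.

```python
import random
from collections import defaultdict

def equal_vote(outputs):
    """Each classifier votes with equal weight for its own output; ties are broken randomly."""
    votes = defaultdict(float)
    for c in outputs:
        votes[c] += 1.0
    best = max(votes.values())
    return random.choice([c for c, v in votes.items() if v == best])

def interpolated_vote(outputs, lambdas, class_given_output):
    """P(c | w) = sum_i lambda_i * P_i(c | c_i), where class_given_output[i] maps
    classifier i's predicted class to a distribution over true classes (for example
    estimated by 5-fold cross-validation on the training data)."""
    posterior = defaultdict(float)
    for i, c_i in enumerate(outputs):
        for c, p in class_given_output[i].get(c_i, {c_i: 1.0}).items():
            posterior[c] += lambdas[i] * p
    return max(posterior, key=posterior.get)

outputs = ["I-PER", "I-PER", "I-ORG", "I-PER"]     # four classifier outputs (illustrative)
lambdas = [0.30, 0.30, 0.25, 0.15]                 # illustrative per-classifier weights
confusion = [{"I-PER": {"I-PER": 0.95, "I-ORG": 0.05}}] * 4
print(equal_vote(outputs), interpolated_vote(outputs, lambdas, confusion))
```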
W03-0425
named entity recognition through classifier combinationthis paper presents a classifiercombination experimental framework for named entity recognition in which four diverse classifiers are combined under different conditionswhen no gazetteer or other additional training resources are used the combined system attains a performance of 916f on the english development data integrating name location and person gazetteers and named entity systems trained on additional more general data reduces the fmeasure error by a factor of 15 to 21 on the english datawe test different methods for combining the results of four systems and found that robust risk minimization works best
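The round-robin estimation mentioned above, in which the training data is split into five parts and each classifier is trained on four fifths and applied to the remaining fifth so that the combination parameters can be estimated from held-out predictions, can also be sketched. This is an interpretive illustration: train_and_predict is an assumed callable, the toy classifier is a hypothetical stand-in, and class_given_output corresponds to the relative-frequency estimate of P(true class | predicted class) used in the interpolation scheme.

```python
from collections import Counter, defaultdict

def round_robin_predictions(examples, train_and_predict, n_folds=5):
    """Split labelled (input, label) pairs into folds; for each fold, train on the
    rest and predict on the held-out part, giving every example a cross-validated prediction."""
    folds = [examples[i::n_folds] for i in range(n_folds)]
    pairs = []   # (true_label, predicted_label)
    for k in range(n_folds):
        train = [ex for j, fold in enumerate(folds) if j != k for ex in fold]
        preds = train_and_predict(train, [x for x, _ in folds[k]])
        pairs.extend((y, p) for (_, y), p in zip(folds[k], preds))
    return pairs

def class_given_output(pairs):
    """Relative-frequency estimate of P(true class c | classifier output c_i)."""
    counts = defaultdict(Counter)
    for true, pred in pairs:
        counts[pred][true] += 1
    return {pred: {c: n / sum(cnt.values()) for c, n in cnt.items()}
            for pred, cnt in counts.items()}

# toy classifier: predict each token's most frequent training label, else O
def toy_train_and_predict(train, test_inputs):
    best = defaultdict(Counter)
    for x, y in train:
        best[x][y] += 1
    return [best[x].most_common(1)[0][0] if x in best else "O" for x in test_inputs]

data = [("Ekeus", "I-PER"), ("Baghdad", "I-LOC"), ("Ekeus", "I-PER"), ("Baghdad", "I-ORG"),
        ("heads", "O"), ("for", "O"), ("U.N.", "I-ORG"), ("U.N.", "I-ORG")]
print(class_given_output(round_robin_predictions(data, toy_train_and_predict)))
```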
named entity recognition with characterlevel models we discuss two namedentity recognition models which use characters and character grams either exclusively or as an important part of their data representation the first model is a characterlevel hmm with minimal context information and the second model is a maximumentropy conditional markov model with substantially richer context features our best model achieves an overall f of 8607 on the english test data this number represents a 25 error reduction over the same model without wordinternal features for most sequencemodeling tasks with wordlevel evaluation including namedentity recognition and partofspeech tagging it has seemed natural to use entire words as the basic input featuresfor example the classic hmm view of these two tasks is one in which the observations are words and the hidden states encode class labelshowever because of data sparsity sophisticated unknown word models are generally required for good performancea common approach is to extract wordinternal features from unknown words for example suffix capitalization or punctuation features one then treats the unknown word as a collection of such featureshaving such unknownword models as an addon is perhaps a misplaced focus in these tasks providing correct behavior on unknown words is typically the key challengehere we examine the utility of taking character sequences as a primary representationwe present two models in which the basic units are characters and character grams instead of words and word phrasesearlier papers have taken a characterlevel approach to named entity recognition notably cucerzan and yarowsky which used prefix and suffix tries though to our knowledge incorporating all character grams is newin section 2 we discuss a characterlevel hmm while in section 3 we discuss a sequencefree maximumentropy classifier which uses gram substring featuresfinally in section 4 we add additional features to the maxent model and chain these models into a conditional markov model as used for tagging or earlier ner work figure 1 shows a graphical model representation of our characterlevel hmmcharacters are emitted one at a time and there is one state per charactereach states identity depends only on the previous stateeach characters identity depends on both the current state and on the previous charactersin addition to this hmm view it may also be convenient to think of the local emission models as typeconditional gram modelsindeed the character emission model in this section is directly based on the gram propername classification engine described in the primary addition is the statetransition chaining which allows the model to do segmentation as well as classificationwhen using characterlevel models for wordevaluated tasks one would not want multiple characters inside a single word to receive different labelsthis can be avoided in two ways by explicitly locking state transitions inside words or by careful choice of transition topologyin our current implementation we do the lattereach state is a pair where is an entity type and indicates the length of time the system has been in state therefore a state like indicates the second letter inside a person phrasethe final letter of a phrase is a following space and the state is a special final state like additionally once reaches our gram history order it stays therewe then use empirical unsmoothed estimates for statestate transitionsthis annotation and estimation enforces consistent labellings in practicefor example can only 
transition to the next state or the final state final states can only transition to beginning states like for emissions we must estimate a quantity of the form for example 1 we use an gram model of order 2 the gram estimates are smoothed via deleted interpolationgiven this model we can do viterbi decoding in the standard wayto be clear on what this model does and does not capture we consider a few examples first we might be asked for in this case we know both that we are in the middle of a location that begins with denv and also that the preceding context was toin essence encoding into the state let us us distinguish the beginnings of phrases which let us us model trends like named entities generally starting with capital letters in englishsecond we may be asked for quantities like which allows us to model the ends of phraseshere we have a slight complexity by the notation one would expect such emissions to have probability 1 since nothing else can be emitted from a final statein practice we have a special stop symbol in our ngram counts and the probability of emitting a space from a final state is the probability of the ngram having chosen the stop character3 modelsthe value was the empirically optimal order3this can be cleaned up conceptually by considering the entire process to have been a hierarchical hmm where the gram model generates the entire phrase followed by a tier pop up to the phrase transition tierusing this model we tested two variants one in which preceding context was discarded and another where context was used as outlined abovefor comparison we also built a firstorder wordlevel hmm the results are shown in table 1we give f both percategory and overallthe wordlevel model and the characterlevel model are intended as a rough minimal pair in that the only information crossing phrase boundaries was the entity type isolating the effects of character vs wordlevel modeling switching to the character model raised the overall score greatly from 745 to 822on top of this context helped but substantially less bringing the total to 832we did also try to incorporate gazetteer information by adding gram counts from gazetteer entries to the training counts that back the above character emission modelhowever this reduced performance the supplied gazetteers appear to have been built from the training data and so do not increase coverage and provide only a flat distribution of name phrases whose empirical distributions are very spikedgiven the amount of improvement from using a model backed by character grams instead of word grams the immediate question is whether this benefit is complementary to the benefit from features which have traditionally been of use in word level systems such as syntactic context features topic features and so onto test this we constructed a maxent classifier which locally classifies single words without modeling the entity type sequences 4 these local classifiers map a feature representation of each word position to entity types such as person5 we present a hillclimb over feature sets for the english development set data in table 2first we tried only the local word as a feature the result was that each word was assigned its most common class in the training datathe overall fscore was 5229 well below the official conll baseline of 71186 we next added gram features specifically we framed each word with special start and end symbols and then added every contiguous substring to the feature listnote that this subsumes the entireword featuresusing the substring features 
alone scored 7310 already breaking the the phrasebased conll baseline though lower than the nocontext hmm which better models the context inside phrasesadding a current tag feature gave a score of 7417at this point the bulk of outstanding errors were plausibly attributable to insufficient context informationadding even just the previous and next words and tags as features raised performance to 8239more complex joint context features which paired the current word and tag with the previous and next words and tags raised the score further to 8309 nearly to the level of the hmm still without actually having any model of previous classification decisionsin order to include state sequence features which allow the classifications at various positions to interact we have to abandon classifying each position independentlysequencesensitive features can be included by chaining our local classifiers together and performing joint inference ie by building a conditional markov model also known as a maximum entropy markov model previous classification decisions are clearly relevant for example the sequence grace road is a single location not a persons name adjacent to a location adding features representing the previous classification decision raised the score 235 to 8544we found knowing that the previous word was an other was not particularly useful without also knowing its partofspeech joint tagsequence features along with longer distance sequence and tagsequence features gave 8721the remaining improvements involved a number of other features which directly targetted observed error typesthese features included letter type pattern features this improved performance substantially for example allowing the system to detect all caps regionstable 3 shows an example of a local decision for grace in the context at grace road using all of the features defined to datenote that the evidence against grace as a name completely overwhelms the gram and word preference for personother features included secondprevious and secondnext words and a marker for capitalized words whose lowercase forms had also been seenthe final system also contained some simple errordriven postprocessingin particular repeated subelements of multiword person names were given type person and a crude heuristic restoration of b prefixes was performedin total this final system had an fscore of 9231 on the english development settable 4 gives a more detailed breakdown of this score and also gives the results of this system on the english test set and both german data setsthe primary argument of this paper is that character substrings are a valuable and we believe underexploited source of model featuresin an hmm with an admittedly very local sequence model switching from a word model to a character model gave an error reduction of about 30in the final much richer chained maxent setting the reduction from the best model minus gram features to the reported best model was about 25 smaller but still substantialthis paper also again demonstrates how the ease of incorporating features into a discriminative maxent model allows for productive feature engineering
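as an illustration of the substring features described above, here is a minimal python sketch (not the authors' implementation; the "sub=" feature-name prefix is an arbitrary choice): each word is framed with special start and end symbols and every contiguous substring of the framed word becomes a feature, which subsumes a whole-word identity feature.

def substring_features(word, start="<", end=">"):
    # frame the word with boundary symbols, then emit every contiguous substring
    framed = start + word + end
    feats = set()
    for i in range(len(framed)):
        for j in range(i + 1, len(framed) + 1):
            feats.add("sub=" + framed[i:j])
    return feats

# e.g. substring_features("Grace") contains "sub=<G", "sub=ce>" and the
# whole-word feature "sub=<Grace>", so word identity is covered as a special case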
W03-0428
named entity recognition with character-level models. we discuss two named-entity recognition models which use characters and character n-grams either exclusively or as an important part of their data representation. the first model is a character-level hmm with minimal context information, and the second model is a maximum-entropy conditional markov model with substantially richer context features. our best model achieves an overall f1 of 86.07 on the english test data; this number represents a 25% error reduction over the same model without word-internal features. we find that the introduction of character n-gram features improved the overall f1 score by over 20
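to make the character-level emission model described above concrete, here is a minimal python sketch (not the authors' code; the interpolation weights and the pad/stop symbols are illustrative assumptions): one order-3 character model is assumed per entity type, phrases are terminated with a stop symbol, and the probability of ending a phrase is the probability of the model choosing that stop symbol.

from collections import Counter

PAD, STOP = "<pad>", "<stop>"   # hypothetical boundary symbols

def train_char_lm(phrases, order=3):
    # grams[(context, char)] and ctxs[context] for context lengths 0 .. order-1
    grams, ctxs = Counter(), Counter()
    for phrase in phrases:
        chars = [PAD] * (order - 1) + list(phrase) + [STOP]
        for i in range(order - 1, len(chars)):
            for k in range(order):                      # k = context length
                ctx = tuple(chars[i - k:i])
                grams[(ctx, chars[i])] += 1
                ctxs[ctx] += 1
    return grams, ctxs

def emit_prob(grams, ctxs, history, char, order=3, lambdas=(0.6, 0.3, 0.1)):
    # interpolated mixture of order-3, order-2 and order-1 relative frequencies,
    # standing in for the deleted-interpolation smoothing mentioned above
    hist = ([PAD] * (order - 1) + list(history))[-(order - 1):]
    p = 0.0
    for k, lam in zip(range(order - 1, -1, -1), lambdas):
        ctx = tuple(hist[len(hist) - k:])
        if ctxs[ctx]:
            p += lam * grams[(ctx, char)] / ctxs[ctx]
    return p

# e.g. with a model trained on location strings, emit_prob(grams, ctxs, "Denv", "e")
# scores the next character, and emit_prob(grams, ctxs, "Denver", STOP) scores
# ending the phrase here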
early results for named entity recognition with conditional random fields feature induction and webenhanced lexicons models for many natural language tasks benefit from the flexibility to use overlapping nonindependent featuresfor example the need for labeled data can be drastically reduced by taking advantage of domain knowledge in the form of word lists partofspeech tags character ngrams and capitalization patternswhile it is difficult to capture such interdependent features with a generative probabilistic model conditionallytrained models such as conditional maximum entropy models handle them wellthere has been significant work with such models for greedy sequence modeling in nlp conditional random fields are undirected graphical models a special case of which correspond to conditionallytrained finite state machineswhile based on the same exponential form as maximum entropy models they have efficient procedures for complete nongreedy finitestate inference and trainingcrfs have shown empirical successes recently in pos tagging noun phrase segmentation and chinese word segmentation given these models great flexibility to include a wide array of features an important question that remains is what features should be usedfor example in some cases capturing a word trigram is important however there is not sufficient memory or computation to include all word trigramsas the number of overlapping atomic features increases the difficulty and importance of constructing only certain feature combinations growsthis paper presents a feature induction method for crfsfounded on the principle of constructing only those feature conjunctions that significantly increase loglikelihood the approach builds on that of della pietra et al but is altered to work with conditional rather than joint probabilities and with a meanfield approximation and other additional modifications that improve efficiency specifically for a sequence modelin comparison with traditional approaches automated feature induction offers both improved accuracy and significant reduction in feature count it enables the use of richer higherorder markov models and offers more freedom to liberally guess about which atomic features may be relevant to a taskfeature induction methods still require the user to create the buildingblock atomic featureslexicon membership tests are particularly powerful features in natural language tasksthe question is where to get lexicons that are relevant for the particular task at handthis paper describes weblisting a method that obtains seeds for the lexicons from the labeled data then uses the web html formatting regularities and a search engine service to significantly augment those lexiconsfor example based on the appearance of arnold palmer in the labeled data we gather from the web a large list of other golf players including tiger woods we present results on the conll2003 named entity recognition shared task consisting of news articles with tagged entities person location organization and miscthe data is quite complex for example the english data includes foreign person names a wide diversity of locations many types of organizations and a wide variety of miscellaneous named entities on this our first attempt at a ner task with just a few personweeks of effort and little work on developmentset error analysis our method currently obtains overall english f1 of 8404 on the test set by using crfs feature induction and webaugmented lexiconsgerman f1 using very limited lexicons is 6811conditional random fields are 
undirected graphical models used to calculate the conditional probability of values on designated output nodes given values assigned to other designated input nodesin the special case in which the output nodes of the graphical model are linked by edges in a linear chain crfs make a firstorder markov independence assumption and thus can be understood as conditionallytrained finite state machines in the remainder of this section we introduce the likelihood model inference and estimation procedures for crfslet o be some observed input data sequence such as a sequence of words in text in a document let s be a set of fsm states each of which is associated with a label l e l let s be some sequence of states by the hammersleyclifford theorem crfs define the conditional probability of a state sequence given an input sequence to be where zo is a normalization factor over all state sequences fk is an arbitrary feature function over its arguments and λk is a learned weight for each feature functiona feature function may for example be defined to have value 0 in most cases and have value 1 if and only if st1 is state 1 and st is state 2 and the observation at position t in o is a word appearing in a list of country nameshigher λ weights make their corresponding fsm transitions more likely so the weight λk in this example should be positivemore generally feature functions can ask powerfully arbitrary questions about the input sequence including queries about previous words next words and conjunctions of all these and fk can range oooocrfs define the conditional probability of a label sequence based on total probability over the state sequences pa psll pa where l is the sequence of labels corresponding to the labels of the states in sequencesnote that the normalization factor zo is the sum of the scores of all possible state sequences zo the number of state sequences is exponential in the input sequence length t in arbitrarilystructured crfs calculating the normalization factor in closed form is intractable but in linearchainstructured crfs as in forwardbackward for hidden markov models the probability that a particular transition was taken between two crf states at a particular position in the input sequence can be calculated efficiently by dynamic programmingwe define slightly modified forward values αt to be the unnormalized probability of arriving in state si given the observations we set α0 equal to the probability of starting in each state s and recurse the backward procedure and the remaining details of baumwelch are defined similarlyzo is then ps αtthe viterbi algorithm for finding the most likely state sequence given the observation sequence can be correspondingly modified from its hmm formthe weights of a crf aλ are set to maximize the conditional loglikelihood of labeled sequences in some training set d where the second sum is a gaussian prior over parameters that provides smoothing to help cope with sparsity in the training datawhen the training labels make the state sequence unambiguous the likelihood function in exponential models such as crfs is convex so there are no local maxima and thus finding the global optimum is guaranteedit has recently been shown that quasinewton methods such as lbfgs are significantly more efficient than traditional iterative scaling and even conjugate gradient this method approximates the secondderivative of the likelihood by keeping a running finitesized window of previous firstderivativeslbfgs can simply be treated as a blackbox optimization procedure 
requiring only that one provide the firstderivative of the function to be optimizedassuming that the training labels on instance j make its state path unambiguous let s denote that path and then the firstderivative of the loglikelihood is where ck is the count for feature k given s and o equal to ptt1 fk the sum of fk values for all positions t in the sequence s the first two terms correspond to the difference between the empirical expected value of feature fk and the models expected value n the last term is the derivative of the gaussian priorpa and a large collection of features is formed by making conjunctions of the atomic tests in certain userdefined patterns there can easily be over 100000 atomic tests and ten or more shiftedconjunction patternsresulting in several million features this large number of features can be prohibitively expensive in memory and computation furthermore many of these features are irrelevant and others that are relevant are excludedin response we wish to use just those timeshifted conjunctions that will significantly improve performancewe start with no features and over several rounds of feature induction consider a set of proposed new features select for inclusion those candidate features that will most increase the loglikelihood of the correct state path s and train weights for all featuresthe proposed new features are based on the handcrafted observational testsconsisting of singleton tests and binary conjunctions of tests with each other and with features currently in the modelthe later allows arbitrarylength conjunctions to be builtthe fact that not all singleton tests are included in the model gives the designer great freedom to use a very large variety of observational tests and a large window of time shiftsto consider the effect of adding a new feature define the new sequence model with additional feature g having weight µ to be zo defes pλ exp in the denominator is simply the additional portion of normalization required to make the new function sum to 1 over all state sequencesfollowing we efficiently assess many candidate features in parallel by assuming that the λ parameters on all included features remain fixed while estimating the gain g of a candidate feature g based on the improvement in loglikelihood it provides where lλgµ includes µ22σ2in addition we make this approach tractable for crfs with two further reasonable and mutuallysupporting approximations specific to crfs we avoid dynamic programming for inference in the gain calculation with a meanfield approximation removing the dependence among states αtβt1zo is still calculated by dynamic programming without approximationfurthermore we can calculate the gain of aggregate features irrespective of transition source g and expand them after they are selected in many sequence problems the great majority of the tokens are correctly labeled even in the early stages of trainingwe significantly gain efficiency by including in the gain calculation only those tokens that are mislabeled by the current modellet o i 1m be those tokens and o be the input sequence in which the ith error token occurs at position tthen algebraic simplification using these approximations and previous definitions gives gλ where zo is simply s pλexptthe optimal values of the µs cannot be solved in closed form but newtons method finds them all in about 12 quick iterationsthere are two additional important modeling choices because we expect our models to still require several thousands of features we save time by adding many of 
the features with highest gain each round of induction rather than just one because even models with a small select number of features can still severely overfit we train the model with just a few bfgs iterations before performing the next round of feature inductiondetails are in some generalpurpose lexicons such a surnames and location names are widely available however many natural language tasks will benefit from more taskspecific lexicons such as lists of soccer teams political parties ngos and english countiescreating new lexicons entirely by hand is tedious and time consumingusing a technique we call weblisting we build lexicons automatically from html data on the webprevious work has built lexicons from fixed corpora by determining linguistic patterns for the context in which relevant words appear rather than mining a small corpus we gather data from nearly the entire web rather than relying on fragile linguistic context patterns we leverage robust formatting regularities on the webweblisting finds cooccurrences of seed terms that appear in an identical html formatting pattern and augments a lexicon with other terms on the page that share the same formattingour current implementation uses googlesets which we understand to be a simple implementation of this approach based on using html list items as the formatting regularitywe are currently building a more sophisticated replacementto perform named entity extraction on the news articles in the conll2003 english shared task several families of features are used all timeshifted by 2 1 0 1 2 the word itself 16 characterlevel regular expressions mostly concerning capitalization and digit patterns such as a a aa aaaa a d where a a and d indicate the regular expressions az az and 09 8 lexicons entered by hand such as honorifics days and months 15 lexicons obtained from specific web sites such as countries publiclytraded companies surnames stopwords and universities 25 lexicons obtained by weblisting all the above tests with prefix firstmention from any previous duplicate of the current word a small amount of handfiltering was performed on some of the weblisting lexiconssince googlesets support for nonenglish is severely limited only 5 small lexicons were used for german but character bi and trigrams were addeda javaimplemented firstorder crf was trained for about 12 hours on a 1ghz pentium with a gaussian prior variance of 05 inducing 1000 or fewer features each round of 10 iterations of lbfgscandidate conjunctions are limited to the 1000 atomic and existing features with highest gainperformance results for each of the entity classes can be found in figure 1the model achieved an overall f1 of 8404 on the english test set using 6423 featuresaccuracy gains are expected from experimentation with the induction parameters and improved weblistingwe thank john lafferty fernando pereira andres corradaemmanuel drew bagnell and guy lebanon for helpful inputthis work was supported in part by the center for intelligent information retrieval spawarsyscensd grant numbers n660019918912 and n660010218903 advanced research and development activity under contract number mda90401c0984 and darpa contract f306020120566
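a minimal numpy sketch of the linear-chain crf quantities used above (not the authors' implementation): the feature functions and learned weights are assumed to have already been folded into per-position potential tables, and the normalizer z(o) is computed with the forward recursion rather than by enumerating the exponentially many state sequences.

import numpy as np

def path_score(psi0, psi, path):
    # unnormalized score of one state path: start potential times the product of
    # transition potentials psi[t][i, j] = exp(sum_k lambda_k * f_k(i, j, o, t))
    score = psi0[path[0]]
    for t in range(1, len(path)):
        score *= psi[t - 1][path[t - 1], path[t]]
    return score

def partition(psi0, psi):
    # forward recursion: alpha_t(j) = sum_i alpha_{t-1}(i) * psi_t(i, j)
    alpha = psi0.copy()
    for mat in psi:
        alpha = alpha @ mat
    return alpha.sum()

def cond_prob(psi0, psi, path):
    # p(path | o) = score(path) / z(o), matching the crf definition above
    return path_score(psi0, psi, path) / partition(psi0, psi)

# toy usage: 2 states, a length-3 observation with arbitrary positive potentials
rng = np.random.default_rng(0)
psi0 = rng.random(2)
psi = [rng.random((2, 2)) for _ in range(2)]
print(cond_prob(psi0, psi, [0, 1, 1]))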
W03-0430
early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons
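for reference, the first derivative of the penalized conditional log-likelihood that the paper above optimizes with l-bfgs can be written out as follows (reconstructed from the prose description; this is the standard linear-chain crf gradient, with C_k(s, o) = sum_t f_k(s_{t-1}, s_t, o, t) and a gaussian prior of variance sigma^2):

\[
\frac{\partial L_\Lambda}{\partial \lambda_k}
  \;=\; \sum_{j} \Big( C_k\big(s^{(j)}, o^{(j)}\big)
        \;-\; \sum_{s} P_\Lambda\big(s \mid o^{(j)}\big)\, C_k\big(s, o^{(j)}\big) \Big)
  \;-\; \frac{\lambda_k}{\sigma^2}
\]

the first term is the empirical feature count on the labeled path, the second is the model's expected count (computable with forward-backward), and the last is the derivative of the gaussian prior.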
hedge trimmer a parseandtrim approach to headline generation and abstracts for nice summaries in workon automatic philadelphia pa pp 914 edmundson h new methods in automatic of the 16 grefenstett g producing intelligent telegraphic text reduction to provide an audio scanning serfor the blind in notes of the aiii spring on intelligent text summarization in this paper we present hedge trimmer a headline generation system that creates a headline for a newspaper story by removing constituents from a parse tree of the first sentence until a length threshold has been reachedlinguisticallymotivated heuristics guide the choice of which constituents of a story should be preserved and which ones should be deletedour focus is on headline generation for english newspaper texts with an eye toward the production of document surrogatesfor crosslanguage information retrievaland the eventual generation of readable headlines from speech broadcastsin contrast to original newspaper headlines which are often intended only to catch the eye our approach produces informative abstracts describing the main theme or event of the newspaper articlewe claim that the construction of informative abstracts requires access to deeper linguistic knowledge in order to make substantial improvements over purely statistical approachesin this paper we present our technique for producing headlines using a parseandtrim approach based on the bbn parseras described in miller et al the bbn parser builds augmented parse trees according to a process similar to that described in collins the bbn parser has been used successfully for the task of information extraction in the sift system the next section presents previous work in the area of automatic generation of abstractsfollowing this we present feasibility tests used to establish the validity of an approach that constructs headlines from words in a story taken in order and focusing on the earlier part of the storynext we describe the application of the parseandtrim approach to the problem of headline generationwe discuss the linguisticallymotivated heuristics we use to produce results that are headlinelikefinally we evaluate hedge trimmer by comparing it to our earlier work on headline generation a probabilistic model for automatic headline generation in this paper we will refer to this statistical system as hmm hedge we demonstrate the effectiveness of our linguisticallymotivated approach hedge trimmer over the probabilistic model hmm hedge using both human evaluation and automatic metricsother researchers have investigated the topic of automatic generation of abstracts but the focus has been different eg sentence extraction processing of structured templates sentence compression and generation of abstracts from multiple sources we focus instead on the construction of headlinestyle abstracts from a single storyheadline generation can be viewed as analogous to statistical machine translation where a concise document is generated from a verbose one using a noisy channel model and the viterbi search to select the most likely summarizationthis approach has been explored in and the approach we use in hedge is most similar to that of where a single sentence is shortened using statistical compressionas in this work we select headline words from story words in the order that they appear in the storyin particular the first sentence of the storyhowever we use linguistically motivated heuristics for shortening the sentence there is no statistical model which means we do not require any prior 
training on a large corpus of storyheadline pairslinguistically motivated heuristics have been used by to distinguish constituents of parse trees which can be removed without affecting grammaticality or correctnessgleans uses parsing and named entity tagging to fill values in headline templatesconsider the following excerpt from a news story in this case the words in bold form a fluent and accurate headline for the storyitalicized words are deleted based on information provided in a parsetree representation of the sentenceour approach is based on the selection of words from the original story in the order that they appear in the story and allowing for morphological variationto determine the feasibility of our headlinegeneration approach we first attempted to apply our selectwordsinorder technique by handwe asked two subjects to write headline headlines for 73 ap stories from the tipster corpus for january 1 1989 by selecting words in order from the storyof the 146 headlines 2 did not meet the selectwordsinorder criteria because of accidental word reorderingwe found that at least one fluent and accurate headline meeting the criteria was created for each of the storiesthe average length of the headlines was 1076 wordslater we examined the distribution of the headline words among the sentences of the stories ie how many came from the first sentence of a story how many from the second sentence etcthe results of this study are shown in figure 1we observe that 868 of the headline words were chosen from the first sentence of their storieswe performed a subsequent study in which two subjects created 100 headlines for 100 ap stories from august 6 1990514 of the headline words in the second set were chosen from the first sentencethe distribution of headline words for the second set shown in figure 2although humans do not always select headline words from the first sentence we observe that a large percentage of headline words are often found in the first sentencethe input to hedge is a story whose first sentence is immediately passed through the bbn parserthe parsetree result serves as input to a linguisticallymotivated module that selects story words to form headlines based on key insights gained from our observations of humanconstructed headlinesthat is we conducted a human inspection of the 73 tipster stories mentioned in section 3 for the purpose of developing the hedge trimmer algorithmbased on our observations of humanproduced headlines we developed the following algorithm for parsetree trimming more recently we conducted an automatic analysis of the humangenerated headlines that supports several of the insights gleaned from this initial studywe parsed 218 humanproduced headlines using the bbn parser and analyzed the resultsfor this analysis we used 72 headlines produced by a third participant1 the parsing results included 957 noun phrases and 315 clauses we calculated percentages based on headlinelevel nplevel and sentencelevel structures in the parsing resultsthat is we counted figure 3 summarizes the results of this automatic analysisin our initial human inspection we considered each of these categories to be reasonable candidates for deletion in our parse tree and this automatic analysis indicates that we have made reasonable choices for deletion with the possible exception of trailing pps which show up in over half of the humangenerated headlinesthis suggests that we should proceed with caution with respect to the deletion of trailing pps thus we consider this to be an option only if no 
other is availablepreposed adjuncts 0218 conjoined s 1218 conjoined vp 7218 relative clauses 3957 determiners 31957 of these only 16 were a or the slevel percentages2 time expressions 5315 trailing pps 165315 trailing sbars 24315 1 no response was given for one of the 73 stories2 trailing constituents are computed by counting the number of sbars not designated as an argument of a verb phrasefor a comparison we conducted a second analysis in which we used the same parser on just the first sentence of each of the 73 storiesin this second analysis the parsing results included 817 noun phrases and 316 clauses a summary of these results is shown in figure 4note that across the board the percentages are higher in this analysis than in the results shown in figure 3 indicating that our choices of deletion in the hedge trimmer algorithm are wellgroundedpreposed adjuncts 273 conjoined s 373 conjoined vp 2073 relative clauses 29817 determiners 205817 of these only 171 were a or the time expressions 77316 trailing pps 184316 trailing sbars 49316 each storythe first step relies on what is referred to as the projection principle in linguistic theory predicates project a subject in the surface structureour humangenerated headlines always conformed to this rule thus we adopted it as a constraint in our algorithman example of the application of step 1 above is the following where boldfaced material from the parse tree representation is retained and italicized material is eliminated with government officials said tuesdayoutput of step 1 rebels agree to talks with governmentwhen the parser produces a correct tree this step provides a grammatical headlinehowever the parser often produces an incorrect outputhuman inspection of our 624sentence duc2003 evaluation set revealed that there were two such scenarios illustrated by the following cases in the first case an s exists but it does not conform to the requirements of step 1this occurred in 26 of the sentences in the duc2003 evaluation datawe resolve this by selecting the lowest leftmost s ie the entire string what started as a local controversy has evolved into an international scandal in the example abovein the second case there is no s availablethis occurred in 34 of the sentences in the evaluation datawe resolve this by selecting the root of the parse tree this would be the entire string bangladesh and india signed a water sharing accord aboveno other parser errors were encountered in the duc2003 evaluation datastep 2 of our algorithm eliminates lowcontent unitswe start with the simplest lowcontent units the determiners a and theother determiners were not considered for deletion because our analysis of the humanconstructed headlines revealed that most of the other determiners provide important information eg negation quantifiers and deictics beyond these we found that the humangenerated headlines contained very few time expressions which although certainly not contentfree do not contribute toward conveying the overall whowhat content of the storysince our goal is to provide an informative headline the identification and elimination of time expressions provided a significant boost in the performance of our automatic headline generatorwe identified time expressions in the stories using bbns identifindertm we implemented the elimination of time expressions as a twostep process where x is tagged as part of a time expression the following examples illustrate the application of this step output of step 2 state department lifted ban it has imposed on foreign 
fliersoutput of step 2 international relief agency announced that it is withdrawing from north koreawe found that 532 of the stories we examined contained at least one time expression which could be deletedhuman inspection of the 50 deleted time expressions showed that 38 were desirable deletions 10 were locally undesirable because they introduced an ungrammatical fragment3 and 2 were undesirable because they removed a potentially relevant constituenthowever even an undesirable deletion often pans out for two reasons the ungrammatical fragment is frequently deleted later by some other rule and every time a constituent is removed it makes room under the threshold for some other possibly more relevant constituentconsider the following examplesexample was produced by a system which did not remove time expressionsexample shows that if the time expression sunday were removed it would make room below the 10word threshold for another important piece of informationthe final step iterative shortening removes linguistically peripheral materialthrough successive deletionsuntil the sentence is shorter than a given thresholdwe took the threshold to be 10 for the duc task but it is a configurable parameteralso given that the humangenerated headlines tended to retain earlier material more often than later material much of our iterative shortening is focused on deleting the rightmost phrasal categories until the length is below thresholdthere are four types of iterative shortening rulesthe first type is a rule we call xpoverxp which is implemented as follows in constructions of the form xp xp remove the other children of the higher xp where xp is np vp or s this is a linguistic generalization that allowed us apply a single rule to capture three different phenomena the rule is applied iteratively from the deepest rightmost applicable node backwards until the length threshold is reachedthe impact of xpoverxp can be seen in these examples of npovernp vpovervp and sovers respectively parse s det a fire killed det a np np firefighter sbar who was fatally injured as he searched the house output of npovernp fire killed firefighter has outpaced state laws but the state says the company does not have the proper licensesparse s det a company offering blood cholesterol tests in grocery stores says s s medical technology has outpaced state laws cc but s det the state stays det the company does not have det the proper licenses output of sovers company offering blood cholesterol tests in grocery store says medical technology has outpaced state laws the second type of iterative shortening is the removal of preposed adjunctsthe motivation for this type of shortening is that all of the humangenerated headlines ignored what we refer to as the preamble of the storyassuming the projection principle has been satisfied the preamble is viewed as the phrasal material occurring before the subject of the sentencethus adjuncts are identified linguistically as any xp unit preceding the first np under the s chosen by step 1this type of phrasal modifier is invisible to the xpoverxp rule which deletes material under a node only if it dominates another node of the same phrasal categorythe impact of this type of shortening can be seen in the following example parse s pp according to a nowfinalized blueprint described by yous officials and other sources det the bush administration plans to take complete unilateral control of det a postsaddam hussein iraq output of preposed adjunct removal bush administration plans to take complete 
unilateral control of postsaddam hussein iraq the third and fourth types of iterative shortening are the removal of trailing pps and sbars respectively these are the riskiest of the iterative shortening rules as indicated in our analysis of the humangenerated headlinesthus we apply these conservatively only when there are no other categories of rules to applymoreover these rules are applied with a backoff option to avoid overtrimming the parse treefirst the pp shortening rule is appliedif the threshold has been reached no more shortening is donehowever if the threshold has not been reached the system reverts to the parse tree as it was before any pps were removed and applies the sbar shortening ruleif the threshold still has not been reached the pp rule is applied to the result of the sbar ruleother sequences of shortening rules are possiblethe one above was observed to produce the best results on a 73sentence development set of stories from the tipster corpusthe intuition is that when removing constituents from a parse tree it is best to remove smaller portions during each iteration to avoid producing trees with undesirably few wordspps tend to represent small parts of the tree while sbars represent large parts of the treethus we try to reach the threshold by removing small constituents but if we cannot reach the threshold that way we restore the small constituents remove a large constituent and resume the deletion of small constituentsthe impact of these two types of shortening can be seen in the following examples parse s more oilcovered sea birds were found pp over the weekend output of pp removal more oilcovered sea birds were foundparse s visiting china interpol chief expressed confidence in hong kongs smooth transition sbar while assuring closer cooperation after hong kong returns output of sbar removal visiting china interpol chief expressed confidence in hong kongs smooth transitionwe conducted two evaluationsone was an informal human assessment and one was a formal automatic evaluationwe compared our current system to a statistical headline generation system we presented at the 2001 duc summarization workshop which we will refer to as hmm hedgehmm hedge treats the summarization problem as analogous to statistical machine translationthe verbose language articles is treated as the result of a concise language headlines being transmitted through a noisy channelthe result of the transmission is that extra words are added and some morphological variations occurthe viterbi algorithm is used to calculate the most likely unseen headline to have generated the seen articlethe viterbi algorithm is biased to favor headlinelike characteristics gleaned from observation of human performance of the headlineconstruction tasksince the 2002 workshop hmm hedge has been enhanced by incorporating part of speech of information into the decoding process rejecting headlines that do not contain a word that was used as a verb in the story and allowing morphological variation only on words that were used as verbs in the storyhmm hedge was trained on 700000 news articles and headlines from the tipster corpusbleu is a system for automatic evaluation of machine translationbleu uses a modified ngram precision measure to compare machine translations to reference human translationswe treat summarization as a type of translation from a verbose language to a concise one and compare automatically generated headlines to human generated headlinesfor this evaluation we used 100 headlines created for 100 ap stories from 
the tipster collection for august 6 1990 as reference summarizations for those storiesthese 100 stories had never been run through either system or evaluated by the authors prior to this evaluationwe also used the 2496 manual abstracts for the duc2003 10word summarization task as reference translations for the 624 test documents of that taskwe used two variants of hmm hedge one which selects headline words from the first 60 words of the story and one which selects words from the first sentence of the storytable 1 shows the bleu score using trigrams and the 95 confidence interval for the scorethese results show that although hedge trimmer scores slightly higher than hmm hedge on both data sets the results are not statistically significanthowever we believe that the difference in the quality of the systems is not adequately reflected by this automatic evaluationhuman evaluation indicates significantly higher scores than might be guessed from the automatic evaluationfor the 100 ap stories from the tipster corpus for august 6 1990 the output of hedge trimmer and hmm hedge was evaluated by one humaneach headline was given a subjective score from 1 to 5 with 1 being the worst and 5 being the bestthe average score of hmm hedge was 301 with standard deviation of 111the average score of hedge trimmer was 372 with standard deviation of 126using a tscore the difference is significant with greater than 999 confidencethe types of problems exhibited by the two systems are qualitatively differentthe probabilistic system is more likely to produce an ungrammatical result or omit a necessary argument as in the examples belowin contrast the parserbased system is more likely to fail by producing a grammatical but semantically useless headlinefinally even when both systems produce acceptable output hedge trimmer usually produces headlines which are more fluent or include more useful information demanding that chinese authorities respect culturewe have shown the effectiveness of constructing headlines by selecting words in order from a newspaper storythe practice of selecting words from the early part of the document has been justified by analyzing the behavior of humans doing the task and by automatic evaluation of a system operating on a similar principlewe have compared two systems that use this basic technique one taking a statistical approach and the other a linguistic approachthe results of the linguistically motivated approach show that we can build a working system with minimal linguistic knowledge and circumvent the need for large amounts of training datawe should be able to quickly produce a comparable system for other languages especially in light of current multilingual initiatives that include automatic parser induction for new languages eg the tides initiativewe plan to enhance hedge trimmer by using a language model of headlinese the language of newspaper headlines to guide the system in which constituents to removewe also we plan to allow for morphological variation in verbs to produce the present tense headlines typical of headlinesehedge trimmer will be installed in a translingual detection system for enhanced display of document surrogates for crosslanguage question answeringthis system will be evaluated in upcoming iclef conferencesthe university of maryland authors are supported in part by bbnt contract 0201247157 darpaito contract n6600197c8540 and nsf cise research infrastructure award eia0130422we would like to thank naomi chang and jon teske for generating reference headlines
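a much-simplified python sketch of the parse-and-trim idea above (not the hedge trimmer implementation: it covers only determiner removal and the trailing-pp shortening rule, and omits s-selection, time-expression removal via identifinder, xp-over-xp, preposed-adjunct removal and the pp/sbar back-off order); the 10-word threshold follows the paper.

from nltk.tree import ParentedTree

THRESHOLD = 10   # headline length threshold used for the duc task

def trim(parse_str):
    tree = ParentedTree.fromstring(parse_str)
    # step 2: drop the low-content determiners "a" and "the"
    for dt in [t for t in tree.subtrees()
               if t.label() == "DT" and t[0].lower() in ("a", "the")]:
        del dt.parent()[dt.parent_index()]
    # step 3 (one rule only): remove the last pp in the tree until the yield
    # fits under the threshold
    while len(tree.leaves()) > THRESHOLD:
        pps = [t for t in tree.subtrees() if t.label() == "PP"]
        if not pps:
            break
        last = pps[-1]
        del last.parent()[last.parent_index()]
    return " ".join(tree.leaves())

print(trim("(S (NP (DT the) (NN state) (NN department)) "
           "(VP (VBD lifted) (NP (DT the) (NN ban)) "
           "(PP (IN on) (NP (JJ foreign) (NNS fliers)))))"))
# -> "state department lifted ban on foreign fliers" (already under the
#    threshold after determiner removal, so the trailing pp is kept)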
W03-0501
hedge trimmer: a parse-and-trim approach to headline generation. this paper presents hedge trimmer, a headline generation system that creates a headline for a newspaper story using linguistically motivated heuristics to guide the choice of a potential headline. we present feasibility tests used to establish the validity of an approach that constructs a headline by selecting words in order from a story. in addition, we describe experimental results that demonstrate the effectiveness of our linguistically motivated approach over an hmm-based model, using both human evaluation and automatic metrics for comparing the two approaches. our approach focuses on extracting one or two informative sentences from the document and performing linguistically motivated transformations to them in order to reduce the summary length
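the trigram bleu evaluation mentioned above can be approximated with nltk's bleu implementation; a small hedged sketch (the smoothing choice is an assumption, not something specified in the paper):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def headline_bleu(candidate, reference):
    # trigram bleu, mirroring the paper's use of bleu with n-grams up to 3
    return sentence_bleu([reference.split()], candidate.split(),
                         weights=(1/3, 1/3, 1/3),
                         smoothing_function=SmoothingFunction().method1)

print(headline_bleu("rebels agree to talks with government",
                    "rebels agree to talks with government officials"))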
use of deep linguistic features for the recognition and labeling of semantic arguments we use deep linguistic features to predict semantic roles on syntactic arguments and show that these perform considerably better than surfaceoriented features we also show that predicting labels from a lightweight parser that generates deep syntactic features performs comparably to using a full parser that generates only surface syntactic features syntax mediates between surface word order and meaningthe goal of parsing is ultimately to provide the first step towards giving a semantic interpretation of a string of wordsso far attention has focused on parsing because the semantically annotated corpora required for learning semantic interpretation have not been availablethe completion of the first phase of the propbank represents an important stepthe propbank superimposes an annotation of semantic predicateargument structures on top of the penn treebank the arc labels chosen for the arguments are specific to the predicate not universalin this paper we find that the use of deep linguistic representations to predict these semantic labels are more effective than the generally more surfacesyntax representations previously employed specifically we show that the syntactic dependency structure that results load from the extraction of a tree adjoining grammar from the ptb and the features that accompany this structure form a better basis for determining semantic role labelscrucially the same structure is also produced when parsing with tagwe suggest that the syntactic representation chosen in the ptb is less well suited for semantic processing than the other deeper syntactic representationsin fact this deeper representation expresses syntactic notions that have achieved a wide acceptance across linguistic frameworks unlike the very particular surfacesyntactic choices made by the linguists who created the ptb syntactic annotation rulesthe outline of this paper is as followsin section 2 we introduce the propbank and describe the problem of predicting semantic tagssection 3 presents an overview of our work and distinguishes it from previous worksection 4 describes the method used to produce the tags that are the basis of our experimentssection 5 specifies how training and test data that are used in our experiments are derived from the propbanknext we give results on two sets of experimentsthose that predict semantic tags given goldstandard linguistic information are described in section 6those that do prediction from raw text are described in section 7finally in section 8 we present concluding remarksthe propbank annotates the ptb with dependency structures using sense tags for each word and local semantic labels for each argument and adjunctargument labels are numbered and used consistently across syntactic alternations for the same verb meaning as shown in figure 1adjuncts are given special tags such as tmp or loc derived from the original annotation of the penn treebankin addition to the annotated corpus propbank provides a lexicon which lists for each meaning of each annotated verb its roleset ie the possible arguments in the predicate and their labelsas an example the entry for the verb kick is given in figure 2the notion of meaning used is fairly coarsegrained typically motivated from differing syntactic behaviorsince each verb meaning corresponds to exactly one roleset these terms are often used interchangeablythe roleset also includes a descriptor field which is intended for use during annotation and as 
documentation but which does not have any theoretical standingeach entry also includes examplescurrently there are frames for about 1600 verbs in the corpus with a total of 2402 rolesetssince we did not yet have access to a corpus annotated with rolesets we concentrate in this paper on predicting the role labels for the argumentsit is only once we have both that we can interpret the relation between predicate and argument at a very fine level we will turn to the problem of assigning rolesets to predicates once the data is availablewe note though that preliminary investigations have shown that for about 65 of predicates in the wsj there is only one rolesetin a further 7 of predicates the set of semantic labels on the arguments of that predicate completely disambiguates the rolesetgildea and palmer show that semantic role labels can be predicted given syntactic features derived from the ptb with fairly high accuracyfurthermore they show that this method can be used in conjunction with a parser to produce parses annotated with semantic labels and that the parser outperforms a chunkerthe features they use in their experiments can be listed as followshead word the predicates head word as well as the arguments head word is usedphrase typethis feature represents the type of phrase expressing the semantic rolein figure 3 phrase type for the argument prices is nppaththis feature captures the surface syntactic relation between the arguments constituent and the predicatesee figure 3 for an examplepositionthis binary feature represents whether the argument occurs before or after the predicate in the sentencevoicethis binary feature represents whether the predicate is syntactically realized in either passive or active voicenotice that for the exception of voice the features solely represent surface syntax aspects of the input parse treethis should not be taken to mean that deep syntax features are not importantfor example in their inclusion of voice gildea and palmer note that this deep syntax feature plays an important role in connecting semantic role with surface grammatical functionaside from voice we posit that other deep linguistic features may be useful to predict semantic rolein this work we explore the use of more general deeper syntax featureswe also experiment with semantic features derived from the propbankour methodology is as followsthe first stage entails generating features representing different levels of linguistic analysisthis is done by first automatically extracting several kinds of tag from the propbankthis may in itself generate useful features because tag structures typically relate closely syntactic arguments with their corresponding predicatebeyond this our tag extraction procedure produces a set of features that relate tag structures on both the surfacesyntax as well as the deepsyntax levelfinally because a tag is extracted from the propbank we have a set of semantic features derived indirectly from the propbank through tagthe second stage of our methodology entails using these features to predict semantic roleswe first experiment with prediction of semantic roles given goldstandard parses from the test corpuswe subsequently experiment with their prediction given raw text fed through a deterministic dependency parserour experiments depend upon automatically extracting tags from the propbankin doing so we follow the work of others in extracting grammars of various kinds from the ptb whether it be tag combinatory categorial grammar or constraint dependency grammar we will 
discuss tags and an important principle guiding their formation the extraction procedure from the ptb that is described in including extensions to extract a tag from the propbank and finally the extraction of deeper linguistic features from the resulting taga tag is defined to be a set of lexicalized elementary trees they may be composed by several welldefined operations to form parse treesa lexicalized elementary tree where the lexical item is removed is called a tree frame or a supertagthe lexical item in the tree is called an anchoralthough the tag formalism allows wide latitude in how elementary trees may be defined various linguistic principles generally guide their formationan important principle is that dependencies including longdistance dependencies are typically localized the same elementary tree by appropriate grouping of syntactically or semantically related elementsthe extraction procedure fragments a parse tree from the ptb that is provided as input into elementary treessee figure 4these elementary trees can be composed by tag operations to form the original parse treethe extraction procedure determines the structure of each elementary tree by localizing dependencies through the use of heuristicssalient heuristics include the use of a head percolation table and another table that distinguishes between complements and adjunct nodes in the treefor our current work we use the head percolation table to determine heads of phrasesalso we treat a propbank argument as a complement and a propbank adjunct as an adjunct when such annotation is available1 otherwise we basically follow the approach of 2 besides introducing one kind of tag extraction procedure introduces the notion of grouping linguisticallyrelated extracted tree frames togetherin one approach each tree frame is decomposed into a feature vectoreach element of this vector describes a single linguisticallymotivated characteristic of the treethe elements comprising a feature vector are listed in table 1each elementary tree is decomposed into a feature vector in a relatively straightforward mannerfor example the pos feature is obtained from the preterminal node of the elementary treethere are also features that specify the syntactic transformations that an elementary tree exhibitseach such transformation is recognized by structural pattern matching the elementary tree against a pattern that identifies the transformations existencefor more details see given a set of elementary trees which compose a tag and also the feature vector corresponding to each tree it is possible to annotate each node representing an argument in the tree with role informationthese are syntactic roles including for example subject and direct objecteach argument node is labeled with two kinds of roles a surface syntactic role and a deep syntactic rolethe former is obtained through determining the position of the node with respect to the anchor of the tree using the usually positional rules for determining argument status in englishthe latter is obtained from the former and also from knowledge of the syntactic transformations that have been applied to the treefor example we determine the deep syntactic role of a whmoved element by undoing the whmovement by using the trace information in the ptbthe propbank contains all of the notation of the penn treebank as well as semantic notationfor our current work we extract two kinds of tag from the propbankone grammar semtag has elementary trees annotated with the aforementioned syntactic information as well as 
semantic informationsemantic information includes semantic role as well as semantic subcategorization informationthe other grammar synttag differs from semtag only by the absence of any semantic role informationfor our experiments we use a version of the propbank where the most commonly appearing predicates have been annotated not allour extracted tags are derived from sections 0221 of the ptbfurthermore training data for our experiments are always derived from these sectionssection 23 is used for test datathe entire set of semantic roles that are found in the propbank are not used in our experimentsin particular we only include as semantic roles those instances in the propbank such that in the extracted tag they are localized in the same elementary treeas a consequence adjunct semantic roles are basically absent from our test corpusfurthermore not all of the complement semantic roles are found in our test corpusfor example cases of subjectcontrol pro are ignored because the surface subject is found in a different tree frame than the predicatestill a large majority of complement semantic roles are found in our test corpus this section is devoted towards evaluating different features obtained from a goldstandard corpus in the task of determining semantic rolewe use the feature set mentioned in section 3 as well as features derived from tags mentioned in section 4in this section we detail the latter set of featureswe then describe the results of using different feature setsthese experiments are performed using the c45 decision tree machine learning algorithmthe standard settings are usedfurthermore results are always given using unpruned decision trees because we find that these are the ones that performed the best on a development setthese features are determined during the extraction of a tag supertag paththis is a path in a tree frame from its preterminal to a particular argument node in a tree framethe supertag path of the subject of the rightmost tree frame in figure 4 is vbgvpsnpsupertagthis can be the tree frame corresponding to either the predicate or the argumentsrolethis is the surfacesyntactic role of an argumentexample of values include 0 and 1 ssubcatthis is the surfacesyntactic subcategorization framefor example the ssubcat corresponding to a transitive tree frame would be np0 np1pps as arguments are always annotated with the prepositionfor example the ssubcat for the passive version of hit would be np1 np2drolethis is the deepsyntactic role of an argumentexample of values include 0 and 1 dsubcatthis is the deepsyntactic subcategorization framefor example the dsubcat corresponding to a transitive tree frame would be np0 np1generally pps as arguments are annotated with the prepositionfor example the dsubcat for load is np0 np1 np2the exception is when the argument is not realized as a pp when the predicate is realized in a nonsyntactically transformed wayfor example the dsubcat for the passive version of hit would be np0 np1semsubcatthis is the semantic subcategorization framewe first experiment with the set of features described in gildea and palmer pred hw arg hw phrase type position path voicecall this feature set gp0the error rate 100 is lower than that reported by gildea and palmer 172this is presumably because our training and test data has been assembled in a different manner as mentioned in section 5our next experiment is on the same set of features with the exception that path has been replaced with supertag paththe error rate is reduced from 100 to 97this is 
statistically significant albeit a small improvementone explanation for the improvement is that path does not generalize as well as supertag path doesfor example the path feature value vbgvpvpsnp reflects surface subject position in the sentence prices are falling but so does vbgvpsnp in the sentence sellers regret prices fallingbecause tag localizes dependencies the corresponding values for supertag path in these sentences would be identicalwe now experiment with our surface syntax features pred hw arg hw ssubcat and sroleits performance on semtag is 82 whereas its performance on synttag is 76 a tangible improvement over previous modelsone reason for the improvement could be that this model is assigning semantic labels with knowledge of the other roles the predicate assigns unlike previous modelsour next experiment involves using deep syntax features pred hw arg hw dsubcat and droleits performance on both semtag and synttag is 65 better than previous modelsits performance is better than surface presumably because syntactic transformations are taken to account by deep syntax featuresnote also that the transformations which are taken into account are a superset of the transformations taken into account by gildea and palmer this experiment considers use of semantic features pred hw arg hw semsubcat and droleof course there are only results for semtag which turns out to be 19this is the best performance yetin our final experiment we use supertag features pertag drolethe error rates are 28 for semtag and 74 for synttagconsidering semtag only this model performs better than its corresponding deep model probably because supertag for semtag include crucial semantic informationconsidering synttag only this model performs worse than its corresponding deep model presumably because of sparse data problems when modeling supertagsthis sparse data problem is also apparent by comparing the model based on semtag with the corresponding semtag semantic modelin this section we are concerned with the problem of finding semantic arguments and labeling them with their correct semantic role given raw text as inputin order to perform this task we parse this raw text using a combination of supertagging and lda which is a method that yields partial dependency parses annotated with tag structureswe perform this task using both semtag and synttagfor the former after supertagging and lda the task is accomplished because the tag structures are already annotated with semantic role informationfor the latter we use the best performing model from section 6 in order to find semantic roles given syntactic features from the parsesupertagging is the task of assigning a single supertag to each word given raw text as inputfor example given the sentence prices are falling a supertagger might return the supertagged sentence in figure 4supertagging returns an almostparse in the sense that it is performing much parsing disambiguationthe typical technique to perform supertagging is the trigram model akin to models of the same name for partofspeech taggingthis is the technique that we use heredata sparseness is a significant issue when supertagging with extracted grammar for this reason we smooth the emit probabilities p in the trigram model using distributional similarity following chen in particular we use jaccards coefficient as the similarity metric with a similarity threshold of 004 and a radius of 25 because these were found to attain optimal results in chen training data for supertagging is sections 0221 of the propbanka 
supertagging model based on semtag performs with 7632 accuracy on section 23the corresponding model for synttag performs with 8034 accuracyaccuracy is measured for all words in the sentence including punctuationthe synttag model performs better than the semtag model understandably because synttag is the simpler grammarlda is an acronym for lightweight dependency analyzer given as input a supertagged sequence of words it outputs a partial dependency parseit takes advantage of the fact that supertagging provides an almostparse in order to dependency parse the sentence in a simple deterministic fashionbasic lda is a two step procedurethe first step involves linking each word serving as a modifier with the word that it modifiesthe second step involves linking each word serving as an argument with its predicatelinking always only occurs so that grammatical requirements as stipulated by the supertags are satisfiedthe version of lda that is used in this work differs from srinivas in that there are other constraints on the linking process3 in particular a link is not established if its existence would create crossing brackets or cycles in the dependency tree for the sentencewe perform lda on two versions of section 23 one supertagged with semtag and the other with synttagthe results are shown in table 3evaluation is performed on dependencies excluding leafnode punctuationeach dependency is evaluated according to both whether the correct head and dependent is related as well as whether they both receive the correct part of speech tagthe fmeasure scores in the 70 range are relatively low compared to collins which has a corresponding score of around 90this is perhaps to be expected because collins is based on a full parsernote also that the accuracy of lda is highly dependent on the accuracy of the supertagged inputthis explains for example the fact that the accuracy on semtag supertagged input is lower than the accuracy with synttag supertagged inputthe output of lda is a partial dependency parse annotated with tag structureswe can use this output to predict semantic roles of argumentsthe manner in which this is done depends on the kind of grammar that is usedthe lda output using semtag is already annotated with semantic role information because it is encoded in the grammar itselfon the other hand the lda output using synttag contains strictly syntactic informationin this case we use the highest performing model from section 6 in order to label arguments with semantic rolesevaluation of prediction of semantic roles takes the following formeach argument labeled by a semantic role in the test corpus is treated as one trialcertain aspects of this trial are always checked for correctnessthese include checking that the semantic role and the dependencylink are correctthere are other aspects which may or may not be checked depending on the type of evaluationone aspect bnd is whether or not the arguments bracketing as specified in the dependency tree is correctanother aspect arg is whether or not the headword of the argument is chosen to be correcttable 4 show the results when we use semtag in order to supertag the input and perform ldawhen the boundaries are found finding the head word additionally does not result in a decrease of performancehowever correctly identifying the head word instead of the boundaries leads to an important increase in performancefurthermore note the low recall and high precision of the base arg evaluationin part this is due to the nature of the propbank corpus that we are usingin 
particular because not all predicates in our version of the propbank are annotated with semantic roles the supertagger for semtag will sometimes annotate text without semantic roles when in fact it should contain themtable 5 shows the results of first supertagging the input with synttag and then using a model trained on the deep feature set to annotate the resulting syntactic structure with semantic rolesthis twostep approach greatly increases performance over the corresponding semtag based approachthese results are comparable to the results from gildea and palmer but only roughly because of differences in corporagildea and palmer achieve a recall of 050 a precision of 058 and an fmeasure of 054 when using the full parser of collins they also experiment with using a chunker which yields a recall of 035 a precision of 050 and an fmeasure of 041we have presented various alternative approaches to predicting propbank role labels using forms of linguistic information that are deeper than the ptbs surfacesyntax labelsthese features may either be directly derived from a tag such as supertag path or indirectly via aspects of supertags such task determine recall precision f base arg 065 075 070 base bnd 048 055 051 base bnd arg 048 055 051 as deep syntactic features like drolethese are found to produce substantial improvements in accuracywe believe that such improvement is due to these features better capturing the syntactic information that is relevant for the task of semantic labelingalso these features represent syntactic categories about which there is a broad consensus in the literaturetherefore we believe that our results are portable to other frameworks and differently annotated corpora such as dependency corporawe also show that predicting labels from a lightweight parser that generates deep syntactic features performs comparably to using a full parser that generates only surface syntactic featuresimprovements along this line may be attained by use of a full tag parser such as chiang for examplethis paper is based upon work supported by the national science foundation under the kdd program through a supplement to grant noiis9817434any opinions findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the national science foundation
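Section 6's feature comparison lends itself to a small worked example. The sketch below shows how a role classifier over the deep syntax feature set (predicate head word, argument head word, dsubcat, drole) might be put together; it substitutes scikit-learn's DecisionTreeClassifier for the unpruned C4.5 trees used above, and the feature values and training rows are toy placeholders rather than data extracted from the PropBank.

```python
# Minimal sketch of a semantic-role classifier over the deep syntax feature set
# (predicate head word, argument head word, dsubcat, drole) evaluated above.
# Assumptions: scikit-learn's DecisionTreeClassifier stands in for the paper's
# unpruned C4.5 trees, and the training rows are toy placeholders.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

train = [
    ({"pred_hw": "load", "arg_hw": "worker", "dsubcat": "np0 np1 np2", "drole": "0"}, "ARG0"),
    ({"pred_hw": "load", "arg_hw": "truck", "dsubcat": "np0 np1 np2", "drole": "2"}, "ARG2"),
    ({"pred_hw": "hit", "arg_hw": "ball", "dsubcat": "np0 np1", "drole": "1"}, "ARG1"),
]

vec = DictVectorizer()                               # one-hot encodes the symbolic features
X = vec.fit_transform([feats for feats, _ in train])
y = [role for _, role in train]

clf = DecisionTreeClassifier(random_state=0)         # unpruned by default, as in the paper
clf.fit(X, y)

test = {"pred_hw": "hit", "arg_hw": "window", "dsubcat": "np0 np1", "drole": "1"}
print(clf.predict(vec.transform([test])))            # label for an unseen argument instance
```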
W03-1006
Use of Deep Linguistic Features for the Recognition and Labeling of Semantic Arguments. We use deep linguistic features to predict semantic roles on syntactic arguments, and show that these perform considerably better than surface-oriented features. We also show that predicting labels from a lightweight parser that generates deep syntactic features performs comparably to using a full parser that generates only surface syntactic features. We argue that deep linguistic features harvested from FrameNet are beneficial for the successful assignment of PropBank roles to constituents. We use LTAG-based decomposition of parse trees for SRL. Instead of using the typical parse tree features used in SRL models, we use the path within the elementary tree from the predicate to the constituent argument.
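As a concrete illustration of the supertag path feature referred to in the summary above, the sketch below walks a toy (label, children) encoding of an elementary tree frame from the anchor's preterminal up to the lowest common ancestor and down to the argument node. The tuple encoding and the duplicate-free node labels are simplifying assumptions, not the grammar's actual TAG representation.

```python
# Minimal sketch of computing a supertag path such as "vbg/vp/s/np": the label
# path from the anchor's preterminal up to the lowest common ancestor and down
# to the argument node. Trees are toy (label, children) tuples with unique labels.

def find_path(node, target, prefix=()):
    label, children = node
    prefix = prefix + (label,)
    if label == target:
        return prefix
    for child in children:
        found = find_path(child, target, prefix)
        if found:
            return found
    return None

def supertag_path(frame, preterminal, argument):
    up = find_path(frame, preterminal)        # e.g. ('s', 'vp', 'vbg')
    down = find_path(frame, argument)         # e.g. ('s', 'np')
    shared = 0                                 # length of the shared root-to-LCA prefix
    while shared < min(len(up), len(down)) and up[shared] == down[shared]:
        shared += 1
    return "/".join(tuple(reversed(up[shared - 1:])) + down[shared:])

# toy tree frame for "falling" in "prices are falling": (s (np) (vp (vbg)))
frame = ("s", [("np", []), ("vp", [("vbg", [])])])
print(supertag_path(frame, "vbg", "np"))       # -> vbg/vp/s/np
```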
identifying semantic roles using combinatory categorial grammar we present a system for automatically identifying propbankstyle semantic roles based on the output of a statistical parser for combinatory categorial grammar this system performs at least as well as a system based on a traditional treebank parser and outperforms it on core argument roles correctly identifying the semantic roles of sentence constituents is a crucial part of interpreting text and in addition to forming an important part of the information extraction problem can serve as an intermediate step in machine translation or automatic summarizationeven for a single predicate semantic arguments can have multiple syntactic realizations as shown by the following paraphrases recently attention has turned to creating corpora annotated with argument structuresthe propbank and the framenet projects both document the variation in syntactic realization of the arguments of predicates in general english textgildea and palmer developed a system to predict semantic roles from sentences and their parse trees as determined by the statistical parser of collins in this paper we examine how the syntactic representations used by different statistical parsers affect the performance of such a systemwe compare a parser based on combinatory categorial grammar with the collins parseras the ccg parser is trained and tested on a corpus of ccg derivations that have been obtained by automatic conversion from the penn treebank we are able to compare performance using both goldstandard and automatic parses for both ccg and the traditional treebank representationthe treebankparser returns skeletal phrasestructure trees without the traces or functional tags in the original penn treebank whereas the ccg parser returns wordword dependencies that correspond to the underlying predicateargument structure including longrange dependencies arising through control raising extraction and coordinationthe proposition bank provides a humanannotated corpus of semantic verbargument relationsfor each verb appearing in the corpus a set of semantic roles is definedroles for each verb are simply numbered arg0 arg1 arg2 etcas an example the entryspecific roles for the verb offer are given below these roles are then annotated for every instance of the verb appearing in the corpus including the following examples a variety of additional roles are assumed to apply across all verbsthese secondary roles can be thought of as being adjuncts rather than arguments although no claims are made as to optionality or other traditional argumentadjunct teststhe secondary roles include location in tokyo outside time last week on tuesday never manner easily dramatically direction south into the wind because due to pressure from washington discourse however also on the other hand extent 15 289 points purpose to satisfy requirements negation not nt modal can might should will adverbial and are represented in propbank as argm with an additional function tag for example argmtmp for temporalwe refer to propbanks numbered arguments as core argumentscore arguments represent 75 of the total labeled roles in the propbank dataour system predicts all the roles including core arguments as well as the argm labels and their function tagscombinatory categorial grammar is a grammatical theory which provides a completely transparent interface between surface syntax and underlying semantics such that each syntactic derivation corresponds directly to an interpretable semantic representation which includes 
longrange dependencies that arise through control raising coordination and extractionin ccg words are assigned atomic categories such as np or functor categories like np or ss adjuncts are represented as functor categories such as ss which expect and return the same typewe use indices to number the arguments of functor categories egnp2 or ss1 and indicate the wordword dependencies in the predicateargument structure as tuples where ch is the lexical category of the head word wh and wa is the head word of the constituent that fills the ith argument of chlongrange dependencies can be projected through certain types of lexical categories or through rules such as coordination of functor categoriesfor example in the lexical category of a relative pronoun the head of the np that is missing from the relative clause is unified with the head of the np that is modified by the entire relative clausefigure 1 shows the derivations of an ordinary sentence a relative clause and a rightnoderaising constructionin all three sentences the predicateargument relations between london and denied and plans and denied are the same which in ccg is expressed by the fact that london fills the first argument slot of the lexical category of denied np2 and plans fills the second slotthe relations extracted from the ccg derivation for the sentence london denied plans on monday are shown in table 1the ccg parser returns the local and longrange wordword dependencies that express the predicateargument structure corresponding to the derivationthese relations are recovered with an accuracy of around 83 or 91 by contrast standard treebank parsers such as only return phrasestructure trees from which nonlocal dependencies are difficult to recoverthe ccg parser has been trained and tested on ccgbank a treebank of ccg derivations obtained from the penn treebank from which we also obtain our training dataour aim is to use ccg derivations as input to a system for automatically producing the argument labels of propbankin order to do this we wish to correlate the ccg relations above with propbank argumentspropbank argument labels are assigned to nodes in the syntactic trees from the penn treebankwhile the ccgbank is derived from the penn treebank in many cases the constituent structures do not correspondthat is there may be no constituent in the ccg derivation corresponding to the same sequence of words as a particular constituent in the treebank treefor this reason we compute the correspondence between the ccg derivation and the propbank labels at the level of head wordsfor each role label for a verbs argument in propbank we first find the head word for its constituent according to the the head rules of we then look for the label of the ccg relation between this head word and the verb itselfin previous work using the propbank corpus gildea and palmer developed a system to predict semantic roles from sentences and their parse trees as determined by the statistical parser of collins we will briefly review their probability model before adapting the system to incorporate features from the ccg derivationsfor the treebankbased system we use the probability model of gildea and palmer probabilities of a parse constituent belonging to a given semantic role are calculated from the following features the phrase type feature indicates the syntactic type of the phrase expressing the semantic roles examples include noun phrase verb phrase and clause the parse tree path feature is designed to capture the syntactic relation of a constituent to the 
predicateit is defined as the path from the predicate through the parse tree to the constituent in question represented as a string of parse tree nonterminals linked by symbols indicating upward or downward movement through the tree as shown in figure 2although the path is composed as a string of symbols our systems will treat the string as an atomic valuethe path includes as the first element of the string the part of speech of the predicate and as the last element the phrase type or syntactic category of the sentence constituent marked as an argumenthe ate some pancakes figure 2 in this example the path from the predicate ate to the argument np he can be represented as vbtvpts1np with t indicating upward movement in the parse tree and 1 downward movementthe position feature simply indicates whether the constituent to be labeled occurs before or after the predicatethis feature is highly correlated with grammatical function since subjects will generally appear before a verb and objects afterthis feature may overcome the shortcomings of reading grammatical function from the parse tree as well as errors in the parser outputthe voice feature distinguishes between active and passive verbs and is important in predicting semantic roles because direct objects of active verbs correspond to subjects of passive verbsan instance of a verb was considered passive if it is tagged as a past participle unless it occurs as a descendent verb phrase headed by any form of have without an intervening verb phrase headed by any form of be the head word is a lexical feature and provides information about the semantic type of the role fillerhead words of nodes in the parse tree are determined using the same deterministic set of head word rules used by collins the system attempts to predict argument roles in new data looking for the highest probability assignment of roles ri to all constituents i in the sentence given the set of features fi pti pathi posi vi hi at each constituent in the parse tree and the predicate p we break the probability estimation into two parts the first being the probability p of a constituents role given our five features for the consituent and the predicate p due to the sparsity of the data it is not possible to estimate this probability from the counts in the training datainstead probabilities are estimated from various subsets of the features and interpolated as a linear combination of the resulting distributionsthe interpolation is performed over the most specific distributions for which data are available which can be thought of as choosing the topmost distributions available from a backoff lattice shown in figure 3the probabilities p are combined with the probabilities p for a set of roles appearing in a sentence given a predicate using the following formula this approach described in more detail in gildea and jurafsky allows interaction between the role assignments for individual constituents while making certain independence assumptions necessary for efficient probability estimationin particular we assume that sets of roles appear independent of their linear order and that the features f of a constituents are independent of other constituents features given the constituents rolein the ccg version we replace the features above with corresponding features based on both the sentences ccg derivation tree and the ccg predicateargument relations extracted from it the parse tree path feature designed to capture grammatical relations between constituents is replaced with a feature defined 
as follows if there is a dependency in the predicateargument structure of the ccg derivation between two words w and w the path feature from w to w is defined as the lexical category of the functor the argument slot i occupied by the argument plus an arrow to indicate whether w or w is the categorial functorfor example in our sentence london denied plans on monday the relation connecting the verb denied with plans is np2 with the left arrow indicating the lexical category included in the relation is that of the verb while the relation connecting denied with on is np2 with the right arrow indicating the the lexical category included in the relation is that of the modifierif the ccg derivation does not define a predicateargument relation between the two words we use the parse tree path feature described above defined over the ccg derivation treein our training data 77 of propbank arguments corresponded directly to a relation in the ccg predicateargument representation and the path feature was used for the remaining 23most of these mismatches arise because the ccg parser and propbank differ in their definition of head wordsfor instance the ccg parser always assumes that the head of a pp is the preposition whereas propbank roles can be assigned to the entire pp or only to the np argument of the preposition in which case the head word comes from the np in embedded clauses ccg assumes that the head is the complementizer whereas in propbank the head comes from the embedded sentence itselfin complex verb phrases the ccg parser assumes that the first auxiliary is head whereas propbank assumes it is the main verb therefore ccg assumes that not modifies might whereas propbank assumes it modifies gonealthough the head rules of the parser could in principle be changed to reflect more directly the dependencies in propbank we have not attempted to do so yetfurther mismatches occur because the predicateargument structure returned by the ccg parser only contains syntactic dependencies whereas the propbank data also contain some anaphoric dependencies eg such dependencies also do not correspond to a relation in the predicateargument structure of the ccg derivation and because the path feature to be usedthe phrase type feature is replaced with the lexical category of the maximal projection of the propbank arguments head word in the ccg derivation treefor example the category of plans is n and the category of denied is npthe voice feature can be read off the ccg categories since the ccg categories of past participles carry different features in active and passive voice np or spssnpthe head word of a constituent is indicated in the derivations returned by the ccg parserwe use data from the november 2002 release of propbankthe dataset contains annotations for 72109 predicateargument structures with 190815 individual arguments and has includes examples from 2462 lexical predicates annotations from sections 2 through 21 of the treebank were used for training section 23 was the test setboth parsers were trained on sections 2 through 21because of the mismatch between the constituent structures of ccg and the treebank we score both systems according to how well they identify the head words of propbanks argumentstable 2 gives the performance of the system on both propbanks core or numbered arguments and on all propbank roles including the adjunctlike argm rolesin order to analyze the impact of errors in the syntactic parses we present results using features extracted from both automatic parser output and the gold 
standard parses in the penn treebank and in ccgbankusing the gold standard parses provides an upper bound on the performance of the system based on automatic parsessince the collins parser does not provide trace information its upper bound is given by the system tested on the goldstandard treebank representation with traces removedin table 2 core indicates results on propbanks numbered arguments only and all includes numbered arguments as well as the argm rolesmost of the numbered arguments correspond to arguments that the ccg category of the verb directly subcategorizes forthe ccgbased system outperforms the system based on the collins parser on these core arguments and has comparable performance when all propbank labels are consideredwe believe that the superior performance of the ccg system on this core arguments is due to its ability to recover longdistance dependencies whereas we attribute its lower performance on noncore arguments mainly to the mismatches between propbank and ccgbankthe importance of longrange dependencies for our task is indicated by the fact that the performance on the penn treebank gold standard without traces is significantly lower than that on the penn treebank with trace informationlongrange dependencies are especially important for core arguments shown by the fact that removing trace information from the treebank parses results in a bigger drop for core arguments than for all roles the ability of the ccg parser to recover these longrange dependencies accounts for its higher performance and in particular its higher recall on core argumentsthe ccg gold standard performance is below that of the penn treebank gold standard with traceswe believe this performance gap to be caused by the mismatches between the ccg analyses and the propbank annotations described in section 52for the reasons described the head words of the constituents that have propbank roles are not necessarily the head words that stand in a predicateargument relation in ccgbankif two words do not stand in a predicateargument relation the ccg system takes recourse to the path featurethis feature is much sparser in ccg since ccg categories encode subcategorization information the number of categories in ccgbank is much larger than that of penn treebank labelsanalysis of our systems output shows that the system trained on the penn treebank gold standard obtains 555 recall on those relations that require the ccg path feature whereas the system using ccgbank only achieves 369 recall on thesealso in ccg the complementadjunct distinction is represented in the categories for the complement or adjunct and in the categories for the head pp or sdclnpin generating the ccgbank various heuristics were used to make this distinctionin particular for pps it depends on the closelyrelated function tag which is known to be unreliablethe decisions made in deriving the ccgbank often do not match the handannotated complementadjunct distinctions in propbank and this inconsistency is likely to make our ccgbankbased features less predictivea possible solution is to regenerate the ccgbank using the propbank annotationsthe impact of our headword based scoring is analyzed in table 3 which compares results when only the head word must be correctly identified and to results when both the beginning and end of the argument must be correctly identified in the sentence even if the head word is given the correct label the boundaries of the entire argument may be different from those given in the propbank annotationsince constituents 
in ccgbank do not always match those in propbank even the ccg gold standard parses obtain comparatively low scores according to this metricthis is exacerbated when automatic parses are consideredour ccgbased system for automatically labeling verb arguments with propbankstyle semantic roles outperforms a system using a traditional treebankbased parser for core arguments which comprise 75 of the role labels but scores lower on adjunctlike roles such as temporals and locativesthe ccg parser returns predicateargument structures that include longrange dependencies therefore it seems inherently better suited for this taskhowever the performance of our ccg system is lowered by the fact that the syntactic analyses in its training corpus differ from those that underlie propbank in important ways we would expect a higher performance for the ccgbased system if the analyses in ccgbank resembled more closely those in propbankour results also indicate the importance of recovering longrange dependencies either through the trace information in the penn treebank or directly as in the predicateargument structures returned by the ccg parserwe speculate that much of the performance improvement we show could be obtained with traditional parsers if they were designed to recover more of the information present in the penn treebank in particular the trace coindexationan interesting experiment would be the application of our rolelabeling system to the output of the trace recovery system of johnson our results also have implications for parser evaluation as the most frequently used constituentbased precision and recall measures do not evaluate how well longrange dependencies can be recovered from the output of a parsermeasures based on dependencies such as those of lin and carroll et al are likely to be more relevant to realworld applications of parsingacknowledgments this work was supported by the institute for research in cognitive science at the university of pennsylvania the propbank project an epsrc studentship and grant grm96889 and nsf itr grant 0205 456we thank mark steedman martha palmer and alexandra kinyon for their comments on this work
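Since the probability model described above backs off over feature subsets, a minimal version is easy to write down. The sketch below estimates P(r | f, p) by relative frequency for every feature subset seen in training and averages the available distributions with equal weights; the actual system interpolates only the topmost available distributions in its backoff lattice with its own weighting, so this is a deliberately simplified stand-in with illustrative feature names.

```python
# Minimal sketch of a backed-off estimate of P(role | features, predicate):
# relative-frequency distributions are collected for every feature subset seen
# in training and averaged at prediction time. Equal weights over all observed
# subsets are a simplification of the paper's backoff lattice.
from collections import Counter, defaultdict
from itertools import combinations

FEATURES = ("pt", "path", "pos", "voice", "hw", "pred")

def subsets(feats):
    for r in range(1, len(FEATURES) + 1):
        for names in combinations(FEATURES, r):
            yield tuple((name, feats[name]) for name in names)

class RoleModel:
    def __init__(self):
        self.counts = defaultdict(Counter)         # subset key -> Counter over roles

    def train(self, labelled):                     # iterable of (feature dict, role) pairs
        for feats, role in labelled:
            for key in subsets(feats):
                self.counts[key][role] += 1

    def prob(self, feats, role):
        dists = [self.counts[key] for key in subsets(feats) if key in self.counts]
        if not dists:
            return 0.0
        return sum(d[role] / sum(d.values()) for d in dists) / len(dists)
```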
W03-1008
Identifying Semantic Roles Using Combinatory Categorial Grammar. We present a system for automatically identifying PropBank-style semantic roles based on the output of a statistical parser for Combinatory Categorial Grammar (CCG). This system performs at least as well as a system based on a traditional Treebank parser, and outperforms it on core argument roles. We find that using features extracted from a Combinatory Categorial Grammar representation improves semantic labeling performance on core arguments.
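The head-word alignment between PropBank constituents and CCG dependencies described in the paper can be sketched as a simple lookup, shown below. The data structures, the category strings, the argument-slot numbers, and the ASCII arrows are illustrative placeholders standing in for CCGbank's actual representation, and the head-finding step is left abstract.

```python
# Minimal sketch of aligning PropBank roles with CCG predicate-argument
# dependencies at the head-word level: each role's head word is looked up among
# the dependencies involving the predicate, and the CCG category plus argument
# slot becomes the path feature (ASCII <- / -> stand in for the arrows).

def align_roles(roles, ccg_deps, head_of):
    """roles: (role_label, constituent_span, predicate) triples from PropBank;
       ccg_deps: (functor_word, argument_word) -> (lexical_category, slot);
       head_of: head-finding function over constituent spans."""
    for label, span, pred in roles:
        arg_head = head_of(span)
        if (pred, arg_head) in ccg_deps:
            cat, slot = ccg_deps[(pred, arg_head)]
            yield label, f"{cat}.{slot}<-"     # the predicate's category is the functor
        elif (arg_head, pred) in ccg_deps:
            cat, slot = ccg_deps[(arg_head, pred)]
            yield label, f"{cat}.{slot}->"     # a modifier's category is the functor
        else:
            yield label, None                  # no dependency: fall back to the tree path

# toy dependencies for "london denied plans on monday"
deps = {("denied", "london"): ("(s[dcl]\\np)/np", 1),
        ("denied", "plans"): ("(s[dcl]\\np)/np", 2)}
roles = [("ARG0", ("london",), "denied"), ("ARG1", ("plans",), "denied")]
print(list(align_roles(roles, deps, head_of=lambda span: span[0])))
```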
a general framework for distributional similarity we present a general framework for distributional similarity based on the concepts of precision and recall different parameter settings within this framework approximate different existing similarity measures as well as many more which have until now been unexplored we show that optimal parameter settings outperform two existing stateoftheart similarity measures on two evaluation tasks for high and low frequency nouns there are many potential applications of sets of distributionally similar wordsin the syntactic domain language models which can be used to evaluate alternative interpretations of text and speech require probabilistic information about words and their cooccurrences which is often not available due to the sparse data problemin order to overcome this problem researchers have proposed estimating probabilities based on sets of words which are known to be distributionally similarin the semantic domain the hypothesis that words which mean similar things behave in similar ways has led researchers to propose that distributional similarity might be used as a predictor of semantic similarityaccordingly we might automatically build thesauruses which could be used in tasks such as malapropism correction and text summarization however the loose definition of distributional similarity that two words are distributionally similar if they appear in similar contexts has led to many distributional similarity measures being proposed for example the l1 norm the euclidean distance the cosine metric jaccard coefficient the dice coefficient the kullbackleibler divergence the jensonshannon divergence the askew divergence the confusion probability hindle mutual informationbased measure and lin mibased measure further there is no clear way of deciding which is the best measureapplicationbased evaluation tasks have been proposed yet it is not clear whether there is or should be one distributional similarity measure which outperforms all other distributional similarity measures on all tasks and for all wordswe take a generic approach that does not directly reduce distributional similarity to a single dimensionthe way dimensions are combined together will depend on parameters tuned to the demands of a given applicationfurther different parameter settings will approximate different existing similarity measures as well as many more which have until now been unexploredthe contributions of this paper are fourfoldfirst we propose a general framework for distributional similarity based on the concepts of precision and recall second we evaluate the framework at its optimal parameter settings for two different applications showing that it outperforms existing stateoftheart similarity measures for both high and low frequency nounsthird we begin to investigate to what extent existing similarity measures might be characterised in terms of parameter settings within the framework fourth we provide an understanding of why a single existing measure cannot achieve optimal results in every application of distributional similarity measuresin this section we introduce the relevance of the information retrieval concepts of precision and recall in the context of word similaritywe provide combinatorial probabilistic and mutualinformation based models for precision and recall and discuss combining precision and recall to provide a single number in the context of a particular applicationthe similarity of two nouns can be viewed as a measure of how appropriate it is to use one 
noun in place of the otherif we are using the distribution of one noun in place of the distribution the other noun we can consider the precision and recall of the prediction madeprecision tells us how much of what has been predicted is correct whilst recall tells us how much of what is required has been predictedin order to calculate precision and recall we first need to consider for each noun n which verb cooccurrences will be predicted by it and conversely required in a description of itwe will refer to these verbs as the features of n f where d is the degree of association between noun n and verb v possible association functions will be defined in the context of each model described belowif we are considering the ability of noun a to predict noun b then it follows that the set of true positives is tp f n f and precision and recall can be defined as precision and recall both lie in the range 01 and are both equal to one when each noun has exactly the same featuresit should also be noted that ra b pwe will now consider some different possibilities for measuring the degree of association between a noun n and a verb v in the combinatorial model we simply consider whether a verb has ever been seen to cooccur with the nounin other words the degree of association between a noun n and a verb v is 1 if they have cooccurred together and 0 otherwisein this case it should be noted that the definitions of precision and recall can be simplified as follows in the probabilistic model more probable cooccurrences are considered more significantthe degree of association between a noun n and verb v is defined in the probabilistic model as the definitions for feature set membership tp precision and recall all remain the same except for the use of the new association functionusing the probabilistic model the precision of a prediction of b is the probability that a verb picked at random from those cooccurring with a will also cooccur with b and the recall of a prediction of b is the probability that a verb picked at random from those those cooccurring with b will also cooccur with amutual information allows us to capture the idea that a cooccurrence of low probability events is more informative than a cooccurrence of high probability eventsin this model as before we retain the definitions for feature set membership tp precision and recall but again change the association functionhere the degree of association between a noun n and a verb v is their miaccordingly verb v will be considered to be a feature of noun n if the probability of their cooccurrence is greater than would be expected if verbs and nouns occurred independentlyalthough we have defined a pair of numbers for similarity in applications it will still be necessary to compute a single number in order to determine neighbourhood or cluster membershipthere are two obvious ways to optimise a pair of numbers such as precision and recallthe first is to use an arithmetic mean which optimises the sum of the numbers and the second is to use a harmonic mean2 which optimises the product of the numbersin an attempt to retain generality we can allow both alternatives by computing an arithmetic mean of the harmonic mean and the arithmetic mean noting that the relative importance of each term in an arithmetic mean is controlled by weights where both and y lie in the range 01the resulting similarity sim will also lie in the range 01 where 0 represents complete lack of similarity and 1 represents equivalencethis formula can be used in combination with any of the 
models for precision and recall outlined abovefurther the generality allows us to investigate empirically the relative significance of the different terms and thus whether one might be omitted in future workprecision and recall can be computed once for every pair of words whereas similarity is something which will be computed for a specific task and will depend on the values of 3 and ytable 1 summarizes some special parameter settingsin this section we evaluate the performance of the framework using the combinatorial and mibased models of precision and recall at two application based tasks against lin mibased measure and the askew divergence measure the formulae for these measures are given in figure 1for the askew divergence measure we set a 099 since this most closely approximates the kullbackleibler divergence measurethe two evaluation tasks used pseudodisambiguation and wordnet prediction are fairly standard for distributional similarity measureshowever in the future we wish to extend our evaluation to other tasks such as malapropism correction and ppattachment ambiguity resolution and also to the probabilistic modelsince we use the same data and methodology as in earlier work some detail is omitted in the subsequent discussion but full details and rationale can be found in weeds and weir pseudodisambiguation tasks have become a standard evaluation technique and in the current context we may use a word neighbours to decide which of two cooccurrences is the most likelyalthough pseudodisambiguation itself is an artificial task it has relevance in at least two real application areasfirst by replacing occurrences of a particular word in a test suite with a pair or set of words from which a technique must choose we recreate a simplified version of the word sense disambiguation task that is choosing between a fixed number of homonyms based on local contextthe second is in language modelling where we wish to estimate the probability of cooccurrences of events but due to the sparse data problem it is often the case that a possible cooccurrence has not been seen in the training dataas is common in this field we study similarity between nouns based on their cooccurrences with verbs in the direct object relationwe study similarity between high and low frequency nouns since we want to investigate any associations between word frequency and quality of neighbours found by the measures but it is impractical to evaluate a large number of similarity measures over all nouns2852300 lemmatised directobject pairs were extracted from the bnc using a shallow parser from those nouns also occurring in wordnet we selected the 1000 most frequent3 nouns and a set of 1000 low frequency4 nounsfor each noun 80 of the available data was randomly selected as training data and the other 20 set aside as test dataprecision and recall were computed for each pair of nouns using the combinatorial and mi modelsthis data is then available to the application task which will first have to compute the similarity for each pair of nouns based on current parameter settings and select nearest neighbours accordinglywe converted each nounverb pair in the setaside test data into a nounverbverb triple where p is approximately equal to p over all the training data and has not been seen in the test or training dataa high frequency noun test set and a low frequency noun test set each containing 10000 test instances were then constructed by selecting ten test instances for each noun in a two step process of 1 whilst more than ten triples 
remained discarding duplicate triples and 2 randomly selecting ten triples from those remaining after step 1each set of test triples was split into five disjoint subsets containing two triples for each noun so that average performance and standard error could be computedadditionally three of the five subsets were used as a development set to optimise parameters and the remaining two used as a test set to find error rates at these optimal settingsthe task is then for the nearest neighbours of noun n to decide which of and was the original cooccurrenceeach of n neighbours m is given a vote which is equal to the difference in frequencies of the cooccurrences and and which it casts to the cooccurrence in which it appears most frequentlythe votes for each cooccurrence are summed over all of the k nearest neighbours of n and the cooccurrence with the most votes winsperformance is measured as error rate of ties error t1 2 where t is the number of test instances is that the hyponymy relation in wordnet is a gold standard for semantic similarity which is of course not truehowever we believe that a distributional similarity measure which more closely predicts wordnet is more likely to be a good predictor of semantic similaritywe will first explain the wordnetbased distance measure and then explain how we determine the similarity between neighbour sets generated using different measuresthe similarity of two nouns in wordnet is defined as the similarity of their maximally similar sensesthe commonality of two concepts is defined as the maximally specific superclass of those conceptsso if syn is the set of senses of the noun n in wordnet sup is the set of superclasses of concept c in wordnet and p is the probability that a randomly selected noun refers to an instance of c then the similarity between ni and n2 can be calculated using the formula for simwn in figure 1the probabilities p are estimated by the frequencies of concepts in semcor a sensetagged subset of the brown corpus noting that the occurrence of a concept refers to instances of all the superclasses of that concept 1the k nearest neighbours7 of each noun computed using each distributional similarity measure at each parameter setting are then compared with the k nearest neighbours of the noun according to the wordnet based measurein order to compute the similarity of two neighbour sets we transform each neighbour set so that each neighbour is given a rank score of k rankwe do not use the similarity scores directly since these require normalization if different similarity measures are to be comparedhaving performed this transformation the neighbour sets for the same word w may be represented by two ordered sets of words wk w1 and w wlthe similarity between such sets is computed using the same calculation as used by lin except for simplifications due to the use of ranks where i and j are the rank scores of the words within each neighbour settable 3 summarizes the optimal mean similarities and parameter settings for the general framework using both the combinatorial and the mibased modelsresults for lin mibased measure and the askew divergence measure are also given and results are divided into those for high frequency nouns and those for low frequency nounsstandard errors in the optimal mean similarities are not given but were of the order of 01our first observation is that the general framework using the mibased model for precision and recall outperforms all of the other distributional similarity measureswe also observe that lower values of y 
produce better results particularly for low frequency nounsfor example when y 1 similarity for low frequency nouns drops to 0147 using the combinatorial model and 0177 using the mibased modelthird from figure 3 it appears that this wordnet prediction task favours measures which select high recall neighboursalthough optimum similarity for the combinatorial model occurs at 805 similarity is always higher for lower values of than for higher values of 3 ing the askew divergence measure and those found using the mibased modeloptimal similarity was found at y 00 and i3 00 for high frequency nouns and at y 025 and 3 00 for low frequency nounsfurther similarity between the measures drops rapidly once i3 rises above 03using the mibased model for precision and recall and with a parameter setting of y 10 the general framework for distributional similarity proposed herein closely approximates lin measurehowever we have shown that using a much lower value of y so that the combination of precision and recall is closer to a weighted arithmetic mean than a harmonic mean yields better results in the two application tasks considered herethis is because the relative importance of precision and recall can be tuned to the task at handfurther we have shown that pseudodisambiguation is a task which requires high precision neighbours whereas wordnet prediction is a task which requires high recall neighboursaccordingly it is not clear how a single similarity measure could give optimum results on both tasksin the future we intend to extend the work to the characterisation of other tasks and other existing similarity measuresas well as their usually implicit use of precision and recall the main difference between existing similarity measures will be the models in which precision and recall are definedwe have explored two such models here a combinatorial model and a mibased model and have shown that the mibased model achieves significantly improved results over the combinatorial modelwe propose to investigate other models such as the probabilistic one given in section 23we would like to thank john carroll for the use of his parser adam kilgarriff and bill keller for valuable discussions and the uk epsrc for its studentship to the first author
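Because the framework reduces to a few sums over feature weights, a compact sketch is given below. It implements the MI-based model (verbs with positive pointwise mutual information as features, weighted by that PMI) together with the beta/gamma combination of precision and recall; the toy counts, the unsmoothed PMI estimate, and the weighted form of the precision and recall sums are simplifying assumptions.

```python
# Minimal sketch of the MI-based precision/recall framework: a noun's features
# are the verbs with positive pointwise mutual information, weighted by that
# PMI, and similarity combines precision and recall with the beta/gamma weights.
# Counts are toy data and the PMI estimate is unsmoothed.
import math
from collections import Counter

def mi_features(cooc, noun, verb_totals, grand_total):
    feats = {}
    noun_total = sum(cooc[noun].values())
    for verb, c in cooc[noun].items():
        mi = math.log((c / grand_total) /
                      ((noun_total / grand_total) * (verb_totals[verb] / grand_total)))
        if mi > 0:
            feats[verb] = mi                    # degree of association d(noun, verb)
    return feats

def precision_recall(fa, fb):
    tp = set(fa) & set(fb)
    if not tp:
        return 0.0, 0.0
    return (sum(fa[v] for v in tp) / sum(fa.values()),
            sum(fb[v] for v in tp) / sum(fb.values()))

def similarity(prec, rec, beta=0.5, gamma=0.5):
    if prec + rec == 0:
        return 0.0
    harmonic = 2 * prec * rec / (prec + rec)
    return gamma * harmonic + (1 - gamma) * (beta * prec + (1 - beta) * rec)

cooc = {"pint": Counter({"drink": 5, "pull": 2}),
        "beer": Counter({"drink": 6, "brew": 3}),
        "water": Counter({"boil": 4})}
verb_totals = Counter()
for n in cooc:
    verb_totals.update(cooc[n])
grand = sum(verb_totals.values())
fa = mi_features(cooc, "pint", verb_totals, grand)
fb = mi_features(cooc, "beer", verb_totals, grand)
print(similarity(*precision_recall(fa, fb), beta=0.5, gamma=0.0))
```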
W03-1011
A General Framework for Distributional Similarity. We present a general framework for distributional similarity based on the concepts of precision and recall. Different parameter settings within this framework approximate different existing similarity measures, as well as many more which have until now been unexplored. We show that optimal parameter settings outperform two existing state-of-the-art similarity measures on two evaluation tasks, for high- and low-frequency nouns. We propose a general framework for distributional similarity that consists of notions of precision and recall.
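The pseudo-disambiguation vote used for evaluation in this framework is also easy to sketch: each of the target noun's k nearest neighbours backs whichever of the two candidate verbs it co-occurs with more often, weighted by the difference in those frequencies. The sketch below assumes toy co-occurrence counts and treats an empty or split vote as a tie.

```python
# Minimal sketch of the pseudo-disambiguation vote: each nearest neighbour m of
# the target noun casts a vote of size |f(m, v1) - f(m, v2)| for the candidate
# verb it co-occurs with more often; the verb with the most votes wins.
from collections import defaultdict

def pseudo_disambiguate(neighbours, v1, v2, freq):
    """neighbours: the k nearest neighbours of the target noun;
       freq[(noun, verb)]: co-occurrence counts from the training data."""
    votes = defaultdict(int)
    for m in neighbours:
        f1, f2 = freq.get((m, v1), 0), freq.get((m, v2), 0)
        if f1 != f2:
            votes[v1 if f1 > f2 else v2] += abs(f1 - f2)
    if not votes or votes[v1] == votes[v2]:
        return None                            # counted as a tie in the error rate
    return max(votes, key=votes.get)

freq = {("ale", "drink"): 7, ("lager", "drink"): 4, ("lager", "drive"): 1}
print(pseudo_disambiguate(["ale", "lager"], "drink", "drive", freq))   # -> drink
```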
learning extraction patterns for subjective expressions this paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective expressions highprecision classifiers label unannotated data to automatically create a large training set which is then given to an extraction pattern learning algorithm the learned patterns are then used to identify more subjective sentences the bootstrapping process learns many subjective patterns and increases recall while maintaining high precision many natural language processing applications could benefit from being able to distinguish between factual and subjective informationsubjective remarks come in a variety of forms including opinions rants allegations accusations suspicions and speculationsideally information extraction systems should be able to distinguish between factual information and nonfactual information question answering systems should distinguish between factual and speculative answersmultiperspective question answering aims to present multiple answers to the user based upon speculation or opinions derived from different sourcesmultidocument summarization systems need to summarize different opinions and perspectivesspam filtering systems must recognize rants and emotional tirades among other thingsin general nearly any system that seeks to identify information could benefit from being able to separate factual and subjective informationsome existing resources contain lists of subjective words and some empirical methods in nlp have automatically identified adjectives verbs and ngrams that are statistically associated with subjective language however subjective language can be exhibited by a staggering variety of words and phrasesin addition many subjective terms occur infrequently such as strongly subjective adjectives and metaphorical or idiomatic phrases consequently we believe that subjectivity learning systems must be trained on extremely large text collections before they will acquire a subjective vocabulary that is truly broad and comprehensive in scopeto address this issue we have been exploring the use of bootstrapping methods to allow subjectivity classifiers to learn from a collection of unannotated textsour research uses highprecision subjectivity classifiers to automatically identify subjective and objective sentences in unannotated textsthis process allows us to generate a large set of labeled sentences automaticallythe second emphasis of our research is using extraction patterns to represent subjective expressionsthese patterns are linguistically richer and more flexible than single words or ngramsusing the labeled sentences as training data we apply an extraction pattern learning algorithm to automatically generate patterns representing subjective expressionsthe learned patterns can be used to automatically identify more subjective sentences which grows the training set and the entire process can then be bootstrappedour experimental results show that this bootstrapping process increases the recall of the highprecision subjective sentence classifier with little loss in precisionwe also find that the learned extraction patterns capture subtle connotations that are more expressive than the individual words by themselvesthis paper is organized as followssection 2 discusses previous work on subjectivity analysis and extraction pattern learningsection 3 overviews our general approach describes the highprecision subjectivity classifiers and explains the algorithm for learning extraction patterns 
associated with subjectivitysection 4 describes the data that we use presents our experimental results and shows examples of patterns that are learnedfinally section 5 summarizes our findings and conclusionsmuch previous work on subjectivity recognition has focused on documentlevel classificationfor example developed a system to identify inflammatory texts and developed methods for classifying reviews as positive or negativesome research in genre classification has included the recognition of subjective genres such as editorials in contrast the goal of our work is to classify individual sentences as subjective or objectivedocumentlevel classification can distinguish between subjective texts such as editorials and reviews and objective texts such as newspaper articlesbut in reality most documents contain a mix of both subjective and objective sentencessubjective texts often include some factual informationfor example editorial articles frequently contain factual information to back up the arguments being made and movie reviews often mention the actors and plot of a movie as well as the theatres where it is currently playingeven if one is willing to discard subjective texts in their entirety the objective texts usually contain a great deal of subjective information in addition to factsfor example newspaper articles are generally considered to be relatively objective documents but in a recent study 44 of sentences in a news collection were found to be subjective one of the main obstacles to producing a sentencelevel subjectivity classifier is a lack of training datato train a documentlevel classifier one can easily find collections of subjective texts such as editorials and reviewsfor example collected reviews from a movie database and rated them as positive negative or neutral based on the rating given by the reviewerit is much harder to obtain collections of individual sentences that can be easily identified as subjective or objectiveprevious work on sentencelevel subjectivity classification used training corpora that had been manually annotated for subjectivitymanually producing annotations is time consuming so the amount of available annotated sentence data is relatively smallthe goal of our research is to use highprecision subjectivity classifiers to automatically identify subjective and objective sentences in unannotated text corporathe highprecision classifiers label a sentence as subjective or objective when they are confident about the classification and they leave a sentence unlabeled otherwiseunannotated texts are easy to come by so even if the classifiers can label only 30 of the sentences as subjective or objective they will still produce a large collection of labeled sentencesmost importantly the highprecision classifiers can generate a much larger set of labeled sentences than are currently available in manually created data setsinformation extraction systems typically use lexicosyntactic patterns to identify relevant informationthe specific representation of these patterns varies across systems but most patterns represent role relationships surrounding noun and verb phrasesfor example an ie system designed to extract information about hijackings might use the pattern hijacking of which looks for the noun hijacking and extracts the object of the preposition of as the hijacked vehiclethe pattern was hijacked would extract the hijacked vehicle when it finds the verb hijacked in the passive voice and the pattern hijacked would extract the hijacker when it finds the verb hijacked 
in the active voiceone of our hypotheses was that extraction patterns would be able to represent subjective expressions that have noncompositional meaningsfor example consider the common expression drives up the wall which expresses the feeling of being annoyed with somethingthe meaning of this expression is quite different from the meanings of its individual words furthermore this expression is not a fixed word sequence that could easily be captured by ngramsit is a relatively flexible construction that may be more generally represented as drives up the wall where x and y may be arbitrary noun phrasesthis pattern would match many different sentences such as george drives me up the wall she drives the mayor up the wall or the nosy old man drives his quiet neighbors up the wall we also wondered whether the extraction pattern representation might reveal slight variations of the same verb or noun phrase that have different connotationsfor example you can say that a comedian bombed last night which is a subjective statement but you cannot express this sentiment with the passive voice of bombedin section 32 we will show examples of extraction patterns representing subjective expressions which do in fact exhibit both of these phenomenaa variety of algorithms have been developed to automatically learn extraction patternsmost of these algorithms require special training resources such as texts annotated with domainspecific tags crystal rapier srv whisk or manually defined keywords frames or object recognizers and liep autoslogts takes a different approach requiring only a corpus of unannotated texts that have been separated into those that are related to the target domain and those that are not most recently two bootstrapping algorithms have been used to learn extraction patternsmetabootstrapping learns both extraction patterns and a semantic lexicon using unannotated texts and seed words as inputexdisco uses a bootstrapping mechanism to find new extraction patterns using unannotated texts and some seed patterns as the initial inputfor our research we adopted a learning process very similar to that used by autoslogts which requires only relevant texts and irrelevant texts as its inputwe describe this learning process in more detail in the next sectionwe have developed a bootstrapping process for subjectivity classification that explores three ideas highprecision classifiers can be used to automatically identify subjective and objective sentences from unannotated texts this data can be used as a training set to automatically learn extraction patterns associated with subjectivity and the learned patterns can be used to grow the training set allowing this entire process to be bootstrappedfigure 1 shows the components and layout of the bootstrapping processthe process begins with a large collection of unannotated text and two high precision subjectivity classifiersone classifier searches the unannotated corpus for sentences that can be labeled as subjective with high confidence and the other classifier searches for sentences that can be labeled as objective with high confidenceall other sentences in the corpus are left unlabeledthe labeled sentences are then fed to an extraction pattern learner which produces a set of extraction patterns that are statistically correlated with the subjective sentences these patterns are then used to identify more sentences within the unannotated texts that can be classified as subjectivethe extraction pattern learner can then retrain using the larger training set and 
the process repeatsthe subjective patterns can also be added to the highprecision subjective sentence classifier as new features to improve its performancethe dashed lines in figure 1 represent the parts of the process that are bootstrappedin this section we will describe the highprecision sentence classifiers the extraction pattern learning process and the details of the bootstrapping processthe highprecision classifiers use lists of lexical items that have been shown in previous work to be good subjectivity cluesmost of the items are single words some are ngrams but none involve syntactic generalizations as in the extraction patternsany data used to develop this vocabulary does not overlap with the test sets or the unannotated data used in this papermany of the subjective clues are from manually developed resources including entries from framenet lemmas with frame element experiencer adjectives manually annotated for polarity and subjectivity clues listed in others were derived from corpora including subjective nouns learned from unannotated data using bootstrapping the subjectivity clues are divided into those that are strongly subjective and those that are weakly subjective using a combination of manual review and empirical results on a small training set of manually annotated dataas the terms are used here a strongly subjective clue is one that is seldom used without a subjective meaning whereas a weakly subjective clue is one that commonly has both subjective and objective usesthe highprecision subjective classifier classifies a sentence as subjective if it contains two or more of the strongly subjective clueson a manually annotated test set this classifier achieves 915 precision and 319 recall this test set consists of 2197 sentences 59 of which are subjectivethe highprecision objective classifier takes a different approachrather than looking for the presence of lexical items it looks for their absenceit classifies a sentence as objective if there are no strongly subjective clues and at most one weakly subjective clue in the current previous and next sentence combinedwhy does not the objective classifier mirror the subjective classifier and consult its own list of strongly objective cluesthere are certainly lexical items that are statistically correlated with the objective class and words such as per case market and total but the presence of such clues does not readily lead to high precision objective classificationadd sarcasm or a negative evaluation to a sentence about a dry topic such as stock prices and the sentence becomes subjectiveconversely add objective topics to a sentence containing two strongly subjective words such as odious and scumbag and the sentence remains subjectivethe performance of the highprecision objective classifier is a bit lower than the subjective classifier 826precision and 164 recall on the test set mentioned above although there is room for improvement the performance proved to be good enough for our purposesto automatically learn extraction patterns that are associated with subjectivity we use a learning algorithm similar to autoslogts for training autoslogts uses a text corpus consisting of two distinct sets of texts relevant texts and irrelevant texts a set of syntactic templates represents the space of possible extraction patternsthe learning process has two stepsfirst the syntactic templates are applied to the training corpus in an exhaustive fashion so that extraction patterns are generated for every possible instantiation of the templates that 
appears in the corpusthe left column of figure 2 shows the syntactic templates used by autoslogtsthe right column shows a specific extraction pattern that was learned during our subjectivity experiments as an instantiation of the syntactic form on the leftfor example the pattern <subj> was satisfied will match any sentence where the verb satisfied appears in the passive voicethe pattern <subj> dealt blow represents a more complex expression that will match any sentence that contains a verb phrase with head dealt followed by a direct object with head blowthis would match sentences such as the experience dealt a stiff blow to his pride it is important to recognize that these patterns look for specific syntactic constructions produced by a parser rather than exact word sequencesthe second step of autoslogtss learning process applies all of the learned extraction patterns to the training corpus and gathers statistics for how often each pattern occurs in subjective versus objective sentencesautoslogts then ranks the extraction patterns using a metric called rlogf and asks a human to review the ranked list and make the final decision about which patterns to keepin contrast for this work we wanted a fully automatic process that does not depend on a human reviewer and we were most interested in finding patterns that can identify subjective expressions with high precisionso we ranked the extraction patterns using a conditional probability measure the probability that a sentence is subjective given that a specific extraction pattern appears in itthe exact formula is pr(subjective | patterni) = subjfreq(patterni) / freq(patterni) where subjfreq(patterni) is the frequency of patterni in subjective training sentences and freq(patterni) is the frequency of patterni in all training sentencesfinally we use two thresholds to select extraction patterns that are strongly associated with subjectivity in the training datawe choose extraction patterns for which freq(patterni) >= θ1 and pr(subjective | patterni) >= θ2figure 3 shows some patterns learned by our system the frequency with which they occur in the training data and the percentage of times they occur in subjective sentences for example the first two rows show the behavior of two similar expressions using the verb asked100 of the sentences that contain asked in the passive voice are subjective but only 63 of the sentences that contain asked in the active voice are subjectivea human would probably not expect the active and passive voices to behave so differentlyto understand why this is so we looked in the training data and found that the passive voice is often used to query someone about a specific opinionfor example here is one such sentence from our training set ernest bai koroma of ritcorp was asked to address his supporters on his views relating to full blooded temne to head apc in contrast many of the sentences containing asked in the active voice are more general in nature such as the mayor asked a newly formed jr about his petition figure 3 also shows that expressions using talk as a noun are highly correlated with subjective sentences while talk as a verb are found in a mix of subjective and objective sentencesnot surprisingly longer expressions tend to be more idiomatic than shorter expressions (put an end vs put, is going to be vs is going, was expected from vs was expected)finally the last two rows of figure 3 show that expressions involving the noun fact are highly correlated with subjective expressionsthese patterns match sentences such as the fact is and is a fact which apparently are often used in subjective contextsthis example illustrates that the corpusbased learning method can find
phrases that might not seem subjective to a person intuitively but that are reliable indicators of subjectivitythe text collection that we used consists of englishlanguage versions of foreign news documents from fbis the yous foreign broadcast information servicethe data is from a variety of countriesour system takes unannotated data as input but we needed annotated data to evaluate its performancewe briefly describe the manual annotation scheme used to create the goldstandard and give interannotator agreement resultsin 2002 a detailed annotation scheme was developed for a governmentsponsored projectwe only mention aspects of the annotation scheme relevant to this paperthe scheme was inspired by work in linguistics and literary theory on subjectivity which focuses on how opinions emotions etc are expressed linguistically in context the goal is to identify and characterize expressions ofprivate states in a sentenceprivate state is a general covering term for opinions evaluations emotions and speculations for example in sentence the writer is expressing a negative evaluation the time has come gentlemen for sharon the assassin to realize that injustice cannot last long sentence reflects the private state of western countriesmugabes use of overwhelmingly also reflects a private state his positive reaction to and characterization of his victory western countries were left frustrated and impotent after robert mugabe formally declared that he had overwhelmingly won zimbabwes presidential election annotators are also asked to judge the strength of each private statea private state may have low medium high or extreme strengthto allow us to measure interannotator agreement three annotators independently annotated the same 13 documents with a total of 210 sentenceswe begin with a strict measure of agreement at the sentence level by first considering whether the annotator marked any privatestate expression of any strength anywhere in the sentenceif so the sentence is subjectiveotherwise it is objectivethe average pairwise percentage agreement is 90 and the average pairwise rc value is 077one would expect that there are clear cases of objective sentences clear cases of subjective sentences and borderline sentences in betweenthe agreement study supports thisin terms of our annotations we define a sentence as borderline if it has at least one privatestate expression identified by at least one annotator and all strength ratings of privatestate expressions are lowon average 11 of the corpus is borderline under this definitionwhen those sentences are removed the average pairwise percentage agreement increases to 95 and the average pairwise r value increases to 089as expected the majority of disagreement cases involve lowstrength subjectivitythe annotators consistently agree about which are the clear cases of subjective sentencesthis leads us to define the goldstandard that we use when evaluating our resultsa sentence is subjective if it contains at least one privatestate expression of medium or higher strengththe second class which we call objective consists of everything elseour pool of unannotated texts consists of 302163 individual sentencesthe bpsubj classifier initially labeled roughly 44300 of these sentences as subjective and the bpobj classifier initially labeled roughly 17000 sentences as objectivein order to keep the training set relatively balanced we used all 17000 objective sentences and 17000 of the subjective sentences as training data for the extraction pattern learner17073 extraction 
patterns were learned that have frequency 2 and pr 60 on the training datawe then wanted to determine whether the extraction patterns are in fact good indicators of subjectivityto evaluate the patterns we applied different subsets of them to a test set to see if they consistently occur in subjective sentencesthis test set consists of 3947 sentences 54 of which are subjectivefigure 4 shows sentence recall and pattern precision for the learned extraction patterns on the test setin this figure precision is the proportion of pattern instances found in the test set that are in subjective sentences and recall is the proportion of subjective sentences that contain at least one pattern instancewe evaluated 18 different subsets of the patterns by selecting the patterns that pass certain thresholds in the training datawe tried all combinations of 01 210 and 02 606570758085909510the data points corresponding to 012 are shown on the upper line in figure 4 and those corresponding to 0110 are shown on the lower linefor example the data point corresponding to 0110 and 0290 evaluates only the extraction patterns that occur at least 10 times in the training data and with a probability 90 overall the extraction patterns perform quite wellthe precision ranges from 71 to 85 with the expected tradeoff between precision and recallthis experiment confirms that the extraction patterns are effective at recognizing subjective expressionsin our second experiment we used the learned extraction patterns to classify previously unlabeled sentences from the unannotated text collectionthe new subjective sentences were then fed back into the extraction pattern learner to complete the bootstrapping cycle depicted by the rightmost dashed line in figure 1the patternbased subjective sentence classifier classifies a sentence as subjective if it contains at least one extraction pattern with 015 and 0210 on the training datathis process produced approximately 9500 new subjective sentences that were previously unlabeledsince our bootstrapping process does not learn new objective sentences we did not want to simply add the new subjective sentences to the training set or it would become increasingly skewed toward subjective sentencessince hpobj had produced roughly 17000 objective sentences used for training we used the 9500 new subjective sentences along with 7500 of the previously identified subjective sentences as our new training setin other words the training set that we used during the second bootstrapping cycle contained exactly the same objective sentences as the first cycle half of the same subjective sentences as the first cycle and 9500 brand new subjective sentenceson this second cycle of bootstrapping the extraction pattern learner generated many new patterns that were not discovered during the first cycle4248 new patterns were found that have 012 and 0260if we consider only the strongest extraction patterns 308 new patterns were found that had 0110 and 0210this is a substantial set of new extraction patterns that seem to be very highly correlated with subjectivityan open question was whether the new patterns provide additional coverageto assess this we did a simple test we added the 4248 new patterns to the original set of patterns learned during the first bootstrapping cyclethen we repeated the same analysis that we depict in figure 4in general the recall numbers increased by about 24 while the precision numbers decreased by less from 052in our third experiment we evaluated whether the learned patterns can improve the 
coverage of the highprecision subjectivity classifier to complete the bootstrapping loop depicted in the topmost dashed line of figure 1our hope was that the patterns would allow more sentences from the unannotated text collection to be labeled as subjective without a substantial drop in precisionfor this experiment we selected the learned extraction patterns that had 01 10 and 02 10 on the training set since these seemed likely to be the most reliable indicators of subjectivitywe modified the hpsubj classifier to use extraction patterns as followsall sentences labeled as subjective by the original hpsubj classifier are also labeled as subjective by the new versionfor previously unlabeled sentences the new version classifies a sentence as subjective if it contains two or more of the learned patterns or it contains one of the clues used by the original hpsubj classifier and at least one learned patterntable 1 shows the performance results on the test set mentioned in section 31 for both the original hpsubj classifier and the new version that uses the learned extraction patternsthe extraction patterns produce a 72 percentage point gain in coverage and only a 11 percentage point drop in precisionthis result shows that the learned extraction patterns do improve the performance ofthe highprecision subjective sentence classifier allowing it to classify more sentences as subjective with nearly the same high reliabilityhpsubj classifier which do not overlap in nonfunction words with any of the clues already known by the original systemfor each pattern we show an example sentence from our corpus that matches the patternthis research explored several avenues for improving the stateoftheart in subjectivity analysisfirst we demonstrated that highprecision subjectivity classification can be used to generate a large amount of labeled training data for subsequent learning algorithms to exploitsecond we showed that an extraction pattern learning technique can learn subjective expressions that are linguistically richer than individual words or fixed phraseswe found that similar expressions may behave very differently so that one expression may be strongly indicative of subjectivity but the other may notthird we augmented our original highprecision subjective classifier with these newly learned extraction patternsthis bootstrapping process resulted in substantially higher recall with a minimal loss in precisionin future work we plan to experiment with different configurations of these classifiers add new subjective language learners in the bootstrapping process and address the problem of how to identify new objective sentences during bootstrappingwe are very grateful to theresa wilson for her invaluable programming support and help with data preparation
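The pattern-ranking step described above reduces to a conditional probability estimate, pr(subjective | patterni) = subjfreq(patterni) / freq(patterni), with a pattern kept only when freq(patterni) >= θ1 and pr(subjective | patterni) >= θ2. The sketch below is a minimal Python illustration of that selection step under stated assumptions; it is not the authors' implementation, and the function name select_patterns, the input format (sets of pattern identifiers paired with a subjective or objective label coming from the high-precision classifiers), and the threshold defaults are all placeholders chosen for illustration.

```python
from collections import Counter

def select_patterns(labeled_sentences, theta1=5, theta2=0.95):
    """Rank candidate extraction patterns by Pr(subjective | pattern) and keep
    those with freq(pattern) >= theta1 and Pr(subjective | pattern) >= theta2.
    labeled_sentences: iterable of (set_of_pattern_ids, label) pairs, where the
    label is "subjective" or "objective". Threshold values here are placeholders."""
    freq = Counter()        # pattern occurrences in all training sentences
    subj_freq = Counter()   # pattern occurrences in subjective training sentences
    for patterns, label in labeled_sentences:
        for p in patterns:
            freq[p] += 1
            if label == "subjective":
                subj_freq[p] += 1

    selected = {}
    for p, f in freq.items():
        pr = subj_freq[p] / f          # conditional probability estimate
        if f >= theta1 and pr >= theta2:
            selected[p] = (f, pr)
    return selected

# Toy usage: patterns are opaque identifiers such as "<subj> was asked".
toy = [
    ({"<subj> was asked", "asked <dobj>"}, "subjective"),
    ({"<subj> was asked"}, "subjective"),
    ({"asked <dobj>"}, "objective"),
]
print(select_patterns(toy, theta1=2, theta2=0.9))
```

Because the labels come from the high-precision classifiers rather than manual annotation, the estimate is only as reliable as those automatically assigned labels, which is why the thresholds matter.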
W03-1014
learning extraction patterns for subjective expressionsthis paper presents a bootstrapping process that learns linguistically rich extraction patterns for subjective expressionshighprecision classifiers label unannotated data to automatically create a large training set which is then given to an extraction pattern learning algorithmthe learned patterns are then used to identify more subjective sentencesthe bootstrapping process learns many subjective patterns and increases recall while maintaining high precisionwe construct a high precision classifier for contiguous sentences using the number of strong and weak subjective words in current and nearby sentenceswe introduce a bootstrapping method to learn subjective extraction patterns that match specific syntactic templates using a highprecision sentencelevel subjectivity classifier and a large unannotated corpus
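To make the bootstrapping loop summarized above concrete, here is a rough sketch under stated assumptions: sentences are dictionaries with a "words" field, the strongly and weakly subjective clue lists are given, and learn_patterns and match_patterns stand in for the pattern learner and the pattern matcher. All of these names are hypothetical, and the rebalancing of the subjective and objective training sets that the paper performs between cycles is omitted here.

```python
def hp_subjective(sent, strong_clues):
    # subjective if the sentence contains two or more strongly subjective clues
    return sum(1 for c in strong_clues if c in sent["words"]) >= 2

def hp_objective(window, strong_clues, weak_clues):
    # objective if no strongly subjective clue and at most one weakly subjective
    # clue appear in the previous, current and next sentence combined
    words = [w for s in window if s is not None for w in s["words"]]
    strong = sum(1 for c in strong_clues if c in words)
    weak = sum(1 for c in weak_clues if c in words)
    return strong == 0 and weak <= 1

def bootstrap(corpus, strong_clues, weak_clues,
              learn_patterns, match_patterns, n_iter=2):
    subjective, objective = [], []
    for i, sent in enumerate(corpus):
        prev_s = corpus[i - 1] if i > 0 else None
        next_s = corpus[i + 1] if i + 1 < len(corpus) else None
        if hp_subjective(sent, strong_clues):
            subjective.append(sent)
        elif hp_objective((prev_s, sent, next_s), strong_clues, weak_clues):
            objective.append(sent)
    patterns = set()
    for _ in range(n_iter):
        patterns = learn_patterns(subjective, objective)   # e.g. Pr-based ranking
        new_subj = [s for s in corpus
                    if s not in subjective and s not in objective
                    and match_patterns(s, patterns)]
        subjective.extend(new_subj)    # grow the training set with pattern-labeled sentences
    return patterns, subjective, objective
```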
towards answering opinion questions separating facts from opinions and identifying the polarity of opinion sentences opinion question answering is a challenging task for natural language processing in this paper we discuss a necessary component for an opinion question answering system separating opinions from fact at both the document and sentence level we present a bayesian classifier for discriminating between documents with a preponderance of opinions such as editorials from regular news stories and describe three unsupervised statistical techniques for the significantly harder task of detecting opinions at the sentence level we also present a first model for classifying opinion sentences as positive or negative in terms of the main perspective being expressed in the opinion results from a large collection of news stories and a human evaluation of 400 sentences are reported indicating that we achieve very high performance in document classification and respectable performance in detecting opinions and classifying them at the sentence level as positive negative or neutral newswire articles include those that mainly present opinions or ideas such as editorials and letters to the editor and those that mainly report facts such as daily news articlestext materials from many other sources also contain mixed facts and opinionsfor many natural language processing applications the ability to detect and classify factual and opinion sentences offers distinct advantages in deciding what information to extract and how to organize and present this informationfor example information extraction applications may target factual statements rather than subjective opinions and summarization systems may list separately factual information and aggregate opinions according to distinct perspectivesat the document level information retrieval systems can target particular types of articles and even utilize perspectives in focusing queries our motivation for building the opinion detection and classification system described in this paper is the need for organizing information in the context of question answering for complex questionsunlike questions like who was the first man on the moon which can be answered with a simple phrase more intricate questions such as what are the reasons for the usiraq war require long answers that must be constructed from multiple sourcesin such a context it is imperative that the question answering system can discriminate between opinions and facts and either use the appropriate type depending on the question or combine them in a meaningful presentationperspective information can also help highlight contrasts and contradictions between different sourcesthere will be significant disparity in the material collected for the question mentioned above between fox news and the independent for examplefully analyzing and classifying opinions involves tasks that relate to some fairly deep semantic and syntactic analysis of the textthese include not only recognizing that the text is subjective but also determining who the holder of the opinion is what the opinion is about and which of many possible positions the holder of the opinion expresses regarding that subjectin this paper we are presenting three of the components of our opinion detection and organization subsystem which have already been integrated into our larger questionanswering systemthese components deal with the initial tasks of classifying articles as mostly subjective or objective finding opinion sentences in both kinds of 
articles and determining in general terms and without reference to a specific subject if the opinions are positive or negativethe three modules of the system discussed here provide the basis for ongoing work for further classification of opinions according to subject and opinion holder and for refining the original positivenegative attitude determinationwe review related work in section 2 and then present our documentlevel classifier for opinion or factual articles three implemented techniques for detecting opinions at the sentence level and our approach for rating an opinion as positive or negative we have evaluated these methods using a large collection of news articles without additional annotation and an evaluation corpus of 400 sentences annotated for opinion classifications the results presented in section 8 indicate that we achieve very high performance at documentlevel classification and respectable performance at detecting opinion sentences and classifying them according to orientationmuch of the earlier research in automated opinion detection has been performed by wiebe and colleagues who proposed methods for discriminating between subjective and objective text at the document sentence and phrase levelsbruce and wiebe annotated 1001 sentences as subjective or objective and wiebe et al described a sentencelevel naive bayes classifier using as features the presence or absence of particular syntactic classes punctuation and sentence positionsubsequently hatzivassiloglou and wiebe showed that automatically detected gradable adjectives are a useful feature for subjectivity classification while wiebe introduced lexical features in addition to the presenceabsence of syntactic categoriesmore recently wiebe et al report on documentlevel subjectivity classification using a knearest neighbor algorithm based on the total count of subjective words and phrases within each documentpsychological studies found measurable associations between words and human emotionshatzivassiloglou and mckeown described an unsupervised learning method for obtaining positively and negatively oriented adjectives with accuracy over 90 and demonstrated that this semantic orientation or polarity is a consistent lexical property with high interrater agreementturney showed that it is possible to use only a few of those semantically oriented words to label other phrases cooccuring with them as positive or negativehe then used these phrases to automatically separate positive and negative movie and product reviews with accuracy of 6684pang et al adopted a more direct approach using supervised machine learning with words and ngrams as features to predict orientation at the document level with up to 83 precisionour approach to document and sentence classification of opinions builds upon the earlier work by using extended lexical models with additional featuresunlike the work cited above we do not rely on human annotations for training but only on weak metadata provided at the document levelour sentencelevel classifiers introduce additional criteria for detecting subjective material including methods based on sentence similarity within a topic and an approach that relies on multiple classifiersat the document level our classifier uses the same document labels that the method of does but automatically detects the words and phrases of importance without further analysis of the textfor determining whether an opinion sentence is positive or negative we have used seed words similar to those produced by and extended them to 
construct a much larger set of semantically oriented words with a method similar to that proposed by our focus is on the sentence level unlike and we employ a significantly larger set of seed words and we explore as indicators of orientation words from syntactic classes other than adjectives to separate documents that contain primarily opinions from documents that report mainly facts we applied naive bayes a commonly used supervised machinelearning algorithmthis approach presupposes the availability of at least a collection of articles with preassigned opinion and fact labels at the document level fortunately wall street journal articles contain such metadata by identifying the type of each article as editorial letter to editor business and newsthese labels are used only to provide the correct classification labels during training and evaluation and are not included in the feature spacewe used as features single words without stemming or stopword removalnaive bayes assigns a document to the class that maximizes by applying bayes rule and assuming conditional independence of the featuresalthough naive bayes can be outperformed in text classification tasks by more complex methods such as svms pang et al report similar performance for naive bayes and other machine learning techniques for a similar task that of distinguishing between positive and negative reviews at the document levelfurther we achieved such high performance with naive bayes that exploring additional techniques for this task seemed unnecessarywe developed three different approaches to classify opinions from facts at the sentence levelto avoid the need for obtaining individual sentence annotations for training and evaluation we rely instead on the expectation that documents classified as opinion on the whole will tend to have mostly opinion sentences and conversely documents placed in the factual category will tend to have mostly factual sentenceswiebe et al report that this expectation is borne out 75 of the time for opinion documents and 56 of the time for factual documentsour first approach to classifying sentences as opinions or facts explores the hypothesis that within a given topic opinion sentences will be more similar to other opinion sentences than to factual senusing the rainbow implementation available from www cscmuedumccallumbowrainbow tenceswe used simfinder a stateoftheart system for measuring sentence similarity based on shared words phrases and wordnet synsetsto measure the overall similarity of a sentence to the opinion or fact documents we first select the documents that are on the same topic as the sentence in questionwe obtain topics as the results of ir queries we then average its simfinderprovided similarities with each sentence in those documentsthen we assign the sentence to the category for which the average is higher alternatively for the frequency variant we do not use the similarity scores themselves but instead we count how many of them for each category exceed a predetermined threshold our second method trains a naive bayes classifier using the sentences in opinion and fact documents as the examples of the two categoriesthe features include words bigrams and trigrams as well as the parts of speech in each sentencein addition the presence of semantically oriented words in a sentence is an indicator that the sentence is subjective therefore we include in our features the counts of positive and negative words in the sentence as well as counts of the polarities of sequences of semantically oriented 
words we also include the counts of parts of speech combined with polarity information as well as features encoding the polarity of the head verb the main subject and their immediate modifierssyntactic structure was obtained with charniaks statistical parser finally we used as one of the features the average semantic orientation score of the words in the sentenceour designation of all sentences in opinion or factual articles as opinion or fact sentences is an approximationto address this we apply an algorithm using multiple classifiers each relying on a different subset of our featuresthe goal is to reduce the training set to the sentences that are most likely to be correctly labeled thus boosting classification accuracygiven separate sets of features we train separate naive bayes classifiers corresponding to each feature setassuming as ground truth the information provided by the document labels and that all sentences inherit the status of their document as opinions or facts we first train on the entire training set then use the resulting classifier to predict labels for the training setthe sentences that receive a label different from the assumed truth are then removed and we train on the remaining sentencesthis process is repeated iteratively until no more sentences can be removedwe report results using five feature sets starting from words alone and adding in bigrams trigrams partofspeech and polarityhaving distinguished whether a sentence is a fact or opinion we separate positive negative and neutral opinions into three classeswe base this decision on the number and strength of semantically oriented words in the sentencewe first discuss how such words are automatically found by our system and then describe the method by which we aggregate this information across the sentenceto determine which words are semantically oriented in what direction and the strength of their orientation we measured their cooccurrence with words from a known seed set of semantically oriented wordsthe approach is based on the hypothesis that positive words cooccur more than expected by chance and so do negative words this hypothesis was validated at least for strong positivenegative words in as seed words we used subsets of the 1336 adjectives that were manually classified as positive or negative by hatzivassiloglou and mckeown in earlier work only singletons were used as seed words varying their number allows us to test whether multiple seed words have a positive effect in detection performancewe experimented with seed sets containing 1 20 100 and over 600 positive and negative pairs of adjectivesfor a given seed set size we denote the set of positive seeds as adj and the set of negative seeds as adjwe then calculate a modified loglikelihood ratio pos for a word with part of speech pos as the ratio of its collocation frequency with adj and adj within a sentence where freqall pos adj represents the collocation frequency of all wordsall of part of speech pos with adj and is a smoothing constant we used brills tagger to obtain partofspeech informationas our measure of semantic orientation across an entire sentence we used the average per word loglikelihood scores defined in the preceding sectionto determine the orientation of an opinion sentence all that remains is to specify cutoffs and so that sentences for which the average loglikelihood score exceeds are classified as positive opinions sentences with scores lower than are classified as negative opinions and sentences with inbetween scores are treated as 
neutral opinionsoptimal values for and are obtained from the training data via density estimationusing a small handlabeled subset of sentences we estimate the proportion of sentences that are positive or negativethe values of the average loglikelihood score that correspond to the appropriate tails of the score distribution are then determined via monte carlo analysis of a much larger sample of unlabeled training datawe used the trec2 8 9 and 11 collections which consist of more than 17 million newswire articlesthe aggregate collection covers six different newswire sources including 173252 wall street journal articles from 1987 to 1992some of the wsj articles have structured headings that include editorial letter to editor business and news we randomly selected 2000 articles3 from each category so that our data set was approximate evenly divided between fact and opinion articlesthose articles were used for both document and sentence level opinionfact classificationfor classification tasks we measured our systems performance by standard recall and precisionwe evaluated the quality of semantically oriented words by mapping the extracted words and labels to an external gold standardwe took the subset of our output containing words that appear in the standard and measured the accuracy of our output as the portion of that subset that was assigned the correct labela gold standard for documentlevel classification is readily available since each article in our wall street journal collection comes with an article type label we mapped article types news and business to facts and article types editorial and letter to the editor to opinionswe cannot automatically select a sentencelevel gold standard discriminating between facts and opinions or between positive and negative opinionswe therefore asked human evaluators to classify a set of sentences between facts and opinions as well as determine the type of opinionssince we have implemented our methods in an opinion question answering system we selected four different topics for each topic we randomly selected 25 articles from the entire combined trec corpus these were articles matching the corresponding topical phrase given above as determined by the lucene search engine4 from each of these documents we randomly selected four sentencesif a document happened to have less than four sentences additional documents from the same topic were retrieved to supply the missing sentencesthe resulting sentences were then interleaved so that successive sentences came from different topics and documents and divided into ten 50sentence blockseach block shares ten sentences with the preceding and following block so that 100 of the 400 sentences appear in two blockseach of ten human evaluators was presented with one block and asked to select a label for each sentence among the following fact positive opinion negative opinion neutral opinion sentence contains both positive and negative opinions opinion but cannot determine orientation and uncertain5 since we have one judgment for 300 sentences and two judgments for 100 sentences we created two gold standards for sentence classificationthe first includes the 300 sentences with one judgment and a single judgment for the remaining 100 sentences6 the second standard contains the subset of the 100 sentences for which we obtained identical labelsstatistics of these two standards are given in table 1we measured the pairwise agreement among the 100 sentences that were judged by two evaluators as the ratio of sentences that 
receive a label from both evaluators divided by the total number of sentences receiving label from any evaluatorthe agreement across the 100 sentences for all seven choices was 55 if we group together the five subtypes of opinion sentences the overall agreement rises to 82the low agreement for some labels was not surprising because there is much ambiguity between facts and opinionsan example of an arguable sentence is a lethal guerrilla war between poachers and wardens now rages in central and eastern africa which one rater classified as fact and another rater classified as opinionfinally for evaluating the quality of extracted words with semantic orientation labels we used two distinct manually labeled collections as gold standardsone set consists of the previously described 657 positive and 679 negative adjectives we also used the anew list which was constructed during psycholinguistic experiments and contains 1031 words of all four open classesas described in humans assigned valence scores to each score according to dimensions such as pleasure arousal and dominance following heuristics proposed in psycholinguistics7 we obtained 284 positive and 272 negative words from the valence scoresdocument classification we trained our bayes classifier for documents on 4000 articles from the wsj portion of our combined trec collection and evaluated on 4000 other articles also from the wsj parttable 2 lists the fmeasure scores of our bayesian classifier for documentlevel opinionfact classificationthe results show the classifier achieved 97 fmeasure which is comparable or higher than the 93 accuracy reported by who evaluated their work based on a similar set of wsj articlesthe high classification performance is also consistent with a high interrater agreement for documentlevel factopinion annotation note that we trained and evaluated only on wsj articles for which we can obtain article class metadata so the classifier may perform less accurately when used for other newswire articlessentence classification table 3 shows the recall and precision of the similaritybased approach while table 4 lists the recall and precision of naive bayes for sentencelevel opinionfact classificationin both cases the results are better when we evaluate against standard b containing the sentences for which two humans assign the same label obviously it is easier for the automatic system to produce the correct label in these more clearcut casesour naive bayes classifier has a higher recall and precision for detecting opinions than for facts while words and ngrams had little performance effect for the opinion class they increased the recall for the fact class around five fold compared to the approach by wiebe et al in general the additional features helped the classifier the best performance is achieved when words bigrams trigrams partofspeech and polarity are included in the feature setfurther using multiple classifiers to automatically identify an appropriate subset of the data for training slightly increases performancepolarity classification using the method of section 51 we automatically identified a total of 39652 3128 144238 and 22279 positive adjectives adverbs nouns and verbs respectivelyextracted positive words include inspirational truly luck and achievenegative ones include depraved disastrously problem and depressfigure 1 plots the beled positive and negative adjectives as gold standard of extracted adjectives using 1 20 and 100 positive and negative adjective pairs as seeds recall and precision of extracted 
adjectives by using randomly selected seed sets of 1 20 and 100 pairs of positive and negative adjectives from the list of both recall and precision increase as the seed set becomes largerwe obtained similar results with the anew list of adjectives as an additional experiment we tested the effect of ignoring sentences with negative particles obtaining a small increase in precision and recallwe subsequently used the automatically extracted polarity score for each word to assign an aggregate gold standards a and b for different sets of partsofspeech polarity to opinion sentencestable 5 lists the accuracy of our sentencelevel tagging processwe experimented with different combinations of partofspeech classes for calculating the aggregate polarity scores and found that the combined evidence from adjectives adverbs and verbs achieves the highest accuracy as in the case of sentencelevel classification between opinion and fact we also found the performance to be higher on standard b for which humans exhibited consistent agreementwe presented several models for distinguishing between opinions and facts and between positive and negative opinionsat the document level a fairly straightforward bayesian classifier using lexical information can distinguish between mostly factual and mostly opinion documents with very high precision and recall the task is much harder at the sentence levelfor that case we described three novel techniques for opinionfact classification achieving up to 91 precision and recall on the detection of opinion sentenceswe also examined an automatic method for assigning polarity information to single words and sentences accurately discriminating between positive negative and neutral opinions in 90 of the casesour work so far has focused on characterizing opinions and facts in a generic manner without examining who the opinion holder is or what the opinion is aboutwhile we have found presenting information organized in separate opinion and fact classes useful our goal is to introduce further analysis of each sentence so that opinion sentences can be linked to particular perspectives on a specific subjectwe intend to cluster together sentences from the same perspective and present them in summary form as answers to subjective questionswe wish to thank eugene agichtein sasha blairgoldensohn roy byrd john chen noemie elhadad kathy mckeown becky passonneau and the anonymous reviewers for valuable input on earlier versions of this paperwe are grateful to the graduate students at columbia university who participated in our evaluation of sentencelevel opinionsthis work was supported by arda under aquaint project mda90802c0008any opinions findings or recommendations are those of the authors and do not necessarily reflect ardas views
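The word-level orientation score above is a smoothed log-likelihood-style ratio of a word's co-occurrence with positive versus negative seed adjectives, normalized per part of speech, and sentence polarity is then decided by comparing the average per-word score against two cutoffs. The sketch below is one way to realize that computation, not the paper's exact formulation: the placement of the smoothing constant epsilon and the cutoff defaults t_plus and t_minus are assumptions, and orientation_scores and sentence_polarity are hypothetical names.

```python
import math
from collections import defaultdict

def orientation_scores(sentences, pos_seeds, neg_seeds, epsilon=0.5):
    """sentences: lists of (word, pos_tag) pairs.
    Returns a log-likelihood-style score per (word, pos_tag); a positive score
    means the word co-occurs more with positive seeds than with negative ones."""
    co_pos = defaultdict(float)   # co-occurrences with positive seed adjectives
    co_neg = defaultdict(float)   # co-occurrences with negative seed adjectives
    all_pos = defaultdict(float)  # per-POS totals, used for normalization
    all_neg = defaultdict(float)
    for sent in sentences:
        words = {w for w, _ in sent}
        n_pos = len(words & pos_seeds)
        n_neg = len(words & neg_seeds)
        for w, tag in sent:
            co_pos[(w, tag)] += n_pos
            co_neg[(w, tag)] += n_neg
            all_pos[tag] += n_pos
            all_neg[tag] += n_neg
    scores = {}
    for key in set(co_pos) | set(co_neg):
        tag = key[1]
        p = (co_pos[key] + epsilon) / (all_pos[tag] + epsilon)
        n = (co_neg[key] + epsilon) / (all_neg[tag] + epsilon)
        scores[key] = math.log(p / n)
    return scores

def sentence_polarity(sent, scores, t_plus=0.1, t_minus=-0.1):
    # average per-word score; the cutoffs are estimated from training data in
    # the paper, the defaults here are arbitrary placeholders
    vals = [scores.get((w, tag), 0.0) for w, tag in sent]
    avg = sum(vals) / len(vals) if vals else 0.0
    if avg > t_plus:
        return "positive"
    if avg < t_minus:
        return "negative"
    return "neutral"
```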
W03-1017
towards answering opinion questions separating facts from opinions and identifying the polarity of opinion sentencesopinion question answering is a challenging task for natural language processingin this paper we discuss a necessary component for an opinion question answering system separating opinions from fact at both the document and sentence levelwe present a bayesian classifier for discriminating between documents with a preponderance of opinions such as editorials from regular news stories and describe three unsupervised statistical techniques for the significantly harder task of detecting opinions at the sentence levelwe also present a first model for classifying opinion sentences as positive or negative in terms of the main perspective being expressed in the opinionresults from a large collection of news stories and a human evaluation of 400 sentences are reported indicating that we achieve very high performance in document classification and respectable performance in detecting opinions and classifying them at the sentence level as positive negative or neutral at sentence level we propose to classify opinion sentences as positive or negative in terms of the main perspective being expressed in opinionated sentences
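For the document-level step of the paper above, the classifier is a naive Bayes model over single words, trained on labels derived from WSJ article-type metadata (editorial and letter-to-editor articles as opinion, news and business articles as fact). The authors used the Rainbow toolkit; the sketch below substitutes scikit-learn for the same idea, so the pipeline and the toy data are illustrative only and not the paper's implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

TYPE_TO_CLASS = {"editorial": "opinion", "letter": "opinion",
                 "news": "fact", "business": "fact"}

def train_document_classifier(documents, article_types):
    labels = [TYPE_TO_CLASS[t] for t in article_types]
    # single words as features, no stemming or stop-word removal, as in the paper
    model = make_pipeline(CountVectorizer(lowercase=True), MultinomialNB())
    model.fit(documents, labels)
    return model

# Toy usage with two training documents labeled via their article type.
docs = ["the senate passed the budget bill on tuesday",
        "this reckless policy is a disgrace and must be opposed"]
clf = train_document_classifier(docs, ["news", "editorial"])
print(clf.predict(["a reckless policy that should be opposed"]))
```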
improved automatic keyword extraction given more linguistic knowledge in this paper experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed the main point of this paper is that by adding linguistic knowledge to the representation rather than relying only on a better result is obtained as measured by keywords previously assigned by professional indexers in more detail exgives a better precithan and by adding the tag assigned to the term as a feature a dramatic improvement of the results is obtained independent of the term selection approach applied automatic keyword assignment is a research topic that has received less attention than it deserves considering keywords potential usefulnesskeywords may for example serve as a dense summary for a document lead to improved information retrieval or be the entrance to a document collectionhowever relatively few documents have keywords assigned and therefore finding methods to automate the assignment is desirablea related research area is that of terminology extraction where all terms describing a domain are to be extractedthe aim of keyword assignment is to find a small set of terms that describes a specific document independently of the domain it belongs tohowever the latter may very well benefit from the results of the former as appropriate keywords often are of a terminological characterin this work the automatic keyword extraction is treated as a supervised machine learning task an approach first proposed by turney two important issues are how to define the potential terms and what features of these terms are considered discriminative ie how to represent the data and consequently what is given as input to the learning algorithmin this paper experiments with three term selection approaches are presented ngrams noun phrase chunks and terms matching any of a set of partofspeech tag sequencesfour different features are used term frequency collection frequency relative position of the first occurrence and the pos tag assigned to the termtreating the automatic keyword extraction as a supervised machine learning task means that a classifier is trained by using documents with known keywordsthe trained model is subsequently applied to documents for which no keywords are assigned each defined term from these documents is classified either as a keyword or a nonkeyword orif a probabilistic model is usedthe probability of the defined term being a keyword is giventurney presents results for a comparison between an extraction model based on a genetic algorithm and an implementation of bagged c45 decision trees for the taskthe terms are all stemmed unigrams bigrams and trigrams from the documents after stopword removalthe features used are for example the frequency of the most frequent phrase component the relative number of characters of the phrase the first relative occurrence of a phrase component and whether the last word is an adjective as judged by the unstemmed suffixturney reports that the genetic algorithm outputs better keywords than the decision treespart of the same training and test material is later used by frank et al for evaluating their algorithm in relation to turneys algorithmthis algorithm which is based on naive bayes uses a smaller and simpler set of features term frequency collection frequency and relative positionalthough it performs equally wellfrank et al also discuss the addition of a fourth feature that significantly improves the algorithm when trained and tested on 
domainspecific documentsthis feature is the number of times a term is assigned as a keyword to other documents in the collectionit should be noted that the performance of the stateoftheart keyword extraction is much lower than for many other nlptasks such as tagging and parsing and there is plenty of room for improvementsto give an idea of this the results obtained by the genetic algorithm trained by turney and the naive bayes approach by frank et al are presentedthe number of terms assigned must be explicitly limited by the user for these algorithmsturney and frank et al report the precision for five and fifteen keywords per documentrecall is not reported in their studiesin table 1 their results when training and testing on journal articles are shown and the highest values for the two algorithms are presentedthere are two drawbacks in common with the approaches proposed by turney and frank et al first the number of tokens in a keyword is limited to threein the data used to train the classifiers evaluated in this paper 91 of the manually assigned keywords consist of four tokens or more and the longest keywords have eight tokenssecondly the user must state how many keywords to extract from each document as both algorithms for each potential keyword output the probability of the term being a keywordthis could be solved by manually setting a threshold value for the probability but this decision should preferably be made by the extraction systemfinding potential termswhen no machine learning is involved in the processby means of pos patterns is a common approachfor example barker and cornacchia discuss an algorithm where the number of words and the frequency of a noun phrase as well as the frequency of the head noun is used to determine what terms are keywordsan extraction system called linkit compiles the phrases having a noun as the head and then ranks these according to the heads frequencyboguraev and kennedy extract technical terms based on the noun phrase patterns suggested by justeson and katz these terms are then the basis for a headlinelike characterisation of a documentthe final example given in this paper is daille et al who apply statistical filters on the extracted noun phrasesin that study it is concluded that term frequency is the best filter candidate of the scores investigatedwhen pos patterns are used to extract potential terms the problem lies in how to restrict the number of terms and only keep the ones that are relevantin the case of professional indexing the terms are normally limited to a domainspecific thesaurus but not to those present only in the document to which they are assignedfor example steinberger presents work where as a first step all lemmas after stop word removal in a document are ranked according to the loglikelihood ratio thus a list of content descriptors is obtainedthese terms are then used to assign thesaurus terms that have been automatically assigned associating lemmas during a training phasein this paper however the concern is not to limit the terms to a set of allowed termsas opposed to turney and frank et al who experiment with keyword extraction from fulllength texts this work concerns keyword extraction from abstractsthe reason for this is that many journal papers are not available as fulllength texts but as abstracts only as is the case for example on the internetthe starting point for this work was to examine whether the data representation suggested by frank et al was adequate for constructing a keyword extraction model from and for 
abstractsas the results were poor two alternatives to extracting ngrams as the potential terms were exploredthe first approach was to extract all noun phrases in the documents as judged by an npchunkerthe second selection approach was to define a set of pos tag sequences and extract all words or sequences of words that matched any of these relying on a pos taggerthese two different approaches mean that the length of the potential terms is not limited to something arbitrary but reflects a linguistic propertythe solution to limiting the number of termsas the majority of the extracted words or phrases are not keywordswas to apply a machine learning algorithm to decide which terms are keywords and which are notthe output from the machine learning algorithm is binary consequently the system itself limits the amount of extracted keywords per documentas for the features a fourth feature was added to the ones used by frank et al namely the pos tag assigned to the termthis feature turned out to dramatically improve the resultsthe collection used for the experiments described in this paper consists of 2 000 abstracts in english with their corresponding title and keywords from the inspec databasethe abstracts are from the years 1998 to 2002 from journal papers and from the disciplines computers and control and information technologyeach abstract has two sets of keywordsassigned by a professional indexer associated to them a set of controlled terms ie terms restricted to the inspec thesaurus and a set of uncontrolled terms that can be any suitable termsboth the controlled terms and the uncontrolled terms may or may not be present in the abstractshowever the indexers had access to the fulllength documents when assigning the keywordsfor the experiments described here only the uncontrolled terms were considered as these to a larger extent are present in the abstracts the set of abstracts was arbitrarily divided into three sets a training set consisting of 1 000 documents a validation set consisting of 500 documents and a test set with the remaining 500 abstractsthe set of manually assigned keywords were then removed from the documentsfor all experiments the same training validation and test sets were usedthis section begins with a discussion on the different ways the data were represented in section 41 the term selection approaches are described and in section 42 the features are discussedthereafter a brief description of the machine learning approach is givenfinally in section 44 the training and the evaluation of the classifiers are discussedin this section the three different term selection approaches in other words the three definitions of what constitutes a term in a document are describedngrams in a first set of runs the terms were defined in a manner similar to turney and frank et al all unigrams bigrams and trigrams were extractedthereafter a stoplist was used where all terms beginning or ending with a stopword were removedfinally all remaining tokens were stemmed using porters stemmer in this paper this manner of selecting terms is referred to as the ngram approachthe implementation differs from frank et al in the following aspects only nonalphanumeric characters that were not present in any keyword in the training set were removed numbers were removed only if they stood separately proper nouns were keptthe stemming and the stoplist applied were differentthe stems were kept even if they appeared only once npchunks that nouns are appropriate as content descriptors seems to be something that most 
agree uponwhen inspecting manually assigned keywords the vast majority turn out to be nouns or noun phrases with adjectives and as discussed in section 2 the research on term extraction focuses on noun patternsto not let the selection of potential terms be an arbitrary processwhich is the case when extracting ngramsand better capture the idea of keywords having a certain linguistic property i decided to experiment with noun phrasesin the next set of experiments a partial parserl was used to select all npchunks from the documentsexperiments with both unstemmed and stemmed terms were performedthis way of defining the terms is in this paper called the chunking approachas about half of the manual keywords present in the training data were lost using the chunking approach i decided to define another term selection approachthis still captures the idea of keywords having a certain syntactic property but is based on empirical evidence in the training dataa set of pos tag patternsin total 56were defined and all words or sequences of words that matched any of these were extractedthe patterns were those tag sequences of the manually assigned keywords present in the training data that occurred ten or more timesthis way of defining the terms is here called the pattern approachas with the chunking approach experiments with both unstemmed and stemmed terms were performedout of the 56 patterns 51 contain one or more noun tagsto give an idea of the patterns the lt chunk available at httpwwwltgedacuksoftwareposindexhtml five most frequently occurring ones of the keywords present in the training data are adjective noun noun noun adjective noun noun noun noun initially the same features that frank et al used for their domainindependent experiments were usedthese were withindocument frequency collection frequency relative position of the first occurrence the representation differed in that the term frequency and the collection frequency were not weighted together but kept as two distinct featuresin addition the real values were not discretised only rounded off to two decimals thus more decisionmaking was handed over to the algorithmthe collection frequency was calculated for the three data sets separatelyin addition experiments with a fourth feature were performedthis is the pos tag or tags assigned to the term by the same partial parser used for finding the chunks and the tag patternswhen a term consists of several tokens the tags are treated like a sequenceas an example an extracted phrase like random jj excitations nns gets the atomic feature value jj nnsin case a term occurs more than once in the document the tag or tag sequence assigned is the most frequently occurring one for that term in the entire documentin case of a draw the first occurring one is assignedas usual in machine learning the input to the learning algorithm consists of examples where an example refers to the feature value vector for each in this case potential keywordan example that is a manual keyword is assigned the class positive and those that are not are given the class negativethe machine learning approach used for the experiments is that of rule induction ie the model that is constructed from the given examples consists of a set of rules2the strategy used to construct the rules is recursive partitioning which has as the goal to maximise the separation between the classes for each rulethe system used allows for different ensemble techniques to be applied meaning that a number of classifiers are generated and then combined to predict 
the classthe one used for these experiments is bagging in bagging examples from the training data are drawn randomly with replacement until a set of the original size is obtainedthis new set is then used to train a classifierthis procedure is repeated n times to generate n classifiers that then vote to classify an instanceit should be noted that my intention is not to argue for this machine learning approach in favour of any otherhowever one advantage with rules is that they may be inspected and thus might give an insight into how the learning component makes its decisions although this is less applicable when applying ensemble techniquesthe feature values were calculated for each extracted unit in the training and the validation sets that is for the ngrams npchunks stemmed npchunks patterns and the stemmed patterns respectivelyin other words the withindocument frequency the collection frequency and the proportion of the document preceding the first appearance for each potential term were calculatedalso the pos tag for each term were extractedin addition as the machine learning approach is supervised the class was added ie whether the term is a manually assigned keyword or notfor the stemmed terms a unit was considered a keyword if it was equal to a stemmed manual keywordfor the unstemmed terms the term had to match exactlythe measure used to evaluate the results on the validation set was the fscore defined as combining the precision and the recall obtainedin this study the main concern is the precision and the recall for the examples that have been assigned the class positive that is how many of the suggested keywords are correct and how many of the manually assigned keywords that are found as the proportion of correctly suggested keywords is considered equally important as the amount of terms assigned by a professional indexer that was detected was assigned the value 1 thus giving precision and recall equal weightswhen calculating the recall the value for the total number of manually assigned keywords present in the documents is used independent of the number actually present in the different representationsthis figure varies slightly for the unstemmed and the stemmed data and for the two the corresponding value is usedseveral runs were made for each representation with the goal to maximise the performance as evaluated on the validation set first the weights of the positive examples were adjusted as the data set is unbalanceda better performance was obtained when the positive examples in the training data outnumbered the negative onesthereafter experiments with bagging were performed and also runs with and without the pos tag feature were madethe results are presented nextin this section the results obtained by the best performing model for each approachas judged on the validation setwhen run on the previously unseen test set are presentedit should however be noted that the number of possible runs is very large by varying for example the number of classifiers generated by the ensemble techniqueit might well be that better results are possible for any of the representationsas stemming with few exceptions led to better results on the validation set over all runs only these values are presented in this sectionin table 2 the number of assigned terms and the number of correct terms in total and on average per document are shownalso precision recall and the fscore are presentedfor each approach both the results with and without the pos tag feature are giventhe length of the abstracts in the 
test set varies from 338 to 23 tokens the number of uncontrolled terms per document is 31 to 2 the total number of stemmed keywords present in the stemmed test set is 3 816 and the average number of terms is 763their distribution over the 500 documents is 27 to three documents with 0 terms with the median being 7as for bagging it was noted that although the accuracy improved when increasing the number of classifiers the fscore often decreasedfor the pattern approach without the tag features the best model consists of a 5bagged classifier for the pattern approach with the tag feature a 20bagged and finally for the ngram approach with the tag feature a 10bagged classifierfor the other three runs a single classifier had the best performancewhen extracting the terms from the test set according to the ngram approach the data consisted of 42 159 negative examples and 3 330 positive examples thus in total 45 489 examples were classified by the trained modelusing this manner of extracting the terms meant that 128 of the keywords originally present in the test set were lostto summarise the ngram approach without the tag feature it finds on average 437 keywords per document out of originally on average 763 manual keywords present in the abstractshowever the price paid for these correct terms is high almost 38 incorrect terms per documentwhen adding the fourth feature the number of correct terms decreases slightly while the number of incorrect terms is decreased to a thirdif looking at the actual distribution of assigned terms for these two runs this varies between 134 and 5 without the tag feature and from 48 to 1 with the tag featurethe median is 40 and 14 respectivelythe fscores for these two runs are 176 and 339 respectively339 is the highest fscore that was achieved for the six runs presented herewhen extracting the terms according to the stemmed chunking approach the test set consisted of 13 579 negative and 1 920 positive examples in total 15 499 examplesan fscore of 227 is obtained without the pos tag feature and 330 with this featurethe number of terms on average per document is 1638 without the tag feature and 958 with itif looking at each document the number of keywords assigned varies from 46 to 0 with the median 16 and 29 to 0 with the median value being 9 termsextracting the terms with the chunking approach meant that slightly more than half of the keywords actually present in the test set were lost and compared to the ngram approach the number of correct terms assigned was almost halvedthe number of incorrect keywords however decreased considerablybut the difference is shown when the pos tag feature is included the number of correctly assigned terms is more or less the same for this approach with or without the tag feature while the number of incorrect terms is halvedwhen extracting the terms according to the stemmed pattern approach the test data consisted of 33 507 examplesof these were 3 340 positive and 30 167 negativein total 125 of the present keywords were lostthe fscores for the two runs displayed in table 2 are 256 and 281 the number of terms assigned on average per document is 504 and 305 without and with the tag feature respectivelythe actual number of terms assigned per document is 100 to 0 without the tag feature and 46 to 0 with the tag featurethe median is 30 and 12 respectivelyin this paper i have shown how keyword extraction from abstracts can be achieved by using simple statistical measures as well as syntactic information from the documents as input to a machine 
learning algorithmif first considering the term selecdocument the number of correct terms in total and mean per document precision recall and fscorethe highest value is shown in boldthe total number of manually assigned terms present in the abstracts is 3 816 and the mean is 763 terms per document tion approaches extracting npchunks gives a better precision while extracting all words or sequences of words matching any of a set of pos tag patterns gives a higher recall compared to extracting ngramsthe highest fscore is obtained by one of the ngram runsthe largest amount of assigned terms present in the abstracts are assigned by the pattern approach without the tag featurethe pattern approach is also the approach which keeps the largest number of assigned terms after that the data have been preprocessedusing phrases means that the length of the potential terms is not restricted to something arbitrary rather the terms are treat as the units they arehowever of the patterns that were selected for the experiments discussed here none was longer than four tokensif looking at all assigned keywords in the training set 30 are then ruled out as potential termsthe longest chunks in the test set that were correctly assigned are five tokens longas for when syntactic information is included as a feature assigned to the term it is evident from the results presented in this paper that this information is crucial for assigning an acceptable number of terms per document independent of what term selection strategy is chosenone shortcoming of the work is that there is currently no relation between the different pos tag feature valuesfor example a singular noun has no closer relationship to a plural noun than to an adjectivein the future the patterns should somehow be categorised reflecting their semantics perhaps in a hierarchical manner or morphological information could be removedin this paper i have not touched upon the more intricate aspects of evaluation but simply treated the manually assigned keywords as the gold standardthis is the most severe way to evaluate a keyword extractor as many terms might be just as good although for one reason or another not chosen by the human indexerfuture work will examine alternative approaches to evaluationone possibility for a more liberal evaluation could be to use human evaluators with real information needs as done by turney another possibility would be to let several persons index each document thus getting a larger set of acceptable terms to choose fromthis would hopefully lead to a better precision while recall probably would be affected negatively the importance of recall would then need to be reconsideredfuture work should also go in the direction of generating keywords by for example exploring potential knowledge provided by a thesaurusfor valuable comments and suggestions beata megyesi henrik bostrom jussi karlgren harko verhagen fredrik kilander and the anonymous emnlp reviewers
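As a concrete illustration of the feature representation described in the keyword-extraction experiments above (within-document frequency, collection frequency, relative position of the first occurrence, and the POS tag or tag sequence assigned to the term), here is a minimal Python sketch. It is not the author's implementation; the input format (parallel token and tag lists), the helper names, and the reading of "collection frequency" as a document count are all assumptions. As in the paper, real values are only rounded to two decimals rather than discretised.

```python
from collections import Counter

def term_features(candidate_tokens, doc_tokens, doc_tags, collection_doc_freq, n_docs):
    """Compute the four features discussed above for one candidate term.

    candidate_tokens: the candidate keyword as a list of tokens, e.g. ["random", "excitations"]
    doc_tokens / doc_tags: the tokenised and POS-tagged abstract (parallel lists)
    collection_doc_freq: assumed mapping from a term to the number of documents it occurs in
    n_docs: number of documents in the collection
    """
    n = len(candidate_tokens)
    # all positions where the candidate occurs in the document
    positions = [i for i in range(len(doc_tokens) - n + 1)
                 if doc_tokens[i:i + n] == candidate_tokens]
    term = " ".join(candidate_tokens)
    tf = len(positions)                                    # within-document frequency
    cf = collection_doc_freq.get(term, 0) / n_docs         # collection frequency (assumed doc count)
    # relative position of the first occurrence; 1.0 if the term never occurs
    first_pos = positions[0] / len(doc_tokens) if positions else 1.0
    # POS tag sequence: the most frequent tagging of the term in the document,
    # with ties resolving to the first-occurring one under CPython's stable ordering
    tag_seqs = [" ".join(doc_tags[i:i + n]) for i in positions]
    pos_tag = Counter(tag_seqs).most_common(1)[0][0] if tag_seqs else ""
    return {"tf": tf, "cf": round(cf, 2), "first_pos": round(first_pos, 2), "pos": pos_tag}
```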
W03-1028
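The ensemble technique used in the experiments above is bagging: examples are drawn from the training data with replacement until a set of the original size is obtained, a classifier is trained on each such resample, and the n classifiers vote. The sketch below is a generic illustration, with scikit-learn's DecisionTreeClassifier standing in for the rule-induction system actually used; that substitution, and breaking voting ties toward the positive class, are my assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier  # stand-in for the rule inducer used in the paper

def bagged_classifiers(X, y, n_estimators=10, random_state=0):
    """Train n_estimators classifiers on bootstrap resamples of (X, y), both numpy arrays."""
    rng = np.random.RandomState(random_state)
    models, n = [], len(y)
    for _ in range(n_estimators):
        idx = rng.randint(0, n, size=n)   # draw with replacement until a set of the original size
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagged_predict(models, X):
    """Majority vote over the ensemble; class labels assumed to be 0/1, ties go to 1."""
    votes = np.stack([m.predict(X) for m in models])   # shape: (n_models, n_examples)
    return (votes.mean(axis=0) >= 0.5).astype(int)
```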
Improved automatic keyword extraction given more linguistic knowledge. In this paper, experiments on automatic extraction of keywords from abstracts using a supervised machine learning algorithm are discussed. The main point of this paper is that by adding linguistic knowledge to the representation, rather than relying only on statistics, a better result is obtained, as measured by keywords previously assigned by professional indexers. In more detail, extracting NP chunks gives a better precision than n-grams, and by adding the POS tag assigned to the term as a feature, a dramatic improvement of the results is obtained, independent of the term selection approach applied. We propose a system for keyword extraction from abstracts that uses supervised learning with lexical and syntactic features, which proved to improve significantly over previously published results.
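The evaluation protocol in the paper scores suggested keywords against the manually assigned ones, with recall computed against the total number of manual keywords present in the abstracts (independent of how many survive a given term-selection approach) and precision and recall weighted equally in the F-score. A small sketch, assuming one set of terms per document and leaving stemming to the caller:

```python
def keyword_scores(assigned, gold_in_abstracts):
    """assigned / gold_in_abstracts: lists of sets of (possibly stemmed) terms, one set per document.

    Recall uses the total number of manually assigned keywords present in the
    documents, not the number surviving any particular term-selection approach.
    """
    correct = sum(len(a & g) for a, g in zip(assigned, gold_in_abstracts))
    n_assigned = sum(len(a) for a in assigned)
    n_gold = sum(len(g) for g in gold_in_abstracts)
    precision = correct / n_assigned if n_assigned else 0.0
    recall = correct / n_gold if n_gold else 0.0
    # beta = 1: precision and recall carry equal weight
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f
```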
transliteration of proper names in crosslingual information retrieval we address the problem of transliterating english names using chinese orthography in support of crosslingual speech and text processing applications we demonstrate the application of statistical machine translation techniques to translate the phonemic representation of an english name obtained by using an automatic texttospeech system to a sequence of initials and finals commonly used subword units of pronunciation for chinese we then use another statistical translation model to map the initialfinal sequence to chinese characters we also present an evaluation of this module in retrieval of mandarin spoken documents from the tdt corpus using english text queries translation of proper names is generally recognized as a significant problem in many multilingual text and speech processing applicationseven when handcrafted translation lexicons used for machine translation and crosslingual information retrieval provide significant coverage of the words encountered in the text a significant portion of the tokens not covered by the lexicon are proper names and domainspecific terminology this lack of translations adversely affects performancefor clir applications in particular proper names and technical terms are especially important as they carry the most distinctive information in a query as corroborated by their relatively low document frequencyfinally in interactive ir systems where users provide very short queries their importance grows even furtherunlike specialized terminology however proper names are amenable to a speechinspired translation approachone tries when writing foreign names in ones own language to preserve the way it sounds ie one uses an orthographic representation which when read aloud by a speaker of ones language sounds as much like it would when spoken by a speaker of the foreign language a process referred to as transliterationtherefore if a mechanism were available to render say an english name in its phonemic form and another mechanism were available to convert this phonemic string into the orthography of say chinese then one would have a mechanism for transliterating english names using chinese charactersthe first step has been addressed extensively for other obvious reasons in the automatic speech synthesis literaturethis paper describes a statistical approach for the second stepseveral techniques have been proposed in the recent past for name transliterationrather than providing a comprehensive survey we highlight a few representative approaches herefinite state transducers that implement transformation rules for backtransliteration from japanese to english have been described by knight and graehl and extended to arabic by gloverstalls and knight in both cases the goal is to recognize words in japanese or arabic text which happen to be transliterations of english namesif the orthography of a language is strongly phonetic as is the case for korean then one may use relatively simple hidden markov models to transform english pronunciations as shown by jung et al the work closest to our application scenario and the one with which we will be making several direct comparisons is that of meng et al in their work a set of handcrafted transformations for locally editing the phonemic spelling of an english word to conform to rules of mandarin syllabification are used to seed a transformationbased learning algorithmthe algorithm examines some data and learns the proper sequence of application of the 
transformations to convert an english phoneme sequence to a mandarin syllable sequenceour paper describes a data driven counterpart to this technique in which a cascade of two sourcechannel translation models is used to go from english names to their chinese transliterationthus even the initial requirement of creating candidate transformation rules which may require knowledge of the phonology of the target language is eliminatedwe also investigate incorporation of this transliteration system in a crosslingual spoken document retrieval application in which english text queries are used to index and retrieve mandarin audio from the tdt corpuswe break down the transliteration process into various steps as depicted in figure 1steps 1 and 3 are deterministic transformations while steps 2 and 4 are accomplished using statistical meansthe ibm sourcechannel model for statistical machine translation plays a central role in our systemwe therefore describe it very briefly here for completenessin this model a word foreign language sentence is modeled as the output of a noisy channel whose input is its correct word english translation and having observed the channel output one seeks a posteriori the most likely english sentence the translation model is estimated from are available both for training models2 as well as for decoding3 the task of determining the most likely translation since we seek chinese names which are transliteration of a given english name the notion of words in a sentence in the ibm model above is replaced with phonemes in a wordthe roles of english and chinese are also reversedtherefore represents a sequence of english phonemes and for instance a sequence of gif symbols in step 2 described abovethe overall architecture of the proposed transliteration system is illustrated in figure 2we have available from meng et al a small list of about 3875 english names and their chinese transliterationa pinyin rendering of the chinese transliteration is also providedwe use the festival texttospeech system to obtain a phonemic pronunciation of each english namewe also replace all pinyin symbols by their pronunciations which are described using an inventory of generalized initials and finalsthe pronunciation table for this purpose is obtained from an elementary mandarin textbook the net result is a corpus of 3875 pairs of sentences of the kind depicted in the second and third lines of figure 1the vocabulary of the english side of this parallel corpus is 43 phonemes and the chinese side is 58 note however that only 409 of the 21 37 possible initialfinal combinations constitute legal pinyin symbolsa second corpus of 3875 sentence pairs is derived corresponding to the fourth and fifth lines of figure 1 this time to train a statistical model to translate pinyin sequences to chinese charactersthe vocabulary of the pinyin side of this corpus is 282 and that of the character side is about 680these of course are much smaller than the inventory of chinese pinyin and charactersetswe note that certain characters are preferentially used in transliteration over others and the resulting frequency of characterusage is not the same as unrestricted chinese texthowever there is not a distinct set of characters exclusively for transliterationfor purposes of comparison with the transliteration accuracy reported by meng et al we divide this list into 2233 training namepairs and 1541 test namepairsfor subsequent clir experiments we create a larger training set of 3625 namepairs leaving only 250 namespairs for 
intrinsic testing of transliteration performancethe actual training of all translation models proceeds according to a standard recipe recommended in giza namely 5 iterations of model 1 followed by 5 of model 2 10 hmmiterations and 10 iterations of model 4the gif language model required for translating english phoneme sequences to gif sequences is estimated from the training portion of the 3875 chinese namesa trigram language model on the gif vocabulary is estimated with the cmu toolkit using goodturing smoothing and katz backoffnote that due to the smoothing this language model does not necessarily assign zero probability to an illegal gif sequence eg one containing two consecutive initialsthis causes the first translation system to sometimes though very rarely produce gif sequences which do not correspond to any pinyin sequencewe make an ad hoc correction of such sequences when mapping a gif sequence to pinyin which is otherwise trivial for all legal sequences of initials and finalsspecifically a final e or i or a is tried in that order between consecutive initials until a legitimate sequence of pinyin symbols obtainsthe language model required for translating pinyin sequences to chinese characters is relatively straightforwarda character trigram model with goodturing discounting and katz backoff is estimated from the list of transliterated nameswe use the rewrite decoder provided by isi along with the two translation models and their corresponding language models trained either on 2233 or 3625 namepairs as described above to perform transliteration of english names in the respective test sets with 1541 or 250 namepairs respectivelya small but important manual setting in the rewrite decoder is a list of zero fertility wordsin the ibm model described earlier these are the words which may be deleted by the noisy channel when transforming into for the decoder these are therefore the words which may be optionally inserted in even when there is no word in of which they are considered a direct translationfor the usual case of chinese to english translation these would usually be articles and other function words which may not be prevalent in the foreign language but frequent in englishfor the phonemetogif translation model the words which need to be inserted in this manner are syllabic nucleithis is because mandarin does not permit complex consonant clusters in a way that is quite prevalent in englishthis linguistic knowledge however need not be imparted by hand in the ibm modelone can indeed derive such a list from the trained models by simply reading off the list of symbols which have zero fertility with high probabilitythis list in our case is i e you o r tu ou c iu iethe second translation system for converting pinyin sequences to character sequences has a onetoone mapping between symbols and therefore has no words with zero fertilitywe evaluate the efficacy of our transliteration at two levelsfor comparison with the very comparable setup of meng et al we measure the accuracy of the pinyin output produced by our system after step 3 in section 23the results are shown in table 1 where pinyin error rate is the edit distance between the correct pinyin representation of the correct transliteration and the pinyin sequence output by the systemnote that the pinyin error performance of our fully statistical method is quite competitive with previous resultswe further note that increasing the training data results in further reduction of the syllable error ratewe concede that this performance while 
comparable to other systems is not satisfactory and merits further investigationwe also evaluate the efficacy of our second translation system which maps the pinyin sequence produced by the previous stages to a sequence of chinese characters and obtain character error rates of 126thus every correctly recognized pinyin symbol has a chance of being transformed with some error resulting in higher character error rate than the pinyin error ratenote that while significantly lower error rates have been reported for converting pinyin to characters in generic chinese text ours is a highly specialized subset of transliterated foreign names where the choice between several characters sharing the same pinyin symbol is somewhat arbitraryseveral multilingual speech and text applications require some form of name transliteration crosslingual spoken document retrieval being a prototypical examplewe build upon the experimental infrastructure developed at the 2000 johns hopkins summer workshop where considerable work was done towards indexing and retrieving mandarin audio to match english text queriesspecifically we find that in a large number of queries used in those experiments english proper names are not available in the translation lexicon and are subsequently ignored during retrievalwe use the technique described above to transliterate all such names into chinese characters and observe the effect on retrieval performancethe tdt2 corpus which we use for our experiments contains 2265 audio clips of mandarin news stories along with several thousand contemporaneously published chinese text articles and english text and audio broadcaststhe articles tend to be several hundred to a few thousand words long while the audio clips tend to be two minutes or less on averagethe purpose of the corpus is to facilitate research in topic detection and tracking and exhaustive relevance judgments are provided for several topics ie for each of at least 17 topics every english and chinese article and news clip has been examined by a human assessor and determined to be either onor offtopicwe randomly select an english article on each of the 17 topics as a query and wish to retrieve all the mandarin audio clips on the same topic without retrieving any that are offtopicfor mitigating the variability due to query selection we choose up to 12 different english articles for each of the 17 topics and average retrieval performance over this selection before reporting any resultswe use the query termselection and translation technique described by meng et al to convert the english document to chinese the only augmentation being the transliterated names there are roughly 2000 tokens in the queries which are not translatable and almost all of them are proper nameswe report ir performance with and without the nametransliterationwe use a different information retrieval system from the one used in the 2000 workshop to perform the retrieval taska brief description of the system is therefore in orderthe hopkins automated information retriever for combing unstructured text is a research retrieval system developed at the johns hopkins university applied physics laboratorythe system was developed to investigate knowledgelight methods for linguistic processing in text retrievalhaircut uses a statistical language model of retrieval such as the one explored by hiemstra the model ranks documents according to the probability that the terms in a query are generated by a documentvarious smoothing methods have been proposed to combine the 
contributions for each term based on the document model and also a generic model of the languagemany have found that a simple mixture model using document term frequencies for the former and occurrence statistics from a large corpus for the later works quite wellmcnamee and mayfield have shown using haircut that overlapping character ngrams are effective for retrieval in nonasian languages and that translingual retrieval between closely related languages is quite feasible even without translation resources of any kind for the task of retrieving mandarin audio from chinese text queries on the tdt2 task the system described by meng et al achieved a mean average precision of 0733 using character bigrams for indexingon identical queries haircut achieved 0762 using character bigramsthis figure forms the monolingual baseline for our clir systemwe first indexed the automatic transcription of the tdt2 mandarin audio collection using character bigrams as done by meng et al we performed clir using the chinese translations of the english queries with and without transliteration of proper names and compared the standard 11step mean average precision on the tdt2 audio corpusour results and the corresponding results from meng et al are reported in table 2without name transliteration the performance of the two clir systems is nearly identical a paired ttest shows that the difference in the maps of 0514 and 0501 is significant only at a value of 074a small improvement in map is obtained by the haircut system with name transliteration over the system without name transliteration the improvement from 0501 to 0515 is statistically significant at a value of 0084the statistical significance of the improvement from 0514 to 0522 by meng et al is not known to usin any event a need for improvement in transliteration is suggested by this resultwe recently received a large list of nearly 2m chineseenglish namedentity pairs from the ldcas a pilot experiment we simply added this list to the translation lexicon of the clir system ie we translated those names in our english queries which happened to be available in this ldc listthis happens to cover more than 85 of the previously untranslatable names in our queriesfor the remaining names we continued to use our automatic transliteratorto our surprise the map improvement from 0501 to 0506 was statistically insignificant and the reason why the use of the ostensibly correct transliteration most of the time still does not result in any significant gain in clir performance continues to elude uswe conjecture that the fact that the audio has been processed by an automatic speech recognition system which in all likelihood did not have many of the proper names in question in its vocabulary may be the because of this dismal performanceit is plausible though we cannot find a stronger justification for it that by using the 10best transliterations produced by our automatic system we are adding robustness against asr errors in the retrieval of proper namesthe ldc chineseenglish named entity list was compiled from xinhua news sources and consists of nine pairs of lists one each to cover personnames placenames organizations etcwhile there are indeed nearly 2 million namepairs in this list a large number of formatting character encoding and other errors exist in this beta release making it difficult to use the corpus as is in our statistical mt systemwe have tried using from this resource the two lists corresponding to personnames and placenames respectively and have attempted to 
augment the training data for our system described previously in section 21however we further screened these lists as well in order to eliminate possible errorsthere are nearly 1 million pairs of person or placenames in the ldc corpusin order to obtain a clean corpus of named entity transliterations we performed the following steps 3we then aligned all the training sentence pairs with this translation model and extracted roughly a third of the sentences with an alignment score above a certain tunable threshold this resulted in the extraction of 346860 namepairs4we divided the set into 343738 pairs for training and 3122 for testing5we estimated a pinyin language model from the training portion above6we retrained the statistical mt system on this presumably good training set and evaluated the pinyin error rate of the transliterationthe result of this evaluation is reported in table 3 against the line huge mt where we also report the transliteration performance of the socalled big mt system of table 1 on this new test setwe note again with some dismay that the additional training data did not result in a significant improvement in transliteration performancewe continue to believe that careful dataselection is the key to successful use of this betarelease of the ldc named entity corpuswe therefore went back to step 3 of the procedure outlined above where we had used alignment scores from an mt system to select good sentencepairs from our training data and instead of using the mt system trained in step 2 immediately preceding it we used the previously built big mt system of section 21 which we know is trained on a small but clean dataset of 3625 namepairswith a similar threshold as above we again selected roughly 300k namepairs being careful to leave out any pair which appears in the 3122 pair test set described above and reestimated the entire phonemetogif translation system on this new corpuswe evaluated this system on the 3122 namepair test set for transliteration performance and the results are included in table 3note that significant improvements in transliteration performance result from this alternate method of data selectionwe reran the clir experiments on the tdt2 corpus using the somewhat improved entity transliterator described above with the same query and document collection specifications as the experiments reported in table 2the results of this second experiment is reported in table 4 where the performance of the big mt transliterator is reproduced for comparison and without name transliteration note that the gain in clir performance is again only somewhat significant with the improvement in map from 0501 to 0517 being significant only at a value of 0080we have presented a name transliteration procedure based on statistical machine translation techniques and have investigated its use in a cross lingual spoken document retrieval taskwe have found small gains in the extrinsic evaluation of our procedure map improvement from 0501 to 0517in a more intrinsic and direct evaluation we have found ways to gainfully filter a large but noisy training corpus to augment the training data for our models and improve transliteration accuracy considerably beyond our starting point eg to reduce pinyin error rates from 511 to 425we expect to further refine the translation models in the future and apply them in other tasks such as text translation
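At its core, the transliteration system above ranks candidate symbol sequences by a source-channel score: a translation-model probability times a language-model probability. The toy sketch below re-ranks an externally supplied candidate list with an IBM Model 1-style channel score and a trigram language model; it is meant only to illustrate the scoring, not to reproduce the GIZA++-trained models or the ReWrite decoder used in the paper, and the probability tables and smoothing floors are assumptions.

```python
import math

def model1_logprob(observed, candidate, t_table, eps=1.0):
    """IBM Model 1-style channel score log P(observed | candidate).

    observed: e.g. the English phoneme sequence; candidate: e.g. a GIF sequence.
    t_table[(o, c)] holds an assumed lexical translation probability t(o | c).
    """
    cand = ["<null>"] + list(candidate)   # NULL word accounts for zero-fertility insertions
    logp = math.log(eps) - len(observed) * math.log(len(cand))
    for o in observed:
        logp += math.log(sum(t_table.get((o, c), 1e-10) for c in cand))
    return logp

def trigram_logprob(seq, lm):
    """lm[(w1, w2, w3)] -> P(w3 | w1, w2); unseen trigrams get a small assumed floor."""
    padded = ["<s>", "<s>"] + list(seq) + ["</s>"]
    return sum(math.log(lm.get(tuple(padded[i - 2:i + 1]), 1e-6)) for i in range(2, len(padded)))

def best_candidate(observed, candidates, t_table, lm):
    """argmax over candidates of P(candidate) * P(observed | candidate)."""
    return max(candidates,
               key=lambda c: trigram_logprob(c, lm) + model1_logprob(observed, c, t_table))
```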
W03-1508
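The pinyin and character error rates quoted for this system are edit distances between reference and hypothesis symbol sequences. A standard Levenshtein computation, normalised by total reference length (the normalisation is my assumption, since the paper does not spell it out), looks like this:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two symbol sequences (insertions, deletions, substitutions)."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            prev, d[j] = d[j], min(d[j] + 1,         # skip a reference symbol
                                   d[j - 1] + 1,     # insert a hypothesis symbol
                                   prev + (r != h))  # substitute, or match at no cost
    return d[-1]

def symbol_error_rate(refs, hyps):
    """Aggregate error rate, e.g. pinyin error rate over a test set of name pairs."""
    total_edits = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    total_ref = sum(len(r) for r in refs)
    return total_edits / total_ref
```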
Transliteration of proper names in cross-lingual information retrieval. We address the problem of transliterating English names using Chinese orthography in support of cross-lingual speech and text processing applications. We demonstrate the application of statistical machine translation techniques to "translate" the phonemic representation of an English name, obtained by using an automatic text-to-speech system, to a sequence of initials and finals, commonly used subword units of pronunciation for Chinese. We then use another statistical translation model to map the initial/final sequence to Chinese characters. We also present an evaluation of this module in retrieval of Mandarin spoken documents from the TDT corpus using English text queries. We adopt the noisy channel modeling framework.
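The retrieval side of the work summarised above, following the statistical language-model approach attributed to Hiemstra, ranks documents by the probability that the query terms are generated by a mixture of a document model and a collection model, with Chinese text indexed by overlapping character bigrams. The sketch below is a simplified stand-in for HAIRCUT, not its actual code; the mixture weight, the smoothing floor, and the data structures are assumptions.

```python
import math
from collections import Counter

def char_bigrams(text):
    """Index Chinese text by overlapping character bigrams."""
    return [text[i:i + 2] for i in range(len(text) - 1)]

def score_document(query_terms, doc_counts, coll_counts, coll_size, lam=0.5):
    """log P(query | doc) under a simple mixture of document and collection term models."""
    doc_len = sum(doc_counts.values())
    score = 0.0
    for t in query_terms:
        p_doc = doc_counts.get(t, 0) / doc_len if doc_len else 0.0
        p_coll = coll_counts.get(t, 0) / coll_size if coll_size else 0.0
        p = lam * p_doc + (1 - lam) * p_coll
        score += math.log(p) if p > 0 else math.log(1e-12)   # assumed floor for unseen terms
    return score

def rank(query_terms, docs, coll_counts, coll_size):
    """docs: {doc_id: raw text}; query_terms are the translated/transliterated Chinese terms,
    expanded here into character bigrams before matching."""
    q = [b for term in query_terms for b in char_bigrams(term)]
    scored = {d: score_document(q, Counter(char_bigrams(text)), coll_counts, coll_size)
              for d, text in docs.items()}
    return sorted(scored, key=scored.get, reverse=True)
```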
the first international chinese word segmentation bakeoff this paper presents the results from the aclsighansponsored first international chinese word segmentation bakeoff held in 2003 and reported in conjunction with the second sighan workshop on chinese language processing sapporo japan we give the motivation for having an international segmentation contest and we report on the results of this first international contest analyze these results and make some recommendations for the future chinese word segmentation is a difficult problem that has received a lot of attention in the literature reviews of some of the various approaches can be found in the problem with this literature has always been that it is very hard to compare systems due to the lack of any common standard test setthus an approach that seems very promising based on its published report is nonetheless hard to compare fairly with other systems since the systems are often tested on their own selected test corporapart of the problem is also that there is no single accepted segmentation standard there are several including the four standards used in this evaluationa number of segmentation contests have been held in recent years within mainland china in the context of more general evaluations for chineseenglish machine translationsee for the first and second of these the third evaluation will be held in august 2003the test corpora were segmented according to the chinese national standard gb 13715 though some lenience was granted in the case of plausible alternative segmentations so while gb 13715 specifies the segmentation for mao zedong was also allowedaccuracies in the mid 80s to mid 90s were reported for the four systems that participated in the first evaluation with higher scores being reported for the second evaluationthe motivations for holding the current contest are twofoldfirst of all by making the contest international we are encouraging participation from people and institutions who work on chinese word segmentation anywhere in the worldthe final set of participants in the bakeoff include two from mainland china three from hong kong one from japan one from singapore one from taiwan and four from the united statessecondly as we have already noted there are at least four distinct standards in active use in the sense that large corpora are being developed according to those standards see section 21it has also been observed that different segmentation standards are appropriate for different purposes that the segmentation standard that one might prefer for information retrieval applications is likely to be different from the one that one would prefer for texttospeech synthesis see for useful discussionthus while we do not subscribe to the view that any of the extant standards are in fact appropriate for any particular application nevertheless it seems desirable to have a contest where people are tested against more than one standarda third point is that we decided early on that we would not be lenient in our scoring so that alternative segmentations as in the case of mao zedong cited above would not be allowedwhile it would be fairly straightforward to automatically score both alternatives we felt we could provide a more objective measure if we went strictly by the particular segmentation standard being tested on and simply did not get into the business of deciding upon allowable alternativescomparing segmenters is difficultthis is not only because of differences in segmentation standards but also due to differences in the 
design of systems systems based exclusively on lexical and grammatical analysis will often be at a disadvantage during the comparison compared to systems trained exclusively on the training datacompetitions also may fail to predict the performance of the segmenter on new texts outside the training and testing setsthe handling of outofvocabulary words becomes a much larger issue in these situations than is accounted for within the test environment a system that performs admirably in the competition may perform poorly on texts from different registersanother issue that is not accounted for in the current collection of evaluations is the handling of short strings with minimal context such as queries submitted to a search enginethis has been studied indirectly through the crosslanguage information retrieval work performed for the trec 5 and trec 6 competitions this report summarizes the results of this first international chinese word segmentation bakeoff provides some analysis of the results and makes specific recommendations for future bakeoffsone thing we do not do here is get into the details of specific systems each of the participants was required to provide a four page description of their system along with detailed discussion of their results and these papers are published in this volumethe corpora are detailed in table 1links to descriptions of the corpora can be found at httpwwwsighanorgbakeoff2003 bakeoff_instrhtml publications on specific corpora are the beijing university standard is very similar to that outlined in table 1 lists the abbreviations for the four corpora that will be used throughout this paperthe suffixes o and c will be used to denote open and closed tracks respectively thus asoc denotes the academia sinica corpus both open and closed tracks and pkc denotes the beijing university corpus closed trackduring the course of this bakeoff a number of inconsistencies in segmentation were noted in the ctb corpus by one of the participantsthis was done early enough so that it was possible for the ctb developers to correct some of the more common cases both in the training and the test datathe revised training data was posted for participants and the revised test data was used during the testing phaseinconsistencies were also noted by another participant for the as corpusunfortunately this came too late in the process to correct the datahowever some informal tests on the revised testing data indicated that the differences were minorthe contest followed a strict set of guidelines and a rigid timetablethe detailed instructions for the bakeoff can be found at httpwwwsighan orgbakeoff2003bakeoff_instrhtml training material was available starting march 15 testing material was available april 22 and the results had to be returned to the sighan ftp site by april 25 no later than 1700 edtupon initial registration sites were required to declare which corpora they would be training and testing on and whether they would be participating in the open or closed tracks on each corpus where these were defined as follows for the open test sites were allowed to train on the training set for a particular corpus and in addition they could use any other material including material from other training corpora proprietary dictionaries material from the www and so forthhowever if a site selected the open track the site was required to explain what percentage of the results came from which sourcesfor example if the system did particularly well on outofvocabulary words then the participants were 
required to explain if for example those results could mostly be attributed to having a good dictionaryin the closed test participants could only use training material from the training data for the particular corpus being testing onno other material was allowedother obvious restrictions applied participants were prohibited from testing on corpora from their own sites and by signing up for a particular track participants were declaring implicitly that they had not previously seen the test corpus for that trackscoring was completely automaticnote that the scoring software does not correct for cases where a participant converted from one coding scheme into another and any such cases were counted as errorsresults were returned to participants within a couple of days of submission of the segmented test datathe script used for scoring can be downloaded from httpwwwsighanorg bakeoff2003score it is a simple perl script that depends upon a version of different that supports the y flag for sidebyside output formatparticipating sites are shown in table 2these are a subset of the sites who had registered for the bakeoff as some sites withdrew due to technical difficultiesan unfortunate and sometimes unforseen complexity in dealing with chinese text on the computer is the plethora of character sets and character encodings used throughout greater chinathis is demonstrated in the encoding column of table 1 this variation of encoding is exacerbated by the usual lack of specific declaration in the filesgenerally a file is said to be big five or gb when in actuality the file is encoded in a variation of thesethis is problematic in systems that utilize unicode internally since transcoding back to the original encoding may lose informationwe computed a baseline for each of the corpora by compiling a dictionary of all and only the words in the training portion of the corpuswe then used this dictionary with a simple maximum matching algorithm to segment the test corpusthe results of this experiment are presented in table 3in this and subsequent tables we list the word count for the test corpus test recall test precision f score1 the outofvocabulary rate for the test corpus the recall on oov words and the recall on invocabulary wordsper normal usage oov is defined as the set of words in the test corpus not occurring in the training corpus2 we expect systems to do at least as well as this baselineas a nominal topline we ran the same maximum matching experiments but this time populating the dictionary only with words from the test corpus this is of course a cheating experiment since one could not reasonably know exactly the set of words that occur in the test corpussince this is better than one could hope for in practice we would expect systems to generally underperform this toplinethe results of this cheating experiment are given in table 43 results for the closed tests are presented in tables 58column headings are as above except for c and c for which see section 43results for the open tests are presented in tables 912 again see section 43 for the explanation of c and c let us assume that the recall rates for the various system represent the probability that a word will be successfully identified and let us further assume that a binomial distribution is appropriate for this experimentgiven the central limit theorem for bernouilli trials eg then the 95 confidence interval is given as where is the number of trials the values for are given in tables 512 under the heading c they can be interpreted as follows to 
decide whether two sites are significantly different in their performance on a particular task one just has to compute whether their confidence intervals overlapsimilarly one can treat the precision rates as the probability that a character string that has been identified as a word is really a word these precisionbased confidences are given as c in the tablesit seems reasonable to treat two systems as significantly different if at least one of their recallbased or precisionbased confidences are differentusing this criterion all systems are significantly different from each other except that on pk closed s10 is not significantly different from s09 and s07 is not significantly different from s04in figure 1 we plot the f scores for all systems all trackswe include as base and top the baseline and topline scores discussed previouslyin most cases people performed above the baseline though well below the ideal topline note though that the two participants in the academia sinica open track underperformed the baselineperformance on the penn chinese treebank corpus was generally lower than all the other corpora omitting s02 which only ran on ctboc the scores for the other systems were uniformly higher on other corpora than they were on ctb the single exception being s11 which did better on ctbo than on hkothe baseline for ctb is also much lower than the baseline for other corpora so one might be inclined to ascribe the generally lower performance to the smaller training data for this corpusalso the oov rate for this corpus is much higher than all of the other corpora and since error rates are generally higher on oov this is surely a contributing factorhowever this would only explain why ctb showed lower performance on the closed test on the open test one might expect the size of the training data to matter less but there were still large differences between several systems performance on ctb and their performance on other corporanote also that the topline for ctb is also lower than for the other corporawhat all of this suggests is that the ctb may simply be less consistent than the other corpora in its segmentation indeed one of the participants noted a number of inconsistencies in both the training and the test data 4 systems that ran on both closed and open tracks for the same corpus generally did better on the open track indicating that using additional data can helphowever the lowerthanbaseline performance of s03 and s11 on aso may reflect issues with tuning of these additional resources to the particular standard in questionfinally note that the top performance of any system on any track was s09 on asc since performances close to our ideal topline have occasionally been reported in the literature it is worth bearing the results of this bakeoff in mind when reading such reportsfigure 2 plots the recall on outofvocabulary words for all systems and all tracksfor this meaas one word in the test datasimilarly vice president is segmented as one word in training data but as two words in the testing dataas a final example superlatives such as should be segmented as a single word if the adjective is monosyllabic and it is not being used predicatively however this principle is not consistently appliedwu also notes that the test data is different from the training data in several respectsmost of the training data comprise texts about mainland china whereas most of the testing data is about taiwanthe test data contains classes of items such as urls and english page designations that never appeared in the 
test data sure the performance of the baseline is only above 00 fortuitously as we noted in section 41similarly the topline performance is only less than 10 in cases where there are two or more possible decompositions of a string and where the option with the longest prefix is not the correct oneit is with oov recall that we see the widest variation among systems which in turn is consistent with the observation that dealing with unknown words is the major outstanding problem of chinese word segmentationwhile some systems performed little better than the baseline others had a very respectable 080 recall on oovagain there was clearly a benefit for many systems in using additional resources than what is in the training data a number of systems that were run on both closed and open tracks showed significant improvements in the open trackfor the closedtrack entries that did well on oov one must conclude that they have effective unknownword detection methodswe feel that this first international chinese word segmentation bakeoff has been useful in that it has provided us with a good sense of the range of performance of various systems both from academic and industrial institutionsthere is clearly no single best system insofar as there is no system that consistently outperformed all the others on all trackseven if there were the most one could say is that for the four different segmentation standards and associated corpora this particular system outperformed the others but there could be no implication that said system would be the most appropriate for all applicationsone thing that we have not explicitly discussed in this paper is which type of approach shows the most promise given the different submissionswhile we are familiar with the approaches taken in several of the tested systems we leave it up to the individual participants to describe their approaches and hopefully elucidate which aspects of their approaches are most responsible for their successes and failures the participants papers all appear in this volumewe leave it up to the research community as a whole to decide whether one approach or another shows most promisewe believe that there should be future competitions of this kind possibly not every year but certainly every couple of years and we have some specific recommendations on how things might be improved in such future competitions to the restriction that participants may not be evaluated on data from their own institutionthe decision this time to let people pick and choose was motivated in part by the concern that if we insisted that people participate in all tracks some participants might be less inclined to participateit was also motivated in part by the different chinese coding schemes used by the various corpora and the possibility that someone is system might work on one coding scheme but not the otherhowever with sufficient planning perhaps giving people a longer period of time for training their systems than was possible with this contest it should be possible to impose this restriction without scaring away potential participants2we would like to see more testing data developed for the next bakeoffwhile the test sets turned out to be large enough to measure significant differences between systems in most cases a larger test set would allow even better statisticsin some cases more training data will also be neededgiven the problems noted by some of the participants with some of the data we would also like to see more consistently annotated training and test data and test 
data that is more representative of what was seen in the training data3we would like to expand the testing data to include texts of various lengths particularly short strings in order to emulate query strings seen in commercial search engines4finally one question that we did not ask that should have been asked was whether the tested system is used as part of a commercial product or notit is often believed of natural language and speech applications that deployed commercial systems are about a generation behind the systems being developed in research laboratoriesit would be interesting to know if this is true in the domain of chinese word segmentation which should be possible to find out if we get a good balance of bothfor the present we will make the training and test data for the bakeoff available via httpwww sighanorgbakeoff2003 so that others can better study the results of this contestfirst and foremost we wish to thank the following institutions for providing the training and testing data for this bakeoff institute of linguistics academia sinicainstitute of computational linguistics beijing universitylanguage information sciences research centre city university of hong kongthe chinese treebank project university of pennsylvania and the linguistic data consortiumwithout the generous contribution of these resources this competition would not have been possiblewe would also like to thank martha palmer for making funds available to pay for translations of the detailed bakeoff instructions and to fudong chiou susan converse and nianwen xue for their work on the translationsandi wu and aitao chen provided useful feedback on errors in some of the corporathe first author wishes to thank bill dumouchel of att labs for advice on the statisticswe also wish to thank professor tianshun yao of northeast university for sending us the reports of the chinese national competitionsfinally we thank fei xia and qing ma for their work on the second meeting of sighan of which this bakeoff is a part
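The baseline (and the cheating topline) in the bakeoff were produced by simple left-to-right maximum matching against a dictionary compiled from all and only the words in the training (respectively, test) corpus. A minimal version of such a segmenter is sketched below; falling back to a single character when no dictionary word matches is an assumption about how unmatched characters are handled.

```python
def max_match(sentence, dictionary, max_len=None):
    """Greedy left-to-right maximum matching segmentation.

    sentence: an unsegmented string of Chinese characters.
    dictionary: a set of words, e.g. all and only the words in the training corpus.
    """
    if max_len is None:
        max_len = max((len(w) for w in dictionary), default=1)
    out, i = [], 0
    while i < len(sentence):
        for l in range(min(max_len, len(sentence) - i), 0, -1):
            if l == 1 or sentence[i:i + l] in dictionary:
                out.append(sentence[i:i + l])   # single characters are the fallback
                i += l
                break
    return out
```

For example, max_match("北京大学生", {"北京", "大学", "大学生", "北京大学"}) greedily commits to the longest match 北京大学 and then falls back to 生, which illustrates why such a baseline is sensitive to long dictionary entries.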
W03-1719
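The quantities reported in the bakeoff's result tables (test recall, test precision, F score, OOV rate, and recall on OOV and in-vocabulary words) can be computed by comparing gold and hypothesis words as character intervals. The sketch below does exactly that; representing words as (start, end) spans is my modelling choice, not a description of the official Perl score script.

```python
def spans(words):
    """Convert a segmented sentence (list of words) into (start, end, word) character intervals."""
    out, pos = [], 0
    for w in words:
        out.append((pos, pos + len(w), w))
        pos += len(w)
    return out

def segmentation_scores(gold_sents, hyp_sents, training_vocab):
    """gold_sents / hyp_sents: parallel lists of segmented sentences (lists of words)."""
    n_gold = n_hyp = n_correct = 0
    oov_total = oov_correct = iv_total = iv_correct = 0
    for gold, hyp in zip(gold_sents, hyp_sents):
        g, h = spans(gold), set(spans(hyp))
        n_gold += len(g)
        n_hyp += len(h)
        for span in g:
            hit = span in h                       # same word at the same position
            n_correct += hit
            if span[2] in training_vocab:
                iv_total += 1
                iv_correct += hit
            else:
                oov_total += 1
                oov_correct += hit
    recall = n_correct / n_gold if n_gold else 0.0
    precision = n_correct / n_hyp if n_hyp else 0.0
    f = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"R": recall, "P": precision, "F": f,
            "OOV_rate": oov_total / n_gold if n_gold else 0.0,
            "OOV_recall": oov_correct / oov_total if oov_total else 0.0,
            "IV_recall": iv_correct / iv_total if iv_total else 0.0}
```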
The first international Chinese word segmentation bakeoff. This paper presents the results from the ACL-SIGHAN-sponsored First International Chinese Word Segmentation Bakeoff held in 2003 and reported in conjunction with the Second SIGHAN Workshop on Chinese Language Processing, Sapporo, Japan. We give the motivation for having an international segmentation contest, and we report on the results of this first international contest, analyze these results, and make some recommendations for the future.
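The significance test used in the bakeoff treats each system's recall (or precision) as a binomial proportion and calls two systems different when their 95% confidence intervals fail to overlap; the half-width used here is the usual normal approximation, roughly 2·sqrt(p(1−p)/n), reconstructed as an assumption since the formula itself is garbled in the text above. A direct transcription:

```python
import math

def confidence_interval(p, n):
    """Approximate 95% interval for a binomial proportion p estimated from n trials."""
    half_width = 2 * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

def significantly_different(p1, n1, p2, n2):
    """Two systems differ if their recall- (or precision-) based intervals do not overlap."""
    lo1, hi1 = confidence_interval(p1, n1)
    lo2, hi2 = confidence_interval(p2, n2)
    return hi1 < lo2 or hi2 < lo1
```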