Chinese Word Segmentation as LMR Tagging

Unlike English text, in which sentences are sequences of words delimited by white space, Chinese text represents sentences as strings of Chinese characters, or hanzi, without similar natural delimiters. Therefore, the first step in a Chinese language processing task is to identify the sequence of words in a sentence and mark boundaries in the appropriate places. This may sound simple enough, but in reality identifying words in Chinese is a non-trivial problem that has drawn a large body of research in the Chinese language processing community.
The key to accurate automatic word identification in Chinese lies in the successful resolution of ambiguities and a proper way to handle out-of-vocabulary words. The ambiguity in Chinese word segmentation is due to the fact that a hanzi can occur in different word-internal positions. Given the proper context, generally provided by the sentence in which it occurs, the position of a hanzi can be determined. In this paper, we model Chinese word segmentation as a hanzi tagging problem and use a machine-learning algorithm to determine the appropriate position for a hanzi. There are several reasons why we may expect this approach to work. First, Chinese words generally have fewer than four characters; as a result, the number of positions is small. Second, although each hanzi can in principle occur in all possible positions, not all hanzi behave this way; a substantial number of hanzi are distributed in a constrained manner. For example, the plural marker almost always occurs in the word-final position. Finally, although Chinese words cannot be exhaustively listed and new words are bound to occur in naturally occurring text, the same is not true for hanzi: the number of hanzi stays fairly constant, and we do not generally expect to see new hanzi.

We represent the positions of a hanzi with four different tags: LM for a hanzi that occurs on the left periphery of a word, followed by other hanzi; MM for a hanzi that occurs in the middle of a word; MR for a hanzi that occurs on the right periphery of a word, preceded by other hanzi; and LR for a hanzi that is a word by itself. We call this LMR tagging. With this approach, word segmentation is a process in which each hanzi is assigned an LMR tag and sequences of hanzi are then converted into sequences of words based on the LMR tags. The use of four tags is linguistically intuitive, in that LM tags morphemes that are prefixes or stems in the absence of prefixes, MR tags morphemes that are suffixes or stems in the absence of suffixes, MM tags stems with affixes, and LR tags stems without affixes. Representing the distributions of hanzi with LMR tags also makes it easy to use machine learning algorithms that have been successfully applied to other tagging problems, such as POS tagging and the IOB tagging used in text chunking.
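To make the conversion from LMR tags back into words concrete, here is a minimal sketch in Python (our own illustration, not code from the paper; the function name and the flushing of ill-formed trailing fragments are our assumptions):

    def lmr_to_words(chars, tags):
        """Group a character sequence into words according to its LMR tags.

        LM = word-initial, MM = word-internal, MR = word-final,
        LR = single-character word (the four tags described above).
        """
        words, current = [], []
        for ch, tag in zip(chars, tags):
            current.append(ch)
            if tag in ("MR", "LR"):      # a word ends at this character
                words.append("".join(current))
                current = []
        if current:                      # flush an ill-formed trailing fragment
            words.append("".join(current))
        return words

    # e.g. lmr_to_words(list("ABCD"), ["LM", "MR", "LR", "LR"]) -> ["AB", "C", "D"]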
In this paper we present Chinese word segmentation algorithms based on the so-called LMR tagging. Our LMR taggers are implemented with the maximum entropy Markov model, and we then use transformation-based learning to combine the results of two LMR taggers that scan the input in opposite directions. Our system achieves F-scores of 95.9 and 91.6 on the Academia Sinica corpus and the Hong Kong City University corpus respectively. Our algorithm consists of two parts. We first implement two maximum entropy taggers, one of which scans the input from left to right while the other scans the input from right to left. Then we implement a transformation-based algorithm to combine the results of the two taggers.

The maximum entropy Markov model (MEMM) has been successfully used in a number of tagging problems. MEMM models are capable of utilizing a large set of features that generative models cannot use; on the other hand, MEMM approaches scan the input incrementally, as generative models do. The maximum entropy Markov model used in POS tagging is described in detail in (Ratnaparkhi, 1996), and the LMR tagger here uses the same probability model. The probability model is defined over H x T, where H is the set of possible contexts (histories) and T is the set of possible tags. The model's joint probability of a history h and a tag t is defined as

    p(h, t) = \pi \mu \prod_{j=1}^{k} \alpha_j^{f_j(h, t)},

where \pi is a normalization constant, {\mu, \alpha_1, ..., \alpha_k} are the model parameters, and {f_1, ..., f_k} are known as features; each feature f_j has a corresponding parameter \alpha_j that effectively serves as a weight of this feature. In the training process, given a sequence of characters and their LMR tags as training data, the purpose is to determine the parameters that maximize the likelihood of the training data. The success of the model in tagging depends to a large extent on the selection of suitable features: given a history, a feature must encode information that helps to predict the tag. The features we used in our experiments are instantiations of the following feature templates, in which C refers to characters and T refers to LMR tags: the previous two characters and the next two characters; the previous character and the next character; the tag of the previous character; and the tag of the character two before the current character.

One potential problem with the MEMM is that it can only scan the input in one direction, from left to right or from right to left. It is noted in (Lafferty et al., 2001) that non-generative finite-state models, MEMM models included, share a weakness they call the label bias problem: transitions leaving a given state compete only against other transitions leaving that state, rather than against all transitions in the model. They proposed conditional random fields as a solution to this problem. A partial solution to the label bias problem is to compute the probability of transitions in both directions. This way we can use two MEMM taggers, one of which scans the input from left to right and the other from right to left. This strategy has been successfully used in (Shen and Joshi, 2003); in that paper, pairwise voting is used to combine the results of two supertaggers that scan the input in opposite directions. Pairwise voting is not suitable in this application, because we must make sure that the LMR tags assigned to consecutive characters are compatible; for example, an LM tag cannot immediately follow an MM tag. Pairwise voting does not use any contextual information, so it cannot prevent incompatible tags from occurring. Therefore, in the experiments described here we use transformation-based learning to combine the results of the two MEMM taggers. The feature set used in the TBL algorithm is similar to that used for the NP chunking task.

We conducted closed track experiments on three data sources: the Academia Sinica corpus, the Beijing University corpus, and the Hong Kong City University corpus. We first split the training data from each of the three sources into two portions: part of the official training data is used to train the MEMM taggers, and the rest is held out as development test data. The development set is used to estimate the optimal number of iterations in MEMM training. Curves of the F-score on the development set as a function of the number of MEMM training iterations were plotted for each dataset (for the Academia Sinica corpus, the x-axis is the number of training iterations and the y-axis is the F-score). Experiments show that the MEMM models achieve the best results after 500 and 400 rounds of training on the AS data and the PKU data respectively. However, the results on the CityU data are not as clear-cut: from round 100 through 200, the F-score on the development set stays almost unchanged. We think this is because the CityU data comes from three different sources, which differ in their optimal number of iterations. We decided to train the MEMM taggers for 160 iterations on the Hong Kong City University data.
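As an illustration of how the feature templates above might be instantiated for the left-to-right tagger, here is a minimal sketch; the exact template set is our reading of the list earlier in this section, and all names are ours:

    def context_features(chars, tags, i):
        """Instantiate context features for the character at position i.

        chars: the full character sequence; tags: LMR tags already assigned
        to positions < i during a left-to-right scan; '_' pads positions
        outside the sentence.
        """
        c = lambda j: chars[j] if 0 <= j < len(chars) else "_"
        t = lambda j: tags[j] if 0 <= j < i else "_"
        return [
            "C0=" + c(i),                           # current character
            "C-1=" + c(i - 1), "C-2=" + c(i - 2),   # previous two characters
            "C+1=" + c(i + 1), "C+2=" + c(i + 2),   # next two characters
            "T-1=" + t(i - 1), "T-2=" + t(i - 2),   # tags of the two preceding characters
        ]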
We implemented two MEMM taggers: one scans the input from left to right and the other from right to left. We then used these two MEMM taggers to tag both the training data and the development data, and we used the LMR tagging output to train a transformation-based learner, using fast TBL (Ngai and Florian, 2001). The middle column in Table 2 shows the F-score on the development set achieved by the MEMM tagger that scans the input from left to right, and the last column gives the results after the transformation-based learner is applied.

Table 2: F-score on development data
    Corpus   MEMM     MEMM+TBL
    AS       0.9595   0.9603
    HK       0.9143   n/a
    PK       0.9391   0.9398

The results show that using transformation-based learning gives rise to only slight improvements; it seems that the bidirectional approach does not help much for LMR tagging. Therefore, we only submitted the results of our left-to-right MEMM tagger, retrained on the entire training sets, as our official results. The results on the official test data are similar to what we obtained on our development set, except that the F-score on the Beijing University corpus is over 2% lower in absolute accuracy than what we expected. The reason is that in the training data of the Beijing University corpus all the numbers are encoded in GBK, while in the test data many numbers are encoded in ASCII, which are unknown to our tagger. With this problem fixed, the results on the official test data are compatible with the results on our development set. However, we have withdrawn our segmentation results on the Beijing University corpus.

Table 3: Official bakeoff outcome
    Corpus   R       P       F       R_oov   R_iv
    AS       0.961   0.958   0.959   0.729   0.966
    HK       0.917   0.915   0.916   0.670   0.936

Our closed track experiments on the first SIGHAN bakeoff data show that the LMR algorithm produces promising results. Our system ranks second when tested on the Academia Sinica corpus and third on the Hong Kong City University corpus. In the future, we will try to incorporate a large word list into our tagger to test its performance in open track experiments. Its high accuracy makes it a good candidate as a general-purpose segmenter.

References

E. Brill. 1995. Transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.
C. K. Fan and W. H. Tsai. 1988. Automatic word identification in Chinese sentences by the relaxation technique. Computer Processing of Chinese and Oriental Languages, 4:33-56.
Kok-Wee Gan, Martha Palmer, and Kim-Teng Lua. 1996. A statistically emergent approach for language processing: application to modeling context effects in Chinese word boundary perception. Computational Linguistics, 22(4):531-553.
J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML 2001.
G. Ngai and R. Florian. 2001. Transformation-based learning in the fast lane. In Proceedings of NAACL 2001, pages 40-47.
Adwait Ratnaparkhi. 1996. A maximum entropy part-of-speech tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, University of Pennsylvania.
L. Shen and A. K. Joshi. 2003. A SNoW based supertagger with application to NP chunking. In Proceedings of ACL 2003.
R. Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377-404.
H. van Halteren, J. Zavrel, and W. Daelemans. 1998. Improving data driven wordclass tagging by system combination. In Proceedings of COLING-ACL 1998.
Andi Wu. 2003. Customizable segmentation of morphologically derived words in Chinese. International Journal of Computational Linguistics and Chinese Language Processing.
Nianwen Xue. 2003. Chinese word segmentation as character tagging. International Journal of Computational Linguistics and Chinese Language Processing.
W03-1728
Chinese Word Segmentation as LMR Tagging. In this paper we present Chinese word segmentation algorithms based on the so-called LMR tagging. Our LMR taggers are implemented with the maximum entropy Markov model, and we then use transformation-based learning to combine the results of the two LMR taggers that scan the input in opposite directions. Our system achieves F-scores of 95.9 and 91.6 on the Academia Sinica corpus and the Hong Kong City University corpus respectively. We describe for the first time the character classification approach for Chinese word segmentation, where each character is given a boundary tag denoting its relative position in a word.
HHMM-based Chinese Lexical Analyzer ICTCLAS

This document presents the results from the Institute of Computing Technology, CAS, in the ACL-SIGHAN-sponsored First International Chinese Word Segmentation Bakeoff. The authors introduce the unified HHMM-based frame of our Chinese lexical analyzer ICTCLAS and explain the operation of the six tracks, then provide the evaluation results and give more analysis. Evaluation shows that the performance of ICTCLAS is competitive compared with other systems: it has ranked top in both the CTB and PK closed tracks, and in the PK open track it ranks in second position. The ICTCLAS Big5 version was transformed from the GB version in only two days; however, it performed well in the two Big5 closed tracks. Through the first bakeoff, we could learn more about developments in Chinese word segmentation and become more confident in our HHMM-based approach. At the same time, we really found our problems during the evaluation. The bakeoff is interesting and helpful.

ICT participated in the First International Chinese Word Segmentation Bakeoff. We took part in six tracks: Academia Sinica (closed), Penn Chinese Treebank (open and closed), Hong Kong CityU (closed), and Peking University (open and closed). The structure of this document is as follows. The next section presents the HHMM-based framework of ICTCLAS. Next we detail the operation of the six tracks. The following section provides the evaluation results and gives further analysis.

2 HHMM-based Chinese lexical analysis

As illustrated in Figure 1, HHMM-based Chinese lexical analysis comprises five levels: atom segmentation, simple and recursive unknown word recognition, class-based segmentation, and POS tagging. In the whole frame, the class-based segmentation graph, which is a directed graph designed for word segmentation, is an essential intermediate data structure that links disambiguation and unknown word recognition with word segmentation and POS tagging. Atom segmentation, the bottom level of the HHMM, is an initial step. Here an atom is defined to be the minimal segmentation unit that cannot be split at any stage. Atoms consist of Chinese characters, punctuation, symbol strings, numeric expressions, and other non-Chinese character strings; any word is made up of one or more atoms. Atom segmentation segments the original text into an atom sequence, and it provides a pure and simple source for its parent HMM. For instance, a sentence containing the numeric expression "2002.9", the string "ICTCLAS", and a number of Chinese characters would be segmented into an atom sequence in which the numeric expression, the punctuation, "ICTCLAS", and each Chinese character are separate atoms. In this HMM, the original symbol is the observation, while the atom is the state. We skip the details of this operation, in that it is a simple application of a basic HMM. POS tagging and role tagging using Viterbi decoding are also skipped, because they are classic applications of HMMs. Because of the paper length limit, unknown word recognition is omitted; our previous papers give more details.

Given a word w_i, its class c_i is defined in Figure 2. Suppose |LEX| is the lexicon size; then the total number of word classes is |LEX| + 9. Given the atom sequence A, let W be a word sequence, C the corresponding class sequence of W, and W# the choice of word segmentation with maximized probability. Then we get

    W# = arg max_W P(W|A).

For a specific atom sequence A, P(A) is a constant and P(A|W) is approximately 1 (the atom sequence is determined by the word sequence), so on the basis of Bayes' theorem it can be induced that

    W# = arg max_W P(W) P(A|W) / P(A) ≈ arg max_W \prod_i p(w_i|c_i) p(c_i|c_{i-1}),

where c_0 is the begin-of-sentence class. For convenience, we often use the negative log probability instead of the proper form; that is,

    W# = arg min_W \sum_i [ -ln p(w_i|c_i) - ln p(c_i|c_{i-1}) ].
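The negative-log form above can be turned directly into a path cost. The following minimal sketch (our own, with hypothetical probability tables passed in as dictionaries) scores one candidate segmentation under the class-based HMM:

    import math

    def segmentation_cost(words, classes, p_word_given_class, p_transition, floor=1e-12):
        """Negative log probability of one candidate segmentation.

        words[i] is emitted by classes[i]; the class preceding classes[0] is the
        begin-of-sentence class "BEG".  For a lexicon word the class is the word
        itself and p(w|c) = 1.0.  Lower cost means higher probability.
        """
        cost, prev = 0.0, "BEG"
        for w, c in zip(words, classes):
            cost -= math.log(p_transition.get((prev, c), floor))
            cost -= math.log(p_word_given_class.get((w, c), floor))
            prev = c
        return cost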
We apply this class-based HMM to word segmentation; it is a generalized approach covering both common words and unknown words. The class c_i of a word w_i (Figure 2) is defined as follows: c_i = w_i iff w_i is listed in the segmentation lexicon; PER iff w_i is an unlisted personal name; LOC iff w_i is an unlisted location name; ORG iff w_i is an unlisted organization name; TIME iff w_i is an unlisted time expression; NUM iff w_i is an unlisted numeric expression; STR iff w_i is an unlisted symbol string; BEG iff it is the beginning of a sentence; END iff it is the ending of a sentence; and OTHER otherwise. "Unlisted" is referred to as being outside the lexicon. According to the word class definition, if w_i is listed in the lexicon then c_i is w_i and p(w_i|c_i) is equal to 1.0; otherwise, p(w_i|c_i) is the probability that class c_i activates w_i, and it can be estimated in the child HMM for unknown word recognition.

Figure 3 demonstrates the process of class-based word segmentation on an example sentence. The significance of our method is that it covers the possible ambiguities. Moreover, unknown words that are recognized in the following steps can be added into the segmentation graph and processed like any other common word. After transformation through the class-based HMM, word segmentation becomes a single-source shortest paths problem; hence the best choice W# of word segmentation is easy to find using Dijkstra's algorithm.

Here we introduce the operation of the different tracks. We participated in all the closed tracks. For each closed track, we first extracted all the common words and tokens that appear in the training corpus and then built the segmentation core lexicon from these words. The named entity words are classified into different named entity types: numeric and time expressions, personal names, location names, and transliterated names. From the named entities in the given corpus, we could train both the class-based segmentation HMM and the role-based HMM model for unknown word recognition. Therefore, the whole lexical system, including unknown word detection, is accomplished as shown in Figure 1.

We only participated in GB-coded open tracks. An open track is similar to a closed one; the only difference is the size of the training data set. In the Peking University open track, ICTCLAS is trained on a six-month news corpus, that is, 5 months more than in the closed track. The entire corpus is also from Peking University. Except for the additional corpus, we have not employed any other special libraries or other resources. As for the CTB open track, we found that it cannot benefit from that 5-month PKU corpus. The PKU standard is very different from the CTB one, though they seem similar; a core lexicon extracted from the Peking corpus degraded performance on the CTB testing data. Except for some named entity corpora, we could not get any more sources related to the CTB standard. Therefore, the CTB open track is operated in a similar way to the closed track.

Before the bakeoff, Big5-coded word segmentation had never been researched in our institute. Besides the character code, common words and sentence styles are greatly different between mainland China and Taiwan or Hong Kong. Because of time limitations, we spent only two days on transforming our GB-coded ICTCLAS into a Big5-coded lexical analyzer. For each Big5 closed track, we extracted a Big5-coded core lexicon.

Compared with other systems, ICTCLAS, especially the GB-coded version, is competitive. In both GB-coded closed tracks, ICTCLAS ranked top; ICTCLAS also ranks in second position in the Peking open track. Because of the lack of resources, the CTB open track is almost the same as the CTB closed track. The final performance in the Big5 tracks is not very good; as a preliminary Big5-coded system, however, we are satisfied with the result. As shown in Table 1, it can also be concluded that the class-based segmentation HMM is effective: except for CTB, IV recall is over 97%.
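Returning to the shortest-path formulation described above, here is a minimal sketch of searching the class-based segmentation graph for the best word sequence; because every edge runs left to right over the atom positions, a simple dynamic program is equivalent to Dijkstra's algorithm on this graph. The edge representation and names are our own assumptions:

    def best_segmentation(n, edges):
        """Minimum-cost path through a segmentation word graph.

        n: number of atom positions (nodes 0..n); edges: (start, end, word, cost)
        spans, where cost is the negative log probability from the class-based HMM.
        """
        INF = float("inf")
        best = [0.0] + [INF] * n       # best[j] = cheapest cost of reaching node j
        back = [None] * (n + 1)        # back-pointer: (start, word) used to reach j
        for j in range(1, n + 1):
            for start, end, word, cost in edges:
                if end == j and best[start] + cost < best[j]:
                    best[j] = best[start] + cost
                    back[j] = (start, word)
        words, j = [], n
        while j > 0 and back[j] is not None:
            start, word = back[j]
            words.append(word)
            j = start
        return list(reversed(words))

    # e.g. best_segmentation(3, [(0, 1, "A", 2.3), (1, 2, "B", 1.1),
    #                            (0, 2, "AB", 2.9), (2, 3, "C", 0.7)]) -> ["AB", "C"]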
Through the first bakeoff, we have learned more about developments in Chinese word segmentation and have become more confident in our HHMM-based approach. At the same time, we identified problems with our system during the evaluation. The bakeoff is interesting and helpful, and we look forward to participating in forthcoming bakeoffs. The authors would like to thank Prof. Shiwen Yu of Peking University for the Peking corpus, and we acknowledge our debt to Gang Zou, Dr. Bin Wang, Dr. Jian Sun, Jifeng Li, Hao Zhang, and other colleagues. Huaping Zhang would especially like to express gratitude to his girlfriend Feifei and her family for their encouragement. We also thank Richard Sproat, Qing Ma, Fei Xia, and other SIGHAN colleagues for their elaborate organization and enthusiastic help in the First International Chinese Word Segmentation Bakeoff.
W03-1730
HHMM-based Chinese Lexical Analyzer ICTCLAS. This document presents the results from the Institute of Computing Technology, CAS, in the ACL-SIGHAN-sponsored First International Chinese Word Segmentation Bakeoff. The authors introduce the unified HHMM-based frame of our Chinese lexical analyzer ICTCLAS and explain the operation of the six tracks, then provide the evaluation results and give more analysis. Evaluation shows that the performance of ICTCLAS is competitive compared with other systems: it has ranked top in both the CTB and PK closed tracks, and in the PK open track it ranks in second position. The ICTCLAS Big5 version was transformed from the GB version in only two days; however, it performed well in the two Big5 closed tracks. Through the first bakeoff, we could learn more about developments in Chinese word segmentation and become more confident in our HHMM-based approach. At the same time, we really found our problems during the evaluation. The bakeoff is interesting and helpful.
a statistical approach to the semantics of verbparticles this paper describes a distributional approach to the semantics of verbparticle has recently become the target of renewed attention notably in the area of handwritten grammar development such items because considerable problems for any semanticallygrounded nlp application because their meaning is often not simply a function of the meaning of the constituent partshowever corpusbased or empirical nlp has shown limited interest in the problemwhile there has been some work on statistical approaches to the semantics of compositional compound nominals barker and szpakowicz rosario and hearst the more idiosyncratic items have been largely ignored beyond attempts at identification and yet the identification of noncompositional phrases while valuable in itself would by no means be the end of the matterthe unique challenge posed by mwes for empirical nlp is precisely that they do not fall cleanly into the binary classes of compositional and noncompositional expressions but populate a continuum between the two extremespart of the reason for the lack of interest by computational linguists in the semantics of mwes is that there is no established gold standard data from which to construct or evaluate modelsevaluation to date has tended to be fairly ad hocanother key problem is the lack of any firm empirical foundations for the notion of compositionalitygiven this background this paper has two aimsthe first is to put the treatment of noncompositionality in corpusbased nlp on a firm empirical footingas such it describes the development of a resource for implementing and evaluating statistical models of mwe meaning based on nonexpert human judgementsthe second is to demonstrate the usefulness of such approaches by implementing and evaluating a handful of approachesthe remainder of this paper is structured as followswe outline the linguistic foundations of this research in section 2 before describing the process of resource building in section 3section 4 summarises previous work on the subject and section 5 details our proposed models of compositionalitysection 6 lays out the evaluation of those models over the gold standard data and we conclude the paper in section 7we selected the english verbparticle construction as our test case mwe in this paperverbparticle constructions consist of a head verb and one or more obligatory particles in the form of intransitive prepositions adjectives or verbs here we focus exclusively on prepositional particles due to their high productivity and variable compositionalityexamples of prepositional vpcs are put up finish up gun down and make out as used in the following sentences vpcs because significant problems for nlp systemssemantically they often cannot be understood through the simple composition of their independent partscompare for example sentences and in the meaning seems to be that peter put the picture somewhere and that as a consequence the picture was upthat is the verb and the particle make independent contributions to the sentencea parsonsstyle semantic analysis of this might be as follows sentence on the other hand requires a rather different analysisneither barbara nor simon can be said to have made or to be outthe semantic analysis we would want then might be something like the following make out n and n barbara n simon how are we to identify whether the first or the second kind of semantic representation is appropriate for any given itemif we look at the other two sentences we can see that the 
problem is even more complicatedin it is the case that the paper is finished but it would be hard to claim that anything or anyone is uponly the verb then seems to be contributing its simplex meaning and the semantic analysis is in by contrast it is the particle that contributes its simplex meaning and not the verbas a consequence of philips action the intruder is down but since there is no simplex verb to gun we would not say that anyone gunned or was gunned the semantic analysis is consequently as follows gun down n philip n intruder n down in the linguistic literature the semantics of vpcs is frequently viewed in rather more complicated terms than we are suggesting here with particles often seen as making significant constructionspecific contributions in terms of aspect however no such existing linguistic account is completely robust and for practical nlp purposes we are forced to adopt a rather straightforward definition of compositionality as meaning that the overall semantics of the mwe can be composed from the simplex semantics of its parts as described in a finite lexiconrather than attempting to model compositionality by anchoring word semantics to a given lexicon our approach in this work is to defer to an empirical reference based on human judgementswe define mwe compositionality to be an entailment relationship between the whole and its various parts and solicit entailment judgements based on a handful of example sentencesentailment is conventionally defined for logical propositions where a proposition p entails a proposition q iff there is no conceivable state of affairs that could make p true and q falsethis can be generalised to refer to the relationship between two verbs v1 and v2 that holds when the sentence someone v1s entails the sentence someone v2s according to this generalisation we would then say that the verb run entails the verb move because the sentence he runs entails the sentence he movesthe same idea can be generalised to the relationship between simplex verbs and vpcs for example sentence can be said to entail that peter put the picture somewhere and so we can say that put up entails putthe same might be said of finish up and finish in however and produce a rather different result does not entail that simon and barbara made something and cannot entail that philip gunned the intruder because there is no simplex verb to gunthis is a very useful way of testing whether the simplex verb contributes to the meaning of the constructionwe can approach the relationship between vpcs and particles in this same wayfor while it is not true that peter was up it is true that the picture was upwe can therefore say that the vpc entails the particle herefor it is not true that either susan or the paper were up and the vpc therefore does not entail the particlein the case of while it is not true that philip was down it is true that the intruder was down and the vpc therefore entails the particlefinally for it is not true that barbara and simon were out and the vpc therefore does not entail the particlewe make the assumption that these relationships between the component words of the vpc and the whole are intuitive to nonexperts and aim to use their entailment judgements accordinglythis use of entailment in exploring the semantics of verb and preposition combinations was first proposed by hawkins and applied to vpcs by lohse et al in an attempt to normalise the annotators entailment judgements we decided upon an experimental setup where the subject is for each vpc type presented 
with a fixed selection of sentential contexts for that vpcso as to avoid introducing any bias into the experiment through artificiallygenerated sentences we chose to extract the sentences from naturallyoccurring text namely the written component of the british national corpus extraction of the vpcs was based on the method of baldwin and villavicencio first we used a pos tagger and chunker to tag the bncthis allowed us to extract vpc tokens through use of the particle pos in the pos tagged output for each instance of which we simply then look for the rightmost verb within a fixed window to the left of the particle and the particle chunk tag in the chunker output where we similarly locate the rightmost verb associated with each particle chunk occurrencefinally we ran a stochastic chunkbased grammar over the chunker output to extend extraction coverage to include mistagged particles and also more reliably determine the valence of the vpcthe token output of these three methods was amalgamated by weighted votingthe above method extracted 461 distinct vpc types occurring at least 50 times attested in a total of 110199 sentencesafter partitioning the sentence data by type we randomly selected 5 sentences for each vpc typewe then randomly selected 40 vpc types to use in the entailment experimentthat is all results described in this paper are over 40 vpc types28 participants took part in our initial experimentthey were all native speakers of english recruited by advertisements posted to newsgroups and mailing listseach participant was presented with 40 sets of 5 sentences where each of the five sentences contained a particular vpcthe vpc in question was indicated at the top of the screen and they were asked two questions whether the vpc implies the verb and whether the vpc implies the particleif the vpc was round up eg the subject would be asked does round up imply round and does round up imply up respectivelythey were given the option of three responses yes no or do not knowonce they had indicated their answer and pressed next they advanced to the next vpc and set of 5 sentencesthey were unable to move on until a choice had been indicatedas with any corpusbased approach to lexical semantics our study of vpcs is hampered by polysemy eg carry outtransin the execute and transport out sensesrather than intervene to customise example sentences to a prescribed sense we accepted whatever composition of senses random sampling producedparticipants were advised that if they felt more that one meaning was present in the set of five sentences they should base their decision on the sense that had the greatest number of occurrences in the setthe effects ofpolysemy were compounded by not having any reliable method for determining valencewe consider that simply partitioning vpc items into intransitive and transitive usages would reduce polysemy significantly get down get 19 5 2 down 14 10 2 move off move 14 12 0 off 19 7 0 throw out throw 20 6 0 out 15 10 1 pay off pay 11 12 3 off 16 8 2 lift out lift 25 1 0 out 26 0 0 roll back roll 13 9 4 back 14 12 0 dig up dig 21 5 0 up 18 7 1 lie down lie 24 2 0 down 25 1 0 wear on wear 6 19 1 on 3 22 1 fall off fall 23 3 0 off 25 1 0 move out move 22 4 0 out 26 0 0 hand out hand 15 9 2 out 19 7 0 seek out seek 13 13 0 out 15 11 0 sell off sell 14 12 0 off 16 9 1 trail off trail 8 18 0 off 10 16 0 stay up stay 20 5 1 up 21 5 0 go down go 18 7 1 down 22 3 1 hang out hang 22 4 0 out 25 1 0 get back get 20 6 0 back 19 6 1 throw in throw 15 9 2 in 13 12 1 put off put 8 17 1 off 
5 19 2 shake off shake 12 14 0 off 15 11 0 step off step 25 1 0 off 26 0 0 give off give 12 12 2 off 21 5 0 carry away carry 7 17 2 away 6 18 2 throw back throw 18 7 1 back 21 4 1 pull off pull 13 10 3 off 13 6 7 carry out carry 0 25 1 out 0 25 1 brighten up brighten 9 16 1 up 16 10 0 map out map 9 17 0 out 10 16 0 slow down slow 11 14 1 down 19 7 0 sort out sort 6 19 1 out 11 15 0 bite off bite 15 10 1 off 16 8 2 add up add 12 14 0 the experiment was conducted remotely over the web using the experimental software package webexp experimental sessions lasted approximately 20 minutes and were selfpacedthe order in which the forty sets of sentences were presented was randomised by the softwarewe performed a pairwise analysis of the agreement between our 28 participantsthe overall mean agreement was 655 with a kappa score of 329an initial analysis showed that two participants strongly disagreed with the other achieving a mean pairwise r score of less than 1we decided therefore to remove these from the set before proceedingthe overall results for the remaining 26 participants can be seen in table 2the r score over these 26 participants is classed as fair and approaching moderate according to altman as mentioned above a major problem with lexical semantic studies is that items tend to occur with more than one meaningin order to test the effects of polysemy in the example sentences on interannotator agreement we analysed the agreement obtained over those vpcs which have only one meaning according to wordnet there was a total of 14 such items giving 28 entailment judgements for these items mean agreement and the r score were 700 and 387 respectivelythese are only very slightly higher than the overall scores suggesting although by no means proving that polysemy was not a significant confounding factorthe results for each vpc type can be seen in table 1 broken down into the verb and particle entailment judgements and based on the 26 participantswe took two approaches to deriving a single judgement for each testfirst we took the majority judgement to be the correct one second we identified the participant who achieved the highest overall r score with the other participants and took their judgements to be correct both sets of results will be referred to in evaluating our modelsit is interesting to look at the way in which the results for component entailment are distributed across the vpcsaccording to the majority view there are 21 fullycompositional items 10 items where neither the verb nor the particle is entailed 9 items where only the particle is entailed and 0 items where the verb alone is entailedaccording to the judgements of the centroid annotator there are 10 fullycompositional items 12 items where neither the verb nor the particle is entailed 15 where only the verb is entailed and 3 where only the particle is entailedit is surprising to notice that the majority view holds there to be no items in which the verb alone is contributing meaningit could be the case that items where only the verb contributes meaning are rare or that they are not represented in our datasetanother possible and to our minds more likely conclusion is that the contribution of the head verb strongly affects the way in which participants view the whole itemthus if a verb is considered to be contributing simplex semantics the participant is likely to assume that the vpc is completely compositional and conversely if a verb is considered to not be contributing simplex semantics the participant is more likely to assume the 
vpc to be noncompositionalwe devote this section to a description of statistical nlp work on the noncompositionality of mwesperhaps the singularly most influential work on mwe noncompositionality is that of lin we describe lins method in some detail here as it forms the basis of one of the methods tested in this researchlins method is based on the premise that noncompositional items have markedly different distributional characteristics to expressions derived through synonym substitution over the original word compositionlin took his multiword items from a collocation database for each collocation he substituted each of the component words with a word with a similar meaningthe list of similar meanings was obtained by taking the 10 most similar words according to a corpusderived thesaurus the construction of which is described in lin the mutual information value was then found for each item produced by this substitution by taking a collocation to consist of three events the type of dependency relationship the head lexical item and the modifiera phrase a was then said to be noncompositional iff there exists no phrase q where q can be produced by substitution of the components of a as described above and there is an overlap between the 95 confidence interval of the mutual information values of a and qthese judgements were evaluated by comparison with a dictionary of idiomsif an item was in the dictionary then it was said to be noncompositionalscores of 157 for precision and 137 for recall are reportedthere are to our minds significant problems with the underlying assumptions of lins methodthe theoretical basis of the technique is that compositional items should have a similar distribution to items formed by replacing components words with semantically similar onesthe idea presumably is that if an item is the result of the free combination of words or a fully productive lexical rule then wordsubstituted variants should be distributed similarlythis seems a reasonable basis for modelling productivity but not compositionality as lin claimsthere are many examples in natural language of phrases that are not at all productive but are still compositional we term the process by which these expressions arise institutionalisation similar work to lins has been done in the area of collocation extraction to pick up on this alternate concept of institutionalisationschone and jurafsky employed latent semantic analysis in an effort to improve on existing techniques for extracting mwes from corporaone property they try and pick up on in doing so is noncompositionalitythey measure the cosine between the vector representation for the candidate mwe and a weighted vector sum of its component words suggesting that a small cosine would indicate compositionalitythey evaluate this by comparing the extracted items with those listed in existing dictionaries and report that it offers no improvement in extracting mwes over existing techniquesthe assumption that noncompositionality is requisite for the presence of a mwe in a dictionary while interesting is not wellfounded and hence it does not seem to us that the poor results reflect a failure of the lsa approach in measuring compositionalitybannard used a combination of handbuilt thesauri and corpus statistics to explore the compositionality of vpcsthe task was to predict whether the verb andor the particle were contributing meaning to a given item using statistical analysis of a set of vpcs extracted from the wall street journal section of the penn treebank two 
techniques were usedthe first of these loosely followed lin in measuring the extent to which the component verb or particle of any vpc could be replaced with items of a similar semantic class to form a corpusattested vpc wordnet was used as the source for verb substitution candidates and a handbuild semantic taxonomy for particlesthe second technique explored the semantic similarity of a vpc to its component verb by comparing their subcategorisation preferences assuming that semantic similarity between a vpc and its component verb indicates compositionalitypoor results were put down to datasparseness and the lexical resources not being well suited to the taskwe use a larger corpus and an automaticallyderived thesaurus for the research described in this paper with the hope of overcoming these problemsmccarthy et al carry out research close in spirit to that described here in taking vpc tokens automatically extracted from the bnc and using an automatically acquired thesaurus to classify their relative compositionalityone significant divergence from our research is that they consider compositionality to be an indivisible property of the overall vpc and not the individual partsgoldstandard data was generated by asking human annotators to describe the compositionality of a given vpc according to a 11point scale based upon which the vpcs were ranked in order of compositionalitysimilarly to this research mccarthy et al in part used the similarity measure of lin to model compositionality eg in taking the top n similar words to each vpc and looking at overlap with the top n similar words to the head verbthey also examine the use of statistical tests such as mutual information in modelling compositionality and find the similaritybased methods to correlate more highly with the human judgementsbaldwin et al use lsa as a technique for analysing the compositionality of a given mwelsa is suggested to be a constructioninspecific test for compositionality which is illustrated by testing its effectivity over both english nounnoun compounds and vpcsbaldwin et al used lsa to calculate the distributional similarity between an mwe and its head word and demonstrate a correlation between similarity and compositionality by way of items with higher similarity being more compositionalthey do not go as far as to classify mwes as being compositional or noncompositional howeverhaving created our goldstandard data we implemented some statistical techniques for automatic analysisin this we use the vpc tokens with sentential contexts extracted from the bnc as reported in section 3 ie a superset of the data used to annotate the vpcswe mapped the goldstandard data onto four binary classification tasks over vpc items note the partial conditional chaining between these tests eg an item for which the verb and particle contribute their simplex meaning is completely compositional the following sections describe four methods for modelling vpc compositionality each of which is tested over the 4 individual compositionality classification tasksthe results for each method are given in table 4 in which the baseline for each task is the score obtained when we assign the most frequent label to all itemseach method is evaluated in terms of precision recall and fscore and all values which exceed the baseline are indicated in boldfacewe decided to gain a sense of the startoftheart on the task by reimplementing the technique described in lin over vpcsin our implementation we replaced lins collocations with our vpcs treating the relationship 
between a verb and a particle as a kind of grammatical relationin addition to the binary compositionalnoncompositional judgement that lin offers we tested the method over the other three tasksacknowledging as we must that items can be partially compositional it would seem to be the case according to the assumptions made by the technique that the substitutability of each item will give us some insight into its semantic contributionthe thesaurus used by lin has been generously made available onlinehowever this is not adequate for our purposes since it includes only verbs nouns and adjectivesadverbswe therefore replicated the approach described in lin to build the thesaurus using bnc data and including prepositionsmethod 2 is very similar to method 1 except that instead of using a thesaurus based on lins method we took a knowledgefree approach to obtaining synonymsour technique is very similar to the approach taken to building a context space by schutze we measured the frequency of cooccurrence of our target words with a set of 1000 contentbearing words a target word was said to cooccur with a content word if that content word occurred within a window of 5 words to either side of itthese cooccurrence figures were stored as feature vectorsin order to overcome data sparseness we used techniques borrowed from latent semantic indexing lsi is an information retrieval technique based on singular value decomposition and works by projecting a termdocument matrix onto a lowerdimensional subspace in which relationships might more easily be observed between terms which are related but do not cooccurwe used this technique to reduce the feature space for our target words from 1000 to 100 allowing relations to be discovered between target words even if there is not direct match between their context wordswe used the various tools in the gtp software package created at the university of tennessee3 to build these matrices from the cooccurrence data and to perform svd analysiswe calculated the similarity between two terms by finding the cosine of the angle between their vectorswe performed a pairwise comparison between all verbs and all particlesfor each term we then sorted all of the other items of the same partofspeech in descending order of similarity which gave us the thesaurus for use in substitutionas with the lin method we performed substitutions by taking the 10 most similar items for the head verb and particle of each vpcwe noted in section 4 that a significant problem with the substitution approach is that it is sensitive to institutionalisation rather than noncompositionalitymethod 3 attempts to adapt substitution to more accurately reflect noncompositionality by removing the assumption that an item formed by substitution should have the same distributional characteristics as the original itemrather than basing the compositionality judgement on the relative mutual information scores of the original items and the items resulting from substitution we instead base it on the corpusbased semantic similarity between the original expression and wordsubstituted derivative expressionsthe same method of substitution is used with each component being replaced by each of its 10 nearest neighbours according to the knowledgefree similarity measure described abovewe judge a vpc item to be compositional if an expression formed by substitution occurs among the nearest 100 verbparticle items to the original and failing this we judge it to be noncompositionalwe experimented with a number of cutoff points for 
identifying semantically similar items and found that a value of 100 gave the best resultswhile method 3 softens the reliance upon productivity as a test for compositionality it still confuses institutionalisation with noncompositionality somewhat in its reliance upon substitutionwe now suggest another technique which we claim is based on sounder principlesthe underlying intuition is that identifying the degree of semantic similarity between a vpc and its component verb andor particle will indicate whether that component part contributes independent semanticsthis is similar to the assumption made in schone and jurafsky except that we make a distinction between the contribution of the different component partswe again used the knowledgefree semantic similarity measurewe performed a pairwise comparison of all vpcs with all verbs and all particles obtaining cosine similarity scores for each pairin order to measure the usefulness of this score we performed a logistic regression of the similarity scores and the human judgements as to whether the given verb or particle is entailed by the vpcwe did this for the majority human judgements and also the centroid annotator scoreswe also did the same using the majority scores but rejecting those items on which there was less than 60 agreementin addition to performing a regression for all items we also performed a regression for only those items which have only one meaning according to wordnet the results for all of these are shown in table 3the figures shown are chisquared scores with their associated significance valueswe observed significant correlations for a number of the regressions while the results are far from stable such variation is perhaps to be expected on a test like this since the nature of context space models means that rogue items sometimes get extremely high similarity scores and we are performing the regression over only 40 vpcs in order to build a classifier for making compositionality decisions we again used a neighbourbased approach with a cutoffwe said that a verb was contributing meaning to a vpc if it occurred in the 20 most similar items to the vpcfor particles we said that the item was contributing meaning if it was among the 10 nearest neighbourswe tried out a range of different cutoffs for each item and found that these gave the best resultsthe results in table 4 show that on all tasks at least one of the four statistical methods offers an improvement in precision over the baseline and that there is an improvement in fscore for task 1 on both sets of datathere are swings in the relative scores obtained over the majority as compared to centroid annotator data for a given taskin terms of relative performance the semantic similarity based approach of methods 3 and 4 outperform the distribution based approach of methods 1 and 2 in terms of fscore on 6 of the 8 sets of results reportedin order to get a reliable sense for how good these scores are we compare them with the level of agreement across human judgeswe calculated pairwise agreement across all participants on the four classification tasks resulting in the figures given in table 4these agreement scores give us an upper bound for classification accuracy on each task from which it is possible to benchmark the classification accuracy of the classifiers on that same taskon task 1 three of the four classifiers achieved a classification accuracy of 575on task 2 the highestperforming classifier achieved a classification accuracy of 725on task 3 method 2 achieved the highest 
classification accuracy at 600 and on task 4 method 4 achieved a classification accuracy of 675we can see then that the best classifiers perform only marginally below the upper bound on at least two of the taskswhile these results may appear at first glance to be less than conclusive we must bear in mind that we are working with limited amounts of data and relatively simplistic models of a cognitively intensive taskwe interpret them as very positive indicators of the viability of using empirical methods to analyse vpc semanticsthis paper has described the implementation and evaluation of four corpusbased approaches to the semantics of verbparticle constructionswe created a set of goldstandard data based on nonexpert judgements acquired via a webbased experimentwe then implemented four different techniques and showed that they offer a significant improvement over a naive approach
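To make the knowledge-free similarity measure used in methods 2-4 concrete, here is a minimal sketch of building SVD-reduced context vectors and comparing a VPC with its head verb by cosine similarity. The matrix construction, dimensionality, and names are illustrative assumptions, not the paper's actual setup (which used a 1000-content-word space reduced to 100 dimensions with the GTP package):

    import numpy as np

    def reduced_vectors(cooccurrence, k=100):
        """SVD-reduce a (targets x content words) co-occurrence matrix to k dimensions."""
        U, S, Vt = np.linalg.svd(cooccurrence, full_matrices=False)
        return U[:, :k] * S[:k]        # one k-dimensional row per target item

    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    # Illustration: if 'vectors' maps target items (verbs, particles, VPCs) to rows
    # of the reduced matrix, a VPC whose vector lies close to its head verb's vector
    # is taken as evidence that the verb contributes its simplex meaning, e.g.
    #   sim = cosine(vectors["eat up"], vectors["eat"])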
W03-1809
A Statistical Approach to the Semantics of Verb-Particles. This paper describes a distributional approach to the semantics of verb-particle constructions. We report first on a framework for implementing and evaluating such models. We then go on to report on the implementation of some techniques for using statistical models acquired from corpus data to infer the meaning of verb-particle constructions.
detecting a continuum of compositionality in phrasal verbs we investigate the use of an automatically acquired thesaurus for measures designed to indicate the compositionality of candidate multiword verbs specifically english phrasal verbs identified automatically using a robust parser we examine various measures using the nearest neighbours of the phrasal verb and in some cases the neighbours of the simplex counterpart and show that some of these correlate significantly with human rankings of compositionality on the test set we also show that whilst the compositionality judgements correlate with some statistics commonly used for extracting multiwords the relationship is not as strong as that using the automatically constructed thesaurus many people are working on acquisition of multiword expressions although terminology variesin this paper we are interested in lexicalised expressions where special interpretation is required because of some degree of noncompositionality or semantic opacitywe are specifically concerned with what are commonly referred to as phrasal verbs or verb and particle constructions as well as having idiosyncratic semantics phrasals also display specific syntactic behaviour such as permitting particle movement when used in the transitive for example jo ate up her food jo ate her food up we are interested in phrasal verbs because we want to acquire predicate selectional preferences for word sense disambiguation when acquiring such lexical information for a verb it is important to know when there is a special interpretation required for the verb and particle combination so that these combinations are handled separately from the simplex casewhilst it is possible to put every single occurrence of a verb and particle combination into a lexicon this is not desirableone wants to achieve generalisation and avoid redundancy only storing details which cannot be created from what is already therenot every verb modified by a particle may be a genuine multiword unit but may instead be a fully compositional verb modified by an adverbial eg fly upalso very productive verb particle combinations such as those involving verbs of motion which often occur with a particle eg up such as wander stroll go etc might be better handled in the grammar additionally in lexical acquisition and for word sense disambiguation it is important that related senses of words are identifiedfor example if the verb eat is closer in meaning to a phrasal construction eat up compared to other simplex verbs with their phrasal constructions such as blowblow up then the lexicon should reflect thathaving a measure of compositionality should help in thisin this paper we are not concerned with evaluation of precision and recall of the extraction of phrasal verbs from a parser although we have done some preliminary experiments in this direction on the wall street journal see section 3instead our focus is on methods of using an automatically acquired thesaurus for detecting compositionality of candidate phrasals output from our parserwe contrast this with some statistics commonly used for multiword extractionthe thesaurus is acquired from the grammatical relations occurring with verbs both the target phrasals and their simplex counterpartsthe intuition is that the neighbours of the simplex verb should be similar to those of the phrasal where the phrasal has a compositional meaning and that the phrasal neighbours should include phrasal candidates with the same particlefor evaluation we obtain a sample of multiword 
candidates from our parser and then obtain human judgements of compositionality using an ordinal scale for compositionalitywe demonstrate that there is highly significant agreement on the rank order of these judgements and use the average ranks for each item as a goldstandard to compare various measures aimed at detecting noncompositionalityin the following section we look at related workin section 3 we show how phrasals are identified by our parserwe talk about the generation of the goldstandard set of compositionality judgements in section 4in section 5 we describe the construction of the automatic thesaurus and the measurements we explored for detecting compositionalityin section 6 we show the correlations of our measures with the goldstandard and compare these to some statistics commonly used for identifying compositional multiwordsin section 7 we analyse our findings and conclude with directions for future workthere has been a lot of recent work on extraction of multiwords from corpora we focus specifically on work involving multiword verbs and detecting compositionality of multiwordsthere have been a number of methods proposed in the literature for extracting multiword verb constructions from corporabaldwin and villavicencio demonstrated that combining syntactic evidence using automatic pos taggers and statistical chunkers and feeding evidence from a number of tokens into a memory basedlearner gave high precision and recall using marked up wsj text to gauge precision and phrasals listed in the alvey natural language tools attested in the same corpus for recallno distinction on opaqueness of the verb and particle constructions was madeblaheta and johnson used loglinear models to extract english multiword verbs involving verb and particle constructions from parsed data these include phrasal and prepositional verbskrenn and evert investigated german support verb constructions and figurative expressions in their experiments true positives were typically defined as such according to the annotator scanning the listkrenn and evert found that different statistics are suited to different types of collocation there is no easy route for collocation extractionmoreover they found that a simple cooccurrence frequency fares comparably if not better than most statistical tests of significancemost people researching into multiwords assume some degree of noncompositionalityblaheta and johnson took human judgements on phrasality opaqueness and a subjective judgement of relatedness they showed that the opaqueness judgements correlated with the relatedness judgementalso those constructions judged to be phrasals tended to have higher ranks than prepositional verb particle constructionsboth lin and schone and jurafsky have used distributional similarity to detect compositionality in multiwordsschone and jurafsky used measures on the vectors representing the multiword candidates compared to measures for the words that the multiword contains but this failed to improve performance using wordnet and other machine readable resources as goldstandards for evaluationthere was some success though in using latent semantic analysis models to identify multiwords by the fact that the component words are typically nonsubstitutable but they felt that much of what is captured by this is already handled by the statistics that they employlin had already done something similar to the substitutability experiments using the method he had proposed earlier for automatic thesaurus constructionhe identified general multiwords 
involving several openclass words output from his parser and filtered by the loglikelihood statisticusing the parser yielded much better results than just a simple window for cooccurrence relationshipslin proposed that if there is a multiword obtained by substitution of either the head or modifier in the multiword with a near neighbour then the mutual information of this and the original multiword must be significantly different for the original multiword to be considered noncompositionalhe evaluated this manually on a sampleas well as finding noncompositional multiwords there were also a higher proportion of parser errors that met these criteriabannard et al are investigating compositionality by looking at the contribution of the verb and the particle to the semantics of the verb and particle combination this follows on from bannard earlier work where he showed that compositionality judgements correlate with human judgements of similarity between the head verb and the verb and particle combinationbannard et al point out that lin method of using substitution of component words in a multiword with semantic neighbours is a good indication of productivity but not necessarily of compositionality since an institutionalised nonproductive combination such as frying pan would not have near neighbour substitutes but would nevertheless be compositionalthey explore four methods for detecting compositionality using resources acquired from distributional datathey use these on 40 candidates on 4 separate tasks which aim to determine whether i the item is compositional ii one component word contributes its meaning iii the verb contributes its meaning iv the particle contributes its meaningthe classifications on each of these tasks according to these methods are contrasted with a gold standard classification from 26 judges on the same datathe methods exceed the mean agreement of the annotators in some cases particularly as regards the contribution from the particlebaldwin et al are also exploring empirical models of compositionality using lsa with nounnoun compounds and verbparticle constructionsin their study they compare the similarities of the component words with wordnet based similarity scores and demonstrate a moderate correlation lower for nounnoun compoundswe are also exploring the relation between a verb and verb and particle combination using distributional techniques but our evaluation is somewhat differentevaluation of collocation extraction is a notoriously thorny problem people do use mrds such as wordnet even though they acknowledge that there will be omissions in these resources and the phenomena in the resource may be rare or simply not attested in the particular corpus used for acquisitionmany researchers use manually annotated samples where the judges make a binary decision on whether each candidate multiword is quotgenuinequot or not as krenn and evert point out there is low agreement between annotators who are asked to mark quottypicalquot multiwords or collocationsthe intuitions behind what is typical vary and likewise association scores vary in their ability to partition the set depending on the notion of quottypicalityquot employed by the annotatorsresearchers also sometimes show how well the results accord with the contents of mrds even though these cannot be taken as definitivein this study we are less interested in the dichotomy of whether a putative phrasal candidate is indeed a genuine multiword or not but we use empirical methods to gauge the position of a candidate on a 
continuum between the fully opaque idiom and transparent compositional phrasesvariability of idioms on a scale of compositionality has been discussed by nunberg et al and in the psycholinguistics literature see tseng also advocates use of a spectrum when considering the semantics of prepositionswe will consider compositionality as a continuous scale and ask human judges to rank multiword candidates along thiswe investigate the use of these ranked judgements for evaluating compositionality measureswe also look at the relation between these judgements and appearance of the candidates in goldstandard resources such as wordnet or the anlt lexicon on the premise that noncompositional phrases are more likely to be listed as multiwords in manmade resourcesfor these experiments we use data from the ninety million words of the written portion of the british national corpus parsed with the rasp parser the output of the parser is a set of grammatical relations specifying the syntactic dependency between each head and its dependent read off from the phrase structure tree that is returned from the disambiguation phasethe parser uses information from anlt such as phrasals in its dictionarythis makes it more likely to spot phrasal constructions from this listwe have already looked at recognition of verb and particle constructions in the wsj identified purely on syntactic grounds using the parses provided with the wsj penn treebank 2 as a gold standardthe results for identifying verb and particle tokens are reported in table 1 both with and without the anlt phrasal list we also give results for comparison obtained on the same data for another wide coverage parser 2 in the rasp parser grammatical relation output we identify phrasal verbs as being a verb modified by a particle under the ncmod relationit is quite possible that some particle tags have been given erroneously and that some genuine particles are not recognised as such by the parser or are not attached to the verb by the parserwe only look at tokens in isolation and therefore do not collate evidence to look for syntactic evidence of particle movement as baldwin and villavicencio dothis would be a good way to improve phrasal extraction accuracy particularly where a particle follows a pronounin our experiments we asked human judges to rank phrasal verb candidates as to how compositional they arefrom the full set of 4272 phrasal verb candidate types output from the rasp parser we obtained 100 candidates randomly subject to the constraint that 33 3 each came from one of 3 frequency ranges from 20 to the maximum frequencya further 16 manually selected phrasals were added to this test setthree native english speakers ranked the 116 candidates on a numerical score 0 to 10 or gave a quotdo not knowquot responsewe discounted any item where any of the judges had put such a quotdo not knowquotthis only removed a total of 5 items leaving a ranking from all 3 judges on 111 candidatesto investigate if the rankings from the 3 judges agreed we employed the kendall coefficient of concordance this statistic is useful for determining interrater agreement where there are 3 or more judges and the judgements are ordinal and one is interested in the ranks rather than the actual numerical valuesw ranges between 0 and 1 and bears a linear relationship to the average spearman rankorder correlation coefficient taken over all possible pairs of the rankingsw is calculated as shown in equation 1 below where n is the number of items ri is the average rank for the ith item and 
k is the number of ratersthe second term in the denominator includes a correction for ties where where ti is the number of tied ranks in the ith grouping of ranksthe value kw is approximately distributed as x2 with n 1 degrees of freedomwe obtained a w score of 0594 which gives a x2 score of 19630 for 110 degrees of freedom which is highly significant we investigated various measures which compare the nearest neighbours of the phrasal verb to the neighbours of the corresponding simplex verbwe also tried various measures on the neighbours of the phrasal verbwe supply short labels for these for ease of reference overlap the size of the overlap of the top x phrasal neighbours with the same number of the corresponding simplex verb neighbours not including the simplex verb itselfwe tried this for sures which use the automatic thesaurus we indicate whether the measure relies only on the phrasal neighbours or the simplex neighbours or some combination of both in this first column we also indicate how many of the top ranked neighbours were usedwhere we are evaluating scores on a numerical scale such as the size of the overlap we use the ranks of the numerical values and compare these to the average ranks of our goldstandard using the spearman rankorder correlation coefficient since we have a large enough sample these can be used to obtain a normally distributed z score and we can thus obtain the probability of obtaining a score such as this by chance under the null hypothesis for the scores which involve a binary decision such as whether a score is in wordnet or not we use the mann whitney you test which compares the goldstandard ranks for the partitioned set and gives a z scorewe use onetailed tests because we predict the direction of the relationshipfor all the scores using the automatic thesaurus we assume that the larger the value the more compositional the itemfor the statistics the relationship is in the other direction high values are indicative of a noncompositional readingwe change the loglikelihood statistic to add a sign where the joint frequency of particle and verb is smaller than anticipated from that expectedfrom these results we can see that some of the measures from the automatic thesaurus correlate significantly with the human compositionality judgements and that these correlations are slightly stronger than those of any of the statistics usedthe statistics used all correlate with the human compositionality judgements although this is slightly less so for the loglikelihood ratiothe frequency of the verb and particle seems to bear no significant relation to compositionality judgementsthis is interesting because krenn and evert found that cooccurrence frequency was a good indication of the german multiwords although the task there was identification of the multiwords as opposed to measuring compositionalitymi is the statistic with the strongest value of t and the thesaurus measure with the strongest relationship was sameparticlesimplexthese two measures correlated well together and both are significantly correlated with whether the candidate is found in either wordnet or anlt see table 3 although the relationship using the automatic thesaurus is slightly higherlin uses a loglikelihood ratio to filter multiword candidates before using his automatic thesaurus to detect compositionality in multiwords containing 2 or more open class wordsfor phrasal candidates at least it might be worth using evidence from the thesaurus on the unfiltered listwe were surprised and a little disappointed 
that the straight overlap of neighbours did not give a significant relationship other than for the overlap of 30 neighbourswe believe this is due to the large scope for open class words as neighbours and that there is often some element of meaning added by the particlethus the overlap where we reduce neighbours of the phrasal to simplex form compensated for thiswe have not yet explored varying the number of neighbours for methods other than the overlap and overlapswe feel that it would be worth exploring the effect of the number of neighbours further and also to use the similarity scores of the neighbours rather than simple measures operating on the types occurring as neighboursthis would help control for the fact that for some verbs there are not many close neighbours and neighbours further down the ranked list may in fact be quite distantwhilst statistics are useful indicators of noncompositionality there are compositional multiwords which have low values for these statistics yet are highly noncompositionala good example is cock up it is the lowest ranked for compostionality by the human judges but its mi value is only 502 and according to mi it is ranked between the somewhat more compositional candidates tie down and come downthe automatic thesaurus measures such as sameparticlesimplex give a low compositionality score and place it at the end between carry out and latch onthere are also candidates with high values of the statistics yet they are in the middle range of the compositionality judgements for example plod onthis is simply because of a high cooccurrence frequencywhether such an unexpectedly high cooccurrence frequency warrants an entry in the lexicon depends on the type of lexicon being builtwe can see that there is a significant relationship between the human compositionality judgements and some of the measures from the automatic thesaurus particularly those that endeavour to take into acof measures with manmade resources count the semantics of the particlethis relationship is stronger than statistics which have previously been used for filtering candidate multiwords which suggests that it might be better not to filter with statistics before looking at compositionality using an automatic thesauruswe have not yet exploited these measures in the construction of a lexicon for phrasal verbsidentifying noncompositional phrasals by employing thresholds to force a binary decision is one optionthis would help in determining which candidate phrasals should be treated separately from the simplex for purposes such as selectional preference acquisition and word sense disambiguationthe thresholds might be acquired empirically from some training data such as the compositionality judgements we have usedhowever we believe that permitting measurements and evaluation on a continuum of compositionality allows for a more natural exploration of relationships without imposing an arbitrary cutoff point required only when finally categorising items for a lexiconit also could be useful to use the measures to tell whether the meaning comes from the verb or the particle or both as bannard et al do because if the verb contributes its meaning then data for selectional preference acquisition might be amalgamated with those of the simplex counterpartthis work was supported by the epsrcfunded rasp project and the eu 5th framework project meaning developing multilingual webscale language technologies we are grateful to timothy baldwin and colin bannard for their helpful comments and useful references
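The agreement and correlation statistics used above (Kendall's coefficient of concordance with a correction for ties, Spearman's rank-order correlation of a measure's ranking against the averaged gold ranks, and a one-tailed Mann-Whitney U test for binary scores such as WordNet or ANLT membership) can be reproduced with standard library routines. The following is a minimal Python sketch, not the authors' code; the array names `judge_scores`, `measure_scores` and `in_wordnet` are hypothetical.

```python
import numpy as np
from scipy import stats

def kendalls_w(judge_scores):
    """Kendall's coefficient of concordance W with correction for ties.

    judge_scores: (k, n) array of numerical judgements -- k raters, n items.
    Returns (W, chi2, dof); k*(n-1)*W is approximately chi-square with n-1 dof.
    """
    scores = np.asarray(judge_scores, dtype=float)
    k, n = scores.shape
    ranks = np.vstack([stats.rankdata(row) for row in scores])  # ties get average ranks
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    # Tie correction: for each rater, sum t^3 - t over each group of tied ranks.
    t = 0.0
    for row in ranks:
        _, counts = np.unique(row, return_counts=True)
        t += float(((counts ** 3) - counts).sum())
    w = 12.0 * s / (k ** 2 * (n ** 3 - n) - k * t)
    return w, k * (n - 1) * w, n - 1

# Hypothetical usage: three judges' 0-10 scores for the candidate phrasals.
# judge_scores = np.array([...])                       # shape (3, 111)
# w, chi2, dof = kendalls_w(judge_scores)
# gold = np.vstack([stats.rankdata(r) for r in judge_scores]).mean(axis=0)
#
# For a numerical measure, correlate its ranking with the gold average ranks:
# rho, p = stats.spearmanr(measure_scores, gold)
#
# For a binary score (e.g. listed in WordNet or not), compare the gold ranks of
# the two partitions with a one-tailed Mann-Whitney U test:
# u, p = stats.mannwhitneyu(gold[in_wordnet], gold[~in_wordnet], alternative='less')
```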
W03-1810
Detecting a Continuum of Compositionality in Phrasal Verbs. We investigate the use of an automatically acquired thesaurus for measures designed to indicate the compositionality of candidate multiword verbs, specifically English phrasal verbs identified automatically using a robust parser. We examine various measures using the nearest neighbours of the phrasal verb, and in some cases the neighbours of the simplex counterpart, and show that some of these correlate significantly with human rankings of compositionality on the test set. We also show that whilst the compositionality judgements correlate with some statistics commonly used for extracting multiwords, the relationship is not as strong as that using the automatically constructed thesaurus.
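For reference, the neighbour-based compositionality measures this paper builds on can be sketched as follows, assuming ranked neighbour lists taken from an automatically acquired distributional thesaurus. Only `overlap` follows the description given in the paper; `same_particle` is an illustrative approximation, since the exact definitions of the paper's other measures (such as sameparticlesimplex) are not reproduced here.

```python
def overlap(phrasal_neighbours, simplex_neighbours, simplex_verb, k=50):
    """Size of the overlap between the top-k neighbours of the phrasal verb and
    the top-k neighbours of its simplex counterpart, excluding the simplex verb
    itself; a larger overlap is taken as evidence of compositionality."""
    top_phrasal = set(phrasal_neighbours[:k])
    top_simplex = set(simplex_neighbours[:k]) - {simplex_verb}
    return len(top_phrasal & top_simplex)

def same_particle(phrasal_neighbours, particle, k=50):
    """Illustrative only: count the top-k neighbours of the phrasal that are
    themselves verb-particle combinations with the same particle
    (e.g. 'gobble up' as a neighbour of 'eat up')."""
    return sum(1 for n in phrasal_neighbours[:k] if n.endswith(' ' + particle))

# Hypothetical usage, with neighbour lists ranked by distributional similarity:
# overlap(thesaurus['eat up'], thesaurus['eat'], 'eat', k=30)
# same_particle(thesaurus['eat up'], 'up', k=30)
```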
an empirical model of multiword expression decomposability this paper presents a constructioninspecific model of multiword expression decomposability based on latent semantic analysis we use latent semantic analysis to determine the similarity between a multiword expression and its constituent words and claim that higher similarities indicate greater decomposability we test the model over english nounnoun compounds and verbparticles and evaluate its correlation with similarities and hyponymy values in wordnet based on mean hyponymy over partitions of data ranked on similarity we furnish evidence for the calculated similarities being correlated with the semantic relational content of wordnet this paper is concerned with an empirical model of multiword expression decomposabilitymultiword expressions are defined to be cohesive lexemes that cross word boundaries they occur in a wide variety of syntactic configurations in different languages decomposability is a description of the degree to which the semantics of an mwe can be ascribed to those of its parts analysis of the semantic correlation between the constituent parts and whole of an mwe is perhaps more commonly discussed under the banner of compositionality our claim here is that the semantics of the mwe are deconstructed and the parts coerced into often idiosyncratic interpretations to attain semantic alignment rather than the other way aroundone idiom which illustrates this process is spill the beans where the semantics of reveal are decomposed such that spill is coerced into the idiosyncratic interpretation of reveal and beans into the idiosyncratic interpretation of secretgiven that these senses for spill and beans are not readily available at the simplex level other than in the context of this particular mwe it seems fallacious to talk about them composing together to form the semantics of the idiomideally we would like to be able to differentiate between three classes of mwes nondecomposable idiosyncratically decomposable and simple decomposable with nondecomposable mwes no decompositional analysis is possible and the mwe is semantically impenetrablethe only syntactic variation that nondecomposable mwes undergo is verbal inflection and pronominal reflexivisation idiosyncratically decomposable mwes are decomposable but coerce their parts into taking semantics unavailable outside the mwethey undergo a certain degree of syntactic variation finally simple decomposable mwes decompose into simplex senses and generally display high syntactic variabilitywhat makes simple decomposable expressions true mwes rather than productive word combinations is that they tend to block compositional alternates with the expected semantics for example motor car cannot be rephrased as engine car or motor automobilenote that the existence of anticollocations is also a test for nondecomposable and idiosyncratically decomposable mwes our particular interest in decomposability stems from ongoing work on grammatical means for capturing mwesnunberg et al observed that idiosyncratically decomposable mwes undergo much greater syntactic variation than nondecomposable mwes and that the variability can be partially predicted from the decompositional analysiswe thus aim to capture the decomposability of mwes in the grammar and use this to constrain the syntax of mwes in parsing and generationnote that it is arguable whether simple decomposable mwes belong in the grammar proper or should be described instead as lexical affinities between particular word combinationsas 
the first step down the path toward an empirical model of decomposability we focus on demarcating simple decomposable mwes from idiosyncratically decomposable and nondecomposable mwesthis is largely equivalent to classifying mwes as being endocentric or ezocentric we attempt to achieve this by looking at the semantic similarity between an mwe and its constituent words and hypothesising that where the similarity between the constituents of an mwe and the whole is sufficiently high the mwe must be of simple decomposable typethe particular similarity method we adopt is latent semantic analysis or lsa lsa allows us to calculate the similarity between an arbitrary word pair offering the advantage of being able to measure the similarity between the mwe and each of its constituent wordsfor mwes such as house boat therefore we can expect to capture the fact that the mwe is highly similar in meaning to both constituent words more importantly lsa makes no assumptions about the lexical or syntactic composition of the inputs and thus constitutes a fully construction and languageinspecific method of modelling decomposabilitythis has clear advantages over a more conventional supervised classifierstyle approach where training data would have to be customised to a particular language and construction typeevaluation is inevitably a difficulty when it comes to the analysis of mwes due to the lack of concise consistency checks on what mwes should and should not be incorporated into dictionarieswhile recognising the dangers associated with dictionarybased evaluation we commit ourselves to this paradigm and focus on searching for appropriate means of demonstrating the correlation between dictionary and corpusbased similaritiesthe remainder of this paper is structured as followssection 2 describes past research on mwe compositionality of relevance to this effortsection 3 provides a basic outline of the resources used in this research lsa the mwe extraction methods and measures used to evaluate our methodsection 4 then provides evaluation of the proposed method and the paper is concluded with a brief discussion in section 5although there has been some useful work on compositionality in statistical machine translation there has been little work on detecting noncompositional items of variable syntactic type in monolingual corporaone interesting exception is lin whose approach is explained as follows the intuitive idea behind the method is that the metaphorical usage of a noncompositional expression causes it to have a different distributional characteristic than expressions that are similar to its literal meaningthe expressions he uses are taken from a collocation database these expressions that are similar to their literal meaning are found by substituting each of the words in the expression with the 10 most similar words according to a corpus derived thesaurus lin models the distributional difference as a significant difference in mutual informationsignificance here is defined as the absence of overlap between the 95 confidence interval of the mutual information scoreslin provides some examples that suggest he has identified a successful measure of compositionalityhe offers an evaluation where an item is said to be noncompositional if it occurs in a dictionary of idiomsthis produces the unconvincing scores of 157 for precision and 137 for recallwe claim that substitutionbased tests are useful in demarcating mwes from productive word combinations in a mwe detection task but not in distinguishing the different 
classes of decomposabilityas observed above simple decomposable mwes such as motor car fail the substitution test not because of nondecomposability but because the expression is institutionalised to the point of blocking alternatesthus we expect lins method to return a wide array of both decomposable and nondecomposable mwesbannard focused on distributional techniques for describing the meaning of verbparticle constructions at the level of logical formthe semantic similarity between a multiword expression and its head was used as an indicator of decomposabilitythe assumption was that if a verbparticle was sufficiently similar to its head verb then the verb contributed its simplex meaningit gave empirical backing to this assumption by showing that annotator judgements for verbparticle decomposability correlate significantly with nonexpert human judgements on the similarity between a verbparticle construction and its head verbbannard et al extended this research in looking explicitly at the task of classifying verbparticles as being compositional or notthey successfully combined statistical and distributional techniques with a substitution test in analysing compositionalitymccarthy et al also targeted verbparticles for a study on compositionality and judged compositionality according to the degree of overlap in the n most similar words to the verbparticle and head verb eg to determine compositionalitywe are not the first to consider applying lsa to mwesschone and jurafsky applied lsa to the analysis of mwes in the task of mwe discovery by way of rescoring mwes extracted from a corpusthe major point of divergence from this research is that schone and jurafsky focused specifically on mwe extraction whereas we are interested in the downstream task of semantically classifying attested mwesin this section we outline the resources used in evaluation give an informal introduction to the lsa model sketch how we extracted the mwes from corpus data and describe a number of methods for modelling decomposability within a hierarchical lexiconthe particular reference lexicon we use to evaluate our technique is wordnet 17 due to its public availability hierarchical structure and wide coverageindeed schone and jurafsky provide evidence that suggests that wordnet is as effective an evaluation resource as the web for mwe detection methods despite its inherent size limitations and static naturetwo mwe types that are particularly well represented in wordnet are compound nouns and multiword verbs of these we chose to specifically target two types of mwe nounnoun compounds and verbparticles due to their frequent occurrence in both decomposable and nondecomposable configurations and also their disparate syntactic behaviourswe extracted the nn compounds from the 1996 wall street journal data and the verbparticles from the british national corpus the wsj data is more tightly domainconstrained and thus a more suitable source for nn compounds if we are to expect sentential context to reliably predict the semantics of the compoundthe bnc data on the other hand contains more colloquial and prosaic texts and is thus a richer source of verbparticlesour goal was to compare the distribution of different compound terms with their constituent words to see if this indicated similarity of meaningfor this purpose we used latent semantic analysis to build a vector space model in which termterm similarities could be measuredlsa is a method for representing words as points in a vector space whereby words which are related in 
meaning should be represented by points which are near to one another first developed as a method for improving the vector model for information retrieval as a technique for measuring similarity between words lsa has been shown to capture semantic properties and has been used successfully for recognising synonymy wordsense disambiguation and for finding correct translations of individual terms the lsa model we built is similar to that described in first 1000 frequent content words l were chosen as contentbearing wordsusing these contentbearing words as column labels the 50000 most frequent terms in the corpus were assigned row vectors by counting the number of times they oca stoplist is a list of frequent words which have little independent semantic content such as prepositions and determiners curred within the same sentence as a contentbearing wordsingularvalue decomposition was then used to reduce the number of dimensions from 1000 to 100similarity between two vectors was measured using the cosine of the angle between them in the same way as the similarity between a query and a document is often measured in information retrieval effectively we could use lsa to measure the extent to which two words or mwes x and y usually occur in similar contextssince the corpora had been tagged with partsofspeech we could build syntactic distinctions into the lsa models instead of just giving a vector for the string test we were able to build separate vectors for the nouns verbs and adjectives testthis combination of technologies was also used to good effect by widdows an example of the contribution of partofspeech information to extracting semantic neighbours of the word fire is shown in table 1as can be seen the noun fire and the verb fire are related to quite different areas of meaningbuilding a single vector for the string fire confuses this distinction the neighbours offire treated just as a string include words related to both the meaning offire as a noun and as a verbthe appropriate granularity of syntactic classifications is an open question for this kind of research treating all the possible verbs categories as different led to data sparseness and instead we considered verb as a single partofspeech typenn compounds were extracted from the wsj by first tagging the data with fntbl 10 and then simply taking noun bigrams out of these we selected those compounds that are listed in wordnet resulting in 5405 nn compound types extraction of the verbparticles was considerably more involved and drew on the method of baldwin and villavicencio essentially we used a pos tagger and chunker to first tag the bncthis allowed us to extract verbparticle tokens through use of the particle pos and chunk tags returned by the two systemsthis produces highprecision but relatively lowrecall results so we performed the additional step of running a chunkbased grammar over the chunker output to detect candidate mistagged particlesin the case that a noun phrase followed the particle candidate we performed attachment disambiguation to determine the transitivity of the particle candidatethese three methods produced three distinct sets of verbparticle tokens which we carried out weighted voting over to determine the final set of verbparticle tokensa total of 461 verbparticles attested in wordnet were extracted for both the nn compound and verbparticle data we replaced each token occurrence with a singleword postagged token to feed into the lsa modelin order to evaluate our approach we employed the lexical relations as defined 
in the wordnet lexical hierarchy wordnet groups words into sets with similar meaning egcar auto automobile machine motorcar these are organised into a hierarchy employing multiple inheritancethe hierarchy is structured according to different principles for each of nouns verbs adjectives and adverbsthe nouns are arranged according to hyponymy or isa relations eg a car is a kind of automobilethe verbs are arranged according to troponym or mannerof relations where murder is a manner of killing so kill immediately dominates murder in the hierarchywe used wordnet for evaluation by way of looking at hyponymy and semantic distancehyponymy provides the most immediate way of evaluating decomposabilitywith simple decomposable mwes we can expect the constituents to be hypernyms or synonyms of the mwethat is simple decomposable mwes are generally endocentric although there are some exceptions to this generalisation such as vice president arguably not being a hyponym of presidentno hyponymy relation holds with nondecomposable or idiosyncratically decomposable mwes as even if the semantics of the head noun can be determined through decomposition by definition this will not correspond to a simplex sense of the wordwe deal with polysemy of the constituent words andor mwe by simply looking for the existence of a sense of the constituent words which subsumes a sense of the mwethe function hyponym thus returns a value of 1 if some sense of wordz subsumes a sense of mwe and a value of 0 otherwisea more proactive means of utilising the wordnet hierarchy is to derive a semantic distance based on analysis of the relative location of senses in wordnetbudanitsky and hirst evaluated the performance of five different methods that measure the semantic distance between words in the wordnet hierarchy which patwardhan et al have then implemented and made available for general use as the perl package distance0112 we focused in particular on the following three measures the first two of which are based on information theoretic principles and the third on sense topology resnik combined wordnet with corpus statisticshe defines the similarity between two words as the information content of the lowest superordinate in the hierarchy defining the information content of a concept c to be the negative of its log likelihoodthis is calculated over a corpus of text where c0 is the lowest class in the hierarchy that subsumes both classes lations of different strength to determine the similarity of word senses conditioned on the type direction and relative distance of edges separating themthe patwardhan et al implementation that we used calculates the information values from semcor a semantically tagged subset of the brown corpusnote that the first two similarity measures operate over nouns only while the last can be applied to any word classthe similarity measures described above calculate the similarity between a pair of sensesin the case that a given constituent word andor mwe occur with more than one sense we calculate a similarity for sense pairing between them and average over them to produce a consolidated similarity valuelsa was used to build models in which mwes could be compared with their constituent wordstwo models were built one from the wsj corpus and one from the bnc after removing stopwords the 50000 most frequent terms were indexed in each modelfrom the wsj these 50000 terms included 1710 nn compounds and from the bnc 461 verbparticles we used these models to compare different words and to find their neighboursfor 
example the neighbours of the simplex verb cut and the verbparticles cut out and cut off are shown in table 2as can be seen several of the neighbours of cut out are from similar semantic areas as those of cut whereas those of cut off are quite differentthis reflects the fact that in most of its instances the verb cut off is used to mean forcibly isolatein order to measure this effect quantitatively we can simply take the cosine similarities between these verbs finding that sim 0433 and sim 0183 from which we infer directly that relative to the sense of cut cut out is a clearer case of a simple decomposable mwe than cut offin order to get an initial feel for how well the lsabased similarities for mwes and their head words correlate with the wordnetbased similarities over those same word pairs we did a linear regression and pearsons correlation analysis of the paired data simwn for each wordnet similarity measure simwnfor both tests values closer to 0 indicate random distribution of the data whereas values closer to 1 indicate a strong correlationthe correlation results for nn compounds and verbparticles are presented in table 3 where r2 refers to the output of the linear regression test and hso refers to hirst and stonge similarity measurein the case of nn compounds the correlation with lsa is very low for all tests that is lsa is unable to reproduce the relative similarity values derived from wordnet with any reliabilitywith verbparticles correlation is notably higher than for nn compounds3 but still at a low levelbased on these results lsa would appear to correlate poorly with wordnetbased similaritieshowever our main interest is not in similarity per se but how reflective lsa similarities are of the decomposability of the mwe in questionwhile taking note of the low correlation with wordnet similarities therefore we move straight on to look at the hyponymy testwe next turn to analysis of correlation between lsa similarities and hyponymy valuesour expectation is that for constituent wordmwe pairs with higher lsa similarities there is a greater likelihood of the mwe being a hyponym of the constituent wordwe test this hypothesis by ranking the constituent word mwe pairs in decreasing order of lsa similarity 3recall that hso is the only similarity measure which operates over verbs and partitioning the ranking up into m partitions of equal sizewe then calculate the average number of hyponyms per partitionif our hypothesis is correct the earlier partitions will have higher occurrences of hyponyms than the latter partitionsfigure 1 presents the mean hyponymy values across partitions of the nn compound data and verbparticle data with m set to 3 in each casefor the nn compounds we derive two separate rankings based on the similarity between the head noun and nn compound and the modifier noun and the nn compound in the case of the verbparticle data wordnet has no classification of prepositions or particles so we can only calculate the similarity between the head verb and verbparticle looking to the curves for these three rankings we see that they are all fairly flat nondescript curvesif we partition the data up into low and highfrequency mwes as defined by a threshold of 100 corpus occurrences we find that the graphs for the lowfrequency data low and vpclow are both monotonically decreasing whereas those for highfrequency data high and vpchigh are more haphazard in natureour hypothesis of lesser instances of hyponymy for lower similarities is thus supported for lowfrequency items but not for 
highfrequency items suggesting that lsa similarities are more brittle over highfrequency items for this particular taskthe results for the lowfrequency items are particularly encouraging given that the lsabased similarities were found to correlate poorly with wordnetderived similaritiesthe results for nn are more erratic for both low and highfrequency terms that is the modifier noun is not as strong a predictor of decomposability as the head nounthis is partially supported by the statistics on the relative occurrence of nn compounds in wordnet subsumed by their head noun as compared to nn compounds subsumed by their modifier in an ideal world we would hope that the values for mean hyponymy were nearly 1 for the first partition and nearly 0 for the lastnaturally this presumes perfect correlation of the lsa similarities with decomposability but classificational inconsistencies in wordnet also work against usfor example vice chairman is an immediate hyponym of both chairman and president but vice president is not a hyponym of presidentaccording to lsa however sim 508 and sim 551it remains to be determined why lsa should perform better over lowfrequency items although the higher polysemy of highfrequency items is one potential becausewe intend to further investigate this matter in future researchwhile evaluation pointed to a moderate correlation between lsa similarities and occurrences of hyponymy we have yet to answer the question of exactly where the cutoffs between simple decomposable idiosyncratically decomposable and nondecomposable mwes liewhile it would be possible to set arbitrary thresholds to artificially partition up the space of mwes based on lsa similarity we feel that more work needs to be done in establishing exactly what different lsa similarities for different mwe constituent word combinations meanone area in which we plan to extend this research is the analysis of mwes in languages other than englishbecause of lsas independence from linguistic constraints it is equally applicable to all languages assuming there is some way of segmenting inputs into constituent wordsto summarise we have proposed a constructioninspecific empirical model of mwe decomposability based on latent semantic analysiswe evaluated the method over english nn compounds and verbparticles and showed it to correlate moderately with wordnetbased hyponymy valuesthis material is partly based upon work supported by the national science foundation under grant nobcs0094638 and also the research collaboration between ntt communication science laboratories nippon telegraph and telephone corporation and csli stanford universitywe would like to thank the anonymous reviewers for their valuable input on this research
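The LSA model described above (the most frequent content-bearing words as context columns, sentence-level co-occurrence counts for the most frequent terms, singular-value decomposition down to 100 dimensions, and cosine similarity between the reduced term vectors) can be reconstructed roughly as follows. This is an illustrative sketch, not the software used in the paper; `sentences` is assumed to be a list of token lists, where tokens may carry part-of-speech marking and MWE occurrences have already been collapsed into single tokens.

```python
import numpy as np
from collections import Counter
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import svds

def build_lsa_model(sentences, stoplist=frozenset(),
                    n_context=1000, n_terms=50000, n_dims=100):
    """Build reduced term vectors in the style of the model described above.

    sentences: list of token lists (tokens may be POS-marked, e.g. 'fire_nn1').
    Returns (term_index, term_vectors): term_vectors[i] is the n_dims-dimensional
    vector for the term whose row index is i.
    """
    freq = Counter(tok for sent in sentences for tok in sent)
    context = [w for w, _ in freq.most_common() if w not in stoplist][:n_context]
    terms = [w for w, _ in freq.most_common(n_terms)]
    col = {w: j for j, w in enumerate(context)}
    row = {w: i for i, w in enumerate(terms)}
    counts = lil_matrix((len(terms), len(context)))
    for sent in sentences:
        present = set(sent)
        ctx_cols = [col[w] for w in present if w in col]
        for w in present:
            i = row.get(w)
            if i is None:
                continue
            for j in ctx_cols:
                counts[i, j] += 1      # term i occurs in a sentence with context word j
    u, s, _ = svds(counts.tocsr(), k=n_dims)   # truncated SVD: n_context -> n_dims
    return row, u * s

def similarity(term_index, vectors, w1, w2):
    """Cosine of the angle between two term vectors (e.g. 'cut' vs 'cut_out')."""
    v1, v2 = vectors[term_index[w1]], vectors[term_index[w2]]
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12))
```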
W03-1812
An Empirical Model of Multiword Expression Decomposability. This paper presents a construction-inspecific model of multiword expression decomposability based on latent semantic analysis. We use latent semantic analysis to determine the similarity between a multiword expression and its constituent words, and claim that higher similarities indicate greater decomposability. We test the model over English noun-noun compounds and verb-particles, and evaluate its correlation with similarities and hyponymy values in WordNet. Based on mean hyponymy over partitions of data ranked on similarity, we furnish evidence for the calculated similarities being correlated with the semantic relational content of WordNet. We studied vector extraction for phrases because they were interested in the decomposability of multiword expressions. We propose an LSA-based model for measuring the decomposability of MWEs by examining the similarity between them and their constituent words, with higher similarity indicating greater decomposability.
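The hyponymy evaluation described in this paper can be approximated with NLTK's WordNet interface: hyponym(word, mwe) returns 1 if some sense of the constituent word subsumes some sense of the MWE, and mean hyponymy is then computed over equal-sized partitions of the candidates ranked by decreasing LSA similarity. A rough sketch, assuming the MWEs are attested in WordNet under their underscore-joined form; this is not the evaluation code used in the paper.

```python
import numpy as np
from nltk.corpus import wordnet as wn

def hyponym(word, mwe, pos=wn.NOUN):
    """1 if some sense of `word` subsumes (or equals) some sense of `mwe`, else 0.
    E.g. hyponym('boat', 'house boat') is 1 if WordNet lists house_boat under boat."""
    word_senses = set(wn.synsets(word, pos))
    if not word_senses:
        return 0
    for sense in wn.synsets(mwe.replace(' ', '_'), pos):
        ancestors = set(sense.closure(lambda s: s.hypernyms())) | {sense}
        if ancestors & word_senses:
            return 1
    return 0

def mean_hyponymy_by_partition(pairs, lsa_sims, m=3, pos=wn.NOUN):
    """pairs: list of (constituent_word, mwe); lsa_sims: parallel LSA similarities.
    Ranks the pairs by decreasing similarity, splits them into m equal-sized
    partitions and returns the mean hyponymy value for each partition."""
    order = np.argsort(lsa_sims)[::-1]
    values = np.array([hyponym(w, mwe, pos) for w, mwe in pairs])[order]
    return [float(part.mean()) for part in np.array_split(values, m)]

# Hypothetical usage for NN compounds, ranked on head-noun similarity:
# mean_hyponymy_by_partition([('boat', 'house boat'), ...], sims, m=3)
```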
incrementality in deterministic dependency parsing deterministic dependency parsing is a robust and efficient approach to syntactic parsing of unrestricted natural language text in this paper we analyze its potential for incremental processing and conclude that strict incrementality is not achievable within this framework however we also show that it is possible to minimize the number of structures that require nonincremental processing by choosing an optimal parsing algorithm this claim is substantiated with experimental evidence showing that the algorithm achieves incremental parsing for 689 of the input when tested on a random sample of swedish text when restricted to sentences that are accepted by the parser the degree of incrementality increases to 879 incrementality in parsing has been advocated for at least two different reasonsthe first is mainly practical and has to do with realtime applications such as speech recognition which require a continually updated analysis of the input received so farthe second reason is more theoretical in that it connects parsing to cognitive modeling where there is psycholinguistic evidence suggesting that human parsing is largely incremental however most stateoftheart parsing methods today do not adhere to the principle of incrementality for different reasonsparsers that attempt to disambiguate the input completely full parsing typically first employ some kind of dynamic programming algorithm to derive a packed parse forest and then applies a probabilistic topdown model in order to select the most probable analysis since the first step is essentially nondeterministic this seems to rule out incrementality at least in a strict senseby contrast parsers that only partially disambiguate the input partial parsing are usually deterministic and construct the final analysis in one pass over the input but since they normally output a sequence of unconnected phrases or chunks they fail to satisfy the constraint of incrementality for a different reasondeterministic dependency parsing has recently been proposed as a robust and efficient method for syntactic parsing of unrestricted natural language text in some ways this approach can be seen as a compromise between traditional full and partial parsingessentially it is a kind of full parsing in that the goal is to build a complete syntactic analysis for the input string not just identify major constituentsbut it resembles partial parsing in being robust efficient and deterministictaken together these properties seem to make dependency parsing suitable for incremental processing although existing implementations normally do not satisfy this constraintfor example yamada and matsumoto use a multipass bottomup algorithm combined with support vector machines in a way that does not result in incremental processingin this paper we analyze the constraints on incrementality in deterministic dependency parsing and argue that strict incrementality is not achievablewe then analyze the algorithm proposed in nivre and show that given the previous result this algorithm is optimal from the point of view of incrementalityfinally we evaluate experimentally the degree of incrementality achieved with the algorithm in practical parsingin a dependency structure every word token is dependent on at most one other word token usually called its head or regent which means that the structure can be represented as a directed graph with nodes representing word tokens and arcs representing dependency relationsin addition arcs may be labeled with 
specific dependency typesfigure 1 shows a labeled dependency graph for a simple swedish sentence where each word of the sentence is labeled with its part of speech and each arc labeled with a grammatical functionin the following we will restrict our attention to unlabeled dependency graphs ie graphs without labeled arcs but the results will apply to labeled dependency graphs as wellwe will also restrict ourselves to projective dependency graphs formally we define these structures in the following way we write wz wj to express that wz precedes wj in the string w we write wz wj to say that there is an arc from wz to wj we use to denote the reflexive and transitive closure of the arc relation and we use h and h for the corresponding undirected relations ie wz h wj iff wz wj or wj wz2a dependency graph d is wellformed iff the five conditions given in figure 2 are satisfiedthe task of mapping a string w w1 wn to a dependency graph satisfying these conditions is what we call dependency parsingfor a more detailed discussion of dependency graphs and wellformedness conditions the reader is referred to nivre having defined dependency graphs we may now consider to what extent it is possible to construct these graphs incrementallyin the strictest sense we take incrementality to mean that at any point during the parsing process there is a single connected structure representing the analysis of the input consumed so farin terms of our dependency graphs this would mean that the graph being built during parsing is connected at all timeswe will try to make this more precise in a minute but first we want to discuss the relation between incrementality and determinismit seems that incrementality does not by itself imply determinism at least not in the sense of never undoing previously made decisionsthus a parsing method that involves backtracking can be incremental provided that the backtracking is implemented in such a way that we can always maintain a single structure representing the input processed up to the point of backtrackingin the context of dependency parsing a case in point is the parsing method proposed by kromann which combines heuristic search with different repair mechanismsin this paper we will nevertheless restrict our attention to deterministic methods for dependency parsing because we think it is easier to pinpoint the essential constraints within a more restrictive frameworkwe will formalize deterministic dependency parsing in a way which is inspired by traditional shiftreduce parsing for contextfree grammars using a buffer of input tokens and a stack for storing previously processed inputhowever since there are no nonterminal symbols involved in dependency parsing we also need to maintain a representation of the dependency graph being constructed during processingwe will represent parser configurations by triples where 5 is the stack i is the list of input tokens and a is the arc relation for the dependency graphgiven an input string w the parser is initialized to and terminates when it reaches a configuration the input string w is accepted if the dependency graph d given at termination is wellformed otherwise w is rejectedin order to understand the constraints on incrementality in dependency parsing we will begin by considering the most straightforward parsing strategy ie lefttoright bottomup parsing which in this case is essentially equivalent to shiftreduce parsing with a contextfree grammar in chomsky normal formthe parser is defined in the form of a transition system represented in 
figure 3 the two topmost tokens on the stack wi and wj by a rightdirected arc wi wj and reduces them to the head wi3the transition shift pushes the next input token wi onto the stackthe transitions leftreduce and rightreduce are subject to conditions that ensure that the single head condition is satisfiedfor shift the only condition is that the input list is nonemptyas it stands this transition system is nondeterministic since several transitions can often be applied to the same configurationthus in order to get a deterministic parser we need to introduce a mechanism for resolving transition conflictsregardless of which mechanism is used the parser is guaranteed to terminate after at most 2n transitions given an input string of length n moreover the parser is guaranteed to produce a dependency graph that is acyclic and projective this means that the dependency graph given at termination is wellformed if and only if it is connectedwe can now define what it means for the parsing to be incremental in this frameworkideally we would like to require that the graph is connected at all timeshowever given the definition of leftreduce and rightreduce it is impossible to connect a new word without shifting it to the stack first so it seems that a more reasonable condition is that the size of the stack should never exceed 2in this way we require every word to be attached somewhere in the dependency graph as soon as it has been shifted onto the stackwe may now ask whether it is possible to achieve incrementality with a lefttoright bottomup dependency parser and the answer turns out to be no in the general casethis can be demonstrated by considering all the possible projective dependency graphs containing only three nodes and checking which of these can be parsed incrementallyfigure 4 shows the relevant structures of which there are seven altogetherwe begin by noting that trees can all be constructed incrementally by shifting the first two tokens onto the stack then reducing with rightreduce in and leftreduce in and then shifting and reducing again with rightreduce in and and leftreduce in and by contrast the three remaining trees all require that three tokens are initialization hnil w i termination hs nil ai leftreduce hwjwis i ai hwjs i a i wk a rightreduce hwjwis i ai hwis i a i wk a shift hs wii ai hwis i ai shifted onto the stack before the first reductionhowever the reason why we cannot parse the structure incrementally is different in compared to in the problem is that the first two tokens are not connected by a single arc in the final dependency graphin they are sisters both being dependents on the third token in the first is the grandparent of the secondand in pure dependency parsing without nonterminal symbols every reduction requires that one of the tokens reduced is the head of the otherthis holds necessarily regardless of the algorithm used and is the reason why it is impossible to achieve strict incrementality in dependency parsing as defined herehowever it is worth noting that which are the mirror images of can be parsed incrementally even though they contain adjacent tokens that are not linked by a single arcthe reason is that in the reduction of the first two tokens makes the third token adjacent to the firstthus the defining characteristic of the problematic structures is that precisely the leftmost tokens are not linked directlythe case of is different in that here the problem is caused by the strict bottomup strategy which requires each token to have found all its dependents before 
it is combined with its headfor leftdependents this is not a problem as can be seen in which can be processed by alternating shift and leftreducebut in the sequence of reductions has to be performed from right to left as it were which rules out strict incrementalityhowever whereas the structures exemplified in can never be processed incrementally within the present framework the structure in can be handled by modifying the parsing strategy as we shall see in the next sectionit is instructive at this point to make a comparison with incremental parsing based on extended categorial grammar where the structures in would normally be handled by some kind of concatenation which does not correspond to any real semantic combination of the constituents by contrast the structure in would typically be handled by function composition which corresponds to a welldefined compositional semantic operationhence it might be argued that the treatment of is only pseudoincremental even in other frameworksbefore we leave the strict bottomup approach it can be noted that the algorithm described in this section is essentially the algorithm used by yamada and matsumoto in combination with support vector machines except that they allow parsing to be performed in multiple passes where the graph produced in one pass is given as input to the next passthe main motivation they give for parsing in multiple passes is precisely the fact that the bottomup strategy requires each token to have found all its dependents before it is combined with its head which is also what prevents the incremental parsing of structures like in order to increase the incrementality of deterministic dependency parsing we need to combine bottomup and topdown processingmore precisely we need to process leftdependents bottomup and rightdependents topdownin this way arcs will be added to the dependency graph as soon as the respective head and dependent are available even if the dependent is not complete with respect to its own dependentsfollowing abney and johnson we will call this arceager parsing to distinguish it from the standard bottomup strategy discussed in the previous sectionusing the same representation of parser configurations as before the arceager algorithm can be defined by the transitions given in figure 5 where wi and wj are arbitrary word tokens the stack to the next input token wj and pushes wj onto the stack4the transition shift pushes the next input token wi onto the stackthe transitions leftarc and rightarc like their counterparts leftreduce and rightreduce are subject to conditions that ensure la purely terminological but potentially confusing difference is that yamada and matsumoto use the term right for what we call leftreduce and the term left for rightreduce that the single head constraint is satisfied while the reduce transition can only be applied if the token on top of the stack already has a headthe shift transition is the same as before and can be applied as long as the input list is nonemptycomparing the two algorithms we see that the leftarc transition of the arceager algorithm corresponds directly to the leftreduce transition of the standard bottomup algorithmthe only difference is that for reasons of symmetry the former applies to the token on top of the stack and the next input token instead of the two topmost tokens on the stackif we compare rightarc to rightreduce however we see that the former performs no reduction but simply shifts the newly attached rightdependent onto the stack thus making it possible for this 
dependent to have rightdependents of its ownbut in order to allow multiple rightdependents we must also have a mechanism for popping rightdependents off the stack and this is the function of the reduce transitionthus we can say that the action performed by the rightreduce transition in the standard bottomup algorithm is performed by a rightarc transition in combination with a subsequent reduce transition in the arceager algorithmand since the rightarc and the reduce can be separated by an arbitrary number of transitions this permits the incremental parsing of arbitrary long rightdependent chainsdefining incrementality is less straightforward for the arceager algorithm than for the standard bottomup algorithmsimply considering the size of the stack will not do anymore since the stack may now contain sequences of tokens that form connected components of the dependency graphon the other hand since it is no longer necessary to shift both tokens to be combined onto the stack and since any tokens that are popped off the stack are connected to some token on the stack we can require that the graph should be connected at all times where as is the restriction of a to 5 ieas e awi wj e 51given this definition of incrementality it is easy to show that structures in figure 4 can be parsed incrementally with the arceager algorithm as well as with the standard bottomup algorithmhowever with the new algorithm we can also parse structure incrementally as wj initialization hnil w i termination hs nil ai leftarc hwis wji ai hs wji a i wk a rightarc hwis wji ai hwjwis i a i wk a reduce hwis i ai hs i ai wj a shift hs wii ai hwis i ai is shown by the following transition sequence hnil abc i ha because i hba c i hcba nil i we conclude that the arceager algorithm is optimal with respect to incrementality in dependency parsing even though it still holds true that the structures in figure 4 cannot be parsed incrementallythis raises the question how frequently these structures are found in practical parsing which is equivalent to asking how often the arceager algorithm deviates from strictly incremental processingalthough the answer obviously depends on which language and which theoretical framework we consider we will attempt to give at least a partial answer to this question in the next sectionbefore that however we want to relate our results to some previous work on contextfree parsingfirst of all it should be observed that the terms topdown and bottomup take on a slightly different meaning in the context of dependency parsing as compared to their standard use in contextfree parsingsince there are no nonterminal nodes in a dependency graph topdown construction means that a head is attached to a dependent before the dependent is attached to its dependents whereas bottomup construction means that a dependent is attached to its head before the head is attached to its headhowever topdown construction of dependency graphs does not involve the prediction of lower nodes from higher nodes since all nodes are given by the input stringhence in terms of what drives the parsing process all algorithms discussed here correspond to bottomup algorithms in contextfree parsingit is interesting to note that if we recast the problem of dependency parsing as contextfree parsing with a cnf grammar then the problematic structures in figure 4 all correspond to rightbranching structures and it is wellknown that bottomup parsers may require an unbounded amount of memory in order to process rightbranching structure moreover if we analyze 
the two algorithms discussed here in the framework of abney and johnson they do not differ at all as to the order in which nodes are enumerated but only with respect to the order in which arcs are enumerated the first algorithm is arcstandard while the second is arceagerone of the observations made by abney and johnson is that arceager strategies for contextfree parsing may sometimes require less space than arcstandard strategies although they may lead to an increase in local ambiguitiesit seems that the advantage of the arceager strategy for dependency parsing with respect to structure in figure 4 can be explained along the same lines although the lack of nonterminal nodes in dependency graphs means that there is no corresponding increase in local ambiguitiesalthough a detailed discussion of the relation between contextfree parsing and dependency parsing is beyond the scope of this paper we conjecture that this may be a genuine advantage of dependency representations in parsingin order to measure the degree of incrementality achieved in practical parsing we have evaluated a parser that uses the arceager parsing algorithm in combination with a memorybased classifier for predicting the next transitionin experiments reported in nivre et al a parsing accuracy of 857 was achieved using data from a small treebank of swedish divided into a training set of 5054 sentences and a test set of 631 sentenceshowever in the present context we are primarily interested in the incrementality of the parser which we measure by considering the number of connected components in at different stages during the parsing of the test datathe results can be found in table 1 where we see that out of 16545 configurations used in parsing 613 sentences 689 have zero or one connected component on the stack which is what we require of a strictly incremental parserwe also see that most violations of incrementality are fairly mild since more than 90 of all configurations have no more than three connected components on the stackmany violations of incrementality are caused by sentences that cannot be parsed into a wellformed dependency graph ie a single projective dependency tree but where the output of the parser is a set of internally connected componentsin order to test the influence of incomplete parses on the statistics of incrementality we have performed a second experiment where we restrict the test data to those 444 sentences for which the parser produces a wellformed dependency graphthe results can be seen in table 2in this case 871 of all configurations in fact satisfy the constraints of incrementality and the proportion of configurations that have no more than three connected components on the stack is as high as 995it seems fair to conclude that although strict wordbyword incrementality is not possible in deterministic dependency parsing the arceager algorithm can in practice be seen as a close approximation of incremental parsingin this paper we have analyzed the potential for incremental processing in deterministic dependency parsingour first result is negative since we have shown that strict incrementality is not achievable within the restrictive parsing framework considered herehowever we have also shown that the arceager parsing algorithm is optimal for incremental dependency parsing given the constraints imposed by the overall frameworkmoreover we have shown that in practical parsing the algorithm performs incremental processing for the majority of input structuresif we consider all sentences in the test data 
the share is roughly two thirds but if we limit our attention to wellformed output it is almost 90since deterministic dependency parsing has previously been shown to be competitive in terms of parsing accuracy we believe that this is a promising approach for situations that require parsing to be robust efficient and incrementalthe work presented in this paper was supported by a grant from the swedish research council the memorybased classifiers used in the experiments were constructed using the tilburg memorybased learner thanks to three anonymous reviewers for constructive comments on the submitted paper
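The four transitions discussed in this paper (left-arc, right-arc, reduce, shift) can be made concrete with a short sketch. This is only an illustration of the arc-eager algorithm as characterized above, not the authors' implementation: the function that picks the next transition is a placeholder (in the reported experiments that decision is made by a memory-based classifier), and all names below are assumed for the example.

# Minimal sketch of arc-eager dependency parsing (illustrative, not the authors' code).
# A configuration is (stack, input buffer, arc set); arcs are (head, dependent) pairs.

def has_head(w, arcs):
    return any(d == w for (h, d) in arcs)

def parse(words, next_transition):
    """next_transition(stack, buffer, arcs) -> one of 'LA', 'RA', 'RE', 'SH'.
    In practice this choice is made by a trained classifier; here it is a placeholder."""
    stack, buffer, arcs = [], list(words), set()
    while buffer:
        t = next_transition(stack, buffer, arcs)
        if t == 'LA' and stack and not has_head(stack[-1], arcs):
            # Left-Arc: attach the stack top as a dependent of the next input token, then pop it.
            arcs.add((buffer[0], stack.pop()))
        elif t == 'RA' and stack and not has_head(buffer[0], arcs):
            # Right-Arc: attach the next input token as a dependent of the stack top and push it,
            # so that it can later take right-dependents of its own.
            arcs.add((stack[-1], buffer[0]))
            stack.append(buffer.pop(0))
        elif t == 'RE' and stack and has_head(stack[-1], arcs):
            # Reduce: pop a token that already has a head; separating this from Right-Arc is what
            # permits incremental parsing of long right-dependent chains.
            stack.pop()
        else:
            # Shift: push the next input token (also used here as a fallback when the proposed
            # transition is not permissible in this sketch).
            stack.append(buffer.pop(0))
    return arcs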
W04-0308
incrementality in deterministic dependency parsingdeterministic dependency parsing is a robust and efficient approach to syntactic parsing of unrestricted natural language textin this paper we analyze its potential for incremental processing and conclude that strict incrementality is not achievable within this frameworkhowever we also show that it is possible to minimize the number of structures that require nonincremental processing by choosing an optimal parsing algorithmthis claim is substantiated with experimental evidence showing that the algorithm achieves incremental parsing for 689 of the input when tested on a random sample of swedish text
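The incrementality figures reported for this parser are based on counting connected components among the tokens on the stack at each configuration. The following is a rough sketch of that bookkeeping under the assumption that the parser logs (stack, arc set) pairs after every transition and that tokens are unique identifiers such as word positions; it is not the evaluation code used in the experiments.

# Sketch: share of configurations with at most one connected component on the stack,
# i.e. the configurations that satisfy the strict incrementality requirement described above.

def components_on_stack(stack, arcs):
    """Connected components of the dependency graph restricted to tokens on the stack.
    Assumes stack tokens are unique identifiers (e.g. positions), not raw word strings."""
    parent = {w: w for w in stack}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    on_stack = set(stack)
    for h, d in arcs:
        if h in on_stack and d in on_stack:
            parent[find(h)] = find(d)
    return len({find(w) for w in stack})

def incremental_share(configurations):
    """configurations: iterable of (stack, arcs) pairs logged during parsing."""
    counts = [components_on_stack(list(s), a) for s, a in configurations]
    ok = sum(1 for c in counts if c <= 1)   # zero or one component counts as incremental
    return ok / len(counts) if counts else 1.0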
senseval3 task automatic labeling of semantic roles the senseval3 task to perform automatic labeling of semantic roles was designed to encourage research into and use of the framenet dataset the task was based on the considerable expansion of the framenet data since the baseline study of automatic labeling of semantic roles by gildea and jurafsky the framenet data provide an extensive body of gold standard data that can be used in lexical semantics research as the basis for its further exploitation in nlp applications eight teams participated in the task with a total of 20 runs discussions among participants during development of the task and the scoring of their runs contributed to a successful task participants used a wide variety of techniques investigating many aspects of the framenet data they achieved results showing considerable improvements from gildea and jurafskys baseline study importantly their efforts have contributed considerably to making the complex framenet dataset more accessible they have amply demonstrated that framenet is a substantial lexical resource that will permit extensive further research and exploitation in nlp applications in the future wordsense disambiguation has frequently been criticized as a task in search of a reasonsince a considerable portion of a sense inventory has only a single sense the question has been raised whether the amount of effort required by disambiguation is worthwhileheretofore the focus of disambiguation has been on the sense inventory and has not examined the major reason why we would have lexical knowledge bases how the meanings would be represented and thus available for use in natural language processing applicationsat the present time a major paradigm for representing meaning has emerged in frame semantics specifically in the framenet projecta worthy objective for the senseval community is the development of a wide range of methods for automating frame semantics specifically identifying and labeling semantic roles in sentencesan important baseline study of this process has recently appeared in the literature the framenet project has put together a body of handlabeled data and the gildea and jurafsky study has put together a set of suitable metrics for evaluating the performance of an automatic systemthis senseval3 task calls for the development of systems to meet the same objectives as the gildea and jurafsky studythe data for this task is a sample of the framenet handannotated dataevaluation of systems is measured using precision and recall of frame elements and overlap of a systems frame element sentence positions with those identified in the framenet datathe basic task for senseval3 is given a sentence a target word and its frame identify the frame elements within that sentence and tag them with the appropriate frame element namethe framenet project has just released a major revision to its database with 487 frames using 696 distinctlynamed frame elements this release includes 132968 annotated sentences the senseval3 task used 8002 of these sentences selected randomly from 40 frames having at least 370 annotations 1 participants were provided with a training set that identified for each of the 40 frames the lexical unit identification number and a sentence identification namethey were also provided with the answers ie the frame element names and their beginning and ending positionssince the training set was much larger than the test set participants were required to use the framenet 11 dataset to obtain the full sentence its 
target word and the tagged frame elementsfor the test data participants were provided for each frame with sentence instances that identified the lexical unit the lexical unit identification number the sentence identification number the full sentence and a specification of the target along with its start and end positionsparticipants were required to submit their answers in a text file with one answer per lineeach line was to identify the frame name and the sentence identifier and then all the frame elements with their start and end positions that their systems were able to identifyfor example for the sentence however its task is made much more difficult by the fact that derogations granted to the welsh water authority allow it to pump raw sewage into both those rivers the correct answer would appear as follows the sentences provided to participants were not presegmented this was left to the participants systemsthe framenet dataset contains considerable information that was tagged by the framenet lexicographersparticipants could use any and all of the framenet data in developing and training their systemsin the test participants could use any of this data but were strongly encouraged to use only data available in the sentence itself and in the frame that is identifiedparticipants could submit two runs one with and one without using the additional data these were scored separatelyframenet recognizes the permissibility of conceptually salient frame elements that have not been instantiated in a sentence these are called null instantiations an example occurs in the following sentence from the motion frame i went and stood in the sitting room doorway but i could not get any further my legs would not move in this case the framenet taggers considered the path frame element to be an indefinite null instantiation frame elements that have been so designated for a particular sentence appear to be core frame elements but not all core frame elements missing from a sentence have designated as null instantiationsthe correct answer for this case based on the tagging is as follows motion1087911 theme path participants were instructed to identify null instantiations in submissions by giving a value for the frame elements position2 participants were told in the task description that null instantiations would be analyzed separately3 for this senseval task participants were allowed to download the training data at any time the 21day restriction on submission of results after downloading the training data was waived since this is a new senseval task and the dataset is very complexparticipants could work with the training data as long as they wishedthe 7day restriction of submitting results after downloading the test data still appliedin general framenet frames contain many frame elements most of which are not instantiated in a given sentencesystems were not penalized if they returned more frame elements than those identified by the framenet taggersfor the 8002 sentences in the test set only 16212 frame elements constituted the answer setin scoring the runs each frame element returned by a system was counted as an item attemptedif the frame element was one that had been identified by the framenet taggers the answer was scored as correctin addition however the scoring program required that the frame boundaries identified by the systems answer had to overlap with the boundaries identified by framenetan additional measure of system performance was the degree of overlapif a systems answer coincided exactly to 
framenets start and end position the system received an overlap score of 10if not the overlap score was the number of characters overlapping divided by the length of the framenet start and end positions 4 the number attempted was the number of nonnull frame elements generated by a systemprecision was computed as the number of correct answers divided by the number attemptedrecall was computed as the number of correct answers divided by the number of frame elements in the test setoverlap was the average overlap of all correct answersthe percent attempted was the number of frame elements generated divided by the number of frame elements in the test set multiplied by 100if a system returned frame elements not identified in the test set its precision would be lowereight teams submitted 20 runsthree teams submitted runs only for the restricted case the other five 4hence the problem with an element having as the start and end positions teams submitted at least two runs with one team submitting 8 runs and another submitting 4 runsfour of these five teams submitted a restricted run and an unrestricted run the results for the classification task are shown in table 1the average precision over all these runs is 0803 and the average recall is 0757the overlap in each run is almost identical to the precision and differs slightly because there may have been some slight positional errors in either the framenet data or the sentence string provided in the test datathe results for the restricted case are shown in table 2the average precision over all these runs is 0595 and the average recall is 0481the average overlap is noticeably lower than the precision indicating the additional difficulty for these runs of identifying the frame element boundariesin both cases the percent attempted is quite high except for one system in the restricted runsthis indicates that systems were able to identify potential frame elements in quite a large percentage of the casessystems were allowed to return any number of frame elements for a sentence and it is possible for a system to identify more frame elements than were identified by the framenet taggersfor example run 08a asserted many more frame elements than were identified in the answer keyas a result its percent attempted was much higher than 100 percentthe number of frame elements in other runs not identified in the answer key is unknownthe effect of a higher number attempted lowers the precision for a run and increases the percent attemptedoverall the results achieved in this senseval3 task were quite highseveral teams achieved results much better than those obtained by gildea and jurafskythe average precision of 080 for all runs in the unrestricted case is only slightly lower than the 82 accuracy achieved in that study when using presegmented constituentsmany teams achieved precision at or above 090 indicating that their routines for classifying constituents is quite goodin view of the fact that the number of frames and frame elements in framenet has expanded considerably since the gildea and jurafsky study it appears that the methods employed have become quite accurate in classifying constituents5 results for the restricted were also quite good in comparison with the gildea and jurafsky study which achieved 65 precision and 61 recall at the more difficult task of simultaneously segmenting constituents and identifying their semantic role in this task four teams achieved results between 80 and 90 percent for precision and between 65 and 78 percent for recallthe 
participants in this task used a wide variety of methods and data in their systemsin addition they used the framenet dataset from a wide diversity of perspectivesin some cases they developed mechanisms for grouping the framenet data by part of speech or making use of the nascent inheritance hierarchy in framenetin some cases they used all frames as a basis for training and in others they 5the diversity of frame elements in the test data has not yet been investigated so the assertion that this task is more difficult is based solely on the general expansion of framenet employed novel grouping methods based on the similarities among different framesthe successes of many teams seems to indicate that the framenet dataset is an excellent lexical resource and that the resources devoted to its development have been quite valuablethe collective efforts of the participants have contributed greatly to making this complex database more accessible and more amenable to even further development not only for research purposes but also for use in many nlp applications
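The scoring described for this task (a frame element counts as correct if its name matches and its boundaries overlap the FrameNet boundaries, with overlap measured as the fraction of overlapping characters over the length of the FrameNet span) can be summarized in a small sketch. This is an illustration of the metric as described, not the official scorer; the answer layout, the one-to-one matching of system and gold frame elements, and the omission of null instantiations are all assumptions made for the example.

# Sketch of the frame-element scoring described above (assumed data layout, not the official scorer).
# An answer is a dict mapping (frame, sentence_id) -> list of (fe_name, start, end) character spans.

def char_overlap(a_start, a_end, g_start, g_end):
    """Overlapping characters divided by the length of the gold (FrameNet) span."""
    overlap = min(a_end, g_end) - max(a_start, g_start) + 1
    gold_len = g_end - g_start + 1
    return max(0, overlap) / gold_len

def score(system, gold):
    attempted = correct = 0
    overlaps = []
    total_gold = sum(len(fes) for fes in gold.values())   # non-null frame elements in the answer key
    for key, fes in system.items():
        for name, start, end in fes:
            attempted += 1
            for g_name, g_start, g_end in gold.get(key, []):
                ov = char_overlap(start, end, g_start, g_end)
                if name == g_name and ov > 0:              # name must match and boundaries must overlap
                    correct += 1
                    overlaps.append(ov)
                    break                                   # assumption: each system FE matches at most one gold FE
    precision = correct / attempted if attempted else 0.0
    recall = correct / total_gold if total_gold else 0.0
    mean_overlap = sum(overlaps) / len(overlaps) if overlaps else 0.0
    pct_attempted = 100.0 * attempted / total_gold if total_gold else 0.0
    return precision, recall, mean_overlap, pct_attempted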
W04-0803
senseval3 task automatic labeling of semantic rolesthe senseval3 task to perform automatic labeling of semantic roles was designed to encourage research into and use of the framenet datasetthe task was based on the considerable expansion of the framenet data since the baseline study of automatic labeling of semantic roles by gildea and jurafskythe framenet data provide an extensive body of gold standard data that can be used in lexical semantics research as the basis for its further exploitation in nlp applicationseight teams participated in the task with a total of 20 runsdiscussions among participants during development of the task and the scoring of their runs contributed to a successful taskparticipants used a wide variety of techniques investigating many aspects of the framenet datathey achieved results showing considerable improvements from gildea and jurafskys baseline studyimportantly their efforts have contributed considerably to making the complex framenet dataset more accessiblethey have amply demonstrated that framenet is a substantial lexical resource that will permit extensive further research and exploitation in nlp applications in the futurewe conduct an evaluation exercise in the senseval3 workshop
the senseval3 english lexical sample task this paper presents the task definition resources participating systems and comparative results for the english lexical sample task which was orgaas part of the evaluation exercise the task drew the participation of 27 teams from around the world with a total of 47 systems we describe in this paper the task definition resources participating systems and comparative results for the english lexical sample task which was organized as part of the senseval3 evaluation exercisethe goal of this task was to create a framework for evaluation of systems that perform targeted word sense disambiguationthis task is a followup to similar tasks organized during the senseval1 and senseval2 evaluationsthe main changes in this years evaluation consist of a new methodology for collecting annotated data and a new sense inventory used for verb entries 2 building a sense tagged corpus with volunteer contributions over the web the sense annotated corpus required for this task was built using the open mind word expert system 1to overcome the current lack of sense tagged data and the limitations imposed by the creation of such data using trained lexicographers the omwe system enables the collection of semantically annotated corpora over the websense tagged examples are collected using a webbased application that allows contributors to annotate words with their meaningsthe tagging exercise proceeds as followsfor each target word the system extracts a set of sentences from a large textual corpusthese examples are presented to the contributors who are asked to select the most appropriate sense for the target word in each sentencethe selection is made using checkboxes which list all possible senses of the current target word plus two additional choices unclear and none of the above although users are encouraged to select only one meaning per word the selection of two or more senses is also possiblethe results of the classification submitted by other users are not presented to avoid artificial biasessimilar to the annotation scheme used for the english lexical sample at senseval2 we use a tag until two agree scheme with an upper bound on the number of annotations collected for each item set to fourthe data set used for the senseval3 english lexical sample task consists of examples extracted from the british national corpus earlier versions of omwe also included data from the penn treebank corpus the los angeles times collection as provided during trec conferences and open mind common sense the sense inventory used for nouns and adjectives is wordnet 171 which is consistent with the annotations done for the same task during senseval2verbs are instead annotated with senses from wordsmyth the main reason motivating selection of a different sense inventory is the weak verb performance of systems participating in the english lexical sample in senseval2 which may be due to the high number of senses defined for verbs in the wordnet sense inventoryby choosing a different set of senses we hope to gain insight into the dependence of difficulty of the sense disambiguation task on sense inventoriestable 1 presents the number of words under each part of speech and the average number of senses for each classfor this evaluation exercise we decided to isolate the task of semantic tagging from the task of identifying multiword expressions we applied a filter that removed all examples pertaining to multiword expressions prior to the tagging phaseconsequently the training and test data sets made 
available for this task do not contain collocations as possible target words but only single word unitsthis is a somewhat different definition of the task as compared to previous similar evaluations the difference may have an impact on the overall performance achieved by systems participating in the taskthe intertagger agreement obtained so far is closely comparable to the agreement figures previously reported in the literaturekilgarriff mentions that for the senseval2 nouns and adjectives there was a 665 agreement between the first two taggings entered for each itemabout 12 of that tagging consisted of multiword expressions and proper nouns which are usually not ambiguous and which are not considered during our data collection processso far we measured a 628 intertagger agreement between the first two taggings for single word tagging plus closeto100 precision in tagging multiword expressions and proper nouns this results in an overall agreement of about 673 which is reasonable and closely comparable with previous figuresnote that these figures are collected for the entire omwe data set build so far which consists of annotated data for more than 350 wordsin addition to raw intertagger agreement the kappa statistic which removes from the agreement rate the amount of agreement that is expected by chance was also determinedwe measure two figures microaverage where number of senses agreement by chance and are determined as an average for all words in the set and macroaverage where intertagger agreement agreement by chance and are individually determined for each of the words in the set and then combined in an overall averagewith an average of five senses per word the average value for the agreement by chance is measured at 020 resulting in a micro statistic of 058for macro estimations we assume that word senses follow the distribution observed in the omwe annotated data and under this assumption the macro is 03527 teams participated in this word sense disambiguation tasktables 2 and 3 list the names of the participating systems the corresponding institutions and the name of the first author which can be used as reference to a paper in this volume with more detailed descriptions of the systems and additional analysis of the resultsthere were no restrictions placed on the number of submissions each team could makea total number of 47 submissions were received for this tasktables 2 and 3 show all the submissions for each team gives a brief description of their approaches and lists the precision and recall obtained by each system under fine and coarse grained evaluationsthe precisionrecall baseline obtained for this task under the most frequent sense heuristic is 552 and 645 the performance of most systems is significantly higher than the baseline with the best system performing at 729 for fine grained scoringnot surprisingly several of the top performing systems are based on combinations of multiple classifiers which shows once again that voting schemes that combine several learning algorithms outperform the accuracy of individual classifiersthe english lexical sample task in senseval3 featured english ambiguous words that were to be tagged with their most appropriate wordnet or wordsmyth sensethe objective of this task was to determine feasibility of reliably finding the english lexical sample word sense disambiguation taskprecision and recall figures are provided for both fine grained and coarse grained scoringcorresponding team and reference to system description are indicated for the first 
system for each team appropriate sense for words with various degrees of polysemy using different sense inventories and determine the usefulness of sense annotated data collected over the web the results of 47 systems that participated in this event tentatively suggest that supervised machine learning techniques can significantly improve over the most frequent sense baseline and also that it is possible to design unsupervised techniques for reliable word sense disambiguationadditionally this task has highlighted creation of testing and training data by leveraging the knowledge of web volunteersthe training and test data sets used in this exercise are available online from httpwwwsensevalorg and httpteachcomputersorgmany thanks to all those who contributed to the open mind word expert project making this task possiblein particular we are grateful to gwen lenker our most productive contributorwe are also grateful to all the participants in this task for their hard work and involvement in this evaluation exercisewithout them all these comparative analyses would not be possiblewe are indebted to the princeton wordnet team for making wordnet available free of charge and to robert parks from wordsmyth for making available the verb entries used in this evaluationwe are particularly grateful to the national science foundation for their support under research grant iis0336793 and to the university of north texas for a research grant that provided funding for contributor prizes
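As a quick illustration of the kappa figures quoted above: under the usual definition kappa = (Pa - Pe) / (1 - Pe), the reported chance agreement of 0.20 combined with the observed agreement figures gives values close to the reported micro-average of 0.58. The function below is a generic sketch, not the computation actually used for the task, and only the numbers stated in the text are used.

# Generic kappa sketch: agreement corrected for chance.
def kappa(observed_agreement, chance_agreement):
    return (observed_agreement - chance_agreement) / (1.0 - chance_agreement)

# Using the figures reported above (approximate):
print(kappa(0.628, 0.20))  # single-word tagging only: about 0.54
print(kappa(0.673, 0.20))  # including multi-word expressions and proper nouns: about 0.59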
W04-0807
the senseval3 english lexical sample taskthis paper presents the task definition resources participating systems and comparative results for the english lexical sample task which was organized as part of the senseval3 evaluation exercisethe task drew the participation of 27 teams from around the world with a total of 47 systems
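The "tag until two agree" collection scheme used for this task, with an upper bound of four annotations per item, can be sketched as a simple stopping rule. This is an assumed reading of the scheme as described, not the Open Mind Word Expert code.

# Sketch of a "tag until two agree" annotation policy with at most four labels per item.
from collections import Counter

MAX_ANNOTATIONS = 4

def needs_more_annotations(labels):
    """labels: sense tags collected so far for one item, e.g. ['sense1', 'sense2', 'sense1']."""
    if len(labels) >= MAX_ANNOTATIONS:
        return False                                         # upper bound reached, stop collecting
    counts = Counter(labels)
    return not counts or counts.most_common(1)[0][1] < 2     # stop once two annotators agree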
the english allwords task merlsysteml 459 447 irstddd09you 729 441 autopsyou 490 433 clr04aw 506 431 autopsnvsyou 563 354 merlsystem2 480 352 dlsiuaallnosu 343 275 table 2 quotwithout youquot scores sorted by recall a s or you suffix after the system name indicates that the system was reported as supervised or unsupervised respectively system name email contact autops dianamsussexacuk autopsnvs dianamsussexacuk clr04aw kenclrescom dfaunsupaw davidlsiunedes dlsiuanosu montoyodlsiuaes gamblaw bartdecadtuaacbe ir stddd00 strappakfitcit irstddd09 strappakfitcit irstdddlsi strappakfitcit koc university dyuretkuedutr kunlpengall hcseokm1pkoreaackr lccaw parkerlanguagecomputercom meaning lluismlsiupces meaning simple lluismlsiupces merlsysteml bhikshamerlcom merlsystem2 bhikshamerlcom merlsystem3 bhikshamerlcom r2d2 eaw montoyodlsiuaes senselearner radacsuntedu ujaen mgarciaujaenes usussexprob3 juditapreissclcamacuk usussexprob4 juditapreissclcamacuk usussexprob5 juditapreissclcamacuk upvshmmeaw amolinadsicupves upvciaosenso amolinadsicupves upvciaosens02 amolinadsicupves table 3 email contact for each system sorted alphabetically the unsupervised systems in fact all of the seven systems reported as supervised scored higher than any of the nine systems reported as unsupervised in both precision and recall the greatest difference between these results those of the allwords task is that a greater number of systems have now achieved scores at or above the baseline while this result is encouraging it seems that the best systems have a hit a wall in the 65 70 range this is not surprising given the typical interannotator agreement of 7075 for this task we believe that further significant progress must await the development of resources with coarsergrained sense distinctions and with glosses that draw explicit contrasts between the senses resources more suitable for the task at hand the glossesthe glosses do not themselves make the sense distinctions explicitin fact we believe that most of the annotator disagreements were like this example between closely related wordnet senses with only subtle distinctions and that more coarsegrained sense distinctions are needed 26 systems were submitted by a total of 16 teamsthe system names along with email contacts are listed in table 3two sets of scores were computed for each systemfor the first set of scores we assumed an answer of you whenever the system failed to provide a sensethus the instance would be scored as correct if the answer key also marked it as you and incorrect otherwisefor the second set of scores we simply skipped every instance where the system did not provide a sensethus precision was not affected by those instances but recall was loweredeven though any given team may have intended their results to be interpreted one way or the other we have included both sets of scores for comparative purposestable 1 shows the system performance under the first interpretation of the results the average precision and recall is 522table 2 shows the system performance under the second interpretation of the results the average precision is 574 and 519 is the average recallsince comprehensive groupings of the wordnet senses do not yet exist all results given are the result of finegrained scoringalthough we did not compute a baseline score we received several baseline figures from our participantsdeniz yuret of koc university computed a baseline of 609 precision and recall by using the first wordnet entry for the given word and partofspeechbart decadt of the 
university of antwerp and submitter of the gamblaw system provided a baseline of 624 using the same method as with the senseval2 english allwords task the supervised systems fared much better than the unsupervised systems in fact all of the seven systems reported as supervised scored higher than any of the nine systems reported as unsupervised in both precision and recall the greatest difference between these results and those of the senseval2 english allwords task is that a greater number of systems have now achieved scores at or above the baseline while this result is encouraging it seems that the best systems have hit a wall in the 65-70 range this is not surprising given the typical interannotator agreement of 70-75 for this task we believe that further significant progress must await the development of resources with coarsergrained sense distinctions and with glosses that draw explicit contrasts between the senses resources more suitable for the task at hand
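The two scoring interpretations described earlier for this task, one that substitutes an answer of "U" for every missing answer and one that simply skips unanswered instances, amount to the following sketch. It is illustrative only and assumes a single answer per instance; the official scorer may differ in details.

# Sketch of the two ways the all-words submissions were scored (assumed single-answer items).
def score(system, gold, treat_missing_as_u=True):
    """system, gold: dicts mapping instance id -> sense label; ids missing from system = no answer."""
    correct = attempted = 0
    for inst, gold_sense in gold.items():
        answer = system.get(inst)
        if answer is None:
            if treat_missing_as_u:
                answer = 'U'          # first interpretation: assume an answer of U
            else:
                continue              # second interpretation: skip; recall drops, precision is unaffected
        attempted += 1
        if answer == gold_sense:
            correct += 1
    precision = correct / attempted if attempted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall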
W04-0811
the english allwords taskwe describe our experience in preparing the sensetagged corpus used in the english allwords task and we tabulate the scores
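The baseline figures quoted for this task were obtained by always choosing the first WordNet entry for the given word and part of speech. A rough sketch of that heuristic is given below, assuming NLTK's WordNet interface is available; the participants' own baselines were computed independently, the task used WordNet 1.7.1 while NLTK ships a later version, and the mapping from synsets to answer-key sense labels is glossed over here.

# First-sense baseline sketch (assumes NLTK's WordNet; sense-key formatting details omitted).
from nltk.corpus import wordnet as wn

def first_sense(word, pos):
    """pos is one of wn.NOUN, wn.VERB, wn.ADJ, wn.ADV; returns a sense key for the first entry."""
    synsets = wn.synsets(word, pos=pos)
    if not synsets:
        return None
    for lemma in synsets[0].lemmas():
        if lemma.name().lower() == word.lower():
            return lemma.key()        # WordNet sense key, for scoring against the answer key
    return synsets[0].lemmas()[0].key()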
rouge a package for automatic evaluation of summaries for recalloriented understudy for gisting evaluation it includes measures to automatically determine the quality of a summary by comparing it to other summaries created by humans the measures count the number of overlapping units such as ngram word sequences and word pairs between the computergenerated summary to be evaluated and the ideal summaries created by humans this paper introduces four different included in the summarization evaluation package and their evaluations three of them have been used in the document understanding conference 2004 a largescale summarization evaluation sponsored by nist traditionally evaluation of summarization involves human judgments of different quality metrics for example coherence conciseness grammaticality readability and content however even simple manual evaluation of summaries on a large scale over a few linguistic quality questions and content coverage as in the document understanding conference would require over 3000 hours of human effortsthis is very expensive and difficult to conduct in a frequent basistherefore how to evaluate summaries automatically has drawn a lot of attention in the summarization research community in recent yearsfor example saggion et al proposed three contentbased evaluation methods that measure similarity between summariesthese methods are cosine similarity unit overlap and longest common subsequencehowever they did not show how the results of these automatic evaluation methods correlate to human judgmentsfollowing the successful application of automatic evaluation methods such as bleu in machine translation evaluation lin and hovy showed that methods similar to bleu ie ngram cooccurrence statistics could be applied to evaluate summariesin this paper we introduce a package rouge for automatic evaluation of summaries and its evaluationsrouge stands for recalloriented understudy for gisting evaluationit includes several automatic evaluation methods that measure the similarity between summarieswe describe rougen in section 2 rougel in section 3 rougew in section 4 and rouges in section 5section 6 shows how these measures correlate with human judgments using duc 2001 2002 and 2003 datasection 7 concludes this paper and discusses future directionsformally rougen is an ngram recall between a candidate summary and a set of reference summariesrougen is computed as follows where n stands for the length of the ngram gramn and countmatch is the maximum number of ngrams cooccurring in a candidate summary and a set of reference summariesit is clear that rougen is a recallrelated measure because the denominator of the equation is the total sum of the number of ngrams occurring at the reference summary sidea closely related measure bleu used in automatic evaluation of machine translation is a precisionbased measurebleu measures how well a candidate translation matches a set of reference translations by counting the percentage of ngrams in the candidate translation overlapping with the referencesplease see papineni et al for details about bleunote that the number of ngrams in the denominator of the rougen formula increases as we add more referencesthis is intuitive and reasonable because there might exist multiple good summariesevery time we add a reference into the pool we expand the space of alternative summariesby controlling what types of references we add to the reference pool we can design evaluations that focus on different aspects of summarizationalso note that the numerator 
sums over all reference summariesthis effectively gives more weight to matching ngrams occurring in multiple referencestherefore a candidate summary that contains words shared by more references is favored by the rougen measurethis is again very intuitive and reasonable because we normally prefer a candidate summary that is more similar to consensus among reference summariesso far we only demonstrated how to compute rougen using a single referencewhen multiple references are used we compute pairwise summarylevel rougen between a candidate summary s and every reference ri in the reference setwe then take the maximum of pairwise summarylevel rougen scores as the final multiple reference rougen scorethis can be written as follows this procedure is also applied to computation of rougel rougew and rouges in the implementation we use a jackknifing proceduregiven m references we compute the best score over m sets of m1 referencesthe final rougen score is the average of the m rougen scores using different m1 referencesthe jackknifing procedure is adopted since we often need to compare system and human performance and the reference summaries are usually the only human summaries availableusing this procedure we are able to estimate average human performance by averaging m rougen scores of one reference vs the rest m1 referencesalthough the jackknifing procedure is not necessary when we just want to compute rouge scores using multiple references it is applied in all rouge score computations in the rouge evaluation packagein the next section we describe a rouge measure based on longest common subsequences between two summariesa sequence z z1 z2 zn is a subsequence of another sequence x x1 x2 xm if there exists a strict increasing sequence i1 i2 ik of indices of x such that for all j 1 2 k we have xij zj given two sequences x and y the longest common subsequence of x and y is a common subsequence with maximum lengthlcs has been used in identifying cognate candidates during construction of nbest translation lexicon from parallel textmelamed used the ratio between the length of the lcs of two words and the length of the longer word of the two words to measure the cognateness between themhe used lcs as an approximate string matching algorithmsaggion et al used normalized pairwise lcs to compare similarity between two texts in automatic summarization evaluationto apply lcs in summarization evaluation we view a summary sentence as a sequence of wordsthe intuition is that the longer the lcs of two summary sentences is the more similar the two summaries arewe propose using lcsbased fmeasure to estimate the similarity between two summaries x of length m and y of length n assuming x is a reference summary sentence and y is a candidate summary sentence as follows where lcs is the length of a longest common subsequence of x and y and ß plcsrlcs when flcsrlcsflcsplcsin duc ß is set to a very big number therefore only rlcs is consideredwe call the lcsbased fmeasure ieequation 4 rougel notice that rougel is 1 when x y while rougel is zero when lcs 0 ie there is nothing in common between x and y fmeasure or its equivalents has been shown to have met several theoretical criteria in measuring accuracy involving more than one factor the composite factors are lcsbased recall and precision in this casemelamed et al used unigram fmeasure to estimate machine translation quality and showed that unigram fmeasure was as good as bleuone advantage of using lcs is that it does not require consecutive matches but insequence 
matches that reflect sentence level word order as ngramsthe other advantage is that it automatically includes longest insequence common ngrams therefore no predefined ngram length is necessaryrougel as defined in equation 4 has the property that its value is less than or equal to the minimum of unigram fmeasure of x and y unigram recall reflects the proportion of words in x that are also present in y while unigram precision is the proportion of words in y that are also in x unigram recall and precision count all cooccurring words regardless their orders while rougel counts only insequence cooccurrencesby only awarding credit to insequence unigram matches rougel also captures sentence level structure in a natural wayconsider the following example s1 police killed the gunman s2 police kill the gunman s3 the gunman kill police we only consider rouge2 ien2 for the purpose of explanationusing s1 as the reference and s2 and s3 as the candidate summary sentences s2 and s3 would have the same rouge2 score since they both have one bigram iethe gunmanhowever s2 and s3 have very different meaningsin the case of rougel s2 has a score of 34 075 and s3 has a score of 24 05 with ß 1therefore s2 is better than s3 according to rougelthis example also illustrated that rougel can work reliably at sentence levelhowever lcs suffers one disadvantage that it only counts the main insequence words therefore other alternative lcses and shorter sequences are not reflected in the final scorefor example given the following candidate sentence s4 the gunman police killed using s1 as its reference lcs counts either the gunman or police killed but not both therefore s4 has the same rougel score as s3rouge2 would prefer s4 than s3previous section described how to compute sentencelevel lcsbased fmeasure scorewhen applying to summarylevel we take the union lcs matches between a reference summary sentence ri and every candidate summary sentence cjgiven a reference summary of you sentences containing a total of m words and a candidate summary of v sentences containing a total of n words the summarylevel lcsbased fmeasure can be computed as follows again ß is set to a very big number in duc ie only rlcs is consideredlcs is the lcs score of the union longest common subsequence between reference sentence ri and candidate summary c for example if ri w1 w2 w3 w4 w5 and c contains two sentences c1 w1 w2 w6 w7 w8 and c2 w1 w3 w8 w9 w5 then the longest common subsequence of ri and c1 is w1 w2 and the longest common subsequence of ri and c2 is w1 w3 w5the union longest common subsequence of ri c1 and c2 is w1 w2 w3 w5 and lcs 45the normalized pairwise lcs proposed by radev et al between two summaries s1 and s2 lcsmead is written as follows assuming s1 has m words and s2 has n words equation 8 can be rewritten as equation 9 due to symmetry we then define mead lcs recall and mead lcs precision as follows we can rewrite equation in terms of rlcsmead and plcsmead with a constant parameter ß 1 as follows equation 12 shows that normalized pairwise lcs as defined in radev et al and implemented in mead is also a fmeasure with ß 1sentencelevel normalized pairwise lcs is the same as rougel with ß 1besides setting ß 1 summarylevel normalized pairwise lcs is different from rougel in how a sentence gets its lcs score from its referencesnormalized pairwise lcs takes the best lcs score while rougel takes the union lcs scorelcs has many nice properties as we have described in the previous sectionsunfortunately the basic lcs also has a problem that it 
does not differentiate lcses of different spatial relations within their embedding sequencesfor example given a reference sequence x and two candidate sequences y1 and y2 as follows y1 and y2 have the same rougel scorehowever in this case y1 should be the better choice than y2 because y1 has consecutive matchesto improve the basic lcs method we can simply remember the length of consecutive matches encountered so far to a regular two dimensional dynamic program table computing lcswe call this weighted lcs and use k to indicate the length of the current consecutive matches ending at words xi and yjgiven two sentences x and y the wlcs score of x and y can be computed using the following dynamic programming procedure for f for any positive integers x and yin other words consecutive matches are awarded more scores than nonconsecutive matchesfor example fak b when k 0 and a b 0this function charges a gap penalty of b for each nonconsecutive ngram sequencesanother possible function family is the polynomial family of the form ka where a 1however in order to normalize the final rougew score we also prefer to have a function that has a close form inverse functionfor example fk2 has a close form inverse function f 1k12fmeasure based on wlcs can be computed as follows given two sequences x of length m and y of length n where f 1 is the inverse function of f in duc ß is set to a very big number therefore only rwlcs is consideredwe call the wlcsbased fmeasure ieequation 15 rougewusing equation 15 and fk2 as the weighting function the rougew scores for sequences y1 and y2 are 0571 and 0286 respectivelytherefore y1 would be ranked higher than y2 using wlcswe use the polynomial function of the form ka in the rouge evaluation packagein the next section we introduce the skipbigram cooccurrence statisticsskipbigram is any pair of words in their sentence order allowing for arbitrary gapsskipbigram cooccurrence statistics measure the overlap of skipbigrams between a candidate translation and a set of reference translationsusing the example given in section 31 each sentence has c1 6 skipbigramsfor example s1 has the following skipbigrams s2 has three skipbigram matches with s1 s3 has one skipbigram match with s1 and s4 has two skipbigram matches with s1 given translations x of length m and y of length n assuming x is a reference translation and y is a candidate translation we compute skipbigrambased fmeasure as follows where skip2 is the number of skipbigram matches between x and y ß controlling the relative importance of pskip2 and rskip2 and c is the combination functionwe call the skipbigrambased fmeasure ieequation 18 rougesusing equation 18 with ß 1 and s1 as the reference s2s rouges score is 05 s3 is 0167 and s4 is 0333therefore s2 is better than s3 and s4 and s4 is better than s3this result is more intuitive than using bleu2 and rougel one advantage of skipbigram vs bleu is that it does not require consecutive matches but is still sensitive to word ordercomparing skipbigram with lcs skipbigram counts all inorder matching word pairs while lcs only counts one longest common subsequenceapplying skipbigram without any constraint on the distance between the words spurious matches such as the the or of in might be counted as valid matchesto reduce these spurious matches we can limit the maximum skip distance dskip between two inorder words that is allowed to form a skipbigramfor example if we set dskip to 0 then rouges is equivalent to bigram overlap fmeasureif we set dskip to 4 then only word pairs of at most 
4 words apart can form skipbigramsadjusting equations 16 17 and 18 to use maximum skip distance limit is straightforward we only count the skipbigram matches skip2 within the maximum skip distance and replace denominators of equations 16 c and 17 c with the actual numbers of within distance skipbigrams from the reference and the candidate respectivelyone potential problem for rouges is that it does not give any credit to a candidate sentence if the sentence does not have any word pair cooccurring with its referencesfor example the following sentence has a rouges score of zero s5 gunman the killed police s5 is the exact reverse of s1 and there is no skip bigram match between themhowever we would like to differentiate sentences similar to s5 from sentences that do not have single word cooccurrence with s1to achieve this we extend rouges with the addition of unigram as counting unitthe extended version is called rougesuwe can also obtain rougesu from rouges by adding a beginofsentence marker at the beginning of candidate and reference sentencesto assess the effectiveness of rouge measures we compute the correlation between rouge assigned summary scores and human assigned summary scoresthe intuition is that a good evaluation measure should assign a good score to a good summary and a bad score to a bad summarythe ground truth is based on human assigned scoresacquiring human judgments are usually very expensive fortunately we have duc 2001 2002 and 2003 evaluation data that include human judgments for the following besides these human judgments we also have 3 sets of manual summaries for duc 2001 2 sets for duc 2002 and 4 sets for duc 2003human judges assigned content coverage scores to a candidate summary by examining the percentage of content overlap between a manual summary unit ie elementary discourse unit or sentence and the candidate summary using summary evaluation environment3 developed by the university of southern californias information sciences institute the overall candidate summary score is the average of the content coverage scores of all the units in the manual summarynote that human judges used only one manual summary in all the evaluations although multiple alternative summaries were availablewith the duc data we computed pearsons product moment correlation coefficients spearmans rank order correlation coefficients and kendalls correlation coefficients between systems average rouge scores and their human assigned average coverage scores using single reference and multiple referencesto investigate the effect of stemming and inclusion or exclusion of stopwords we also ran experiments over original automatic and manual summaries stemmed4 version of the summaries and stopped version of the summaries for example we computed rouge scores for the 12 systems participated in the duc 2001 single document summarization evaluation using the case set with single reference and then calculated the three correlation scores for these 12 systems rouge scores vs human assigned average coverage scoresafter that we repeated the process using multiple references and then using stem and stop setstherefore 2 x 3 x 3 18 data points were collected for each rouge measure and each duc taskto assess the significance of the results we applied bootstrap resampling technique to estimate 95 confidence intervals for every correlation computation17 rouge measures were tested for each run using rouge evaluation package v121 rougen with n 1 to 9 rougel rougew with weighting factor a 12 rouges and rougesu with 
maximum skip distance d1o 1 4 and 9due to limitation of space we only report correlation analysis results based on pearsons correlation coefficientcorrelation analyses based on spearmans and kendalls correlation coefficients are tracking pearsons very closely and will be posted later at the rouge website5 for referencethe critical value6 for pearsons correlation is 0632 at 95 confidence with 8 degrees of freedomtable 1 shows the pearsons correlation coefficients of the 17 rouge measures vs human judgments on duc 2001 and 2002 100 words single document summarization datathe best values in each column are marked with dark color and statistically equivalent values to the best values are marked with graywe found that correlations were not affected by stemming or removal of stopwords in this data set rouge2 performed better among the rougen variants rougel rougew and rouges were all performing well and using multiple references improved performance though not muchall rouge measures achieved very good correlation with human judgments in the duc 2002 datathis might due to the double sample size in duc 2002 for each systemtable 2 shows the correlation analysis results on the duc 2003 single document very short summary datawe found that rouge1 rougel rougesu4 and 9 and rougew were very good measures in this category rougen with n 1 performed significantly worse than all other measures and exclusion of stopwords improved performance in general except for rouge1due to the large number of samples in this data set using multiple references did not improve correlationsin table 3 a1 a2 and a3 we show correlation analysis results on duc 2001 2002 and 2003 100 words multidocument summarization datathe results indicated that using multiple references improved correlation and exclusion of stopwords usually improved performancerouge1 2 and 3 performed fine but were not consistentrouge1 rouges4 rougesu4 rouges9 and rougesu9 with stopword removal had correlation above 070rougel and rougew did not work well in this set of datatable 3 c d1 d2 e1 e2 and f show the correlation analyses using multiple references on the rest of duc datathese results again suggested that exclusion of stopwords achieved better performance especially in multidocument summaries of 50 wordsbetter correlations were observed on long summary tasks ie200 and 400 words summariesthe relative performance of rouge measures followed the pattern of the 100 words multidocument summarization taskcomparing the results in table 3 with tables 1 and 2 we found that correlation values in the multidocument tasks rarely reached high 90 except in long summary tasksone possible explanation of this outcome is that we did not have large amount of samples for the multidocument tasksin the single document summarization tasks we had over 100 samples while we only had about 30 samples in the multidocument tasksthe only tasks that had over 30 samples was from duc 2002 and the correlations of rouge measures with human judgments on the 100 words summary task were much better and more stable than similar tasks in duc 2001 and 2003statistically stable human judgments of system performance might not be obtained due to lack of samples and this in turn caused instability of correlation analysesin this paper we introduced rouge an automatic evaluation package for summarization and conducted comprehensive evaluations of the automatic measures included in the rouge package using three years of duc datato check the significance of the results we estimated confidence intervals 
of correlations using bootstrap resamplingwe found that rouge2 rougel rougew and rouges worked well in single document summarization tasks rouge1 rougel rougew rougesu4 and rougesu9 performed great in evaluating very short summaries correlation of high 90 was hard to achieve for multidocument summarization tasks but rouge1 rouge2 rouges4 rouges9 rougesu4 and rougesu9 worked reasonably well when stopwords were excluded from matching exclusion of stopwords usually improved correlation and correlations to human judgments were increased by using multiple referencesin summary we showed that the rouge package could be used effectively in automatic evaluation of summariesin a separate study rougel w and s were also shown to be very effective in automatic evaluation of machine translationthe stability and reliability of rouge at different sample sizes was reported by the author in however how to achieve high correlation with human judgments in multidocument summarization tasks as rouge already did in single document summarization tasks is still an open research topicthe author would like to thank the anonymous reviewers for their constructive comments paul over at nist yousa and rouge users around the world for testing and providing useful feedback on earlier versions of the rouge evaluation package and the darpa tides project for supporting this research
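The ROUGE-N and ROUGE-L computations described above can be sketched compactly. This is an illustration of the definitions in the paper (clipped n-gram recall against a reference, and LCS-based F-measure, which with the very large beta used in DUC is effectively LCS recall), not the released ROUGE package; it handles a single reference without jackknifing and assumes pre-tokenized input.

# Sketch of ROUGE-N (n-gram recall) and sentence-level ROUGE-L (LCS-based F-measure) for one reference.
from collections import Counter

def rouge_n(candidate, reference, n=2):
    """candidate, reference: lists of tokens. Clipped n-gram recall against the reference."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    match = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(ref.values())
    return match / total if total else 0.0

def lcs_length(x, y):
    """Standard dynamic-programming longest common subsequence length."""
    table = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xi in enumerate(x, 1):
        for j, yj in enumerate(y, 1):
            table[i][j] = table[i-1][j-1] + 1 if xi == yj else max(table[i-1][j], table[i][j-1])
    return table[len(x)][len(y)]

def rouge_l(candidate, reference, beta=1.0):
    """LCS-based F-measure; DUC used a very large beta, which reduces this to LCS recall."""
    lcs = lcs_length(candidate, reference)
    r = lcs / len(reference) if reference else 0.0
    p = lcs / len(candidate) if candidate else 0.0
    if r == 0.0 and p == 0.0:
        return 0.0
    return (1 + beta**2) * r * p / (r + beta**2 * p)

# The worked example from the paper: with S1 as reference and beta = 1,
# S2 scores 3/4 = 0.75 and S3 scores 2/4 = 0.5 under ROUGE-L.
s1 = "police killed the gunman".split()
s2 = "police kill the gunman".split()
s3 = "the gunman kill police".split()
print(rouge_l(s2, s1), rouge_l(s3, s1))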
W04-1013
rouge a package for automatic evaluation of summariesrouge stands for recalloriented understudy for gisting evaluationit includes measures to automatically determine the quality of a summary by comparing it to other summaries created by humansthe measures count the number of overlapping units such as ngram word sequences and word pairs between the computergenerated summary to be evaluated and the ideal summaries created by humansthis paper introduces four different rouge measures rougen rougel rougew and rouges included in the rouge summarization evaluation package and their evaluationsthree of them have been used in the document understanding conference 2004 a largescale summarization evaluation sponsored by nist
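The skip-bigram statistics behind ROUGE-S, with an optional maximum skip distance, follow the same pattern. The sketch below is again illustrative rather than the released package: it uses a single reference, combines recall and precision with beta = 1, and reads the maximum skip distance as the number of words allowed between the two members of a pair.

# Sketch of ROUGE-S: skip-bigram co-occurrence with an optional maximum skip distance.
from collections import Counter

def skip_bigrams(tokens, max_skip=None):
    """All in-order word pairs; if max_skip is given, at most max_skip words may lie between them
    (max_skip=0 therefore reduces to ordinary bigrams)."""
    pairs = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, len(tokens)):
            if max_skip is not None and j - i - 1 > max_skip:
                break
            pairs[(tokens[i], tokens[j])] += 1
    return pairs

def rouge_s(candidate, reference, beta=1.0, max_skip=None):
    cand = skip_bigrams(candidate, max_skip)
    ref = skip_bigrams(reference, max_skip)
    match = sum(min(c, ref[p]) for p, c in cand.items())
    r = match / sum(ref.values()) if ref else 0.0
    p = match / sum(cand.values()) if cand else 0.0
    if r == 0.0 and p == 0.0:
        return 0.0
    return (1 + beta**2) * r * p / (r + beta**2 * p)

# With S1 as reference and beta = 1, the paper reports 0.5 for S2 and 0.333 for S4.
s1 = "police killed the gunman".split()
s2 = "police kill the gunman".split()
s4 = "the gunman police killed".split()
print(rouge_s(s2, s1), rouge_s(s4, s1))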
biomedical named entity recognition using conditional random fields and rich feature sets as the wealth of biomedical knowledge in the form of literature increases there is a rising need for effective natural language processing tools to assist in organizing curating and retrieving this informationto that end named entity recognition is an important first step for many of these larger information management goalsin recent years much attention has been focused on the problem of recognizing gene and protein mentions in biomedical abstractsthis paper presents a framework for simultaneously recognizing occurrences of protein dna rna cellline and celltype entity classes using conditional random fields with a variety of traditional and novel featuresi show that this approach can achieve an overall f1 measure around 70 which seems to be the current state of the artthe system described here was developed as part of the bionlpnlpba 2004 shared taskexperiments were conducted on a training and evaluation set provided by the task organizersbiomedical named entity recognition can be thought of as a sequence segmentation problem each word is a token in a sequence to be assigned a label conditional random fields are undirected statistical graphical models a special case of which is a linear chain that corresponds to a conditionally trained finitestate machinesuch models are well suited to sequence analysis and crfs in more accurately the data is in iob formatbdna labels the first word of a dna mention idna labels all subsequent words and o labels nonentitiesfor simplicity this paper only refers to the entities not all the iob label variants particular have been shown to be useful in partofspeech tagging shallow parsing and named entity recognition for newswire data they have also just recently been applied to the more limited task of finding gene and protein mentions with promising early resultslet o be an sequence of observed words of length n let 5 be a set of states in a finite state machine each corresponding to a label l e l let s be the sequence of states in 5 that correspond to the labels assigned to words in the input sequence o linearchain crfs define the conditional probability of a state sequence given an input sequence to be where zo is a normalization factor of all state sequences fj is one of m functions that describes a feature and λj is a learned weight for each such feature functionthis paper considers the case of crfs that use a firstorder markov independence assumption with binary feature functionsfor example a feature may have a value of 0 in most cases but given the text the atpase it has the value 1 along the transition where si_1 corresponds to a state with the label other si corresponds to a state with the label protein and fj is the feature function wordatpase e o at position i in the sequenceother feature functions that could have the value 1 along this transition are capitalized mixedcase and suffixaseintuitively the learned feature weight λj for each feature fj should be positive for features that are correlated with the target label negative for features that are anticorrelated with the label and near zero for relatively uninformative featuresthese weights are set to maximize the conditional log likelihood of labeled sequences in a training set d f when the training state sequences are fully labeled and unambiguous the objective function is convex thus the model is guaranteed to find the optimal weight settings in terms of llonce these settings are found the labeling for an 
new unlabeled sequence can be done using a modified viterbi algorithmcrfs are presented in more complete detail by lafferty et al these experiments use the mallet implementation of crfs which uses a quasinewton method called lbfgs to find these feature weights efficientlyone property that makes feature based statistical models like crfs so attractive is that they reduce the problem to finding an appropriate feature setthis section outlines the two main types of features used in these experimentsthe simplest and most obvious feature set is the vocabulary from the training datageneralizations over how these words appear are also importantthe present model includes training vocabulary 17 orthographic features based on regular expressions as well as prefixes and suffixes in the character length range 35words are also assigned a generalized word class similar to collins which replaces capital letters with a lowercase letters with a digits with 0 and all other characters with there is a similar brief word class feature which collapses consecutive identical characters into onethus the words il5 and sh3 would both be given the features wcaa0 and bwca0 while factin and tcells would both be assigned wca aaaaa and bwca ato model local context simply neighboring words in the window 11 are also added as featuresfor instance the middle token in the sequence human udg promoter would have features wordudg neighborhuman and neighborpromoterin addition to orthography the model could also benefit from generalized semantic word groupsif training sequences contain pmlrar alpha beta 2m and kappa bspecific dna binding protein all labeled with protein the model might learn that the words alpha beta and kappa are indicative of proteins but cannot capture the fact that they are all semantically related because they are greek letterssimilarly words with the feature wcaaa are often part of protein names such as rab alu and gag but the model may have a difficult time setting the weights for this feature when confronted with words like phe arg and cys which are amino acid abbreviations and not often labeled as part of a protein namethis sort of semantic domain knowledge can be provided in the form of lexiconsi prepared a total of 17 such lexicons which include 7 that were entered by hand and 4 corresponding to genes chromosome locations proteins and cell lines drawn from online public databases feature functions for the lexicons are set to 1 if they match words in the input sequence exactlyfor lexicon entries that are multiword all words are required to match in the input sequencesince no suitable database of terms for the celltype class was found online a lexicon was constructed by utilizing google sets6 an online tool which takes a few seed examples and leverages googles web index to return other terms that appear in similar formatting and context as the seeds on web pages across the internetseveral examples from the training data were used as seeds and new cell types were returnedthe process was repeated until the lexicon grew to roughly 50 entries though it could probably be more completewith all this information at the models disposal it can still be difficult to properly disambiguate between these entitiesfor example the acronym epc appears in these static lexicons both as a protein and as a cell line furthermore a single word like transcript is sometimes all that disambiguates between rna and dna mentions the crf can learn weights for these individual words but it may help to build general dynamic keyword 
lexicons that are associated with each label to assist in disambiguating between similar classes these keyword lexicons are generated automatically as followsall of the labeled terms are extracted from the training set and separated into five lists stop words greek letters and digits are filtered and remaining words are tallied for raw frequency counts under each entity class labelthese frequencies are then subjected to a x2 test where the null hypothesis is that a words frequency is the same for a given entity as it is for any other entity of interest all words for which the null hypothesis is rejected with a pvalue 0005 are added to the keyword lexicon for its majority classsome example keywords are listed in table 1two experiments were completed in the time allotted one crf model using only the orthographic features described in section 31 and a second system using all the semantic lexicons from 32 as welldetailed results are presented in table 2the orthographic model achieves an overall f1 measure of 698 on the evaluation set converging after 230 training iterations and approximately 18 hours of computationthe complete model however only reached an overall f1 of 695 on the evaluation set converging after 152 iterations in approximately 9 hoursthe deleterious effect of the semantic lexicons is surprising and puzzling7 however even though semantic lexicons slightly decrease overall performance it is worthwhile to note that adding lexicons actually improves both recall and precision for the rna and cellline entitiesthese happen to be the two lowest frequency class labels in the data together comprising less than 10 of the mentions in either the training or evaluation seterror analysis shows that several of the orthographic models false negatives for these entities are of the form messenger accumulation or nonadherent culture it may be that keyword lexicons contributed to the model identifying these low frequency terms more accuratelyalso of note is that in both experiments the crf framework achieves somewhat comparable performance across all entitiesin a previous attempt to use a hidden markov model to simultaneously recognize multiple biomedical entities hmm performance for a particular entity seemed more or less proportional to its frequency in the datathe advantage of the crf here may be due to the fact that hmms are generative models trained to learn the joint probability p where data for l may be sparse and use bayes rule to predict the best labelcrfs are discriminative models trained to maximize p directlyin short i have presented in detail a framework for recognizing multiple entity classes in biomedical abstracts with conditional random fieldsi have shown that a crfbased model with only simple orthographic features can achieve performance near the current state of the art while using semantic lexicons do not positively affect performance while the system presented here shows promise there is still much to be exploredricher syntactic information such as shallow parsing may be usefulthe method introduced in section 32 to generate semantic keywords can also be adapted to generate features for entityspecific morphology and context both linearly and hierarchically most interesting though might be to investigate why the lexicons do not generally helpone explanation is simply an issue of tokenizationwhile one abstract refers to il12 others may write il12 or il 12 similarly the generalization of entities to groups can cause problems for these rigid lexicons that require exact 
matching. Enumerating all such variants for every entry in a lexicon is absurd. Perhaps relaxing the matching criteria and standardizing tokenization for both the input and the lexicons will improve their utility. I would like to thank my advisor, Mark Craven, for his advice and guidance, as well as Andrew McCallum and Aron Culotta for answering my questions about the MALLET system. This work is supported by NLM training grant 5T15LM00735902 and NIH grant R01 LM0705001.
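The word class and brief word class features described earlier for this system can be sketched in a few lines. This is a hypothetical reimplementation based on the description above; the handling of characters other than letters and digits is an assumption, since that detail was garbled in the text.

```python
import re

def word_class(token):
    """Collins-style word class: capitals -> 'A', lowercase -> 'a',
    digits -> '0', everything else -> '-' (assumed mapping)."""
    out = []
    for ch in token:
        if ch.isupper():
            out.append("A")
        elif ch.islower():
            out.append("a")
        elif ch.isdigit():
            out.append("0")
        else:
            out.append("-")
    return "".join(out)

def brief_word_class(token):
    """Brief word class: collapse runs of identical class characters to one."""
    return re.sub(r"(.)\1+", r"\1", word_class(token))

for tok in ["IL5", "SH3", "F-actin", "T-cells"]:
    print(tok, word_class(tok), brief_word_class(tok))
# IL5 -> AA0 / A0, SH3 -> AA0 / A0, F-actin -> A-aaaaa / A-a, T-cells -> A-aaaaa / A-a
```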
W04-1221
biomedical named entity recognition using conditional random fields and rich feature sets
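The dynamic keyword lexicons described for this system (tally word frequencies under each entity class, then keep words whose distribution differs significantly from the other classes) can be sketched as follows. This is a simplified, hypothetical version: the stop-word, Greek-letter, and digit filtering mentioned in the text is omitted, and the 2x2 chi-square statistic is compared directly against the df = 1 critical value for p = 0.005 rather than computing a p-value.

```python
from collections import Counter

CRITICAL_CHI2 = 7.879  # chi-square critical value, df = 1, p = 0.005

def keyword_lexicons(labeled_terms):
    """labeled_terms: iterable of (entity_class, list_of_words) pairs.
    Returns a keyword lexicon (set of words) per entity class."""
    counts = {}
    for entity_class, words in labeled_terms:
        counts.setdefault(entity_class, Counter()).update(words)
    totals = {c: sum(cnt.values()) for c, cnt in counts.items()}
    grand_total = sum(totals.values())
    lexicons = {c: set() for c in counts}
    for word in {w for cnt in counts.values() for w in cnt}:
        majority = max(counts, key=lambda c: counts[c][word])
        w_in = counts[majority][word]                      # word in majority class
        w_out = sum(counts[c][word] for c in counts) - w_in  # word in other classes
        rest_in = totals[majority] - w_in                  # other words, majority class
        rest_out = grand_total - totals[majority] - w_out  # other words, other classes
        n = w_in + w_out + rest_in + rest_out
        denom = ((w_in + rest_in) * (w_out + rest_out) *
                 (w_in + w_out) * (rest_in + rest_out))
        if denom == 0:
            continue
        chi2 = n * (w_in * rest_out - rest_in * w_out) ** 2 / denom
        if chi2 > CRITICAL_CHI2:
            lexicons[majority].add(word)
    return lexicons

data = [("protein", ["alpha", "kappa", "receptor"] * 10),
        ("cell_line", ["jurkat", "cells", "line"] * 10)]
print(keyword_lexicons(data))
```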
the icsi meeting recorder dialog act corpus 2 data we describe a new corpus of over 180000 handannotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturallyoccurring meetings we provide a brief summary of the annotation system and labeling procedure interannotator reliability statistics overall distributional statistics a description of auxiliary files distributed with the corpus and information on how to obtain the data we describe a new corpus of over 180000 handannotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturallyoccurring meetingswe provide a brief summary of the annotation system and labeling procedure interannotator reliability statistics overall distributional statistics a description of auxiliary files distributed with the corpus and information on how to obtain the datanatural meetings offer rich opportunities for studying a variety of complex discourse phenomenameetings contain regions of high speaker overlap affective variation complicated interaction structures abandoned or interrupted utterances and other interesting turntaking and discourselevel phenomenain addition meetings that occur naturally involve real topics debates issues and social dynamics that should generalize more readily to other real meetings than might data collected using artificial scenariosthus meetings pose interesting challenges to descriptive and theoretical models of discourse as well as to researchers in the speech recognition community 479131415we describe a new corpus of handannotated dialog acts and adjacency pairs for roughly 72 hours of naturally occurring multiparty meetingsthe meetings were recorded at the international computer science institute as part of the icsi meeting recorder project 9word transcripts and audio files from that corpus are available through the linguistic data consortium in this paper we provide a first description of the meeting recorder dialog act corpus a companion set of annotations that augment the word transcriptions with discourselevel segmentations dialog act information and adjacency pair informationthe corpus is currently available online for research purposes 16 and we plan a future release through the ldcthe icsi meeting corpus data is described in detail in 9it consists of 75 meetings each roughly an hour in lengththere are 53 unique speakers in the corpus and an average of about 6 speakers per meetingreflecting the makeup of the institute there are more male than female speakers there are a28 native english speakers although many of the nonnative english speakers are quite fluentof the 75 meetings 29 are meetings of the icsi meeting recorder project itself 23 are meetings of a research group focused on robustness in automatic speech recognition 15 involve a group discussing natural language processing and neural theories of language and 8 are miscellaneous meeting typesthe last set includes 2 very interesting meetings involving the corpus transcribers as participants annotation involved three types of information marking of da segment boundaries marking of das themselves and marking of correspondences between das each type of annotation is described in detail in 7segmentation methods were developed based on separating out speech regions having different discourse functions but also paying attention to pauses and intonational groupingto distinguish utterances that are prosodically one unit but which contain multiple das we use a pipe bar in 
the annotationsthis allows the researcher to either split or not split at the bar depending on the research goalswe examined existing annotation systems including 125681011 for similarity to the style of interaction in the icsi meetingswe found that swbddamsl 11 a system adapted from damsl 6 provided a fairly good fitalthough our meetings were natural and thus had real agenda items the dialog was less like humanhuman or humanmachine taskoriented dialog added in mrdatags in italics are based on the swbddamsl version but have had meanings modified for mrdathe ordering of tags in the table is explained as follows in the mapping of damsl tags to swbddamsl tags in the swbddamsl manual tags were ordered in categories such as communication status information requests and so onin the mapping of mrda tags to swbddamsl tags here we have retained the same overall ordering of tags within the table but we do not explicitly mark the higherlevel swbddamsl categories in order to avoid confusion since categorical structure differs in the two systems and more like humanhuman casual conversation since we were working with english rather than spanish and did not view a large tag set as a problem we preferred 611 over 58 for this workwe modified the system in 11 a number of ways as indicated in figure 1 and as explained further in 7the mrda system requires one general tag per da and attaches a variable number of following specific tagsexcluding nonlabelable cases there are 11 general tags and 39 specific tagsthere are two disruption forms two types of indecipherable utterances and a nonda tag to denote rising tone an interface allowed annotators to play regions of speech modify transcripts and enter da and adjacency pair information as well as other commentsmeetings were divided into 10 minute chunks labeling time averaged about 3 hours per chunk although this varied considerably depending on the complexity of the dialogan example from one of the meetings is shown in figure 2 as an illustration of some of the types of interactions we observe in the corpusaudio files and additional sample excerpts are available from 16in addition to the obvious high degree of overlaproughly one third of all words are overlappednote the explicit struggle for the floor indicated by the two failed floor grabbers by speakers c5 and c6furthermore 6 of the 19 total utterances express some form of agreement or disagreement with previous utterancesalso of the 19 utterances within the excerpt 9 are incomplete due to interruption by another talker as is typical of many regions in the corpus showing high speaker overlapwe find in related work that regions of high overlap correlate with high speaker involvement or hot spots 15the example also provides a taste of the frequency and complexity of adjacency pair informationfor example within only half a minute speaker c5 has interacted with speakers c3 and c6 and speaker c6 has interacted with speakers c2 and c5we computed interlabeler reliability among the three labelers for both segmentation and da labeling using randomly selected excerpts from the 75 labeled meetingssince agreement on da segmentation does not appear to have standard associated metrics in the literature we developed our own approachthe philosophy is that any difference in words at the beginning andor end of a da could result in a different label for that da and the more words that are mismatched the more likely the difference in labelas a very strict measure of reliability we used the following approach take one labelers 
transcript as a reference look at each other labelers wordsfor each word look at the utterance it comes from and see if the reference has the exact same utterance if it does there is a matchmatch every word in the utterance and then mark the matched utterance in the reference so it cannot be matched again repeat this process for each word in each referencelabeler pair and rotate to the next labeler as the referencenote that this metric requires perfect matching of the full utterance a word is in for that word to be matchedfor example in the following case labelers agree on 3 segmentation locations but the agreement on our metric is only 014 since only 1 of 7 words is matched overall segmentation results on this metric are provided by labeler pair in table 1we examined agreement on da labels using the kappa statistic 3 which adjusts for chance agreementbecause of the large number of unique full label combinations we report kappa values in table 2 using various class mappings distributed with the corpusvalues are shown by labeler pairthe overall value of kappa for our basic sixway classmap is 080 representing good agreement for this type of taskwe provide basic statistics based on the dialog act labels for the 75 meetingsif we ignore the tag marking rising intonation since this is not a da tag we find 180218 total tagstable 3 shows the distribution of the tags in more detailif instead we look at only the 11 obligatory general tags for which there is one per da and if we split labels at the pipe bar the total is 113560 the distribution of general tags is shown in table 4we include other useful information with the corpuswordlevel time information is available based on alignments from an automatic speech recognizerannotator comments are also providedwe suggest various ways to group the large set of labels into a smaller set of classes depending on the research focusfinally the corpus contains information that may be useful in for developing automatic modeling of prosody such as handmarked annotation of rising intonationwe thank chuck wooters don baron chris oei and andreas stolcke for software assistance ashley krupski for contributions to the annotation scheme andrei popescubelis for analysis and comments on a release of the 50 meetings and barbara peskin and jane edwards for general advice and feedbackthis work was supported by an icsi subcontract to the university of washington on a darpa communicator project icsi nsf itr award iis0121396 sri nasa award ncc21256 sri nsf iri9619921 an sri darpa roar project an icsi award from the swiss national science foundation through the research network im2 and by the eu framework 6 project on augmented multiparty interaction the views are those of the authors and do not represent the views of the funding agencies
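The kappa statistic used above for dialog act label agreement adjusts observed agreement for the agreement expected by chance from each labeler's marginal label distribution. A minimal sketch, with invented label sequences:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: (observed - expected) / (1 - expected), where expected
    agreement is estimated from each labeler's marginal label frequencies."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

print(cohens_kappa(list("sssqqbbs"), list("ssqqqbss")))  # 0.6
```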
W04-2319
The ICSI Meeting Recorder Dialog Act Corpus. We describe a new corpus of over 180,000 hand-annotated dialog act tags and accompanying adjacency pair annotations for roughly 72 hours of speech from 75 naturally occurring meetings. We provide a brief summary of the annotation system and labeling procedure, interannotator reliability statistics, overall distributional statistics, a description of auxiliary files distributed with the corpus, and information on how to obtain the data.
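The strict word-level segmentation agreement metric described for this corpus can be sketched as follows. The utterances in the example are invented, but they reproduce the 1-of-7-words case mentioned in the text; the sketch also covers only one direction, whereas the paper rotates each labeler through the reference role.

```python
from collections import Counter

def word_level_agreement(reference_utts, other_utts):
    """Strict word-level agreement between two DA segmentations of the same
    words: a word counts as matched only if the whole utterance it belongs to
    appears (and has not already been matched) in the reference segmentation."""
    available = Counter(tuple(u) for u in reference_utts)
    matched = total = 0
    for utt in other_utts:
        total += len(utt)
        key = tuple(utt)
        if available[key] > 0:
            available[key] -= 1          # each reference utterance matches once
            matched += len(utt)
    return matched / total if total else 0.0

# The labelers share several boundaries, but only the one-word utterance matches exactly.
ref   = [["so"], ["what", "do", "you"], ["think", "about", "it"]]
other = [["so"], ["what", "do"], ["you", "think", "about", "it"]]
print(word_level_agreement(ref, other))  # 1 of 7 words matched -> ~0.14
```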
a linear programming formulation for global inference in natural language tasks given a collection of discrete random variables representing outcomes of learned local predictors in natural language eg named entities and relations we seek an optimal global assignment to the variables in the presence of general constraints examples of these constraints include the type of arguments a relation can take and the mutual activity of different relations etc we develop a linear programming formulation for this problem and evaluate it in the context of simultaneously learning named entities and relations our approach allows us to efficiently incorporate domain and task specific constraints at decision time resulting in significant improvements in the accuracy and the humanlike quality of the inferences natural language decisions often depend on the outcomes of several different but mutually dependent predictionsthese predictions must respect some constraints that could arise from the nature of the data or from domain or task specific conditionsfor example in partofspeech tagging a sentence must have at least one verb and cannot have three consecutive verbsthese facts can be used as constraintsin named entity recognition no entities can overlap is a common constraint used in various works efficient solutions to problems of these sort have been given when the constraints on the predictors are sequential these solutions can be categorized into the following two frameworkslearning global models trains a probabilistic model under the constraints imposed by the domainexamples include variations of hmms conditional models and sequential variations of markov random fields the other framework inference with classifiers views maintaining constraints and learning classifiers as separate processesvarious local classifiers are trained without the knowledge of constraintsthe predictions are taken as input on the inference procedure which then finds the best global predictionin addition to the conceptual simplicity of this approach it also seems to perform better experimentally typically efficient inference procedures in both frameworks rely on dynamic programming which works well in sequential datahowever in many important problems the structure is more general resulting in computationally intractable inferenceproblems of these sorts have been studied in computer vision where inference is generally performed over low level measurements rather than over higher level predictors this work develops a novel inference with classifiers approachrather than being restricted on sequential data we study a fairly general settingthe problem is defined in terms of a collection of discrete random variables representing binary relations and their arguments we seek an optimal assignment to the variables in the presence of the constraints on the binary relations between variables and the relation typesthe key insight to this solution comes from recent techniques developed for approximation algorithms following this work we model inference as an optimization problem and show how to cast it as a linear programusing existing numerical packages which are able to solve very large linear programming problems in a very short time1 inference can be done very quicklyour approach could be contrasted with other approaches to sequential inference or to general markov random field approaches the key difference is that in these approaches the model is learned globally under the constraints imposed by the domainin our approach predictors do not 
need to be learned in the context of the decision tasks but rather can be learned in other contexts or incorporated as background knowledgethis way our approach allows the incorporation of constraints into decisions in a dynamic fashion and can therefore support task specific inferencesthe significance of this is clearly shown in our experimental resultswe develop our models in the context of natural language inferences and evaluate it here on the problem of simultaneously recognizing named entities and relations between themthis is the problem of recognizing the kill relation in the sentence jv oswald was murdered at jfk after his assassin r you kfj this task requires making several local decisions such as identifying named entities in the sentence in order to support the relation identificationfor example it may be useful to identify that oswald and kfj are people and jfk is a locationthis in turn may help to identify that the kill action is described in the sentenceat the same time the relation kill constrains its arguments to be people and helps to enforce that oswald and kfj are likely to be people while jfk is notin our model we first learn a collection of local predictors eg entity and relation identifiersat decision time given a sentence we produce a global decision that optimizes over the suggestions of the classifiers that are active in the sentence known constraints among them and potentially domain or tasks specific constraints relevant to the current decisionalthough a bruteforce algorithm may seem feasible for short sentences as the number of entity variable grows the computation becomes intractable very quicklygiven n entities in a sentence there are o possible relations between themassume that each variable can take l labels thus there are ln2 possible assignments which is too large even for a small n when evaluated on simultaneous learning of named entities and relations our approach not only provides a significant improvement in the predictors accuracy more importantly it provides coherent solutionswhile many statistical methods make stupid mistakes that no human ever makes as we show our approach improves also the quality of the inference significantlythe rest of the paper is organized as followssection 2 formally defines our problem and section 3 describes the computational approach we proposeexperimental results are given in section 4 followed by some discussion and conclusion in section 5we consider the relational inference problem within the reasoning with classifiers paradigm and study a specific but fairly general instantiation of this problem motivated by the problem of recognizing named entities and relations between them we consider a set v which consists of two types of variables v e you r the first set of variables e e1 e2 en ranges gthe value assigned to ei e is denoted fe gthe second set of variables are rij1ijni0j is viewed as binary relations over e specifically for each pair of entities ei and ej i j we use rij and rji to denote the relations and respectivelythe set of labels of relations is gr and the label assigned to relation rij r is fr grapparently there exists some constraints on the labels of corresponding relation and entity variablesfor instance if the relation is live in then the first entity should be a person and the second entity should be a locationthe correspondence between the relation and entity variables can be represented by a bipartite grapheach relation variable rij is connected to its first entity ei and second entity ejwe use n1 and 
n2 to denote the entity variables of a relation rijspecifically ei n1 and ej n2in addition we define a set of constraints on the outcomes of the variables in v c1 g x gr 0 1 constraint values of the first argument of a relationc2 is defined similarly and constrains the second argument a relation can takefor example is in c1 but not in c2 because the first entity of relation born in has to be a person and the second entity can only be a location instead of a personnote that while we define the constraints here as boolean our formalisms in fact allows for stochastic constraintsalso note that we can define a large number of constraints such as cr gr x gr 0 1 which constrain types of relations etcin fact as will be clear in sec3 the language for defining constraints is very rich linear equalities over v we exemplify the framework using the problem of simultaneous recognition of named entities and relations in sentencesbriefly speaking we assume a learning mechanism that can recognize entity phrases in sentences based on local contextual featuressimilarly we assume a learning mechanism that can recognize the semantic relation between two given phrases in a sentencewe seek an inference algorithm that can produce a coherent labeling of entities and relations in a given sentencefurthermore it follows as best as possible the recommendation of the entity and relation classifiers but also satisfies natural constraints that exist on whether specific entities can be the argument of specific relations whether two relations can occur together at the same time or any other information that might be available at the inference time we note that a large number of problems can be modeled this wayexamples include problems such as chunking sentences coreference resolution and sequencing problems in computational biologyin fact each of the components of our problem here the separate task of recognizing named entities in sentences and the task of recognizing semantic relations between phrases can be modeled this wayhowever our goal is specifically to consider interacting problems at different levels resulting in more complex constraints among them and exhibit the power of our methodthe most direct way to formalize our inference problem is via the formalism of markov random field theory rather than doing that for computational reasons we first use a fairly standard transformation of mrf to a discrete optimization problem for detailsspecifically under weak assumptions we can view the inference problem as the following optimization problem which aims to minimize the objective function that is the sum of the following two cost functionsassignment cost the cost of deviating from the assignment of the variables v given by the classifiersthe specific cost function we use is defined as follows let l be the label assigned to variable you v if the marginal probability estimation is p p then the assignment cost cu is log p constraint cost the cost imposed by breaking constraints between neighboring nodesthe specific cost function we use is defined as follows consider two entity nodes ei ej and its corresponding relation node rij that is ei n 1 and ej n2the constraint cost indicates whether the labels are consistent with the constraintsin particular we use d1 is 0 if c1 otherwise d1 is 2similarly we use d2 to force the consistency of the second argument of a relationsince we are seeking the most probable global assignment that satisfies the constraints therefore the overall cost function we optimize for a global labeling f 
of all variables isunfortunately it is not hard to see that the combinatorial problem is computationally intractable even when placing assumptions on the cost function the computational approach we adopt is to develop a linear programming formulation of the problem and then solve the corresponding integer linear programming problemour lp formulation is based on the method proposed by since the objective function is not a linear function in terms of the labels we introduce new binary variables to represent different possible assignments to each original variable we then represent the objective function as a linear function of these binary variableslet xyoui1 be a 01variable defined to be 1 if and only if variable you is labeled i where you e i ge or you r i grfor example xe121 1 when the label of entity e1 is 2 xr2331 0 when the label of relation r23 is not 3let xrijreie11 be a 01variable indicating whether relation rij is assigned label r and its first argument ei is assigned label e1for instance xr121e121 1 means the label of relation r12 is 1 and the label of its first argument e1 is 2similarly xrijreje21 1 indicates that rij is assigned label r and its second argument ej is assigned label e2with these definitions the optimization problem can be represented as the following ilp problem equations and require that each entity or relation variable can only be assigned one labelequations and assure that the assignment to each entity or relation variable is consistent with the assignment to its neighboring variables and are the integral constraints on these binary variablesthere are several advantages of representing the problem in an lp formulationfirst of all linear equalities are fairly general and are able to represent many types of constraints more importantly an ilp problem at this scale can be solved very quickly using current commercial lpilp packages like or we introduce the general strategies of solving an ilp problem hereto solve an ilp problem a natural idea is to relax the integral constraintsthat is replacing and with if lpr returns an integer solution then it is also the optimal solution to the ilp problemif the solution is non integer then at least it gives a lower bound to the value of the cost function which can be used in modifying the problem and getting closer to deriving an optimal integer solutiona direct way to handle the non integer solution is called rounding which finds an integer point that is close to the non integer solutionunder some conditions of cost functions which do not hold here a well designed rounding algorithm can be shown that the rounded solution is a good approximation to the optimal solution nevertheless in general the outcomes of the rounding procedure may not even be a legal solution to the problembranch and bound is the method that divides an ilp problem into several lp subproblemsit uses lpr as a subroutine to generate dual bounds to reduce the search space and finds the optimal solution as wellwhen lpr finds a non integer solution it splits the problem on the non integer variablefor example suppose variable xi is fractional in an non integer solution to the ilp problem mincx x s x 01n where s is the linear constraintsthe ilp problem can be split into two sub lpr problems mincx x sxi 0 and mincx x sxi 1since any feasible solution provides an upper bound and any lpr solution generates a lower bound the search tree can be effectively cutanother strategy of dealing with non integer points which is often combined with branch bound is called cutting 
planewhen a non integer solution is given by lpr it adds a new linear constraint that makes the non integer point infeasible while still keeps the optimal integer solution in the feasible regionas a result the feasible region is closer to the ideal polyhedron which is the convex hull of feasible integer solutionsthe most famous cutting plane algorithm is gomorys fractional cutting plane method which can be shown that only finite number of additional constraints are neededmoreover researchers develop different cutting plane algorithms for different types of ilp problemsone example is which only focuses on binary ilp problemsalthough in theory a search based strategy may need several steps to find the optimal solution lpr always generates integer solutions in our experimentsthis phenomenon may link to the theory of unimodularitywhen the coefficient matrix of a given linear program in its standard form is unimodular it can be shown that the optimal solution to the linear program is in fact integral in other words lpr is guaranteed to produce an integer solutiontheorem 31 indicates that if a linear programming problem is in its standard form then regardless of the cost function and the integral vector b the optimal solution is an integer if and only if the coefficient matrix a is unimodularalthough the coefficient matrix in our problem is not unimodular lpr still produces integer solutions for all the we have experimented withthis may be due to the fact that the coefficient matrix shares many properties of a unimodular matrixas a result most of the vertices of the polyhedron are integer pointsanother possible reason is that given the cost function we have the optimal solution is always integerbecause of the availability of very efficient lpilp packages we defer the exploration of this direction for nowwe describe below two experiments on the problem of simultaneously recognizing entities and relationsin the first we view the task as a knowledge acquisition task we let the system read sentences and identify entities and relations among themgiven that this is a difficult task which may require quite often information beyond the sentence we consider also a forced decision task in which we simulate a question answering situation we ask the system say who killed whom and evaluate it on identifying correctly the relation and its arguments given that it is known that somewhere in this sentence this relation is activein addition this evaluation exhibits the ability of our approach to incorporate task specific constraints at decision timeour experiments are based on the trec data set that we annotated for named entities and relationsin order to effectively observe the interaction between relations and entities we picked 1437 sentences that have at least one active relationamong those sentences there are 5336 entities and 19048 pairs of entities entity labels include 1685 persons 1968 locations 978 organizations and 705 othersrelation labels include 406 located in 394 work for 451 orgbased in 521 live in 268 kill and 17007 nonenote that most pairs of entities have no active relations at alltherefore relation none significantly outnumbers othersexamples of each relation label and the constraints between a relation variable and its two entity arguments are shown as followsrelation entity1 entity2 example located in loc loc work for per org orgbased in org loc live in per loc kill per per in order to focus on the evaluation of our inference procedure we assume the problem of segmentation is solved and the 
entity boundaries are given to us as input thus we only concentrate on their classificationswe evaluate our lp based global inference procedure against two simpler approaches and a third that is given more information at learning timebasic only tests our entity and relation classifiers which are trained independently using only local featuresin particular the relation classifier does not know the labels of its entity arguments and the entity classifier does not know the labels of relations in the sentence eithersince basic classifiers are used in all approaches we describe how they are trained herefor the entity classifier one set of features are extracted from words within a size 4 window around the target phrasethey are words partofspeech tags and conjunctions of them bigrams and trigrams of the mixture of words and tagsin addition some other features are extracted from the target phrase including symbol explanation icap the first character of a word is capitalized acap all characters of a word are capitalized incap some characters of a word are capitalized suffix the suffix of a word is ing ment etc bigram bigram of words in the target phrase len number of words in the target phrase place3 the phrase ishas a known places name prof3 the phrase ishas a professional title name3 the phrase ishas a known persons name for the relation classifier there are three sets of features features similar to those used in the entity classification are extracted from the two argument entities of the relation conjunctions of the features from the two arguments some patterns extracted from the sentence or between the two argumentssome features in category are the number of words between arg1 and arg2 whether arg1 and arg2 are the same word or arg1 is the beginning of the sentence and has words that consist of all capitalized characters where ar1 and ar2 represent the first and second argument entities respectivelyin addition table 1 presents some patterns we usethe learning algorithm used is a variation of the winnow update rule incorporated in snow a multiclass classifier that is specifically tailored for large scale learning taskssnow learns a sparse network of linear functions in which the targets are represented as linear functions over a common feature spacewhile snow can be used as a classifier and predicts using a winnertakeall mechanism over the activation value of the target classes we can also rely directly on the raw activation value it outputs which is the weighted linear sum of the active features to estimate the posteriorsit can be verified that the resulting values are monotonic with the confidence in the prediction therefore provide a good source of probability estimationwe use softmax over the raw activation values as conditional probabilitiesspecifically suppose the number of classes is n and the raw activation values of class i is actithe posterior estimation for class i is derived by the following equationpipeline mimics the typical strategy in solving complex natural language problems separating a task into several stages and solving them sequentiallyfor example a named entity recognizer may be trained using a different corpus in advance and given to a relation classifier as a tool to extract featuresthis approach first trains an entity classifier as described in the basic approach and then uses the prediction of entities in addition to other local features to learn the relation identifiernote that although the true labels of entities are known here when training the relation 
identifier this may not be the case in general nlp problemssince only the predicted entity labels are available in testing learning on the predictions of the entity classifier presumably makes the relation classifier more tolerant to the mistakes of the entity classifierin fact we also observe this phenomenon empiricallywhen the relation classifier is trained using the true entity labels the performance is much worse than using the predicted entity labelslp is our global inference procedureit takes as input the constraints between a relation and its entity arguments and the output of the basic classifiersnote that lp may change the predictions for either entity labels or relation labels while pipeline fully trusts the labels of entity classifier and only the relation predictions may be different from the basic relation classifierin other words lp is able to enhance the performance of entity classification which is impossible for pipelinethe final approach omniscience tests the conceptual upper bound of this entityrelation classification problemit also trains the two classifiers separately as the basic approachhowever it assumes that the entity classifier knows the correct relation labels and similarly the relation classifier knows the right entity labels as wellthis additional information is then used as features in training and testingnote that this assumption is totally unrealisticnevertheless it may give us a hint that how much a global inference can achievetables 2 3 show the performance of each approach in fβ1 using 5fold crossvalidationthe results show that lp performs consistently better than basic and pipeline both in entities and relationsnote that lp does not apply learning at all but still outperforms pipeline which uses entity predictions as new features in learningthe results of the omniscient classifiers reveal that there is still room for improvementone option is to apply learning to tune a better cost function in the lp approachone of the more significant results in our experiments we believe is the improvement in the quality of the decisionsas mentioned in sec1 incorporating constraints helps to avoid inconsistency in classificationit is inpi teresting to investigate how often such mistakes happen without global inference and see how effectively the global inference enhances thisfor this purpose we define the quality of the decision as followsfor an active relation of which the label is classified correctly if both its argument entities are also predicted correctly we count it as a coherent predictionquality is then the number of coherent predictions divided by the sum of coherent and incoherent predictionssince the basic and pipeline approaches do not have a global view of the labels of entities and relations 5 to 25 of the predictions are incoherenttherefore the quality is not always goodon the other hand our global inference procedure lp takes the natural constraints into account so it never generates incoherent predictionsif the relation classifier has the correct entity labels as features a good learner should learn the constraints as wellas a result the quality of omniscient is almost as good as lpanother experiment we did is the forced decision test which boosts the f1 of kill relation to 862here we consider only sentences in which the kill relation is activewe force the system to determine which of the possible relations in a sentence has this relation by adding a new linear equalitythis is a realistic situation in that it adds an external constraint not present at 
the time of learning the classifiers and it evaluates the ability of our inference algorithm to cope with itthe results exhibit that our expectations are correctin fact we believe that in natural situations the number of constraints that can apply is even largerobserving the algorithm performs on other specific forced decision tasks verifies that lp is reliable in these situationsas shown in the experiment it even performs better than omniscience which is given more information at learning time but cannot adapt to the situation at decision timewe presented an linear programming based approach for global inference where decisions depend on the outcomes of several different but mutually dependent classifierseven in the presence of a fairly general constraint structure deviating from the sequential nature typically studied this approach can find the optimal solution efficientlycontrary to general search schemes which do not guarantee optimality the linear programming approach provides an efficient way to finding the optimal solutionthe key advantage of the linear programming formulation is its generality and flexibility in particular it supports the ability to incorporate classifiers learned in other contexts hints supplied and decision time constraints and reason with all these for the best global predictionin sharp contrast with the typically used pipeline framework our formulation does not blindly trust the results of some classifiers and therefore is able to overcome mistakes made by classifiers with the help of constraintsour experiments have demonstrated these advantages by considering the interaction between entity and relation classifiersin fact more classifiers can be added and used within the same frameworkfor example if coreference resolution is available it is possible to incorporate it in the form of constraints that force the labels of the coreferred entities to be the same consequently this may enhance the performance of entityrelation recognition and at the same time correct possible coreference resolution errorsanother example is to use chunking information for better relation identification suppose for example that we have available chunking information that identifies subjverb and verbobject phrasesgiven a sentence that has the verb murder we may conclude that the subject and object of this verb are in a kill relationsince the chunking information is used in the global inference procedure this information will contribute to enhancing its performance and robustness relying on having more constraints and overcoming possible mistakes by some of the classifiersmoreover in an interactive environment where a user can supply new constraints this framework is able to make use of the new information and enhance the performance at decision time without retraining the classifiersas we show our formulation supports not only improved accuracy but also improves the humanlike quality of the decisionswe believe that it has the potential to be a powerful way for supporting natural language inferencesacknowledgements this research has been supported by nfs grants career iis9984168 itr iis0085836 eia0224453 an onr muri award and an equipment donation from amdwe also thank the anonymous referees for their useful comments
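The integer linear program described above can be illustrated on a toy instance with two entities and one relation. This sketch assumes the open-source PuLP modeling library and its bundled CBC solver rather than the commercial packages the paper refers to, and it encodes the argument-type constraints directly as hard linear inequalities instead of reproducing the paper's full set of pairwise indicator variables; the classifier posteriors are invented for illustration.

```python
from math import log
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

ent_labels = ["person", "location"]
rel_labels = ["kill", "none"]

# Hypothetical local-classifier posteriors P(label | instance).
p = {
    "E1":  {"person": 0.6, "location": 0.4},
    "E2":  {"person": 0.4, "location": 0.6},
    "R12": {"kill": 0.7, "none": 0.3},
}

prob = LpProblem("entity_relation_inference", LpMinimize)
# x[u][l] = 1 iff variable u is assigned label l.
x = {u: {l: LpVariable(f"x_{u}_{l}", cat=LpBinary) for l in labels}
     for u, labels in [("E1", ent_labels), ("E2", ent_labels), ("R12", rel_labels)]}

# Objective: sum of assignment costs -log P(l | u) over the chosen labels.
prob += lpSum(-log(p[u][l]) * x[u][l] for u in x for l in x[u])

# Each variable takes exactly one label.
for u in x:
    prob += lpSum(x[u][l] for l in x[u]) == 1

# Consistency: choosing "kill" for R12 forces both arguments to be "person".
prob += x["R12"]["kill"] <= x["E1"]["person"]
prob += x["R12"]["kill"] <= x["E2"]["person"]

prob.solve()
print({u: max(x[u], key=lambda l: value(x[u][l])) for u in x})
```

On this toy instance the solver labels E2 as person even though its local classifier slightly prefers location, because the cheaper overall assignment activates the kill relation; this is the kind of global correction of local decisions that the formulation is designed to make.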
W04-2401
A linear programming formulation for global inference in natural language tasks. Given a collection of discrete random variables representing outcomes of learned local predictors in natural language, e.g. named entities and relations, we seek an optimal global assignment to the variables in the presence of general constraints. Examples of these constraints include the type of arguments a relation can take, the mutual activity of different relations, etc. We develop a linear programming formulation for this problem and evaluate it in the context of simultaneously learning named entities and relations. Our approach allows us to efficiently incorporate domain and task specific constraints at decision time, resulting in significant improvements in the accuracy and the human-like quality of the inferences. We use ILP to deal with the joint inference problem of named entity and relation identification. We applied an ILP model to the task of the simultaneous assignment of semantic roles to the entities mentioned in a sentence and recognition of the relations holding between them. We described a classification-based framework in which named entities and relations are jointly learned and identified.
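The equation for the posterior estimation from SNoW's raw activation values, mentioned earlier for this system, did not survive extraction; the standard softmax it refers to is a one-line computation (the plain exponential normalization shown here is an assumption about the exact form used).

```python
import math

def softmax_posteriors(activations):
    """Map raw per-class activation values to posterior-like probabilities."""
    exps = [math.exp(a) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax_posteriors([2.0, 1.0, 0.5]))  # ~[0.63, 0.23, 0.14]
```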
word sense discrimination by clustering contexts in vector and similarity spaces this paper systematically compares unsupervised word sense discrimination techniques that cluster instances of a target word that occur in raw text using both vector and similarity spaces the context of each instance is represented as a vector in a high dimensional feature space discrimination is achieved by clustering these context vectors directly in vector space and also by finding pairwise similarities among the vectors and then clustering in similarity space we employ two different representations of the context in which a target word occurs first order context vectors represent the context of each instance of a target word as a vector of features that occur in that context second order context vectors are an indirect representation of the context based on the average of vectors that represent the words that occur in the context we evaluate the discriminated clusters by carrying out experiments ussensetagged instances of 24 words and the well known corpora most words in natural language have multiple possible meanings that can only be determined by considering the context in which they occurgiven a target word used in a number of different contexts word sense discrimination is the process of grouping these instances of the target word together by determining which contexts are the most similar to each otherthis is motivated by who hypothesize that words with similar meanings are often used in similar contextshence word sense discrimination reduces to the problem of finding classes of similar contexts such that each class represents a single word senseput another way contexts that are grouped together in the same class represent a particular word sensewhile there has been some previous work in sense discrimination by comparison it is much less than that devoted to word sense disambiguation which is the process of assigning a meaning to a word from a predefined set of possibilitieshowever solutions to disambiguation usually require the availability of an external knowledge source or manually created sense tagged training dataas such these are knowledge intensive methods that are difficult to adapt to new domainsby contrast word sense discrimination is an unsupervised clustering problemthis is an attractive methodology because it is a knowledge lean approach based on evidence found in simple raw textmanually sense tagged text is not required nor are specific knowledge rich resources like dictionaries or ontologiesinstances are clustered based on their mutual contextual similarities which can be completely computed from the text itselfthis paper presents a systematic comparison of discrimination techniques suggested by pedersen and bruce and by schutze this paper also proposes and evaluates several extensions to these techniqueswe begin with a summary of previous work and then a discussion of features and two types of context vectorswe summarize techniques for clustering in vector versus similarity spaces and then present our experimental methodology including a discussion of the data used in our experimentsthen we describe our approach to the evaluation of unsupervised word sense discriminationfinally we present an analysis of our experimental results and conclude with directions for future work and propose a similarity based discrimination approach that computes similarity among each pair of instances of the target wordthis information is recorded in a similarity matrix whose rowscolumns represent the 
instances of the target word that are to be discriminatedthe cell entries of the matrix show the degree to which the pair of instances represented by the corresponding row and column are similarthe similarity is computed from the first order context vectors of the instances which show each instance as a vector of features that directly occur near the target word in that instance introduces second order context vectors that represent an instance by averaging the feature vectors of the content words that occur in the context of the target word in that instancethese second order context vectors then become the input to the clustering algorithm which clusters the given contexts in vector space instead of building the similarity matrix structurethere are some significant differences in the approaches suggested by pedersen and bruce and by schutzeas yet there has not been any systematic study to determine which set of techniques results in better sense discriminationin the sections that follow we highlight some of the differences between these approachespedersen and bruce represent the context of each test instance as a vector of features that directly occur near the target word in that instancewe refer to this representation as the first order context vectorschutze by contrast uses the second order context representation that averages the first order context vectors of individual features that occur near the target word in the instancethus schutze represents each feature as a vector of words that occur in its context and then computes the context of the target word by adding the feature vectors of significant content words that occur near the target word in that contextpedersen and bruce use a small number of local features that include cooccurrence and part of speech information near the target wordthey select features from the same test data that is being discriminated which is a common practice in clustering in generalschutze represents contexts in a high dimensional feature space that is created using a separate large corpus he selects features based on their frequency counts or loglikelihood ratios in this corpusin this paper we adopt schutzes approach and select features from a separate corpus of training data in part because the number of test instances may be relatively small and may not be suitable for selecting a good feature setin addition this makes it possible to explore variations in the training data while maintaining a consistent test setsince the training data used in unsupervised clustering does not need to be sense tagged in future work we plan to develop methods of collecting very large amounts of raw corpora from the web and other online sources and use it to extract featuresschutze represents each feature as a vector of words that cooccur with that feature in the training datathese feature vectors are in fact the first order context vectors of the feature words the words that cooccur with the feature words form the dimensions of the feature spaceschutze reduces the dimensionality of this feature space using singular value decomposition which is also employed by related techniques such as latent semantic indexing and latent semantic analysis svd has the effect of converting a word level feature space into a concept level semantic space that smoothes the fine distinctions between features that represent similar conceptspedersen and bruce represent instances in a similarity space where each instance can be seen as a point and the distance between any two points is a function 
of their mutual similaritiesthe similarity matrix showing the pairwise similarities among the instances is given as the input to the agglomerative clustering algorithmthe context group discrimination method used by schutze on the other hand operates on the vector representations of instances and thus works in vector spacealso he employs a hybrid clustering approach which uses both an agglomerative and the estimation maximization algorithmfirst order context vectors directly indicate which features make up a contextin all of our experiments the context of the target word is limited to 20 surrounding content words on either sidethis is true both when we are selecting features from a set of training data or when we are converting test instances into vectors for clusteringthe particular features we are interested in are bigrams and cooccurrencescooccurrences are words that occur within five positions of the target word bigrams are ordered pairs of words that cooccur within five positions of each otherthus cooccurrences are unordered word pairs that include the target word whereas bigrams are ordered pairs that may or may not include the targetboth the cooccurrences and the bigrams must occur in at least two instances in the training data and the two words must have a log likelihood ratio in excess of 3841 which has the effect of removing cooccurrences and bigrams that have more than 95 chance of being independent of the target wordafter selecting a set of cooccurrences or bigrams from a corpus of training data a first order context representation is created for each test instancethis shows how many times each feature occurs in the context of the target word in that instancea test instance can be represented by a second order context vector by finding the average of the first order context vectors that are associated with the words that occur near the target wordthus the second order context representation relies on the first order context vectors of feature wordsthe second order experiments in this paper use two different types of features cooccurrences and bigrams defined as they are in the first order experimentseach cooccurrence identified in training data is assigned a unique index and occupies the corresponding rowcolumn in a word cooccurrence matrixthis is constructed from the cooccurrence pairs and is a symmetric adjacency matrix whose cell values show the loglikelihood ratio for the pair of words representing the corresponding row and columneach row of the co occurrence matrix can be seen as a first order context vector of the word represented by that rowthe set of words forming the rowscolumns of the cooccurrence matrix are treated as the feature wordsbigram features lead to a bigram matrix such that for each selected bigram wordiwordj wordi represents a single row say the ith row and wordj represents a single column say the jth column of the bigram matrixthen the value of cell indicates the loglikelihood ratio of the words in the bigram wordiwordjeach row of the bigram matrix can be seen as a bigram vector that shows the scores of all bigrams in which the word represented by that row occurs as the first wordthus the words representing the rows of the bigram matrix make the feature set while the words representing the columns form the dimensions of the feature spacethe objective of clustering is to take a set of instances represented as either a similarity matrix or context vectors and cluster together instances that are more like each other than they are to the instances that belong 
to other clustersclustering algorithms are classified into three main categories hierarchical partitional and hybrid methods that incorporate ideas from boththe algorithm acts as a search strategy that dictates how to proceed through the instancesthe actual choice of which clusters to split or merge is decided by a criteria functionthis section describes the clustering algorithms and criteria functions that have been employed in our experimentshierarchical algorithms are either agglomerative or divisivethey both proceed iteratively and merge or divide clusters at each stepagglomerative algorithms start with each instance in a separate cluster and merge a pair of clusters at each iteration until there is only a single cluster remainingdivisive methods start with all instances in the same cluster and split one cluster into two during each iteration until all instances are in their own clusterthe most widely known criteria functions used with hierarchical agglomerative algorithms are single link complete link and average link also known as upgma points out that single link clustering tends to place all instances into a single elongated cluster whereas and show that hierarchical agglomerative clustering using average link fares wellthus we have chosen to use average linkupgma as our criteria function for the agglomerative experimentsin similarity space each instance can be viewed as a node in a weighted graphthe weights on edges joining two nodes indicate their pairwise similarity as measured by the cosine between the context vectors that represent the pair of instanceswhen agglomerative clustering starts each node is in its own cluster and is considered to be the centroid of that clusterat each iteration average link selects the pair of clusters whose centroids are most similar and merges them into a single clusterfor example suppose the clusters i and j are to be merged into a single cluster ijthe weights on all other edges that connect existing nodes to the new node ij must now be revisedsuppose that q is such a nodethe new weight in the graph is computed by averaging the weight on the edge between nodes i and q and that on the edge between j and qin other words in vector space average link starts by assigning each vector to a single clusterthe centroid of each cluster is found by calculating the average of all the context vectors that make up the clusterat each iteration average link selects the pair of clusters whose centroids are closest with respect to their cosinesthe selected pair of clusters is merged and a centroid is computed for this newly created clusterpartitional algorithms divide an entire set of instances into a predetermined number of clusters without going through a series of pairwise comparisonsas such these methods are somewhat faster than hierarchical algorithmsfor example the well known kmeans algorithm is partitionalin vector space each instance is represented by a context vectorkmeans initially selects k random vectors to serve as centroids of these initial k clustersit then assigns every other vector to one of the k clusters whose centroid is closest to that vectorafter all vectors are assigned it recomputes the cluster centroids by averaging all of the vectors assigned to that clusterthis repeats until convergence that is until no vector changes its cluster across iterations and the centroids stabilizein similarity space each instance can be viewed as a node of a fully connected weighted graph whose edges indicate the similarity between the instances they 
connectkmeans will first select k random nodes that represent the centroids of the initial k clustersit will then assign every other node i to one of the k clusters such that the edge joining i and the centroid of that cluster has maximum weight among the edges joining i to all centroidsit is generally believed that the quality of clustering by partitional algorithms is inferior to that of the agglomerative methodshowever a recent study has suggested that these conclusions are based on experiments conducted with smaller data sets and that with larger data sets partitional algorithms are not only faster but lead to better resultsin particular zhao and karypis recommend a hybrid approach known as repeated bisectionsthis overcomes the main weakness with partitional approaches which is the instability in clustering solutions due to the choice of the initial random centroidsrepeated bisections starts with all instances in a single clusterat each iteration it selects one cluster whose bisection optimizes the chosen criteria functionthe cluster is bisected using standard kmeans method with k2 while the criteria function maximizes the similarity between each instance and the centroid of the cluster to which it is assignedas such this is a hybrid method that combines a hierarchical divisive approach with partitioningwe use 24 of the 73 words in the senseval2 sense tagged corpus and the line hard and serve sense tagged corporaeach of these corpora are made up of instances that consist of 2 or 3 sentences that include a single target word that has a manually assigned sense taghowever we ignore the sense tags at all times except during evaluationat no point do the sense tags enter into the clustering or feature selection processesto be clear we do not believe that unsupervised word sense discrimination needs to be carried out relative to a preexisting set of sensesin fact one of the great advantages of unsupervised technique is that it does not need a manually annotated texthowever here we employ sensetagged text in order to evaluate the clusters that we discoverthe senseval2 data is already divided into training and test sets and those splits were retained for these experimentsthe senseval2 data is relatively small in that each word has approximately 50200 training and test instancesthe data is particularly challenging for unsupervised algorithms due to the large number of fine grained senses generally 8 to 12 per wordthe small volume of data combined with large number of possible senses leads to very small set of examples for most of the sensesas a result prior to clustering we filter the training and test data independently such that any instance that uses a sense that occurs in less than 10 of the available instances for a given word is removedwe then eliminate any words that have less than 90 training instances after filteringthis process leaves us with a set of 24 senseval2 words which includes the 14 nouns 6 adjectives and 4 verbs that are shown in table 1in creating our evaluation standard we assume that each instance will be assigned to at most a single clustertherefore if an instance has multiple correct senses associated with it we treat the most frequent of these as the desired tag and ignore the others as possible correct answers in the test datathe line hard and serve corpora do not have a standard trainingtest split so these were randomly divided into 6040 trainingtest splitsdue to the large number of training and test instances for these words we filtered out instances associated with 
any sense that occurred in less than 5 of the training or test instanceswe also randomly selected five pairs of words from the senseval2 data and mixed their instances together after mixing the data was filtered such that any sense that made up less than 10 in the training or test data of the new mixed sample was removed this is why the total number of instances for the mixed pairs is not the same as the sum of those for the individual wordsthese mixwords were created in order to provide data that included both fine grained and coarse grained distinctionstable 1 shows all words that were used in our experiments along with their parts of speechthereafter we show the number of training and test instances that remain after filtering and the number of senses found in the test data we also show the percentage of the majority sense in the test data this is particularly useful since this is the accuracy that would be attained by a baseline clustering algorithm that puts all test instances into a single clusterwhen we cluster test instances we specify an upper limit on the number of clusters that can be discoveredin these experiments that value is 7this reflects the fact that we do not know apriori the number of possible senses a word will havethis also allows us to verify the hypothesis that a good clustering approach will automatically discover approximately same number of clusters as senses for that word and the extra clusters will contain very few instancesas can be seen from column s in table 1 most of the words have 2 to 4 senses on an averageof the 7 clusters created by an algorithm we detect the significant clusters by ignoring clusters that contain less than 2 of the total instancesthe instances in the discarded clusters are counted as unclustered instances and are subtracted from the total number of instancesour basic strategy for evaluation is to assign available sense tags to the discovered clusters such that the assignment leads to a maximally accurate mapping of senses to clustersthe problem of assigning senses to clusters becomes one of reordering the columns of a confusion matrix that shows how senses and clusters align such that the diagonal sum is maximizedthis corresponds to several well known problems among them the assignment problem in operations research or determining the maximal matching of a bipartite graph in graph theoryduring evaluation we assign one sense to at most one cluster and vice versawhen the number of discovered clusters is the same as the number of senses then there is a one to one mapping between themwhen the number of clusters is greater than the number of actual senses then some clusters will be left unassignedand when the number of senses is greater than the number of clusters some senses will not be assigned to any clusterthe reason for not assigning a single sense to multiple clusters or multiple senses to one cluster is that we are assuming one sense per instance and one sense per clusterwe measure the precision and recall based on this maximally accurate assignment of sense tags to clustersprecision is defined as the number of instances that are clustered correctly divided by the number of instances clustered while recall is the number of instances clustered correctly over the total number of instancesfrom that we compute the fmeasure which is two times the precision and recall divided by the sum of precision and recallwe present the discrimination results for six configurations of features context representations and clustering algorithmsthese were 
run on each of the 27 target words and also on the five mixed wordswhat follows is a concise description of each configurationall of the sc experiments use second order context vectors and hence follow the approach suggested by schutzeexperiment pb2 clusters the pedersen and bruce style context vectors using the schutze like clustering scheme while sc2 tries to see the effect of using the pedersen and bruce style clustering method on schutze style context vectorsthe motivation behind experiments pb3 and sc3 is to try bigram features in both pb and sc style context vectorsthe fmeasure associated with the discrimination of each word is shown in table 1any score that is significantly greater than the majority sense is shown in bold facewe employ three different types of data in our experimentsthe senseval2 words have a relatively small number of training and test instances however the line hard and serve data is much larger where each contains around 4200 training and test instances combinedmixed word are unique because they combined the instances of multiple target words and thereby have a larger number of senses to discriminateeach type of data brings with it unique characteristics and sheds light on different aspects of our experimentstable 2 compares pb1 against pb3 and sc1 against sc3 when these methods are used to discriminate the 24 senseval2 wordsour objective is to study the effect of using bigram features against cooccurrences in first and second order context vectors while using relatively small amounts of training data per wordnote that pb1 and sc1 use cooccurrence features while pb3 and sc3 rely on bigram featuresthis table shows the number of nouns adjectives and verbs where bigrams were more effective than cooccurrences less effective worse and equal when using repeated bisections clustering versus the upgma technique on first and second order vectorsin short table 3 compares pb1 against pb2 and sc1 against sc2from this we observe that with both first order and second order context vectors repeated bisections is more effective than upgmathis suggests that it is better suited to deal with very small amounts of sparse datatable 4 summarizes the overall performance of each of these experiments compared with the majority classthis table shows the number of words for which an experiment performed better than the the majority class broken down by part of speechnote that sc3 and sc1 are most often better than the majority class followed closely by pb2 and sc2this suggests that the second order context vectors have an advantage over the first order vectors for small training data as is found among the 24 senseval2 wordswe believe that second order methods work better on smaller amounts of data in that the feature spaces are quite small and are not able to support the degree of exact matching of features between instances that first order vectors requiresecond order context vectors succeed in such cases because they find indirect second order co occurrences of feature words and hence describe the context more extensively than the first order representationswith smaller quantities of data there is less possibility of finding instances that use exactly the same set of wordssemantically related instances use words that are conceptually the same but perhaps not lexicallysecond order context vectors are designed to identify such relationships in that exact matching is not required but rather words that occur in similar contexts will have similar vectorsthe comparatively good performance of 
pb1 and pb3 in the case of the line hard and serve data suggests that first order context vectors when clustered with upgma perform relatively well on larger samples of datamoreover among the sc experiments on this data the performance of sc2 is relatively highthis further suggests that upgma performs much better than repeated bisections with larger amounts of training datathese observations correspond with the hypothesis drawn from the senseval2 resultsthat is a large amount of training data will lead to a larger feature space and hence there is a greater chance of matching more features directly in the context of the test instanceshence the first order context vectors that rely on the immediate context of the target word succeed as the contexts are more likely to use similar sets of words that in turn are selected from a large feature collectionnearly all of the experiments carried out with the 6 different methods perform better than the majority sense in the case of the mixwordsthis is partially due to the fact that these words have a large number of senses and therefore have low majority classifiersin addition recall that this data is created by mixing instances of distinct target words which leads to a subset of coarse grained senses within the data that are easier to discover than the senses of a single wordtable 1 shows that the top 3 experiments for each of the mixedwords are all second order vectors we believe that this is due to the sparsity of the feature spaces of this datasince there are so many different senses the number of first order features that would be required to correctly discriminate them is very high leading to better results for second order vectorswe plan to conduct experiments that compare the effect of using very large amounts of training data versus smaller amounts where each instance includes the target word we will draw our large corpora from a variety of sources including the british national corpus the english gigaword corpus and the webour motivation is that the larger corpora will provide more generic cooccurrence information about words without regard to a particular target wordhowever the data specific to a given target word will capture the word usages in the immediate context of the target wordthus we will test the hypothesis that a smaller sample of data where each instance includes the target word is more effective for sense discrimination than a more general corpus of training datawe are also planning to automatically attach descriptive labels to the discovered clusters that capture the underlying word sensethese labels will be created from the most characteristic features used by the instances belonging to the same clusterby comparing such descriptive features of each cluster with the words that occur in actual dictionary definitions of the target word we plan to carry out fully automated word sense disambiguation that does not rely on any manually annotated textwe present an extensive comparative analysis of word sense discrimination techniques using first order and second order context vectors where both can be employed in similarity and vector spacewe conclude that for larger amounts of homogeneous data such as the line hard and serve data the first order context vector representation and the upgma clustering algorithm are the most effective at word sense discriminationwe believe this is the case because in a large sample of data it is very likely that the features that occur in the training data will also occur in the test data making it 
possible to represent test instances with fairly rich feature setswhen given smaller amounts of data like senseval2 second order context vectors and a hybrid clustering method like repeated bisections perform betterthis occurs because in small and sparse data direct first order features are seldom observed in both the training and the test datahowever the indirect second order cooccurrence relationships that are captured by these methods provide sufficient information for discrimination to proceedthis research is supported by a national science foundation faculty early career development award all of the experiments in this paper were carried out with version 047 of the senseclusters package freely available from the url shown on the title page
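as a concrete illustration of the evaluation just described, the sketch below (python, written for this summary rather than taken from the senseclusters package) builds the sense-by-cluster confusion matrix and solves the assignment problem so that the diagonal sum is maximized, then computes precision, recall and the f-measure as defined above. the function name, its arguments and the use of scipy's solver are our assumptions, not the authors' implementation.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def evaluate_discrimination(gold_senses, cluster_ids, total_instances):
        """gold_senses and cluster_ids are parallel lists for the clustered
        instances; total_instances also counts the discarded (unclustered)
        instances."""
        senses = sorted(set(gold_senses))
        clusters = sorted(set(cluster_ids))
        confusion = np.zeros((len(senses), len(clusters)), dtype=int)
        for s, c in zip(gold_senses, cluster_ids):
            confusion[senses.index(s), clusters.index(c)] += 1
        # one sense per cluster and vice versa: a one-to-one assignment that
        # maximizes the diagonal sum (the matrix is negated because the
        # solver minimizes cost)
        rows, cols = linear_sum_assignment(-confusion)
        correct = confusion[rows, cols].sum()
        precision = correct / len(cluster_ids)   # correct / instances clustered
        recall = correct / total_instances       # correct / all instances
        f_measure = 2 * precision * recall / (precision + recall)
        return precision, recall, f_measure

with a rectangular confusion matrix the solver leaves extra clusters (or extra senses) unassigned, exactly as in the mapping procedure described above.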
W04-2406
word sense discrimination by clustering contexts in vector and similarity spaces. this paper systematically compares unsupervised word sense discrimination techniques that cluster instances of a target word that occur in raw text using both vector and similarity spaces. the context of each instance is represented as a vector in a high dimensional feature space. discrimination is achieved by clustering these context vectors directly in vector space and also by finding pairwise similarities among the vectors and then clustering in similarity space. we employ two different representations of the context in which a target word occurs. first order context vectors represent the context of each instance of a target word as a vector of features that occur in that context. second order context vectors are an indirect representation of the context based on the average of vectors that represent the words that occur in the context. we evaluate the discriminated clusters by carrying out experiments using sense-tagged instances of 24 senseval-2 words and the well known line, hard and serve sense-tagged corpora.
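the summary above distinguishes first order from second order context vectors; the short sketch below illustrates how each representation could be computed, assuming that feature selection has already produced a list of feature words and a word-by-feature matrix (here a dict of rows). the names first_order_vector, second_order_vector, features and word_vectors are illustrative, not taken from the paper's software.

    import numpy as np

    def first_order_vector(context_words, features):
        # direct representation: count how many times each selected feature
        # occurs in the context of the target word in this instance
        return np.array([context_words.count(f) for f in features], dtype=float)

    def second_order_vector(context_words, word_vectors, dim):
        # indirect representation: average the vectors of the context words
        # that are themselves feature words; other words contribute nothing
        rows = [word_vectors[w] for w in context_words if w in word_vectors]
        return np.mean(rows, axis=0) if rows else np.zeros(dim)

in the second order case two instances can end up with similar vectors even when they share no surface words, which is the property the paper relies on for small and sparse training data.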
memorybased dependency parsing this paper reports the results of experimentsusing memorybased learning to guide a de terministic dependency parser for unrestricted natural language text using data from a small treebank of swedish memorybased classifiers for predicting the next action of the parser are constructed the accuracy of a classifier as such is evaluated on heldout data derived from the treebank and its performance as a parserguide is evaluated by parsing the heldout por tion of the treebank the evaluation shows thatmemorybased learning gives a signficant i am provement over a previous probabilistic modelbased on maximum conditional likelihood esti mation and that the inclusion of lexical features improves the accuracy even further deterministic dependency parsing has recently been proposed as a robust and efficient method for syntactic pars ing of unrestricted natural language text dependency parsing means that the goal of the parsing process is to constructa dependency graph of the kind depicted in figure 1de terministic parsing means that we always derive a singleanalysis for each input stringmoreover this single anal ysis is derived in a monotonic fashion with no redundancy or backtracking which makes it possible to parse natural language sentences in linear time in this paper we report experiments using memorybased learning to guide the parser described in nivre using data from a small treebank of swedish unlike most pre vious work on datadriven dependency parsing we assume that dependency graphs are la beled with dependency types although the evaluationwill give results for both labeled and unlabeled represen tationsthe paper is structured as followssection 2 gives the necessary background definitions and introduces the idea of guided parsing as well as memorybased learningsection 3 describes the data used in the experiments the evaluation metrics and the models and algorithms used in the learning processresults from the experiments are given in section 4 while conclusions and suggestions for further research are presented in section 521 dependency graphsthe linguistic tradition of dependency grammar comprises a large and fairly diverse family of theories and formalisms that share certain basic assumptions about syn tactic structure in particular the assumption that syntacticstructure consists of lexical nodes linked by binary re lations called dependencies sgall melcuk hudson thus the common formal property of dependency structures as compared to the representations based on constituency is the lack of nonterminal nodesin a dependency structure every word token is depen dent on at most one other word token usually called its head or regent which means that the structure can be represented as a directed graph with nodes representing word tokens and arcs representing dependency relationsin addition arcs may be labeled with specific dependency typesfigure 1 shows a labeled dependency graph for asimple swedish sentence where each word of the sentence is labeled with its part of speech and each arc la beled with a grammatical functionformally we define dependency graphs in the follow ing way pp pa id figure 1 dependency graph for swedish sentence 1let are r1 rm be the set of permissible dependency types 2a dependency graph for a string of words w w1wn is a labeled directed graph d where w is the set of nodes ie word tokens in the input string a is a set of labeled arcs we write wi wj to express that wi precedes wj in the string w we write wi rwj to say that there is an arc from 
wi to wj labeled r and wi wj to say that there is an arc from wi to wj we use to denote the reflexive and transitive closure of the unlabeled arcrelation and we use and for the correspond ing undirected relations ie wi wj iff wi wj or wj withe five conditions given in figure 2 are satisfiedfor a more detailed discussion of dependency graphs and wellformedness conditions the reader is referred to nivre 22 parsing algorithmthe parsing algorithm presented in nivre is in many ways similar to the basic shiftreduce algorithm for contextfree grammars although theparse actions are different given that no nonterminal sym bols are usedmoreover unlike the algorithm of yamadaand matsumoto the algorithm considered here actually uses a blend of bottomup and topdown processing constructing leftdependencies bottomup and rightdependencies topdown in order to achieve incrementalityfor a similar but nondeterministic approach to depen dency parsing see obrebski parser configurations are represented by triples s i a where s is the stack i is the list of input tokens and a is the arc relation for the dependency graphgiven an inputstring w the parser is initialized to nilw and termi nates when it reaches a configuration snil athe input string w is accepted if the dependency graph d given at termination is wellformed otherwise w is rejectedthe behavior of the parser is defined by the transitions defined in figure 3 1the transition leftarc adds an arc wj rwifrom the next input token wj to the token wi on top of the stack and reduces wi from the stack2the transition rightarc adds an arc wi rwjfrom the token wi on top of the stack to the next in put token wj and shifts wj onto the stackken wi on top of the stack4the transition shift shifts the next input token wi onto the stackthe transitions leftarc and rightarc are subject to conditions that ensure that the graph conditions uniquelabel and single head are satisfiedby contrast the re duce transition can only be applied if the token on top of the stack already has a headfor shift the only condition is that the input list is nonemptyas it stands this transition system is nondeterminis tic since several transitions can often be applied to the same configurationthus in order to get a deterministic parser we need to introduce a mechanism for resolving transition conflictsregardless of which mechanism is used the parser is guaranteed to terminate after at most 2n transitions given an input string of length n this means that as long as transitions can be per formed in constant time the running time of the parser will be linear in the length of the inputmoreover the parser is guaranteed to produce a dependency graph that is acyclic and projective this means that the depen dency graph given at termination is wellformed if and only if it is connected unique label are rsingle head wi wk acyclic connected wiwj projective figure 2 wellformedness conditions on dependency graphs initialization nilw termination snil aleftarc wiswj i aswj i a wkr a rightarc wiswj i awj wis i a wkr a reduce wis i as i awjr a shift swii awis i afigure 3 parser transitions 23 guided parsingone way of turning a nondeterministic parser into a deter ministic one is to use a guide that can inform the parser at each nondeterministic choice point cfkay boullier guided parsing is normally used to improve the efficiency of a nondeterministic parsereg by letting a simpler parser con struct a first analysis that can be used to guide the choice of the more complex parserthis is the approach taken for example in 
boullier in our case we rather want to use the guide to i am prove the accuracy of a deterministic parser starting from a baseline of randomized choiceone way of doing this is to use a treebank ie a corpus of analyzed sentences to train a classifier that can predict the next transition given the current configuration of the parserhowever in order to maintain the efficiency of the parser the classifier must also be implemented in such a way that each transition can still be performed in constant timeprevious work in this area includes the use of memory based learning to guide a standard shiftreduce parser and the use of support vector machines to guide a deterministic depen dency parser in theexperiments reported in this paper we apply memory based learning within a deterministic dependency parsing framework24 memorybased learningmemorybased learning and problem solving is based ontwo fundamental principles learning is the simple stor age of experiences in memory and solving a new problem is achieved by reusing solutions from similar previously solved problems it is inspired by thenearest neighbor approach in statistical pattern recogni tion and artificial intelligence as well as the analogical modeling approach in linguistics in machine learning terms it can be characterized as a lazy learning methodsince it defers processing of input until needed and pro cesses input by combining stored data memorybased learning has been successfully appliedto a number of problems in natural language processing such as graphemetophoneme conversion part ofspeech tagging prepositionalphrase attachment and base noun phrase chunking most relevant in the present context is the use of memory based learning to predict the actions of a shiftreduce parser with promising results reported in veenstra and daelemans the main reason for using memorybased learning inthe present context is the flexibility offered by similarity based extrapolation when classifying previously unseenconfigurations since previous experiments with a proba bilistic model has shown that a fixed backoff sequence does not work well in this case moreover the memorybased approach can easily handle multiclass classification unlike the support vector machines used by yamada and matsumoto for the experiments reported in this paper we have used the software package timbl which provides a variety of metrics al gorithms and extra functions on top of the classical knearest neighbor classification kernel such as value distance metrics and distance weighted class voting 31 target function and approximationthe function we want to approximate is a mapping f from parser configurations to parser actions where each action consists of a transition and a dependency type f config larare sh here config is the set of all possible parser configura tions and r is the set of dependency types as beforehowever in order to make the problem tractable we try to learn a function fwhose domain is a finite space of parser states which are abstractions over configurationsfor this purpose we define a number of features that can be used to define different models of parser statethe features used in this study are listed in table 1the first five features deal with properties of the token on top of the stackin addition to the word form itself we consider its partofspeech the dependency type by which it is related to its head and the dependency types by which it is related to its leftmost and rightmost dependent respectively the following three features refer to properties of the 
next input tokenin this case there are no features corresponding to topdep and topright since the relevant dependencies can never be present atdecision timethe final feature is a simple looka head using the partofspeech of the next plus one input tokenin the experiments reported below we have used two different parser state models one called the lexical model which includes all nine features and one called the nonlexical model where the two lexical features top and next are omittedfor both these models wehave used memorybased learning with different parame ter settings as implemented timblfor comparison we have included an earlier classifier that uses the same features as the nonlexical model butwhere prediction is based on maximum conditional likeli hood estimationthis classifier always predicts the most probable transition given the state and the most probable dependency type given the transition and the state withconditional probabilities being estimated by the empirical distribution in the training datasmoothing is per formed only for zero frequency events in which case the classifier backs off to more general models by omittingfirst the features topleft and look and then the fea tures topright and nextleft if even this does not help the classifier predicts reduce if permissible and shift otherwisethis model which we will refer to as the mcle model is described in more detail in nivre 32 datait is standard practice in datadriven approaches to nat ural language parsing to use treebanks both for training and evaluationthus the penn treebank of american english has been used to train and evaluate the best available parsers of unrestricted english text one problem whendeveloping a parser for swedish is that there is no com parable largescale treebank available for swedishfor the experiments reported in this paper we have used a manually annotated corpus of written swedish created at lund university in the 1970s and consistingmainly of informative texts from official sources although the original annotation scheme isan eclectic combination of constituent structure depen dency structure and topological fields it has proven possible to convert the annotated sentences to dependency graphs with fairly high accuracyin the conversion process we have reduced the orig inal finegrained classification of grammatical functions to a more restricted set of 16 dependency types whichare listed in table 2we have also replaced the origi nal partofspeech annotation by using the same automatic tagger that is used for preprocessing in the parserthis is a standard probabilistic tagger trained on the stockholmumeacorpus of written swedish and found to have an accuracy of 9596 when tested on heldout datasince the function we want to learn is a mapping from parser states to transitions the treebank data cannot be used directly as training and test feature description top the token on top of the stack toppos the partofspeech of top topdep the dependency type of top topleft the dependency type of tops leftmost dependent topright the dependency type of tops rightmost dependent next the next input token nextpos the partofspeech of next nextleft the dependency type of nexts leftmost dependent lookpos the partofspeech of the next plus one input token table 1 parser state featuresdatainstead we have to simulate the parser on the tree bank in order to derive for each sentence the transition sequence corresponding to the correct dependency treegiven the result of this simulation we can construct a data set consisting of 
pairs s t where s is a parser state and t is the correct transition from that state unlike standard shift reduce parsing the simulation of the current algorithm is almost deterministic and is guaranteed to be correct if the input dependency tree is wellformedthe complete converted treebank contains 6316 sentences and 97623 word tokens which gives a mean sentence length of 155 wordsthe treebank has been divided into three nonoverlapping data sets 80 for train ing 10 for developmentvalidation and 10 for final testing the results presented below are all from the validation setwhen talking about test and validation data we make a distinction between the sentence data which refers to the original annotated sentences in the treebank and the transition data which refers to the transitions derived bysimulating the parser on these sentenceswhile the sen tence data for validation consists of 631 sentences the corresponding transition data contains 15913 instancesfor training only transition data is relevant and the train ing data set contains 371977 instances33 evaluationthe output of the memorybased learner is a classifier that predicts the next transition given the current state of the parserthe quality of thisclassifier has been evaluated with respect to both predic tion accuracy and parsing accuracyprediction accuracy refers to the quality of the clas sifier as such ie how well it predicts the next transitiongiven the correct parser state and is measured by the clas sification accuracy on unseen transition data we use mcnemars test for statistical sig nificanceparsing accuracy refers to the quality of the classifier as a guide for the deterministic parser and is measured by the accuracy obtained when parsing unseen sentence datamore precisely parsing accuracy is measured by the attachment score which is a standard measure used in studies of dependency parsing the attachment score is computed as theproportion of tokens that are as signed the correct head since parsing is a sentencelevel task we believe that the overall attachment score should be computed as themean attachment score per sentence which gives an es timate of the expected attachment score for an arbitrary sentencehowever since most previous studies instead use the mean attachment score per word we will give this measure as wellin order to measure label accuracy we also define a la beled attachment score where both the head and the label must be correct but which is otherwise computed in the same way as the ordinary attachment scorefor parsing accuracy we use a paired ttest for statistical significancetable 3 shows the prediction accuracy achieved with memorybased learning for the lexical and nonlexical model with two different parameter settings for the learnerthe results in the first column were obtained with the default settings of the timbl package in particular the ib1 classification algorithm the overlap distance metricfeatures weighted by gain ratio k 1 ie classification based on a single nearest neighbor11in timbl the value of k in fact refers to k nearest dis tances rather than k nearest neighbors which means that evenwith k 1 the nearest neighbor set can contain several in label dependency type adv adverbial modifier app apposition att attribute cc coordination det determiner id nonfirst element of multiword expression i am infinitive dependent on infinitive marker ip punctuation mark dependent on lexical head inf infinitival complement obj object pr complement of preposition prd predicative complement sub subject uk main 
verb of subordinate clause dependent on complementizer vc verb chain xx unclassifiable dependent table 2 dependency types in swedish treebank model default maximum nonlexical 868 874 lexical 884 897 table 3 prediction accuracy for mbl modelsthe second column shows the accuracy for the best pa rameter settings found in the experiments which differ from the default in the follow ing respectsoverlap metric replaced by the modified value dis tance metric no weighting of featuresk 5 ie classification based on 5 nearest neigh borsdistance weighted class voting with inverse distance weighting for more information about the different parameters and settings the reader is referred to daelemans et al the results show that the lexical model performs con sistently better than the nonlexical model and that the difference increases with the optimization of the learning algorithm this confirms previous results from statistical parsing indicating that lex ical information is crucial for disambiguation 1999 charniak 2000as regards optimization we may note that although there is a significant improvement for both models the magnitude of the difference is relatively smalltable 4 shows the parsing accuracy obtained with theoptimized versions of the mbl models compared to the mcle model described in sec tion 3we see that mbl outperforms the mcle model even when limited to the same features this can probably be explained by the fact that the similaritybased smoothing built into the memorybased approach gives a better extrapolation than the fixed backoff sequence in the mcle modelwe also see that the lexical mbl model outperforms both the other modelsif we compare the labeled attachmentscore to the prediction accuracy we observe a substantial drop which is of course onlyto be expectedthe unlabeled attachment score is naturally higher and it is worth noting that the relative differ ence between the mbl lexical model and the other twomodels is much smallerthis indicates that the advan tage of the lexical model mainly concerns the accuracy in predicting dependency type in addition to transitionmodel labeled unlabeled mcle 747 815 mbl nonlexical 765 829 mbl lexical 817 857 table 4 parsing accuracy for mcle and mbl models attachment score per sentence if we compare the results concerning parsing accuracy to those obtained for other languages we note that the best unlabeled attachment score is lower than forenglish where the best results are above 90 but higher than for czech this is encouraging given that the size of the training set in our experiments is fairly small onlyabout 10 of the standard training set for the penn tree bankone reason why our results nevertheless compare reasonably well with those obtained with the much larger training set is probably that the conversion to dependency trees is more accurate for the swedish treebank given theexplicit annotatation of grammatical functionsmore over the fact that our parser uses labeled dependenciesis probably also significant since the possibility of using information from previously assigned de pendencies during parsing seems to have a positive effect on accuracy finally it may be interesting to consider the accuracy for individual dependency typestable 5 gives labeled precision labeled recall and unlabeled attachment scorefor four of the most important types with the mbl lex ical modelthe results indicate that subjects have the highest accuracy especially when labels are taken intoaccountobjects and predicative complements have comparable attachment accuracy 
but are more often misclas sified with respect to dependency typefor adverbial modifiers finally attachment accuracy is lower than for the other dependency types which is largely due to the notorious ppattachment problemin this paper we have shown that a combination of memorybased learning and deterministic dependency parsing can be used to construct a robust and efficient parser for unrestricted natural language text achieving a parsing accuracy which is close to the state of the art evenwith relatively limited amounts of training dataclas sifiers based on memorybased learning achieve higher parsing accuracy than previous probabilistic models and the improvement increases if lexical information is added to the modelsuggestions for further research includes the further exploration of alternative models and parameter settings but also the combination of inductive and analytical learning to impose highlevel linguistic constraints and the development of new parsing methods in addition it is important to evaluate the approach with respect to other languages and corpora in order to increase the comparability with other approachesacknowledgements the work presented in this paper was supported by agrant from the swedish research council the memorybased classifiers used in the experi ments were constructed using the tilburg memorybasedlearner we are grateful to three anonymous reviewers for constructive com ments on the preliminary version of the paper
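the parser just described is driven by four transitions over configurations consisting of a stack, an input buffer and an arc set; the sketch below writes them out in python, with the memory-based guide reduced to an abstract classify function. it follows the definitions given in the paper but is not the authors' implementation, and the function and variable names are ours.

    def left_arc(stack, buf, arcs, label):
        # the next input token becomes the head of the token on top of the
        # stack, which is popped (only permissible if it has no head yet)
        assert stack and buf and stack[-1] not in arcs
        arcs[stack.pop()] = (buf[0], label)

    def right_arc(stack, buf, arcs, label):
        # the token on top of the stack becomes the head of the next input
        # token, which is then shifted onto the stack
        assert stack and buf and buf[0] not in arcs
        dep = buf.pop(0)
        arcs[dep] = (stack[-1], label)
        stack.append(dep)

    def reduce_(stack, buf, arcs):
        # only permissible when the top token already has a head
        assert stack and stack[-1] in arcs
        stack.pop()

    def shift(stack, buf, arcs):
        assert buf
        stack.append(buf.pop(0))

    def parse(tokens, classify):
        # arcs maps a dependent token index to its (head, label) pair;
        # at most 2n transitions are performed for n input tokens
        stack, buf, arcs = [], list(range(len(tokens))), {}
        while buf:
            action, label = classify(stack, buf, arcs, tokens)
            if action == "left-arc":
                left_arc(stack, buf, arcs, label)
            elif action == "right-arc":
                right_arc(stack, buf, arcs, label)
            elif action == "reduce":
                reduce_(stack, buf, arcs)
            else:
                shift(stack, buf, arcs)
        return arcs

the classify function stands in for the memory-based classifier trained on the nine parser-state features of table 1; as in the paper, it only has to choose among the transitions that are permissible in the current configuration.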
W04-2407
memory-based dependency parsing. this paper reports the results of experiments using memory-based learning to guide a deterministic dependency parser for unrestricted natural language text. using data from a small treebank of swedish, memory-based classifiers for predicting the next action of the parser are constructed. the accuracy of a classifier as such is evaluated on held-out data derived from the treebank, and its performance as a parser guide is evaluated by parsing the held-out portion of the treebank. the evaluation shows that memory-based learning gives a significant improvement over a previous probabilistic model based on maximum conditional likelihood estimation and that the inclusion of lexical features improves the accuracy even further.
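the parsing accuracy reported above is the attachment score computed per sentence, with a labeled variant that also requires the dependency label to be correct. the sketch below is a small illustration of that measure; the input format (each sentence as a list of (head, label) pairs, one per token) is an assumption made for this example.

    def attachment_scores(gold_sentences, predicted_sentences):
        # returns (mean unlabeled attachment score, mean labeled attachment
        # score), both averaged per sentence rather than per word
        unlabeled, labeled = [], []
        for gold, pred in zip(gold_sentences, predicted_sentences):
            head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))
            both_ok = sum(g == p for g, p in zip(gold, pred))
            unlabeled.append(head_ok / len(gold))
            labeled.append(both_ok / len(gold))
        n = len(unlabeled)
        return sum(unlabeled) / n, sum(labeled) / n

averaging per word instead, as most previous studies do, would simply pool the token counts over all sentences before dividing.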
models for the semantic classification of noun phrases
roles in 28
table 5 (sample row from the conditional probability table, where the feature pair is entity-entity; the numbers in the top row identify the semantic relations):
relation no.: 1 / 2 / 3 / 6 / 7 / 11 / 13 / 15 / 16 / 21 / 25 / the rest
probability: 0.06103 / 0.11268 / 0.00939 / 0.04225 / 0.39437 / 0.01878 / 0.03286 / 0.25822 / 0.04694 / 0.01878 / 0.00469 / 0
table 6 (statistics for the semantic class features by level of specialization), values for levels 1 / 2 / 3 / 4:
number of modifier features: 9 / 52 / 70 / 122
number of head features: 9 / 46 / 47 / 47
no. of feature pairs: 57 out of 81 / 189 out of 2392 / 204 out of 3290 / 250 out of 5734
number of feature pairs with only one relation: 1 / 152 / 181 / 225
average number of nonzero relations per line: 2.7692 / 1.291 / 1.1765 / 1.144
this paper is about the automatic labeling of semantic relations in noun phrases. the semantic relations are the underlying relations between two concepts expressed by words or phrases. we distinguish here between semantic relations and semantic roles. semantic roles are always between verbs and other constituents, whereas semantic relations can occur between any constituents, for example in complex nominals, genitives, prepositional phrases attached to nouns, or discourse level. thus, in a sense, semantic relations are more general than semantic roles, and many semantic role types will appear on our list of semantic relations. the following np-level constructions are considered here: compound nominals consisting of two consecutive nouns, adjective noun constructions where the adjectival modifier is derived from a noun, genitives, and adjective phrases in which the modifier noun is expressed by a prepositional phrase which functions as an adjective. example: saturday's snowfall topped a one-day record in hartford, connecticut, with the total of 12.5 inches, the weather service said. the storm claimed its fatality thursday when a car which was driven by a college student skidded on an interstate overpass in the mountains of virginia and hit a concrete barrier, police said. there are several semantic relations at the noun phrase level: saturday's snowfall is a genitive encoding a temporal relation; one-day record is a topic noun compound indicating that record is about one-day snowing (an ellipsis here); record in hartford is an adjective phrase in a location relation; total of 12.5 inches is an of-genitive that expresses measure; weather service is a noun compound in a topic relation; car which was driven by a college student encodes a theme semantic role in an adjectival clause; college student is a compound nominal in a part-whole/member-of relation; interstate overpass is a location noun compound; mountains of virginia is an of-genitive showing a part-whole/place-area and location relation; concrete barrier is a noun compound encoding part-whole/stuff-of. after many iterations over a period of time we identified a set of semantic relations that cover a large majority of text semantics. table 1 lists these relations, their definitions, examples and some references. most of the time the semantic relations are encoded by lexico-syntactic patterns that are highly ambiguous. one pattern can express a number of semantic relations, its disambiguation being provided by the context or world knowledge. often semantic relations are not disjoint or mutually exclusive, two or more appearing in the same lexical construct. this is called semantic blend; for example the expression texas city contains both a location as well as a part-whole relation. other researchers have identified other sets of semantic relations; our list contains the most frequently used semantic relations
we have observed on a large corpusbesides the work on semantic roles considerable interest has been shown in the automatic interpretation of complex nominals and especially of compound nominalsthe focus here is to determine the semantic relations that hold between different concepts within the same phrase and to analyze the meaning of these compoundsseveral approaches have been proposed for empirical nouncompound interpretation such as syntactic analysis based on statistical techniques another popular approach focuses on the interpretation of the underlying semanticsmany researchers that followed this approach relied mostly on handcoded rules more recently have proposed automatic methods that analyze and detect noun compounds relations from text focused on the medical domain making use of a lexical ontology and standard machine learning techniqueswe approach the problem topdown namely identify and study first the characteristics or feature vectors of each noun phrase linguistic pattern then develop models for their semantic classificationthis is in contrast to our prior approach when we studied one relation at a time and learned constraints to identify only that relationwe study the distribution of the semantic relations across different np patterns and analyze the similarities and differences among resulting semantic spaceswe define a semantic space as the set of semantic relations an np construction can encodewe aim at uncovering the general aspects that govern the np semantics and thus delineate the semantic space within clusters of semantic relationsthis process has the advantage of reducing the annotation effort a time consuming activityinstead of manually annotating a corpus for each semantic relation we do it only for each syntactic pattern and get a clear view of its semantic spacethis syntacticosemantic approach allows us to explore various np semantic classification models in a unified waythis approach stemmed from our desire to answer questions such as it is well understood and agreed in linguistics that concepts can be represented in many ways using various constructions at different syntactic levelsthis is in part why we decided to take the syntacticosemantic approach that analyzes semantic relations at different syntactic levels of representationin this paper we focus only on the behavior of semantic relations at np levela thorough understanding of the syntactic and semantic characteristics of nps provides valuable insights into defining the most representative feature vectors that ultimately drive the discriminating learning modelslevi defines complex nominals as expressions that have a head noun preceded by one or more modifying nouns or by adjectives derived from nouns most importantly for us each sequence of nouns or possibly adjectives and nouns has a particular meaning as a whole carrying an implicit semantic relation for example spoon handle or musical clock cns have been studied intensively in linguistics psycholinguistics philosophy and computational linguistics for a long timethe semantic interpretation of cns proves to be very difficult for a number of reasons sometimes the meaning changes with the head other times with the modifier cns interpretation is knowledge intensive and can be idiosyncraticfor example in order to interpret correctly gm car we have to know that gm is a carproducing company there can be many possible semantic relations between a given pair of word constituentsfor example usa city can be regarded as a location as well as a partwhole relation 
interpretation of cns can be highly contextdependentfor example apple juice seat can be defined as seat with apple juice on the table in front of it the semantic interpretation of genitive constructions is considered problematic by linguists because they involve an implicit relation that seems to allow for a large variety of relational interpretations for example johns carpossessorpossessee marys brotherkinship last years exhibitiontemporal a picture of my nicedepictiondepicted and the deserts oasispartwholeplaceareaa characteristic of these constructions is that they are very productive as the construction can be given various interpretations depending on the contextone such example is kates book that can mean the book kate owns the book kate wrote or the book kate is very fond ofthus the features that contribute to the semantic interpretation of genitives are the nouns semantic classes the type of genitives discourse and pragmatic informationadjective phrases are prepositional phrases attached to nouns acting as adjectives prepositions play an important role both syntactically and semanticallysemantically speaking prepositional constructions can encode various semantic relations their interpretations being provided most of the time by the underlying contextfor instance the preposition with can encode different semantic relations it was the girl with blue eyes the conclusion for us is that in addition to the nouns semantic classes the preposition and the context play important roles herein order to focus our research we will concentrate for now only on noun noun or adjective noun compositional constructions at np level ie those whose meaning can be derived from the meaning of the constituent nouns we do not consider metaphorical names metonymies proper names and nps with coordinate structures in which neither noun is the head however we check if the constructions are noncompositional but only for statistical purposesfortunately some of these can be identified with the help of lexiconsin order to provide a unified approach for the detection of semantic relations at different np levels we analyzed the syntactic and semantic behavior of these constructions on a large opendomain corpora of examplesour intention is to answer questions like what are the semantic relations encoded by the nplevel constructions what is their distribution on a large corpus is there a common subset of semantic relations that can be fully paraphrased by all types ofnp constructions how many nps are lexicalizedwe have assembled a corpus from two sources wall street journal articles from trec9 and extended wordnet glosses we used xwn 20 since all its glosses are syntactically parsed and their words semantically disambiguated which saved us considerable amount of timetable 2 shows for each syntactic category the number of randomly selected sentences from each corpus the number of instances found in these sentences and finally the number of instances that our group managed to annotate by handthe annotation of each example consisted of specifying its feature vector and the most appropriate semantic relation from those listed in table 1the annotators four phd students in computational semantics worked in groups of two each group focusing on one half of the corpora to annotatenoun noun sequences of words were extracted using the lauer heuristic which looks for consecutive pairs of nouns that are neither preceded nor succeeded by a noun after each sentence was syntactically parsed with charniak parser moreover they were 
provided with the sentence in which the pairs occurred along with their corresponding wordnet senseswhenever the annotators found an example encoding a semantic relation other than those provided or they did not know what interpretation to give they had to tag it as othersbesides the type of relation the annotators were asked to provide information about the order of the modifier and the head nouns in the syntactic constructions if applicablefor instance in owner of carpossession the possessor owner is followed by the possessee car while in car ofjohnpossessionr the order is reversedon average 30 of the training examples had the nouns in reverse ordermost of the time one instance was tagged with one semantic relation but there were also situations in which an example could belong to more than one relation in the same contextfor example the genitive city of usa was tagged as a partwholeplacearea relation and as a location relationoverall there were 608 such cases in the training corporamoreover the annotators were asked to indicate if the instance was lexicalized or notalso the judges tagged the np nouns in the training corpus with their corresponding wordnet sensesthe annotators agreement was measured using the kappa statistics one of the most frequently used measure of interannotator agreement for classification tasks where is the proportion of times the raters agree and is the probability of agreement by chancethe k coefficient is 1 if there is a total agreement among the annotators and 0 if there is no agreement other than that expected to occur by chancetable 3 shows the semantic relations interannotator agreement on both training and test corpora for each np constructionfor each construction the corpus was splint into 8020 trainingtesting ratio after agreementwe computed the k coefficient only for those instances tagged with one of the 35 semantic relationsfor each pattern we also computed the number of pairs that were tagged with others by both annotators over the number of examples classified in this category by at least one of the judges averaged by the number of patterns consideredthe k coefficient shows a fair to good level of agreement for the training and testing data on the set of 35 relations taking into consideration the task difficultythis can be explained by the instructions the annotators received prior to annotation and by their expertise in lexical semanticsthere were many heated discussions as welleven noun phrase constructions are very productive allowing for a large number of possible interpretations table 4 shows that a relatively small set of 35 semantic relations covers a significant part of the semantic distribution of these constructions on a large opendomain corpusmoreover the distribution of these relations is dependent on the type of np construction each type encoding a particular subsetfor example in the case of ofgenitives there were 21 relations found from the total of 35 relations consideredthe most frequently occurring relations were partwhole attributeholder possession location source topic and themeby comparing the subsets of semantic relations in each column we can notice that these semantic spaces are not identical proving our initial intuition that the np constructions cannot be alternative ways of packing the same informationtable 4 also shows that there is a subset of semantic relations that can be fully encoded by all types of np constructionsthe statistics about the lexicalized examples are as follows nn adjn sgenitive ofgenitive adjective 
phrase from the 3001 lexicalized noun compounds 18 were proper namesthis simple analysis leads to the important conclusion that the np constructions must be treated separately as their semantic content is differentthis observation is also partially consistent with other recent work in linguistics and computational linguistics on the grammatical variation of the english genitives noun compounds and adjective phraseswe can draw from here the following conclusions given each np syntactic construction considered the goal is to develop a procedure for the automatic labeling of the semantic relations they encodethe semantic relation derives from the lexical syntactic semantic and contextual features of each np constructionsemantic classification of syntactic patterns in general can be formulated as a learning problem and thus benefit from the theoretical foundation and experience gained with various learning paradigmsthis is a multiclass classification problem since the output can be one of the semantic relations in the setwe cast this as a supervised learning problem where input output pairs are available as training dataan important first step is to map the characteristics of each np construction into feature vectorslet us define with the feature vector of an instance and let be the space of all instances ie the multiclass classification is performed by a function that maps the feature space into a semantic space where is the set of semantic relations from table 1 ie let be the training set of examples or instances where is the number of examples each accompanied by its semantic relation label the problem is to decide which semantic relation to assign to a new unseen example in order to classify a given set of examples one needs some kind of measure of the similarity between any two given members of most of the times it is difficult to explicitly define this function since can contain features with numerical as well as nonnumerical valuesnote that the features thus space vary from an np pattern to another and the classification function will be pattern dependentthe novelty of this learning problem is the feature space and the nature ofthe discriminating an essential aspect of our approach below is the word sense disambiguation of the content words using a stateoftheart opentext wsd system each word is mapped into its corresponding wordnet 20 sensewhen disambiguating each word the wsd algorithm takes into account the surrounding words and this is one important way through which context gets to play a role in the semantic classification of npsso far we have identified and experimented with the following np features specifies the wordnet synset of the modifier nounin case the modifier is a denominal adjective we take the synset of the noun from which the adjective is derivedexample musical clock makeproduce and electric clock instrumentseveral learning models can be used to provide the discriminating function so far we have experimented with three models semantic scattering decision trees and naive bayesthe first is described below the other two are fairly well known from the machine learning literaturesemantic scatteringthis is a new model developed by us particularly useful for the classification of compound nominals without nominalizationthe semantic relation in this case derives from the semantics of the two noun concepts participating in these constructions as well as the surrounding contextmodel formulationlet us define with and the sets of semantic class features of the np modifiers and 
respectively np heads the compound nominal semantics is distinctly specified by the feature pair written shortly as given feature pair the probability of a semantic relation r is defined as the ratio between the number of occurrences of a relation r in the presence of feature pair over the number of occurrences of feature pair in the corpusthe most probable relation is since the number of possible noun synsets combinations is large it is difficult to measure the quantities and on a training corpus to calculate one way of approximating the feature vector is to perform a semantic generalization by replacing the synsets with their most general hypernyms followed by a series of specializations for the purpose of eliminating ambiguities in the training datathere are 9 noun hierarchies thus only 81 possible combinations at the most general leveltable 5 shows a row of the probability matrix for each entry for which there is more than one relation is scattered into other subclasses through an iterative process till there is only one semantic relation per linethis can be achieved by specializing the feature pairs semantic classes with their immediate wordnet hyponymsthe iterative process stops when new training data does not bring any improvements the fmeasure results obtained so far are summarized in table 7overall these results are very encouraging given the complexity of the probleman important way of improving the performance of a system is to do a detailed error analysis of the resultswe have analyzed the sources of errors in each case and found out that most of them are due to errors in automatic sense disambiguation missing combinations of features that occur in testing but not in the training data levels of specialization are too high errors caused by metonymy errors in the modifierhead order and othersthese errors could be substantially decreased with more research efforta further analysis of the data led us to consider a different criterion of classification that splits the examples into nominalizations and nonnominalizationsthe reason is that nominalization noun phrases seem to call for a different set of learning features than the nonnominalization noun phrases taking advantage of the underlying verbargument structuredetails about this approach are provided in semantic relations occur with high frequency in open text and thus their discovery is paramount for many applicationsone important application is question answeringa powerful method of answering more difficult questions is to associate to each question the semantic relation that reflects the meaning of that question and then search for that semantic relation over the candidates of semantically tagged paragraphshere is an exampleqwhere have nuclear incidents occurredfrom the question stem word where we know the question asks for a location which is found in the complex nominal three mile islandlocation of the sentence the three mile island nuclear incident caused a doe policy crisis leading to the correct answer three mile islandqwhat did the factory in howell michigan makethe verb make tells us to look for a makeproduce relation which is found in the complex nominal car factorymakeproduce of the text the car factory in howell michigan closed on dec 22 1991 which leads to answer caranother important application is building semantically rich ontologieslast but not least the discovery of text semantic relations can improve syntactic parsing and even wsd which in turn affects directly the accuracy of other nlp modules and 
applications. We consider these applications for future work.
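The semantic scattering model formulated above (estimate the probability of a relation given the modifier and head semantic classes from corpus counts, generalize to the top of the WordNet noun hierarchies, then specialize ambiguous entries toward immediate hyponyms) can be approximated with a simple count-based back-off classifier. The sketch below is illustrative only: it assumes each noun has already been disambiguated and replaced by its hypernym path, the class and method names are ours rather than the paper's, and the paper's iterative specialization is approximated by always preferring the most specific feature pair observed in training.

```python
from collections import Counter, defaultdict

class SemanticScattering:
    """Count-based sketch of semantic scattering for NP relation labeling."""

    def __init__(self):
        # counts[(modifier_class, head_class)] -> Counter over relation labels
        self.counts = defaultdict(Counter)

    def train(self, examples):
        """examples: iterable of (mod_path, head_path, relation), where each path
        lists WordNet hypernyms from the most general class down to the synset."""
        for mod_path, head_path, relation in examples:
            # Record the pair at every level of generality, so prediction can
            # fall back to coarser semantic classes when a specific pair is unseen.
            for depth in range(1, min(len(mod_path), len(head_path)) + 1):
                pair = (mod_path[depth - 1], head_path[depth - 1])
                self.counts[pair][relation] += 1

    def predict(self, mod_path, head_path):
        """Return the most frequent relation for the most specific pair seen in training."""
        for depth in range(min(len(mod_path), len(head_path)), 0, -1):
            pair = (mod_path[depth - 1], head_path[depth - 1])
            if pair in self.counts:
                return self.counts[pair].most_common(1)[0][0]
        return "OTHERS"  # no evidence at any level of the hierarchies

# Toy usage with made-up hypernym paths for "musical clock" (MAKE/PRODUCE in the text):
model = SemanticScattering()
model.train([(["abstraction", "communication", "music"],
              ["entity", "artifact", "clock"], "MAKE/PRODUCE")])
print(model.predict(["abstraction", "communication", "music"],
                    ["entity", "artifact", "instrumentality"]))  # backs off -> MAKE/PRODUCE
```

Backing off from specific to general feature pairs plays the same role as the paper's specialization loop: an entry is only consulted at a level where the training data actually supports it.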
W04-2609
Models for the Semantic Classification of Noun Phrases. This paper presents an approach for detecting semantic relations in noun phrases. A learning algorithm called semantic scattering is used to automatically label complex nominals, genitives, and adjectival noun phrases with the corresponding semantic relation. We propose a 35-class scheme to classify relations in various phrases, and a method called semantic scattering for interpreting noun compounds.
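For reference, the inter-annotator agreement measure used in the annotation experiment above (and again in the bioscience annotation study later in this section) is the kappa statistic. Its formula did not survive extraction, but the standard definition the text is describing is:

```latex
\kappa = \frac{P(A) - P(E)}{1 - P(E)}
```

where P(A) is the proportion of times the annotators agree and P(E) is the probability of agreement expected by chance, so that kappa is 1 under total agreement and 0 when agreement is no better than chance, exactly as the text states.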
the nombank project an interim report this paper describes nombank a project that will provide argument structure for instances of common nouns in the penn treebank ii corpus nombank is part of a larger effort to add additional layers of annotation to the penn treebank ii corpus the university of pennsylvanias propbank nombank and other annotation projects taken together should lead to the creation of better tools for the automatic analysis of text this paper describes the nombank project in detail including its specifications and the process involved in creating the resource this paper introduces the nombank projectwhen complete nombank will provide argument structure for instances of about 5000 common nouns in the penn treebank ii corpusnombank is part of a larger effort to add layers of annotation to the penn treebank ii corpuspropbank nombank and other annotation projects taken together should lead to the creation of better tools for the automatic analysis of textthese annotation projects may be viewed as part of what we think of as an a la carte strategy for corpusbased natural language processingthe fragile and inaccurate multistage parsers of a few decades were replaced by treebankbased parsers which had better performance but typically provided more shallow analyses1 as the same set of data is annotated with more and more levels of annotation a new type of multistage processing becomes possible that could reintroduce this information 1a treebankbased parser output is defined by the treebank on which it is basedas these treebanks tend to be of a fairly shallow syntactic nature the resulting parsers tend to be so also but in a more robust fashioneach stage of processing is defined by a body of annotated data which provides a symbolic framework for that level of representationresearchers are free to create and use programs that map between any two levels of representation or which map from bare sentences to any level of representation2 furthermore users are free to shop around among the available programs to map from one stage to anotherthe hope is that the standardization imposed by the annotated data will insure that many researchers will be working within the same set of frameworks so that one researchers success will have a greater chance of benefiting the whole communitywhether or not one adapts an a la carte approach nombank and propbank projects provide users with data to recognize regularizations of lexically and syntactically related sentence structuresfor example suppose one has an information extraction system tuned to a hiringfiring scenario one could use nombank and propbank to generalize patterns so that one pattern would do the work of severalgiven a pattern stating that the object of appoint is john and the subject is ibm a propbanknombank enlightened system could detect that ibm hired john from the following strings ibm appointed john john was appointed by ibm ibms appointment of john the appointment of john by ibm and john is the current ibm appointeesystems that do not regularize across predicates would require separate patterns for each of these environmentsthe nombank project went through several stages before annotation could beginwe had to create specifications and various lexical resources to delineate the taskonce the task was set we identified classes of wordswe used these classes to approximate lexical entries make time estimates and create automatic procedures to aid in 2here we use the term level of representation quite loosely to include individual 
components of what might conventionally be considered a single levelrel growth arg1 in dividends arg2ext 12 argmtmp next year 8 a possible yous troop reduction in south koreanom wargms rel reduction arg1 yous troop argmloc in south korea argmadv possible annotationfor the first nine months of the project the nombank staff consisted of one supervisor and one annotatoronce the specifications were nailed down we hired additional annotators to complete the projectthis paper provides an overview of the project including an abbreviated version of the specifications and a chronicle of our progressfigure 1 lists some sample nombank propositions along with the class of the noun predicate for each markable instance of a common noun in the penn treebank annotators create a proposition a subset of the features rel support arg0 arg1 arg2 arg3 arg4 argm paired with pointers to phrases in penn treebank ii treesa noun instance is markable if it is accompanied by one of its arguments or if it is a nominalization and it is accompanied by one of the allowable types of adjuncts the same set of adjuncts used in propbank3 the basic idea is that each triple rel sense argnum uniquely defines an argument given a particular sense of a particular rel where argnum is one of the numbered arguments and sense is one of the senses of that relthe arguments are essentially the same as the initial relations of relational grammar for example agents tend to be classified as arg0 patients and themes tend to be classified as arg1 and indirect objects of all kinds tend to be classified as arg2the lexical entry or frame for each noun provides one inventory of argument labels for each sense of that word4 each proposition consists of an instance of an argumenttaking noun plus arguments support items andor adjuncts support items are words that link arguments that occur outside an np to the nominal predicate that heads that np eg made supports we as the arg0 of decision in we made a decisionargms are adjuncts of the nounhowever we only mark the sort of adjuncts that also occur in sentences locations temporal sentence adverbial and various othersbefore we could begin annotation we needed to classify all the common nouns in the corpuswe needed to know which nouns were markable and make initial approximations of the inventories of senses and arguments for each nountoward this end we pooled a number of resources comlex syntax nomlex and the verb classes from we also used string matching techniques and hand classification in combination with programs that automatically merge crucial features of these resourcesthe result was nomlexplus a nomlexstyle dictionary which includes the original 1000 entries in nomlex plus 6000 additional entries the resulting noun classes include verbal nominalizations adjectival nominalizations and 16 other classes such as relational and partitive nouns nomlexplus helped us break down the nouns into classes which in turn helped us gain an understanding of the difficulty of the task and the manpower needed to complete the taskwe used a combination of nomlexplus and propbanks lexical entries to produce automatic approximations of noun frames for nombankthese entries specify the inventory of argument roles for the annotatorsfor nominalizations of verbs that were covered in propbank we used straightforward procedures to convert existing propbank lexical entries to nominal oneshowever other entries needed to be created by automatic means by hand or by a combination of the twofigure 2 compares the propbank lexical 
entry for the verb claim with the nombank entry for the noun claimthe noun claim and the verb claim share both the assert sense and the seize sense permitting the same set of argument roles for those senseshowever only the assert sense is actually attested in the sample propbank corpus that was available when we began working on nombankthus we added the seize sense to both the noun and verb entriesthe noun claim also has a lawsuit sense which bears an entry similar to the verb suethus our initial entry for the noun claim was a copy of the verb entry at that timean annotator edited the frames to reflect noun usage she added the second and third senses to the noun frame and updated the verb frame to include the second sensein nomlexplus we marked anniversary and advantage as cousins of nominalizations indicating that their lexical entries should be modeled respectively on the verbs commemorate and exploit although both entries needed to be modified in some respectwe use the term cousins of nominalizations to refer to those nouns which take argument structure similar to some verb but which are not morphologically related to that wordexamples are provided in figure 3 and 4for adjective nominalizations we began with simple procedures which created frames based on nomlexplus entries the entry for accuracy plus a simple example is provided in figure 5 the attributelike frame is one of the most common frames for adjective nominalizationsto cover the remaining nouns in the corpus we created classes of lexical items and manually constructed one frame for each classeach member of a class was was given the corresponding framefigure 6 provides a sample of these classes along with descriptions of their framesas with the nominalization cousins annotators sometimes had to adjust these frames for particular wordsbeginning with the propbank and nombank propositions in figure 7 it is straightforward to derive the roles arg0 agent arg1 thing remembered arg2 times celebrated noun example investors celebrated the second anniversary of black mondayroles arg0 exploiter arg1 entity exploited noun example investors took advantage of tuesday s stock rallyrel advantage support took arg0 investors arg1 of tuesdays stock rally 1attributelike roles arg1 theme noun example the accuracy of seasonal adjustments built into the employment data rel accuracy arg1 of seasonal adjustments built into propbank rel gave arg0 they arg1 a standing ovation arg2 the chefs nombank rel ovation arg0 they arg1 the chefs support gave combined propbanknombank graphical representation in figure 8 in which each role corresponds to an arc labelfor this example think of the argument structure of the noun ovation as analogous to the verb applaudaccording to our analysis they are both the givers and the applauders and the chefs are both the recipients of something given and the ones who are applaudedgave and ovation have two distinct directional relations a standing ovation is something that is given and gave serves as a link between ovation and its two argumentsthis diagram demonstrates how nombank is being designed for easy integration with propbankwe believe that this is the sort of predicate argument representation that will be needed to easily merge this work with other annotation effortsas of this writing we have created the various lexicons associated with nombankthis has allowed us to break down the task as follows there are approximately 240000 instances of common nouns in the ptb at least 36000 of these are nouns that cannot take 
arguments and therefore need not be looked at by an there are approximately 99000 instances of verbal nominalizations or related items there are approximately 34000 partitives 18000 subject nominalizations 14000 environmental nouns 14000 relational nouns and fewer instances of the various other classesapproximately 16 of the cases are instances of nouns which occur in multiple classes5 the difficulty of the annotation runs the gamut from nominalization instances which include the most arguments the most adjuncts and the most instances of support to the partitives which have the simplest and most predictable structurewe have conducted some preliminary consistency tests for about 500 instances of verbal nominalizations during the training phases of nombankthese tests yielded interannotator agreement rates of about 85 for argument roles and lower for adjunct roleswe are currently engaging in an effort to improve these results6 we have identified certain main areas of disagreement including disagreements concerning support verbs and the shared arguments that go with them disagreements about role assignment to prenominals and differences between annotators caused by errors in addition to improving our specifications and annotator help texts we are beginning to employ some automatic means for error detectionfor inconsistencies with support our main line of attack has been to outline problems and solutions in our specificationswe do not have any automatic system in effect yet although we may in the near futuresupport verbs are verbs which 5when a noun fits into multiple categories those categories may predict multiple senses but not necessarilyfor example drive has a nominalization sense and an attribute sense thus the lexical entry for drive includes both sensesin constrast teacher in the math teacher has the same analysis regardless of whether one thinks of it as the nominalization of teach or as a relational noun6consistency is the average precision and recall against a gold standardthe preliminary tests were conducted during training and only on verbal nominalizations connect nouns to one of their arguments via argument sharingfor example in john took a walk the verb took shares its subject with the noun walksupport verbs can be problematic for a number of reasonsfirst of all the concept of argument sharing is not black and whiteto illustrate these shades of gray compare the relation of mary to attack in marys attack against the alligator mary launched an attack against the alligator mary participated in an attack against the alligator mary planned an attack against the alligator and mary considered an attack against the alligatorin each subsequent example marys level of agency decreases with respect to the noun attackhowever in each case mary may still be viewed as some sort of potential attackerit turned out that the most consistent position for us to take was to assume all degrees of argumenthood were validso we would mark mary as the arg0 of attack in all these instancesthis is consistent with the way control and raising structures are marked for verbs eg john is the subject of leave and do in john did not seem to leave and john helped do the project under most accounts of verbal argument structure that take argument sharing into accountof course a liberal view of support has the danger of overgenerationconsider for example market conditions led to the cancellation of the planned exchangethe unwary annotator might assume that market conditions is the arg0 of cancellationin fact the 
combination lead to and cancellation do not have any of the typical features of support described in figure 9however the final piece of evidence is that market conditions violate the selection restrictions of cancellationthus the following paraphrase is illformed market conditions canceled the planned exchangethis suggests that market conditions is the subject of lead and not the subject of cancellationtherefore this is not an instance of support in spite of the apparent similaritywe require that the support relation be lexicalin other words there must be something special about a support verb or the combination of the support verb and the noun to license the argument sharing relationin addition to support we have cataloged several argument sharing phenomena which are markablefor example consider the sentence president bush arrived for a celebrationclearly president bush is the arg0 of celebration however arrive is not a support verbthe phrase for a celebration is a subjectoriented adverbial similar to adverbs like willingly which takes the subject of the sentence as an argumentthus president bush could also be the subject of celebration in president bush waddled into town for the celebration and many similar sentences that contain this ppfinally there are cases where argument sharing may support verbnoun pairs can be idiosyncratically connected to the point that some researchers would call them idioms or phrasal verbs eg take a walk keep tabs onthe verb can be essentially empty eg make an attack have a visitthe verbnoun combination may take a different set of arguments than either does alone eg take advantage ofsome support verbs share the subject of almost any nominalization in a particular argument slotfor example attempt shares its subject with most following nominalizations eg he attempted an attackthese are the a lot like raisingcontrol predicatesin some cases the support verb and noun are from similar semantic classes making argument sharing very likely eg fight a battle be implied by discourse processes but which we do not mark for example the words proponent and rival strongly imply that certain arguments appear in the discourse but not necessarily in the same sentencefor example in they did not want the company to fall into the hands of a rival there is an implication that the company is an arg1 of rival ie a rival should be interpreted as a rival of the company7 the connection between a rival and the company is called a bridging relation in other words fall into the hands of does not link rival with the company by means of supportthe fact that a discourse relation is responsible for this connection becomes evident when you see that the link between rival and company can cross sentence boundaries eg the company was losing moneythis was because a rival had come up with a really clever marketing strategyargm is the annotation tag used for nonarguments also known as adjunctsfor nouns it was decided to only tag such types of adjuncts as are also found with verbs eg temporal locative manner etcthe rationale for this included only the argumenttaking common nouns are being annotated and other sorts of adjuncts occur with common nouns in general narrowing the list of potential labels helped keep the labeling consistent and this was the minimum set of adjuncts that would keep the noun annotation consistent with the verb annotationunfortunately it was not always clear whether a prenominal modifier fell into one of our classes or notif an annotator felt that a modifier was somehow important 
there was a temptation to push it into one of the modifier classes even if it was not a perfect fitfurthermore some annotators had a broader view than others as to the sorts of semantic relationships that fell within particular classes of adjuncts particularly locative manner and extent unlike the support verbs which are often idiosyncratic to particular nominal predicates adjunct prenominal modifiers usually behave the same way regardless of the noun with which they occurin order to identify these lexical properties of prenominals we created a list of all time nouns from comlex syntax and we created a specialized dictionary of adjectives with adverbial properties which we call adjadvthe list of adjectiveadverb pairs in adjadv came from two sources a list of adjectives that are morphologically linked to ly adverbs created using some string matching techniques and adjectiveadverb pairs from catvar we pruned this list to only include adjectives found in the penn treebank and then edited out inappropriate word pairswe completed the dictionary by transferring portions of the comlex syntax adverb entries to the corresponding adjectiveswe now use adjadv and our list of temporal nouns to evaluate nombank annotation of modifierseach annotated left modifier is compared against our dictionariesif a modifier is a temporal noun it can bear the argmtmp role eg the temporal noun morning can fill the argmtmp slot in the morning broadcastmost other common nouns are compatible with argument role slots eg the noun news can fill the arg1 slot in the news broadcastfinally roles associated with adjectives depend on their adjadv entry eg possible can be an argmadv in possible broadcasts due to the epistemic feature encoded in the lexical entry for possible discrepancies between these procedures and the annotator are resolved on a case by case basisif the dictionary is wrong the dictionary should be changed eg root as in root because was added to the dictionary as a potential mnr adjective with a meaning like the adverb basicallyhowever if the annotator is wrong the annotation should be changed eg if an annotator marked slow as a argmtmp the program would let them know that it should be a argmmnrthis process both helps with annotation accuracy and enriches our lexical databasewe used other procedures to detect errors including nomtype argument nominalizations are nominalizations that play the role of one of the arguments in the rolesetthus the word acquirer should be assigned the arg0 role in the following example because acquirer is a subject nominalization rel acquirer arg0 acquirer arg1 of manville argmadv possible a procedure can compare the nomlexplus entry for each noun to each annotated instance of that noun to check for incompatibilitiesillformedness impossible instances are ruled outchecks are made to make sure obligatory labels are present and illegal labels are notsimilarly procedures make sure that infinitive arguments are marked with the prd function tag probable illformedness certain configurations of role labels are possible but very unlikelyfor example the same argument role should not appear more than once furthermore it is unlikely for the first word of a sentence to be an argument unless the main predicate is nearby or unless there is a nearby support verbfinally it is unlikely that there is an empty category that is an argument of a predicate noun unless the empty category is linked to some real npb wrongpos we use procedures that are part of our systems for generating glarf a predicate 
argument framework discussed in to detect incorrect parts of speech in the penn treebankif an instance is predicted to be a part of speech other than a common noun but it is still tagged that instance is flaggedfor example if a word tagged as a singular common noun is the first word in a vp it is probably tagged with the wrong part of speechthe processes described in the previous subsections are used to create a list of annotation instances to check along with short standardized descriptions of what was wrong eg wrongpos nonfunctional etcannotators do a second pass eempty categories mark invisible constituents in the treebank eg the subject of want in john wanted e to leave on just these instances we will conduct a formal evaluation of this procedure over the next monthwe are just starting a new phase in this project the creation of an automatic annotatorusing techniques similar to those described in in combination with our work on glarf we expect to build a handcoded propbanker a program designed to produce a propbanknombank style analysis from penn treebank style inputalthough the propbanker should work with input in the form of either treebank annotation or treebankbased parser output this project only requires application to the penn treebank itselfwhile previous programs with similar goals were statisticsbased this tool will be based completely on handcoded rules and lexical resourcesdepending on its accuracy automatically produced annotation should be useful as either a preprocessor or as an error detectorwe expect high precision for very simple frames eg nouns like lot as in figure 10annotators will have the opportunity to judge whether particular automatic annotation is good enough to serve as a preprocessorwe hypothesize that a comparison of automatic annotation that fails this level of accuracy against the hand annotation will still be useful for detecting errorscomparisons between the hand annotated data and the automatically annotated data will yield a set of instances that warrant further checking along the same lines as our previously described error checking mechanismsthis paper outlines our current efforts to produce nombank annotation of the argument structure for most common nouns in the penn treebank ii corpusthis is part of a larger effort to produce more detailed annotation of the penn treebankannotation for nombank is progressing quicklywe began with a single annotator while we worked on setting the task and have ramped up to four annotatorswe continue to work on various quality control procedures which we outline abovein the near future we intend to create an automatic annotation program to be used both as a preprocessor for manual annotation and as a supplement to error detectionthe argument structure of nps has been less studied both in theoretical and computational linguistics than the argument structure of verbsas with our work on nomlex we are hoping that nombank will substantially contribute to improving the nlp communitys ability to understand and process noun argument structuretion or the policy of the yous governmentwe would also like to acknowledge the people at the university of pennsylvania who helped make nombank possible including martha palmer scott cotton paul kingsbury and olga babkomalayain particular the use of propbanks annotation tool and frame files proved invaluable to our effort
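The automatic error detection described above (illegal labels, missing obligatory labels, duplicated argument roles) lends itself to a very small checker. The sketch below is ours, not NomBank's actual tooling: the proposition and frame encodings are invented for illustration.

```python
def check_proposition(prop, frame):
    """Flag likely annotation errors in a NomBank-style proposition.

    prop  : list of (label, span) pairs, e.g. [("rel", "reduction"), ("arg1", "US troop")]
    frame : dict with the roles licensed for this sense of the noun, e.g.
            {"allowed": {"arg0", "arg1", "arg2"}, "required": {"arg1"}}
    """
    warnings = []
    numbered = [label for label, _ in prop
                if label.startswith("arg") and not label.startswith("argm")]

    # Illegal labels: numbered arguments not licensed by the frame.
    for label in numbered:
        if label not in frame["allowed"]:
            warnings.append(f"illegal label {label} for this roleset")

    # Missing obligatory labels.
    for label in frame["required"]:
        if label not in numbered:
            warnings.append(f"obligatory label {label} is missing")

    # Probable ill-formedness: the same numbered role used more than once.
    for label in set(numbered):
        if numbered.count(label) > 1:
            warnings.append(f"{label} appears more than once")

    return warnings

# The troop-reduction example from the text passes cleanly:
prop = [("rel", "reduction"), ("arg1", "US troop"),
        ("argm-loc", "in South Korea"), ("argm-adv", "possible")]
frame = {"allowed": {"arg0", "arg1", "arg2"}, "required": {"arg1"}}
print(check_proposition(prop, frame))  # -> []
```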
W04-2705
The NomBank Project: An Interim Report. This paper describes NomBank, a project that will provide argument structure for instances of common nouns in the Penn Treebank II corpus. NomBank is part of a larger effort to add additional layers of annotation to the Penn Treebank II corpus. The University of Pennsylvania's PropBank, NomBank, and other annotation projects, taken together, should lead to the creation of better tools for the automatic analysis of text. This paper describes the NomBank project in detail, including its specifications and the process involved in creating the resource. We provide coarse annotations for some of the possessive constructions in the Penn Treebank, but only those that meet their criteria.
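The combined PropBank/NomBank representation illustrated earlier with the "gave the chefs a standing ovation" example can be viewed as a single set of labelled arcs over the sentence. A minimal sketch of that view, with invented class and function names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arc:
    predicate: str   # token of the predicating word, e.g. "gave" or "ovation"
    label: str       # "arg0", "arg1", ..., "support"
    filler: str      # text of the argument phrase

def merge_propositions(*props):
    """Combine PropBank and NomBank propositions for one sentence into one arc set."""
    return {arc for prop in props for arc in prop}

verb_prop = {Arc("gave", "arg0", "they"),
             Arc("gave", "arg1", "a standing ovation"),
             Arc("gave", "arg2", "the chefs")}
noun_prop = {Arc("ovation", "arg0", "they"),
             Arc("ovation", "arg1", "the chefs"),
             Arc("ovation", "support", "gave")}

graph = merge_propositions(verb_prop, noun_prop)
```

Merging the two propositions makes the shared arguments explicit: "they" and "the chefs" each fill roles of both gave and ovation, which is exactly the kind of cross-layer integration the project aims to support.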
the language of bioscience facts speculations and statements in between we explore the use of speculative language in medline abstracts results from a manual annotation experiment suggest that the notion of speculative sentence can be reliably annotated by humans in addition an experiment with automated methods also suggest that reliable automated methods might also be developed distributional observations are also presented as well as a discussion of possible uses for a system that can recognize speculative language the scientific process involves making hypotheses gathering evidence using inductive reasoning to reach a conclusion based on the data and then making new hypothesesscientist are often not completely certain of a conclusionthis lack of definite belief is often reflected in the way scientists discuss their workin this paper we focus on expressions of levels of belief the expressions of hypotheses tentative conclusions hedges and speculationsaffect is used in linguistics as a label for this topicthis is not a wellknown topic in the field of text processing of bioscience literaturethus we present a large number of examples to elucidate the variety and nature of the phenomenawe then return to a discussion of the goals importance and possible uses of this researchthe sentences in the following box contain fragments expressing a relatively high level of speculationthe level of belief expressed by an author is often difficult to ascertain from an isolated sentence and often the context of the abstract is neededall examples in the paper are from abstracts available at the nation library of medicine pubmed webpage the pubmed identifier is provided following each sentencepdcd4 may thus constitute a useful molecular target for cancer prevention as the gt box has also previously been shown to play a role in gene regulation of other genes these newly isolated sp2 and sp3 proteins might regulate expression not only of the tcr gene but of other genes as well on the basis of these complementary results it has been concluded that curcumin shows very high binding to bsa probably at the hydrophobic cavities inside the protein curcumin downregulates ki67 pcna and mutant p53 mrnas in breast cancer cells these properties may underlie chemopreventive action the next examples contain fragments that are speculative but probably less so than those abovethe containing sentence does sibilitythe examples above are speculative and the sentence below expresses a definite statement about two possibilities provide some context but the rest of the abstract if not the full text is often necessary along with enough knowledge of field to understand textremoval of the carboxy terminus enables erp to interact with a variety of etsbinding sites including the e74 site the igh enhancer pi site and the lck promoter ets site suggesting a carboxyterminal negative regulatory domain in addition we show that a component of the rasdependent mitogenactivated protein kinase pathway nerve growth factorinducible cjun exerts its effects on receptor gene promoter activity most likely through proteinprotein interactions with sp1 results suggest that one of the mechanisms of curcumin inhibition of prostate cancer may be via inhibition ofakt the previous examples contain phrases such as most likely and suggesting which in these cases explicitly mark a level of belief less than 100the next examples are not as explicitly marked to date and such as can also be used in purely definite statementsto date we find that the signaling pathway 
triggered by each type of insult is distinct however the inability of igf1 insulin and pma to stimulate 3betahsd type 1 expression by themselves in the absence of il4 indicates that the multiple pathways downstream ofirs1 and irs2 must act in cooperation with an il4specific signaling molecule such as the transcription factor stat6 these findings highlight the feasibility of modulating ho1 expression during hypothermic storage to confer tissues a better protection to counteract the damage characteristic of organ transplantation the words may and might were both used to express speculation in the examples above but are ambiguous between expressing speculation versus posthe level of lfb1 binding activity in adenoidcystic as well as trabecular tumours shows some variation and may either be lower or higher than in the nontumorous tissue the sentence below involves the adjective putative in an apositive noun phrase modifier a different syntactic form that in the previous examplesit also clearly shows that the speculative portion is often confined to only a part of the information provided in a sentencewe report here the isolation ofhuman zinc finger 2 a putative zincfinger transcription factor by motifdirected differential display of mrna extracted from histaminestimulated human vein endothelial cells of course definite sentences also come in a varietythe definite sentences below vary in topic and formaffinity chromatography and coimmunoprecipitation assays demonstrated that cjun and tag physically interact with each other however nfkappab was increased at 3 h while ap1 and creb were increased at 15 h we studied the transcript distribution of cjun junb and jund in the rat brain an inclusive model for all steps in the targeting ofproteins to subnuclear sites cannot yet be proposed we have been talking about speculative fragments and speculative sentencesfor the rest of the paper we define a speculative sentence to be one that contains at least one speculative fragmenta definite sentence contains no speculative fragmentsin this study we only considered annotations at the sentence levelhowever in future work we plan to work on subsentential annotationsour general goal is to investigate speculative speech in bioscience literature and explore how it might be used in hlt applications for bioscientistsa more specific goal is to investigate the use of speculative speech in medline abstracts because of their accessibilitythere are a number of reasons supporting the importance of understanding speculative speech in the following we expand upon these points in the contexts of i information retrieval ii information extraction and iii knowledge discoveryin the context of information retrieval an example information need might be i am looking for speculations about the x gene in liver tissue one of the authors spoke at a research department of a drug company and the biologists present expressed this sort of information needon the other hand one of the authors has also encountered the opposite need i am looking for definite statements about transcription factors that interact with nf kappa b both these information needs would be easier to fulfill if automated annotation of speculative passages was possiblein the context of information extraction a similar situation existsfor example extracting tables of proteinprotein interactions would benefit from knowing which interactions were speculative and which were definitein the context of knowledge discovery speculation might play a number of rolesone possibility 
would be to use current speculative statements about a topic of interest as a seed for the automated knowledge discovery processfor example terms could be extracted from speculative fragments and used to guide the initial steps of the knowledge discovery processa less direct but perhaps even more important use is in building testtrain datasets for knowledge discovery systemsfor example let us assume that in a 1985 publication we find a speculation about two topicsconcepts a and c being related and later in a 1995 document there is a definite statement declaring that a and c are connected via bthis pair of statements can then form the basis of a discovery problemwe may use it to test a kr systems ability to predict b as the connecting aspect between a and c and to do this using data prior to the 1995 publicationthe same example could also be used differently kr systems could be assessed on their ability to make a speculation between a and c using data up to 1985 excluding the particular publication making the speculationin this way such pairs of temporally ordered speculativedefinite statements may be of value in kr researchdifferentiating between speculative and definite statements is one part of finding such statement pairswe know of no work specifically on speculative speech in the context of text processing of bioscience literaturehowever some work on information extraction from bioscience literature has dealt with speculative speechfor example discusses uncertainty and hedging in radiology reports and their system assigns one of five levels of certainty to extracted findingstext processing systems in general have focused factual languagehowever a growing number of researchers have started work on other aspects of language such as expressing opinions style of writing etcfor example a human language technology workshop will be held this spring entitled exploring attitude and affect in text theories and applications previous work along these lines includes this research focuses on newswire texts and other texts on the topic of politics and current eventsthere has been recent work on classifying sentences from medline abstracts for the categories such as object background conclusions in addition early work built text grammars for empirical research abstracts categorized and assigned structure concerning rhetorical roles of the sentenceshowever none of this work addresses the speculative vs definite distinction we are interested inthere has also been some work on constructing test sets for knowledge discoveryseveral researchers have used the discoveries by swanson and smalheiser to test their own algorithmsthe two problems most commonly used in replication studies are their discovery of a link between raynauds disease and fish oils and their discovery of several links between migraine and magnesium the most comprehensive replication to date is which employs eight swanson and smalheiser discoveries as a test bedin the remainder of the paper we describe a manual annotation experiment we performed give preliminary results on our attempts to automatically annotate sentences as containing speculative fragments and conclude with comments on possible future workin this experiment four human annotators manually marked sentences as highly speculative low speculative or definitesome of the questions we hoped to answer with this experiment were can we characterize what a speculative sentence is can a distinction between high and low speculation be made how much speculative speech is there where are 
speculative sentences located in the abstract is there variation across topicsthe annotators were instructed to follow written annotation guidelines which we provide in appendix of this paperwe wanted to explore how well the annotators agreed on relatively abstract classifications such as requires extrapolation from actual findings and thus we refrained from writing instructions such as if the sentence contains a form of suggest then mark it as speculative into the guidelineswe chose three topics to work on and used the following pubmed queries to gather abstracts the first topic is gene regulation and is about molecular biology research on transcription factors promoter regions gene expression etcthe second topic is crohns disease which is a chronic relapsing intestinal inflammation and has a number of genes or chromosomal loci associated with itthe third topic is turmeric a spice widely used in asia and highly regarded for its curative and analgesic propertiesthese include the treatment of burns stomach ulcers and ailments and various skin diseasesthere has been a surge of interest in curcumin over the last decadeeach abstract set was prepared for annotation as follows the order of the abstracts was randomized and the abstracts were broken into sentences using mxterminator the following people performed the annotations padmini srinivasan who has analyzed crohns and turmeric documents for a separate knowledge discover research task xin ying qiu who is completely new to all three topics marc light who has some experience with gene regulation texts vladimir leontiev who is a research scientist in an anatomy and cell biology departmentit certainly would have been preferable to have four experts on the topics do the annotation but this was not possiblethe following manual annotations were performed the 63 double annotated gene regulation abstracts contained 547 sentencesthe additional abstracts marked by light contained 344 sentences summing to 891 sentences of gene regulation abstractsthus there is an average of almost 9 sentences per gene regulation abstractthe 100 turmeric abstracts contained 738 sentencesthe other sets contain twice as many sentences as abstracts since only the last two sentences where annotatedthe annotation of each sentence was performed in the context of its abstractthis was true even when only the last two sentences where annotatedthe annotation guidelines in the appendix were used by all annotatorsin addition at the start of the experiment general issues were discussed but none of the specific examples in the sets af we worked with three categories low speculative high speculative and definiteall sentences were annotated with one of thesethe general idea behind the low speculative level was that the authors expressed a statement in such a way that it is clear that it follows almost directly from results but not quitethere is a small leap of faitha high speculative statement would contain a more dramatic leap from the results mentioned in the abstractour interannotator agreement results are expressed in the following four tablesthe first table contains values for the kappa statistic of agreement for the gene regulation data and the crohns data three values were computed kappa for threeway agreement twoway and twoway due to the lack of any sentences marked high in set c a kappa value for high vs low is not possiblekappa scores between 06 and 08 are generally considered encouraging but not outstandinghvslvsd svsd hvsl genereg 053 068 003 crohns 063 063 na pun 
intendedthe following two tables are confusion matrices the first for gene regulation data and the second for the crohns data if we consider one of the annotators as defining truth then we can compute precision and recall numbers for the other annotator on finding speculative sentencesif we choose leontiev and srinivasan as defining truth then light and qiu receive the scores below precision recall 068 078 070 064 as is evident from the confusion matrices the amount of data that we redundantly annotated is small and thus the kappa numbers are at best to be taken as trendshowever it does seem that the speculative vs definite distinction can be made with some reliabilityin contrast the high speculation vs low speculation distinction cannotthe gene regulation annotations marked by light can be used to answer questions about the position of speculative fragments in abstractsconsider the histogramlike table belowthe first row refers to speculative sentences and the second to definitethe columns refer to the last sentence of an abstract the penultimate elsewhere and a row sumthe number in brackets is the raw countremember that the number of abstracts in sets a b together is 100it is clear that almost all of the speculations come towards the end of the abstractin fact the final sentence contains a speculation more often than notin addition consider the data where all sentences in an abstract were annotated there were 1456 definitive sentences and 173 speculative sentence light qiu finally if we consider the last two sentences of all the data we have 1712 definitive sentences and 381 speculative sentences definite does not receive precision and recall valuesthe substring method was run on a subset of the datasets where the gene regulation data was removedwe decided to explore the ability of an svmbased text classifier to select speculative sentences from the abstractsfor this the abstracts were first processed using the smart retrieval system in order to obtain representation vectors alternative representations were tried involving stemming and term weighting since results obtained were similar we present only results using stemming and no weightsthe classifier experiments followed a 10fold crossvalidation designwe used sv mlight packages with all settings at default valueswe ran experiments in two modesfirst we considered only the last 2 sentencesfor this we pooled all hand tagged sentences from the three topic areas second we explored classification on all sentences in the document if we assume a default strategy as a simple baseline where the majority decision is always made then we get an accuracy of 82 for the classification problem on the last two sentences data set and 89 for the all sentences data setanother baseline option is to use a set of strings and look for them as substrings in the sentencesthe following 14 strings were identified by light while annotating the gene regulation abstracts suggest potential likely may at least in part possibl potential further investigation unlikely putative insights point toward promise proposethe automated system then looks for these substrings in a sentence and if found the sentence is marked as speculative and as definite if notin the table below the scores for the three methods of annotation are listed as rowswe give accuracy on the categorization task and precision and recall numbers for finding speculative sentencesthe format is precisionrecall all as percentagesthe majority method annotating every sentence as again the results are preliminary 
since the amount of data is small and the feature set we explored was limited to wordshowever it should be noted that both the substring and the svm systems performs well suggesting that speculation in abstracts is lexically marked but in a somewhat ambiguous fashionthis conclusion is also supported by the fact that neither system used positional features and yet the precision and recall on the all sentence data set is similar to the last two sentences data setthe work presented here is preliminary but promisingit seems that the notion of speculative sentence can be characterized enabling manual annotationhowever we did not manage to characterize the distinction between high and low speculationin addition it seems likely that automated systems will be able to achieve useful accuracyfinally abstracts seem to include a fair amount of speculative informationfuture work concerning manual annotation would include revising the guidelines throwing out the high vs low distinction annotating more data annotating subsentential units annotating the focus of the speculation and annotating full text articleswe are also ignorant of work in linguistics that almost certainly exists and may be informativewe have started this process by considering and future work concerning automatic annotation includes expanding the substring system with more substrings and perhaps more complicated regular expressions expanding the feature set of the svm trying out other classification methods such as decision treesfinally we plan on building some of the applications mentioned a speculation search engine transcription factor interaction tables with a speculationdefinite column and knowledge discovery test setswe would like to thank vladimir leontiev for his time and effort annotating gene regulation abstractsin addition we would like to thank david eichmann for his assistance with our database querieswe would also like to thank lynette hirschman for assistance with the title of this paperfinally we would like to thank the anonymous workshop reviewers for their comments
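The substring baseline described above is straightforward to reproduce. The sketch below uses the cue list given in the text (which repeats "potential"; it appears once here) and a function name of our own; as in the paper, matching is naive substring search, so a cue like "may" will also fire inside longer words.

```python
# Substrings identified while annotating the gene-regulation abstracts.
SPECULATION_CUES = [
    "suggest", "potential", "likely", "may", "at least in part", "possibl",
    "further investigation", "unlikely", "putative", "insights",
    "point toward", "promise", "propose",
]

def is_speculative(sentence: str) -> bool:
    """Baseline from the study above: a sentence is speculative if it contains
    any cue as a plain (case-insensitive) substring, and definite otherwise."""
    lowered = sentence.lower()
    return any(cue in lowered for cue in SPECULATION_CUES)

# Example sentences from the abstracts discussed above:
assert is_speculative("PDCD4 may thus constitute a useful molecular target for cancer prevention.")
assert not is_speculative("We studied the transcript distribution of c-jun, junB and junD in the rat brain.")
```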
W04-3103
The Language of Bioscience: Facts, Speculations, and Statements in Between. We explore the use of speculative language in MEDLINE abstracts. Results from a manual annotation experiment suggest that the notion of a speculative sentence can be reliably annotated by humans; an experiment with automated methods suggests that reliable automated methods might also be developed. Distributional observations are presented, as well as a discussion of possible uses for a system that can recognize speculative language. We focus on introducing the problem, exploring annotation issues, and outlining potential applications rather than on the specifics of the machine-learning approach, and we present results using a manually crafted substring-matching classifier and a supervised SVM on a collection of MEDLINE abstracts. We explore issues with annotating speculative language in biomedicine, outline potential applications, and present a study on annotating hedges in biomedical documents.
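The SVM experiment summarized above used SMART-derived vectors with stemming and no term weighting, SVM-light with default settings, and 10-fold cross-validation. The sketch below only approximates that setup with scikit-learn components (no stemming, LinearSVC in place of SVM-light), so any numbers it produces should not be expected to match the paper's.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

def evaluate_speculation_classifier(sentences, labels):
    """10-fold cross-validation of a bag-of-words SVM sentence classifier.

    sentences : list of raw sentence strings
    labels    : list of 1 (speculative) / 0 (definite) tags
    """
    model = make_pipeline(
        CountVectorizer(lowercase=True, binary=True),  # unweighted term presence; stemming omitted
        LinearSVC(),                                   # stand-in for SVM-light with default settings
    )
    scores = cross_val_score(model, sentences, labels, cv=10, scoring="accuracy")
    return scores.mean()
```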
integrated annotation for biomedical information extraction we describe an approach to two areas of biomedical information extraction drug development and cancer genomics we have developed a framework which includes corpus annotation integrated at multiple levels a treebank containing syntactic structure a propbank containing predicateargument structure and annotation of entities and relations among the entities crucial to this approach is the proper characterization of entities as relation components which allows the integration of the entity annotation with the syntactic structure while retaining the capacity to annotate and extract more complex events we are training statistical taggers using this annotation for such extraction as well as using them for improving the annotation process work over the last few years in literature data mining for biology has progressed from linguistically unsophisticated models to the adaptation of natural language processing techniques that use full parsers and coreference to extract relations that span multiple sentences in this work we describe an approach to two areas of biomedical information extraction drug development and cancer genomics that is based on developing a corpus that integrates different levels of semantic and syntactic annotationthis corpus will be a resource for training machine learning algorithms useful for information extraction and retrieval and other datamining applicationswe are currently annotating only abstracts although in the future we plan to expand this to fulltext articleswe also plan to make publicly available the corpus and associated statistical taggerswe are collaborating with researchers in the division of oncology at the childrens hospital of philadelphia with the goal of automatically mining the corpus of cancer literature for those associations that link specified variations in individual genes with known malignanciesin particular we are interested in extracting three entities in the following relationship gene x with genomic variation event y is correlated with malignancy zfor example wt1 is deleted in wilms tumor s such statements found in the literature represent individual genevariationmalignancy observablesa collection of such observables serves two important functionsfirst it summarizes known relationships between genes variation events and malignancies in the cancer literatureas such it can be used to augment information available from curated public databases as well as serve as an independent test for accuracy and completeness of such repositoriessecond it allows inferences to be made about gene variation and malignancy associations that may not be explicitly stated in the literature both at the fact and entity instance levelssuch inferences provide testable hypotheses and thus future research targetsthe other major area of focus in collaboration with researchers in the knowledge integration and discovery systems group at glaxosmithkline is the extraction of information about enzymes focusing initially on compounds that affect the activity of the cytochrome p450 family of proteinsfor example the goal is to see a phrase like amiodarone weakly inhibited cyp2c9 cyp2d6 and cyp3a4mediated activities with ki values of 4512716 and extract the facts amiodarone inhibits cyp2c9 with ki4512716 amiodarone inhibits cyp2d6 with ki4512716 amiodarone inhibits cyp3a4 with ki4512716 previous work at gsk has used search algorithms that are based on pattern matching rules filling template slotsthe rules rely on identifying 
the relevant passages by first identifying compound names and then associating them with a limited number of relational terms such as inhibit or inactivatethis is similar to other work in biomedical extraction projects creating good patternaction rules for an ie problem is far from simplethere are many complexities in the different ways that a relation can be expressed in language such as syntactic alternations and the heavy use of coordinationwhile sufficiently complex patterns can deal with these issues it requires a good amount of time and effort to build such handcrafted rules particularly since such rules are developed for each specific problema corpus that is annotated with sufficient syntactic and semantic structure offers the promise of training taggers for quicker and easier information extractionthe corpus that we are developing for the two different application demands consists of three levels of annotation the entities and relations among the entities for the oncology or cyp domain syntactic structure and predicateargument structure this is a novel approach from the pointofview of nlp since previous efforts at treebanking and propbanking have been independent of the special status of any entities and previous efforts at entity annotation have been independent of corresponding layers of syntactic and semantic structurethe decomposition of larger entities into components of a relation worthwhile by itself on conceptual grounds for entity definition also allows the component entities to be mapped to the syntactic structurethese entities can be viewed as semantic types associated with syntactic constituents and so our expectation is that automated analyses of these related levels will interact in a mutually reinforcing and beneficial way for development of statistical taggersdevelopment of such statistical taggers is proceeding in parallel with the annotation effort and these taggers help in the annotation process as well as being steps towards automatic extractionin this paper we focus on the aspects of this project that have been developed and are in production while also trying to give enough of the overall vision to place the work that has been done in contextsection 2 discusses some of the main issues around the development of the guidelines for entity annotation for both the oncology and inhibition domainssection 3 first discusses the overall plan for the different levels of annotation and then focuses on the integration of the two levels currently in production entity annotation and syntactic structuresection 4 describes the flow of the annotation process including the development of the statistical taggers mentioned abovesection 5 is the conclusionannotation has been proceeding for both the oncology and the inhibition domainshere we give a summary of the main features of the annotation guidelines that have been developedwe have been influenced in this by previous work in annotation for biomedical information extraction however we differ in the domains we are annotating and the design philosophy for the entity guidelinesfor example we have been concentrating on explicit concepts for entities like genes rather than developing a widerange ontology for the various physical instantiationsgene entity for the sake of this project the definition for gene entity has two significant characteristicsfirst gene refers to a composite entity as opposed to the strict biological definitionas has been noted by others there are often ambiguities in the usage of the entity namesfor example it 
is sometimes unclear as to whether it is the gene or protein being referenced or the same name might refer to the gene or the protein at different locations in the same documentour approach to this problem is influenced by the named entity annotation in the automatic content extraction project in which geopolitical entities can have different roles such as location or organizationanalogously we consider a gene to be a composite entity that can have different roles throughout a documentstandardization of gene references between different texts and between gene synonyms is handled by externally referencing each instance to a standard ontology in the context of this project gene refers to a conceptual entity as opposed to the specific manifestation of a gene therefore we consider genes to be abstract concepts identifying genomic regions often associated with a function such as myc or trkb we do not consider actual instances of such genes within the geneentity domainsince we are interested in the association between geneentities and malignancies for this project genes are of interest to us when they have an associated variation eventtherefore the combination of gene entities and variation events provides us with an evoked entity representing the specific instance of a genevariation events as relations variations comprise a relationship between the following entities type location originalstate and alteredstate these four components represent the key elements necessary to describe any genomic variation eventvariations are often underspecified in the literature frequently having only two or three of these specificationscharacterizing individual variations as a relation among such components provides us with a great deal of flexibility 1 it allows us to capture the complete variation event even when specific components are broadly spaced in the text often spanning multiple sentences or even paragraphs 2 it provides us with a convenient means of tracking anaphora between detailed descriptions and 3 it provides a single structure capable of capturing the breadth of variation specifications malignancy the guidelines for malignancy annotation are under developmentwe are planning to define it in a manner analogous to variation whereby a malignancy is composed of various attribute types in the cyp inhibition annotation task we are tagging three types of entities each category has its own questions and uncertaintiesnames like cyp2c19 and cytochrome p450 enzymes proclaim their membership but there are many aliases and synonyms that do not proclaim themselves such as 1720lyasewe are compiling a list of such namesother substances is a potentially huge and vaguelydelimited set which in the current corpus includes grapefruit juice and red wine as well as more obviously biochemical entities like polyunsaturatedfatty acids and erythromycinthe quantitative measurements we are directly interested in are those directly related to inhibition such as ic50 and kwe tag the name of the measurement the numerical value and the unitfor example in the phrase was inhibited by troleandomycin ed50 is the name 1 the value and microm the unitwe are also tagging other measurements since it is easy to do and may provide valuable information for future ie workas has been noted in the literature on biomedical ie the same relation can take a number of syntactic formsfor example the family of words based on inhibit occurs commonly in medline abstracts about cyp enzymes in patterns like a inhibited b a inhibited the catalytic 
activity of b inhibition of b by a etcsuch alternations have led to the use of patternmatching rules to match all the relevant configurations and fill in template slots based on the resulting pattern matchesas discussed in the introduction dealing with such complications in patterns can take much time and effortour approach instead is to build an annotated corpus in which the predicateargument information is annotated on top of the parsing annotations in the treebank the resulting corpus being called a proposition bank or propbankthis newly annotated corpus is then used for training processors that will automatically extract such structures from new examplesin a propbank for biomedical text the types of inhibit examples listed above would consistently have their compounds labeled as arg0 and their enzymes labeled as arg1 for nominalized forms such as a is an inhibitor of b a caused inhibition of b inhibition of b by a as well the standard a inhibits bwe would also be able to label adjuncts consistently such as the with prepositional phrase in cyp3a4 activity was decreased by l s and f with ic values of about 200 mmin accordance with other calibratable verbs such as rise fall decline etc this phrase would be labeled as an arg2extent regardless of its syntactic rolea propbank has been built on top of the penn treebank and has been used to train semantic taggers for extracting argument roles for the predicates of interest regardless of the particular syntactic context1 such semantic taggers have been developed by using machine learning techniques trained on the penn propbank however the penn treebank and propbank involve the annotation of wall street journal textthis text being a financial domain differs in significant ways from the biomedical text and so it is necessary for this approach to have a corpus of biomedical texts such as medline articles annotated for both syntactic structure and shallow semantic structure in this project the syntactic and semantic annotation is being done on a corpus which is also being annotated for entities as described in section 2since semantic taggers of the sort described above result in semantic roles assigned to syntactic tree constituents it is desirable to have the entities correspond to syntactic constituents so that the semantic roles are assigned to entitiesthe entity information can function as type information and be taken advantage of by learning algorithms to help characterize the properties of the terms filling specified roles in a given predicatethis integration of these three different annotation levels including the entities is being done for the first time2 and we discuss here three main challenges to this correspondence between entities and constituents entities that are large enough to cut across multiple constituents entities within prenominal modifiers and coordination3 relations and large entities one major area of concern is the possibility of entities that contain more than one syntactic constituent and do not match any node in the syntax treefor example as discussed in section 2 a variation event includes material on a variations type location and state and can cut not only across constituents but even sentences and paragraphsa simple example is point mutations at codon 12 containing both the nominal and following np note that while in isolation this could also be considered one syntactic constituent the np and pp together the actual context is point mutations at codon 12 in duodenal lavage fluidsince all pps are attached at the same 
level at codon 12 and in duodenal lavage fluid are sisters and so there is no constituent consisting ofjust point mutations at codon 12casting the variation event as a relation between different component entities allows the component entities to correspond to tree constituents while retaining the capacity to annotate and search for more complex eventsin this case one component entity point mutations cor2an influential precursor to this integration is the system described in our work is in much the same spirit although the representation of the predicateargument structure via propbank and the linkage to the entities is quite different as well as of course the domain of annotation3there are cases where the entities are so minimal that they are contained within a np not including the determiner such as cpg site in the np a cpg site entitieswe are not as concerned about these cases since we expect that such entity information properly contained within a base np can be associated with the full base np responds to a np node and at codon 12 is corresponds to the pp node that is the nps sisterat the same time the relation annotation contains the information relating these two constituentssimilarly while the malignancy entity definition is currently under development as mentioned in section 21 a guiding principle is that it will also be treated as a relation and broken down into component entitieswhile this also has conceptual benefits for the annotation guidelines it has the fortunate effect of making such otherwise syntaxunfriendly malignancies as colorectal adenomas containing early cancer and acute myelomonocytic leukemia in remission amenable for mapping the component parts to syntactic nodesentities within prenominal modifiers while we are for the most part following the penn treebank guidelines we are modifying them in two important aspectsone concerns the prenominal modifiers which in the penn treebank were left flat with no structure but in this biomedical domain contain much of the information eg cancerassociated autoimmune antigennot only would this have had no annotation for structure but even more bizarrely cancerassociated would have been a single token in the penn treebank thus making it impossible to capture the information as to what is associated with whatwe have developed new guidelines to assign structure to prenominal entities such as breast cancer as well as changed the tokenization guidelines to break up tokens such as cancerassociatedcoordination we have also modified the treebank annotation to account for the wellknown problem of entities that are discontinuous within a coordination structure eg k and hras where the entities are kras and hrasour annotation tool allows for discontinuous entities so that both kras and hras are annotated as genesunder standard penn treebank guidelines for tokenization and syntactic structure this would receive the flat structurek and hras in which there is no way to directly associate the entity kras with a constituent nodewe have modified the treebank guidelines so that kras and hras are both constituents with the ras part ofkras represented with an empty category coindexed with ras in hras4as mentioned in the introduction statistical taggers are being developed in parallel with the annotation effortwhile such taggers are part of the final goal of the project providing the building blocks for extracting entities and relations they are also useful in the annotation process itself so that the annotators only need to perform correction of 
automatically tagged data instead of starting from scratchwe are currently annotating medline abstracts for both the oncology and cyp domainsthe flowchart for the annotation process is shown in figure 1tokenization postagging entity annotation and treebanking are in full productionpropbank annotation and the merging of the entities and treebanking remain to be integrated into the current workflowthe table in figure 2 shows the number of abstracts completed for each annotation areathe annotation sequence begins with tokenization and partofspeech annotatingwhile both aspects are similar to those used for the penn treebank there are some differences partly alluded to in section 3tokens are somewhat more finegrained than in the penn treebank so that hras eg would consist of three tokens h and rastokenized and partofspeech annotated files are then sent to the entity annotators either for oncology or cyp depending on which domain the abstract has been chosen forthe entities described in section 2 are annotated at this stepwe are using wordfreak a javabased linguistic annotation tool5 for annotation of tokenization pos and entitiesfigure 3 is a screen shot of the oncology domain annotation here showing a variation relation being created out of component entities for type and locationin parallel with the entity annotation a file is treebanked ie annotated for its syntactic structurenote that this is done independently of the entity annotationthis is because the treebanking guidelines are relatively stable while the entity definitions can require a significant period of study before stabilizing and with the parallel treatment the treebanking can proceed without waiting for the entity annotationhowever this does mean that to produce the desired integrated annotation the entity and treebanking annotations need to be merged into one representationthe consideration of the issues described in section 3 has been carried out for the purpose of allowing this integration of the treebanking and entity annotationthis has been completed for some pilot documents but the full merging remains to be integrated into the workflow systemuntil recently the partofspeech annotation was done by handcorrecting the results of tagging the data with a partofspeech tagger trained on a modified form of the penn treebank6 the tagger is a maximumentropy model utilizing the opennlp package available athttpwwwsfnetprojectsopennlpit has now been retrained using 315 files figure 4 shows the improvement of the new vs the old pos tagger on the same 294 files that have been handcorrectedthese results are based on testing files that have already been tokenized and thus are an evaluation only of the pos tagger and not the tokenizerwhile not directly comparable to results such as due to the different tag sets and tokenization they are in the same general range7 the oncology and cyp entity annotation as well as the treebanking are still being done fully manually although that will change in the near futureinitial results for a tagger to identify the various components of a variation relation are promising although not yet integrated into annotation processthe tagger is based on the implementation of conditional random fields in the mallet toolkit briefly conditional random fields are loglinear models that rely on weighted features to make predictions on the inputfeatures used by our system include standard pattern matching and word features as well as some expertcreated regular expression features8using 10fold crossvalidation on 264 
labelled abstracts containing 551 types, 1064 locations, and 557 states, we obtained the following results. An entity is considered correctly identified if and only if it matches the human labeling by both category and span; at this stage we have not distinguished between initial and final states. While it is difficult to compare taggers that tag different types of entities, CRFs have been utilized for state-of-the-art results in NP-chunking and gene and protein tagging. Currently we are beginning to investigate methods to identify relations over the variation components that are extracted using the entity tagger. We have described here an integrated annotation approach for two areas of biomedical information extraction. We discussed several issues that have arisen for this integration of annotation layers. Much effort has been spent on the entity definitions and how they relate to the higher-level concepts which are desired for extraction. There are promising initial results for training taggers to extract these entities. Next steps in the project include continued annotation of the layers we are currently doing, integration of the level of predicate-argument annotation, and further development of the statistical taggers, including taggers for identifying relations over their component entities. The project described in this paper is based at the Institute for Research in Cognitive Science at the University of Pennsylvania and is supported by grant EIA-0205448 from the National Science Foundation's Information Technology Research program. We would like to thank Aravind Joshi, Jeremy Lacivita, Paula Matuszek, Tom Morton, and Fernando Pereira for their comments.
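As a small illustration of the exact-match evaluation criterion described above (a predicted entity counts as correct only if both its category and its span match a gold annotation), here is a hedged sketch. The entity tuples and the example data are hypothetical, not taken from the corpus.

```python
# Entities are (category, start, end) tuples; exact match on category and span.
def exact_match_prf(gold: set, predicted: set):
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = {("type", 3, 5), ("location", 7, 9), ("state", 12, 13)}
pred = {("type", 3, 5), ("location", 7, 10)}   # second span is off by one token
print(exact_match_prf(gold, pred))              # (0.5, 0.333..., 0.4)
```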
W04-3111
Integrated annotation for biomedical information extraction. We describe an approach to two areas of biomedical information extraction, drug development and cancer genomics. We have developed a framework which includes corpus annotation integrated at multiple levels: a treebank containing syntactic structure, a propbank containing predicate-argument structure, and annotation of entities and relations among the entities. Crucial to this approach is the proper characterization of entities as relation components, which allows the integration of the entity annotation with the syntactic structure while retaining the capacity to annotate and extract more complex events. We are training statistical taggers using this annotation for such extraction, as well as using them for improving the annotation process.
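Relating to the entities-as-relation-components design described in the paper above, the sketch below shows one plausible way to represent a variation event as a relation over its component entities (type, location, original state, altered state), any of which may be left unspecified by the text. The field names follow the component types named in the paper; the example instance uses the "point mutations at codon 12" example from the text and is otherwise illustrative.

```python
# Hedged sketch of a variation-event relation with optional components.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VariationEvent:
    var_type: Optional[str] = None        # e.g. "point mutation"
    location: Optional[str] = None        # e.g. "codon 12"
    original_state: Optional[str] = None  # e.g. the wild-type residue, if stated
    altered_state: Optional[str] = None   # e.g. the substituted residue, if stated

event = VariationEvent(var_type="point mutation", location="codon 12")
print(event)  # components not mentioned in the text simply stay None
```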
maxmargin parsing we present a novel discriminative approach to parsing inspired by the largemargin criterion underlying support vector machines our formulation uses a factorization analogous to the standard dynamic programs for parsing in particular it allows one to efficiently learn a model which discriminates among the entire space of parse trees as opposed to reranking the top few candidates our models can condition on arbitrary features of input sentences thus incorporating an important kind of lexical information without the added algorithmic complexity of modeling headedness we provide an efficient algorithm for learning such models and show experimental evidence of the models improved performance over a natural baseline model and a lexicalized probabilistic contextfree grammar recent work has shown that discriminative techniques frequently achieve classification accuracy that is superior to generative techniques over a wide range of tasksthe empirical utility of models such as logistic regression and support vector machines in flat classification tasks like text categorization wordsense disambiguation and relevance routing has been repeatedly demonstratedfor sequence tasks like partofspeech tagging or namedentity extraction recent topperforming systems have also generally been based on discriminative sequence models like conditional markov models or conditional random fields a number of recent papers have considered discriminative approaches for natural language parsing broadly speaking these approaches fall into two categories reranking and dynamic programming approachesin reranking methods an initial parser is used to generate a number of candidate parsesa discriminative model is then used to choose between these candidatesin dynamic programming methods a large number of candidate parse trees are represented compactly in a parse tree forest or chartgiven sufficiently local features the decoding and parameter estimation problems can be solved using dynamic programming algorithmsfor example describe approaches based on conditional loglinear models where variants of the insideoutside algorithm can be used to efficiently calculate gradients of the loglikelihood function despite the exponential number of trees represented by the parse forestin this paper we describe a dynamic programming approach to discriminative parsing that is an alternative to maximum entropy estimationour method extends the maxmargin approach of taskar et al to the case of contextfree grammarsthe present method has several compelling advantagesunlike reranking methods which consider only a prepruned selection of good parses our method is an endtoend discriminative model over the full space of parsesthis distinction can be very significant as the set of nbest parses often does not contain the true parsefor example in the work of collins 41 of the correct parses were not in the candidate pool of 30best parsesunlike previous dynamic programming approaches which were based on maximum entropy estimation our method incorporates an articulated loss function which penalizes larger tree discrepancies more severely than smaller ones1 moreover like perceptronbased learning it requires only the calculation of viterbi trees rather than expectations over all trees in practice it converges in many fewer iterations than crflike approachesfor example while our approach generally converged in 2030 iterations clark and curran report experiments involving 479 iterations of training for one model and 1550 iterations for anotherthe 
primary contribution of this paper is the extension of the maxmargin approach of taskar et al to context free grammarswe show that this framework allows highaccuracy parsing in cubic time by exploiting novel kinds of lexical informationin the discriminative parsing task we want to learn a function f x y where x is a set of sentences and y is a set of valid parse trees according to a fixed grammar g g maps an input x e x to a set of candidate parses g c y2 we assume a loss function l x x y x y rthe function l measures the penalty for proposing the parse yˆ for x when y is the true parsethis penalty may be defined for example as the number of labeled spans on which the two trees do not agreein general we assume that l 0 for y ˆygiven labeled training examples for i 1 n we seek a function f with small expected loss on unseen sentencesthe functions we consider take the following linear discriminant form where denotes the vector inner product w e rd and φ is a featurevector representation of a parse tree φ x x y rd 3 note that this class of functions includes viterbi pcfg parsers where the featurevector consists of the counts of the productions used in the parse and the parameters w are the logprobabilities of those productionsthe traditional method of estimating the parameters of pcfgs assumes a generative grammar that defines p and maximizes the joint loglikelihood ei log p a alternative probabilistic approach is to estimate the parameters discriminatively by maximizing conditional loglikelihoodfor example the maximum entropy approach defines a conditional loglinear model where zw eyg exp and maximizes the conditional loglikelihood of the sample ei log p in this paper we advocate a different estimation criterion inspired by the maxmargin principle of svmsmaxmargin estimation has been used for parse reranking recently it has also been extended to graphical models and shown to outperform the standard maxlikelihood methodsthe main idea is to forego the probabilistic interpretation and directly ensure that for all i in the training datawe define the margin of the parameters w on the example i and parse y as the difference in value between the true parse yi and y where biy b and biyz bintuitively the size of the margin quantifies the confidence in rejecting the mistaken parse y using the function fw modulo the scale of the parameters wwe would like this rejection confidence to be larger when the mistake y is more severe iel is largewe can express this desideratum as an optimization problem where liy lthis quadratic program aims to separate each y g from the target parse yi by a margin that is proportional to the loss lafter a standard transformation in which maximizing the margin is reformulated as minimizing the scale of the weights we get the following program the addition of nonnegative slack variables si allows one to increase the global margin by paying a local penalty on some outlying examplesthe constant c dictates the desired tradeoff between margin size and outliersnote that this formulation has an exponential number of constraints one for each possible parse y for each sentence iwe address this issue in section 4in svms the optimization problem is solved by working with the dual of a quadratic program analogous to eq2for our problem just as for svms the dual has important computational advantages including the kernel trick which allows the efficient use of highdimensional features spaces endowed with efficient dot products moreover the dual view plays a crucial role in circumventing the 
exponential size of the primal problemin eq2 there is a constraint for each mistake y one might make on each example i which rules out that mistakefor each mistakeexclusion constraint the dual contains a variable αiyintuitively the magnitude of αiy is proportional to the attention we must pay to that mistake in order not to make itthe dual of eq2 is given by where iiy i indicates whether y is the true parse yigiven the dual solution α the solution to the primal problem w is simply a weighted linear combination of the feature vectors of the correct parse and mistaken parses this is the precise sense in which mistakes with large α contribute more strongly to the modelthere is a major problem with both the primal and the dual formulations above since each potential mistake must be ruled out the number of variables or constraints is proportional to g the number of possible parse treeseven in grammars without unary chains or empty elements the number of parses is generally exponential in the length of the sentence so we cannot expect to solve the above problem without any assumptions about the featurevector representation b and loss function l for that matter for arbitrary representations to find the best parse given a weight vector we would have no choice but to enumerate all trees and score themhowever our grammars and representations are generally structured to enable efficient inferencefor example we usually assign scores to local parts of the parse such as pcfg productionssuch factored models have shared substructure properties which permit dynamic programming decompositionsin this section we describe how this kind of decomposition can be done over the dual α distributionsthe idea of this decomposition has previously been used for sequences and other markov random fields in taskar et al but the present extension to cfgs is novelfor clarity of presentation we restrict the grammar to be in chomsky normal form where all rules in the grammar are of the form ha b ci or ha ai where a b and c are nonterminal symbols and a is some terminal symbolfor example figure 1 shows a tree in this formwe will represent each parse as a set of two types of partsparts of the first type are single constituent tuples consisting of a nonterminal a startpoint s and endpoint e and sentence i such as r in figure 1in this representation indices s and e refer to positions between words rather than to words themselvesthese parts correspond to the traditional notion of an edge in a tabular parserparts of the second type consist of cfruletuples the tuple specifies a particular rule a b c and its position including split point m within the sentence i such as q in figure 1 and corresponds to the traditional notion of a traversal in a tabular parsernote that parts for a basic pcfg model are not just rewrites but rather anchored itemsformally we assume some countable set of parts r we also assume a function r which maps each object e x x y to a finite subset of r thus r is the set of parts belonging to a particular parseequivalently the function r maps a derivation y to the set of parts which it includesbecause all rules are in binarybranching form r is constant across different derivations y for the same input sentence xwe assume that the feature vector for a sentence and parse tree decomposes into a sum of the feature vectors for its parts in cfgs the function o can be any function mapping a rule production and its position in the sentence x to some feature vector representationfor example o could include features which 
identify the rule used in the production or features which track the rule identity together with features of the words at positions s m e and neighboring positions in the sentence xin addition we assume that the loss function l also decomposes into a sum of local loss functions l over parts as follows one approach would be to define l to be 0 only if the nonterminal a spans words s e in the derivation y and 1 otherwisethis would lead to l tracking the number of constituent errors in ˆy where a constituent is a tuple such as another more strict definition would be to define l to be 0 if r of the type is in the derivation y and 1 otherwisethis definition would lead to l being the number of cfruletuples in yˆ which are not seen in y4 finally we define indicator variables i which are 1 if are r 0 otherwisewe also define sets are yegr for the training examples i 1 n thus r is the set of parts that is seen in at least one of the objects y gthe dual in eq3 involves variables αiy for all i 1 n y g and the objective is quadratic in these α variablesin addition it turns out that the set of dual variables αi αiy y g for each example i is constrained to be nonnegative and sum to 1it is interesting that while the parameters w lose their probabilistic interpretation the dual variables αi for each sentence actually form a kind of probability distributionfurthermore the objective can be expressed in terms of expectations with respect to these distributions we now consider how to efficiently solve the maxmargin optimization problem for a factored modelas shown in taskar et al the dual in eq3 can be reframed using marginal termswe will also find it useful to consider this alternative formulation of the dualgiven dual variables α we define the marginals µir for all i r as follows since the dual variables αi form probability distributions over parse trees for each sentence i the marginals µir represent the proportion of parses that would contain part r if they were drawn from a distribution αinote that the number of such marginal terms is the number of parts which is polynomial in the length of the sentencenow consider the dual objective q in eq3it can be shown that the original objective q can be expressed in terms of these 4the constituent loss function does not exactly correspond to the standard scoring metrics such as f1 or crossing brackets but shares the sensitivity to the number of differences between treeswe have not thoroughly investigated the exact interplay between the various loss choices and the various parsing metricswe used the constituent loss in our experiments marginals as qm where µ is the vector with components µir and qm is defined as where liare l φiare φ and iiare ithis follows from substituting the factored definitions of the feature representation φ and loss function l together with definition of marginalshaving expressed the objective in terms of a polynomial number of variables we now turn to the constraints on these variablesthe feasible set for α is now let δm be the space of marginal vectors which are feasible then our original optimization problem can be reframed as maxµeam qmfortunately in case of pcfgs the domain δm can be described compactly with a polynomial number of linear constraintsessentially we need to enforce the condition that the expected proportions of parses having particular parts should be consistent with each otherour marginals track constituent parts ha s e ii and cfruletuple parts ha b c s m e ii the consistency constraints are precisely the insideoutside 
probability relations where ni is the length of the sentencein addition we must ensure nonnegativity and normalization to 1 µiare 0 x µia0ni 1a the number of variables in our factored dual for cfgs is cubic in the length of the sentence while the number of constraints is quadraticthis polynomial size formulation should be contrasted with the earlier formulation in collins which has an exponential number of constraintswe have reduced the problem to a polynomial size qp which in principle can be solved using standard qp toolkitshowever although the number of variables and constraints in the factored dual is polynomial in the size of the data the number of coefficients in the quadratic term in the objective is very large quadratic in the number of sentences and dependent on the sixth power of sentence lengthhence in our experiments we use an online coordinate descent method analogous to the sequential minimal optimization used for svms and adapted to structured maxmargin estimation in taskar et al we omit the details of the structured smo procedure but the important fact about this kind of training is that similar to the basic perceptron approach it only requires picking up sentences one at a time checking what the best parse is according to the current primal and dual weights and adjusting the weightswe used the penn english treebank for all of our experimentswe report results here for each model and setting trained and tested on only the sentences of length 15 wordsaside from the length restriction we used the standard splits sections 221 for training 22 for development and 23 for final testing as a baseline we trained a cnf transformation of the unlexicalized model of klein and manning on this datathe resulting grammar had 3975 nonterminal symbols and contained two kinds of productions binary nonterminal rewrites and tagword rewrites5 the scores for the binary rewrites were estimated using unsmoothed relative frequency estimatorsthe tagging rewrites were estimated with a smoothed model of p also using the model from klein and manning figure 3 shows the performance of this model 8799 f1 on the test setfor the basic maxmargin model we used exactly the same set of allowed rewrites as in the generative case but estimated their weights according to the discriminative method of section 4tagword production weights were fixed to be the log of the generative p modelthat is the only change between generative and basic is the use of the discriminative maximummargin criterion in place of the generative maximum likelihood onethis change alone results in a small improvement on top of the basic model we first added lexical features of each span this gave a lexical modelfor a span of a sentence x the base lexical features were these base features were conjoined with the span length for spans of length 3 and below since short spans have highly distinct behaviors the features are lexical in the sense than they allow specific words 5unary rewrites were compiled into a single compound symbol so for example a subjectgapped sentence would have label like svpthese symbols were expanded back into their source unary chain before parses were evaluated and word pairs to influence the parse scores but are distinct from traditional lexical features in several waysfirst there is no notion of headword here nor is there any modeling of wordtoword attachmentrather these features pick up on lexical trends in constituent boundaries for example the trend that in the sentence the screen was a sea of red the span between the 
word was and the word of is unlikely to be a constituentthese nonhead lexical features capture a potentially very different source of constraint on tree structures than headargument pairs one having to do more with linear syntactic preferences than lexical selectionregardless of the relative merit of the two kinds of information one clear advantage of the present approach is that inference in the resulting model remains cubic since the dynamic program need not track items with distinguished headwordswith the addition of these features the accuracy jumped past the generative baseline to 8844as a concrete example of how these features can sway a decision consider the sentence the egyptian president said he would visit libya today to resume the talksthe generative model incorrectly considers libya today to be a base nphowever this analysis is counter to the trend of today being a oneword constituenttwo features relevant to this trend are and these features represent the preference of the word today for being the first and and last word in constituent spans of length 16 in the lexical model however these features have quite large positive weights 062 eachas a result this model makes this parse decision correctlyanother kind of feature that can usefully be incorporated into the classification process is the output of other auxiliary classifiersfor this kind of feature one must take care that its reliability on the training not be vastly greater than its reliability on the test setotherwise its weight will be artificially highto ensure that such features are as noisy on the training data as the test data we split the training into two foldswe then trained the auxiliary classifiers in jacknife fashion on each fold and using their predictions as features on the other foldthe auxiliary classifiers were then retrained on the entire training set and their predictions used as features on the development and test setswe used two such auxiliary classifiers giving a prediction feature for each span the first feature was the prediction of the generative baseline this feature added little information but made the learning phase fasterthe second feature was the output of a flat classifier which was trained to predict whether single spans in isolation were constituents or not based on a bundle of features including the list above but also the following the preceding first last and following tag in the span pairs of tags such as precedingfirst lastfollowing precedingfollowing firstlast and the entire tag sequencetag features on the test sets were taken from a pretagging of the sentence by the tagger described in toutanova et al while the flat classifier alone was quite poor the resulting maxmargin model scored 8912 f1to situate these numbers with respect to other models the parser in collins which is generative lexicalized and intricately smoothed scores 8869 over the same traintest configurationit is worth considering the cost of this kind of methodat training time discriminative methods are inherently expensive since they all involve iteratively checking current model performance on the training set which means parsing the training set in our experiments 1020 iterations were generally required for convergence there are several nice aspects of the approach described herefirst it is driven by the repeated extraction over the training examples of incorrect parses which the model currently prefers over the true parsesthe procedure that provides these parses need not sum over all parses nor even necessarily find 
the Viterbi parses, to function. This allows a range of optimizations not possible for CRF-like approaches, which must extract feature expectations from the entire set of parses (one tradeoff is that this approach is more inherently sequential and harder to parallelize). Nonetheless, generative approaches are vastly cheaper to train, since they must only collect counts from the training set. On the other hand, the max-margin approach does have the potential to incorporate many new kinds of features over the input, and the current feature set allows limited lexicalization in cubic time, unlike other lexicalized models. We have presented a maximum-margin approach to parsing which allows a discriminative SVM-like objective to be applied to the parsing problem. Our framework permits the use of a rich variety of input features, while still decomposing in a way that exploits the shared substructure of parse trees in the standard way. On a test set of 15-word sentences, the feature-rich model outperforms both its own natural generative baseline and the Collins parser on F1. While, like most discriminative models, it is compute-intensive to train, it allows fast parsing, remaining cubic despite the incorporation of lexical features. This tradeoff between the complexity, accuracy, and efficiency of a parsing model is an important area of future research. This work was supported in part by the Department of the Interior/DARPA under contract number NBCHD030010, a Microsoft Graduate Fellowship to the second author, and National Science Foundation grant 0347631 to the third author.
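The quadratic program referred to above as Eq. 2 did not survive extraction intact. The block below is a hedged reconstruction from the surrounding prose, written in the standard margin-rescaled structured-SVM form; constant placement and notation (G(x_i) for the candidate parses of x_i, L for the loss, xi_i for the slack variables) may differ in minor details from the paper's own statement.

```latex
% Hedged reconstruction of the margin and the primal max-margin program (Eq. 2).
\begin{align*}
  \text{margin on }(i, y):\quad
    & w^{\top}\Phi(x_i, y_i) \;-\; w^{\top}\Phi(x_i, y) \\[4pt]
  \min_{w,\;\xi \ge 0}\quad
    & \tfrac{1}{2}\,\lVert w \rVert^{2} \;+\; C \sum_{i} \xi_i \\
  \text{s.t.}\quad
    & w^{\top}\Phi(x_i, y_i) \;\ge\; w^{\top}\Phi(x_i, y) + L(x_i, y_i, y) - \xi_i
      \qquad \forall i,\;\; \forall y \in \mathcal{G}(x_i).
\end{align*}
```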
W04-3201
Max-margin parsing. We present a novel discriminative approach to parsing inspired by the large-margin criterion underlying support vector machines. Our formulation uses a factorization analogous to the standard dynamic programs for parsing. In particular, it allows one to efficiently learn a model which discriminates among the entire space of parse trees, as opposed to reranking the top few candidates. Our models can condition on arbitrary features of input sentences, thus incorporating an important kind of lexical information without the added algorithmic complexity of modeling headedness. We provide an efficient algorithm for learning such models and show experimental evidence of the model's improved performance over a natural baseline model and a lexicalized probabilistic context-free grammar. We suggested a method for maximal-margin parsing which employs the dynamic programming approach to decoding and parameter estimation problems.
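As an illustration of the non-head lexical span features described in the paper above (base word features conjoined with span length for spans of length 3 and below, as in the "today" example), here is a hedged sketch. The paper's exact base-feature list did not survive extraction, so first/last/preceding/following words are used here as plausible base features.

```python
# Hedged sketch of non-head lexical span features for a candidate constituent.
def span_features(words, s, e):
    """Features for the span words[s:e] (0-indexed, e exclusive)."""
    feats = []
    first, last = words[s], words[e - 1]
    prev_w = words[s - 1] if s > 0 else "<S>"
    next_w = words[e] if e < len(words) else "</S>"
    feats += [f"first={first}", f"last={last}", f"prev={prev_w}", f"next={next_w}"]
    length = e - s
    if length <= 3:  # conjoin base features with span length for short spans
        feats += [f"first={first}&len={length}", f"last={last}&len={length}"]
    return feats

sent = "he would visit libya today to resume the talks".split()
print(span_features(sent, 4, 5))  # features for the one-word span "today"
```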
verbocean mining the web for finegrained semantic verb relations broadcoverage repositories of semantic relations between verbs could benefit many nlp tasks we present a semiautomatic method for extracting finegrained semantic relations between verbs we detect similarity strength antonymy enablement and temporal happensbefore relations between pairs of strongly associated verbs using lexicosyntactic patterns over the web on a set of 29165 strongly associated verb pairs our extraction algorithm yielded 655 accuracy analysis of types shows that on the relation achieved 75 accuracy we provide the called for download at many nlp tasks such as question answering summarization and machine translation could benefit from broadcoverage semantic resources such as wordnet and evca these extremely useful resources have very high precision entries but have important limitations when used in realworld nlp tasks due to their limited coverage and prescriptive nature for example it may be valuable to know that if someone has bought an item they may sell it at a later timewordnet does not include the relation quotx buys yquot happensbefore quotx sells yquot since it is possible to sell something without having bought it verbs are the primary vehicle for describing events and expressing relations between entitieshence verb semantics could help in many natural language processing tasks that deal with events or relations between entitiesfor tasks which require canonicalization of natural language statements or derivation of plausible inferences from such statements a particularly valuable resource is one which relates verbs to one another and provides broad coverage of the verbs in the target languagein this paper we present an algorithm that semiautomatically discovers finegrained verb semantics by querying the web using simple lexicosyntactic patternsthe verb relations we discover are similarity strength antonymy enablement and temporal relationsidentifying these relations over 29165 verb pairs results in a broadcoverage resource we call verboceanour approach extends previously formulated ones that use surface patterns as indicators of semantic relations between nouns we extend these approaches in two ways our patterns indicate verb conjugation to increase their expressiveness and specificity and we use a measure similar to mutual information to account for both the frequency of the verbs whose semantic relations are being discovered as well as for the frequency of the patternin this section we describe application domains that can benefit from a resource of verb semanticswe then introduce some existing resources and describe previous attempts at mining semantics from textquestion answering is often approached by canonicalizing the question text and the answer text into logical formsthis approach is taken inter alia by a topperforming system in discussing future work on the system logical form matching component rus points to incorporating entailment and causation verb relations to improve the matcher performancein other work webber et al have argued that successful question answering depends on lexical reasoning and that lexical reasoning in turn requires finegrained verb semantics in addition to troponymy and antonymyin multidocument summarization knowing verb similarities is useful for sentence compression and for determining sentences that have the same meaning knowing that a particular action happens before another or is enabled by another is also useful to determine the order of the events for example 
to order summary sentences properly it may be useful to know that selling something can be preceded by either buying manufacturing or stealing itfurthermore knowing that a particular verb has a meaning stronger than another can help a system pick the most general sentencein lexical selection of verbs in machine translation and in work on document classification practitioners have argued for approaches that depend on widecoverage resources indicating verb similarity and membership of a verb in a certain classin work on translating verbs with many counterparts in the target language palmer and wu discuss inherent limitations of approaches which do not examine a verb class membership and put forth an approach based on verb similarityin document classification klavans and kan demonstrate that document type is correlated with the presence of many verbs of a certain evca class in discussing future work klavans and kan point to extending coverage of the manually constructed evca resource as a way of improving the performance of the systema widecoverage repository of verb relations including verbs linked by the similarity relation will provide a way to automatically extend the existing verb classes to cover more of the english lexiconsome existing broadcoverage resources on verbs have focused on organizing verbs into classes or annotating their frames or thematic rolesevca organizes verbs by similarity and participation nonparticipation in alternation patternsit contains 3200 verbs classified into 191 classesadditional manually constructed resources include propbank framenet verbnet and the resource on verb selectional restrictions developed by gomez our approach differs from the above in its focuswe relate verbs to each other rather than organize them into classes or identify their frames or thematic roleswordnet does provide relations between verbs but at a coarser levelwe provide finergrained relations such as strength enablement and temporal informationalso in contrast with wordnet we cover more than the prescriptive casesprevious web mining work has rarely addressed extracting many different semantic relations from websized corpusmost work on extracting semantic information from large corpora has largely focused on the extraction of isa relations between nounshearst was the first followed by recent largerscale and more fully automated efforts recently moldovan et al present a learning algorithm to detect 35 finegrained noun phrase relationsturney studied word relatedness and synonym extraction while lin et al present an algorithm that queries the web using lexical patterns for distinguishing noun synonymy and antonymyour approach addresses verbs and provides for a richer and finergrained set of semanticsreliability of estimating bigram counts on the web via search engines has been investigated by keller and lapata semantic networks have also been extracted from dictionaries and other machinereadable resourcesmindnet extracts a collection of triples of the type quotducks have wingsquot and quotduck capableof flyingquotthis resource however does not relate verbs to each other or provide verb semanticsin this section we introduce and motivate the specific relations that we extractwhilst the natural language literature is rich in theories of semantics largecoverage manually created semantic resources typically only organize verbs into a flat or shallow hierarchy of classes wordnet identifies synonymy antonymy troponymy and becauseas summarized in figure 1 fellbaum discusses a finergrained 
analysis of entailment while the wordnet database does not distinguish between eg backward presupposition from proper temporal inclusion in formulating our set of relations we have relied on the finergrained analysis explicitly breaking out the temporal precedence between entitiesin selecting the relations to identify we aimed at both covering the relations described in wordnet and covering the relations present in our collection of strongly associated verb pairswe relied on the strongly associated verb pairs described in section 44 for computational efficiencythe relations we identify were experimentally found to cover 99 out of 100 randomly selected verb pairsour algorithm identifies six semantic relations between verbsthese are summarized in table 1 along with their closest corresponding wordnet category and the symmetry of the relation similarityas fellbaum and the tradition of organizing verbs into similarity classes indicate verbs do not neatly fit into a unified isa hierarchyrather verbs are often similar or relatedsimilarity between action verbs for example can arise when they differ in connotations about manner or degree of actionexamples extracted by our system include maximize enhance produce create reduce restrictstrengthwhen two verbs are similar one may denote a more intense thorough comprehensive or absolute actionin the case of changeofstate verbs one may denote a more complete changewe identify this as the strength relationsample verb pairs extracted by our system in the order weak to strong are taint poison permit authorize surprise startle startle shocksome instances of strength sometimes map to wordnet troponymy relationstrength a subclass of similarity has not been identified in broadcoverage networks of verbs but may be of particular use in natural language generation and summarization applicationsantonymyalso known as semantic opposition antonymy between verbs has several distinct subtypesas discussed by fellbaum it can arise from switching thematic roles associated with the verb there is also antonymy between stative verbs and antonymy between sibling verbs which share a parent or an entailed verb antonymy also systematically interacts with the happensbefore relation in the case of restitutive opposition this subtype is exemplified by damage repair wrap unwrapin terms of the relations we recognize it can be stated that restitutiveopposition happensbefore and antonymexamples of antonymy extracted by our system include assemble dismantle ban allow regard condemn roast fryenablementthis relation holds between two verbs v1 and v2 when the pair can be glossed as v1 is accomplished by v2enablement is classified as a type of causal relation by barker and szpakowicz examples of enablement extracted by our system include assess review and accomplish completehappensbeforethis relation indicates that the two verbs refer to two temporally disjoint intervals or instanceswordnet because relation between a causative and a resultative verb would be tagged as instances of happensbefore by our systemexamples of the happensbefore relation identified by our system include marry divorce detain prosecute enroll graduate schedule reschedule tie untiewe discover the semantic relations described above by querying the web with google for lexicosyntactic patterns indicative of each relationour approach has two stagesfirst we identify pairs of highly associated verbs cooccurring on the web with sufficient frequency using previous work by lin and pantel as described in section 44next for each 
verb pair we tested lexicosyntactic patterns calculating a score for each possible semantic relation as described in section 42finally as described in section 43 we compare the strengths of the individual semantic relations and preferring the most specific and then strongest relations output a consistent set as the final outputas a guide to consistency we use a simple theory of semantics indicating which semantic relations are subtypes of other ones and which are compatible and which are mutually exclusivethe lexicosyntactic patterns were manually selected by examining pairs of verbs in known semantic relationsthey were refined to decrease capturing wrong parts of speech or incorrect semantic relationswe used 50 verb pairs and the overall process took about 25 hourswe use a total of 35 patterns which are listed in table 2 along with the estimated frequency of hitsy or at least x 1016905 yed or at least xed not only xed but yed not just xed but yed the probabilities in the denominator are difficult to calculate directly from search engine resultsfor a given lexicosyntactic pattern we need to estimate the frequency of the pattern instantiated with appropriately conjugated verbsfor verbs we need to estimate the frequency of the verbs but avoid counting other partsofspeech another issue is that some relations are symmetric while others are not for symmetric relations only the verbs can fill the lexicosyntactic pattern in either orderto address these issues we estimate sp using narrow and broad similarity overlap in their coverage and are treated as a single category similarity when postprocessednarrow similarity tests for rare patterns and hitsest for it had to be approximated rather than estimated from the smaller corpusnote that our patterns specify the tense of the verbs they acceptwhen instantiating these patterns we conjugate as neededfor example quotboth xed and yedquot instantiates on sing and dance as quotboth sung and dancedquotin this section we describe how the presence of a semantic relation is detectedwe test the relations with patterns exemplified in table 2we adopt an approach inspired by mutual information to measure the strength of association denoted sp between three entities a verb pair v1 and v2 and a lexicosyntactic pattern p as a result of tuning the system on a tuning set of 50 verb pairs 85additional test for asymmetric relationsfor the asymmetric relations we require not only that exceed a certain threshold but that there be strong asymmetry of the relation for symmetric relationshere hits denotes the number of documents containing the string s as returned by googlen is the number of words indexed by the search engine is a correction factor to obtain the frequency of the verb v in all tenses from the frequency of the pattern quotto vquotbased on several verbs we have estimated 85because pattern counts when instantiated with verbs could not be estimated directly we have computed the frequencies of the patterns in tagged word corpus and used it to estimate the expected number of hits for each patternwe estimated the a similar methodwe say that the semantic relation given a pair of semantic relations from the set we identify one of three cases can arise one relation is more specific the relations are compatible where presence of one does not imply or rule out presence of the other and the relations are incompatible it is not uncommon for our algorithm to identify presence of several relations with different strengthsto produce the most likely output we use semantics of 
compatibility of the relations to output the most likely onethe rules are as follows if the frequency was too low output that the statements are unrelated and stopif happensbefore is detected output presence of happensbefore if happensbefore is not detected ignore detection of enablement if strength is detected score of similarity is ignored of the relations strength similarity opposition and enablement which were detected output the one with highest spif nothing has been output to this point output unrelatedto exhaustively test the more than 64 million unordered verb pairs for wordnet more than 11000 verbs would be computationally intractableinstead we use a set of highly associated verb pairs output by a paraphrasing algorithm called dirt since we are able to test up to 4000 verb pairs per day on a single machine we are able to test several dozen associated verbs for each verb in wordnet in a matter of weekslin and pantel describe an algorithm called dirt that automatically learns paraphrase expressions from textit is a generalization of previous algorithms that use the distributional hypothesis for finding similar wordsinstead of applying the hypothesis to words lin and pantel applied it to paths in dependency treesessentially if two paths tend to link the same sets of words they hypothesized that the meanings of the corresponding paths are similarit is from paths of the form subjectverbobject that we extract our set of associated verb pairshence this paper is concerned only with relations between transitive verbsa path extracted from a parse tree is an expression that represents a binary relation between two nounsa set of paraphrases was generated for each pair of associated pathsfor example using a 15gb newspaper corpus here are the 20 most associated paths to quotx solves yquot generated by dirt this list of associated paths looks tantalizingly close to the kind of axioms that would prove useful in an inference systemhowever dirt only outputs pairs of paths that have some semantic relationwe used these as our set to extract finergrained relationsin this section we empirically evaluate the accuracy of verbocean1we studied 29165 pairs of verbsapplying dirt to a 15gb newspaper corpus2 we extracted 4000 paths that consisted of single verbs in the relation subjectverbobject whose verbs occurred in at least 150 documents on the webfor example from the 20 most associated paths to quotx solves yquot shown in section 44 the following verb pairs were extracted we classified each verb pair according to the semantic relations described in section 2if the system does not identify any semantic relation for a verb pair then the system tags the pair as having no relationto evaluate the accuracy of the system we randomly sampled 100 of these verb pairs and presented the classifications to two human judgesthe adjudicators were asked to judge whether or not the system classification was acceptable since the semantic relations are not disjoint multiple relations may be appropriately acceptable for a given verb pairthe judges were also asked to identify their preferred semantic relations table 3 shows five randomly selected pairs along with the judges responsesthe appendix shows sample relationships discovered by the systemtable 4 shows the accuracy of the systemthe baseline system consists of labeling each pair with the most common semantic relation similarity which occurs 33 timesthe tags correct column represents the percentage of verb pairs whose system output relations were deemed correctthe 
preferred tags correct column gives the percentage of verb pairs whose system output relations matched exactly the human preferred relationsthe kappa statistic for the task of judging system tags as correct and incorrect is κ 078 whereas the task of identifying the preferred semantic relation has κ 072for the latter task the two judges agreed on 73 of the 100 semantic relations73 gives an idea of an upper bound for humans on this taskon these 73 relations the system achieved a higher accuracy of 700the system is allowed to output the happensbefore relation in combination with other relationson the 17 happensbefore relations output by the system 676 were judged correctignoring the happensbefore relations we achieved a tags correct precision of 68table 5 shows the accuracy of the system on each of the relationsthe strongerthan relation is a subset of the similarity relationconsidering a coarser extraction where strongerthan relations are merged with similarity the task of judging system tags and the task of identifying the preferred semantic relation both jump to 682 accuracyalso the overall accuracy of the system climbs to 685as described in section 2 wordnet contains verb semantic relationsa significant percentage of our discovered relations are not covered by wordnet coarser classificationsof the 40 verb pairs whose system relation was tagged as correct by both judges in our accuracy experiments and whose tag was not no relation only 225 of them existed in a wordnet relationthe experience of extracting these semantic relations has clarified certain important challengeswhile relying on a search engine allows us to query a corpus of nearly a trillion words some issues arise the number of instances has to be approximated by the number of hits the number of hits for the same query may fluctuate over time and some needed counts are not directly availablewe addressed the latter issue by approximating these counts using a smaller corpuswe do not detect entailment with lexicosyntactic patternsin fact we propose that whether the entailment relation holds between v1 and v2 depends on the absence of another verb v1 in the same relationship with v2for example given the relation marry happensbefore divorce we can conclude that divorce entails marrybut given the relation buy happensbefore sell we cannot conclude entailment since manufacture can also happen before sellthis also applies to the enablement and strength relationscorpusbased methods including ours hold the promise of wide coverage but are weak on discriminating senseswhile we hope that applications will benefit from this resource as is an interesting next step would be to augment it with sense informationthere are several ways to improve the accuracy of the current algorithm and to detect relations between low frequency verb pairsone avenue would be to automatically learn or manually craft more patterns and to extend the pattern vocabulary another possibility would be to use more relaxed patterns when the part of speech confusion is not likely our approach can potentially be extended to multiword pathsdirt actually provides two orders of magnitude more relations than the 29165 single verb relations we extractedon the same 1gb corpus described in section 51 dirt extracted over 200k paths and 6m unique paraphrasesthese provide an opportunity to create a much larger corpus of semantic relations or to construct smaller indepth resources for selected subdomainsfor example we could extract that take a trip to is similar to travel to and that board 
a plane happens before deplaneif the entire database is viewed as a graph we currently leverage and enforce only local consistencyit would be useful to enforce global consistency egv1 strongerthan v2 and v2 strongerthan v3 indicates that v1 strongerthan v3 which may be leveraged to identify additional relations or inconsistent relations finally as discussed in section 53 entailment relations may be derivable by processing the complete graph of the identified semantic relationwe have demonstrated that certain finegrained semantic relations between verbs are present on the web and are extractable with a simple patternbased approachin addition to discovering relations identified in wordnet such as opposition and enablement we obtain strong results on strength relations on a set of 29165 associated verb pairs experimental results show an accuracy of 655 in assigning similarity strength antonymy enablement and happensbeforefurther work may refine extraction methods and further process the mined semantics to derive other relations such as entailmentwe hope to open the way to inferring implied but not stated assertions and to benefit applications such as question answering information retrieval and summarizationthe authors wish to thank the reviewers for their helpful comments and google inc for supporting high volume querying of their indexthis research was partly supported by nsf grant eia0205111
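The pattern-based scoring and the relation-selection rules described above can be roughed out in code. This is a minimal sketch, not the authors' implementation: web_hits() is a hypothetical stand-in for a search-engine hit-count lookup, only the two strength patterns appear verbatim in the text (the other pattern strings are placeholders), conjugation is crudely simulated with an -ed suffix, the 10-hit cutoff is invented, and the asymmetry test for directed relations is omitted; the 8.5 acceptance threshold is the tuned value mentioned above.

```python
# Minimal sketch of VerbOcean-style pattern scoring and relation selection.
# Assumptions: web_hits() stands in for a search-engine hit-count API; the
# pattern sets are a tiny illustrative subset of the 35 patterns; conjugation
# is simplified to "-ed"; the asymmetry check for directed relations is omitted.

THRESHOLD = 8.5        # acceptance threshold tuned on 50 verb pairs (from the text)
MIN_PAIR_HITS = 10     # hypothetical "frequency too low" cutoff

PATTERNS = {
    "strength":       ["not only {x}ed but {y}ed", "{y}ed or at least {x}ed"],
    "similarity":     ["{x}ed and {y}ed"],        # placeholder pattern
    "opposition":     ["either {x} or {y}"],      # placeholder pattern
    "enablement":     ["{x}ed by {y}ing"],        # placeholder pattern
    "happens-before": ["to {x} and then {y}"],    # placeholder pattern
}

def web_hits(query):
    """Stand-in for the number of documents a search engine returns for `query`."""
    raise NotImplementedError

def relation_score(v1, v2, patterns):
    """Hit-count association of a verb pair with one relation's patterns,
    loosely following the mutual-information-style score described above."""
    joint = sum(web_hits(p.format(x=v1, y=v2)) for p in patterns)
    marginal = max(web_hits("to " + v1) * web_hits("to " + v2), 1)
    return joint, joint / marginal * 1e10      # rescaling constant is arbitrary here

def classify(v1, v2):
    joint_hits, scores = {}, {}
    for rel, pats in PATTERNS.items():
        joint_hits[rel], scores[rel] = relation_score(v1, v2, pats)
    if sum(joint_hits.values()) < MIN_PAIR_HITS:
        return ["unrelated"]                       # rule 1: frequency too low
    detected = {r for r, s in scores.items() if s > THRESHOLD}
    out = []
    if "happens-before" in detected:               # rule 2: report happens-before
        out.append("happens-before")
    else:
        detected.discard("enablement")             # rule 3: enablement needs happens-before
    if "strength" in detected:
        detected.discard("similarity")             # rule 4: strength masks similarity
    rest = [r for r in detected if r != "happens-before"]
    if rest:
        out.append(max(rest, key=scores.get))      # rule 5: strongest remaining relation
    return out or ["unrelated"]                    # rule 6: nothing detected
```

The selection logic mirrors the ordering of the rules quoted above: happens-before is reported alongside whichever other relation scores highest, enablement is dropped when happens-before is absent, and strength pre-empts similarity.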
W04-3205
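The association measure Sp referred to in the text above is garbled in this extract. A hedged reconstruction consistent with the surrounding description (a mutual-information-style ratio over the verb pair and pattern, estimated from search-engine hit counts, with N the number of indexed words and a tense-correction factor of roughly 8.5) is given below; the authors' exact normalisation and estimation details may differ from this form.

```latex
% Hedged reconstruction of the S_p association score; not guaranteed to match
% the authors' exact estimator.
S_p(V_1, V_2) \;\approx\; \frac{P(V_1, p, V_2)}{P(V_1)\, P(p)\, P(V_2)}
\;\approx\; \frac{\mathrm{hits}(V_1\; p\; V_2)/N}
                 {\frac{C_v\,\mathrm{hits}(\text{to } V_1)}{N}\cdot
                  \frac{\mathrm{hits}_{\mathrm{est}}(p)}{N}\cdot
                  \frac{C_v\,\mathrm{hits}(\text{to } V_2)}{N}},
\qquad C_v \approx 8.5 .
```

A relation is then accepted when Sp exceeds the tuned threshold (8.5 in the text), with an additional directionality check for the asymmetric relations: the hits for the x-before-y instantiation must sufficiently exceed those for the reversed order.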
verbocean mining the web for finegrained semantic verb relationsbroadcoverage repositories of semantic relations between verbs could benefit many nlp taskswe present a semiautomatic method for extracting finegrained semantic relations between verbswe detect similarity strength antonymy enablement and temporal happensbefore relations between pairs of strongly associated verbs using lexicosyntactic patterns over the webon a set of 29165 strongly associated verb pairs our extraction algorithm yielded 655 accuracyanalysis of error types shows that on the relation strength we achieved 75 accuracywe provide the resource called verbocean for download at httpsemanticsisieduoceanwe introduce a 5class set designed specifically for characterizing verbverb semantic relationswe use patterns to extract a set of relations between verbs such as similarity strength and antonymy
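The graph-level post-processing that the VerbOcean text above sketches as future work — enforcing transitivity of stronger-than and reading an entailment off a happens-before edge when the predecessor verb is unique — might look like the sketch below. The uniqueness test is only a rough proxy for the "absence of another verb in the same relationship" condition described above, and the (poison, exterminate) pair in the example is hypothetical.

```python
# Sketch of the graph-level post-processing suggested above: transitive closure
# of stronger-than, and a heuristic entailment derivation from happens-before.
# This illustrates ideas the paper describes as future work; it is not part of
# the released VerbOcean resource.
from collections import defaultdict
from itertools import product

def transitive_closure(edges):
    """stronger-than(v1,v2) and stronger-than(v2,v3) => stronger-than(v1,v3)."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and a != d and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

def derive_entailments(happens_before):
    """If v1 is the *only* verb observed to happen before v2, guess that v2
    entails v1 (e.g. marry happens-before divorce => divorce entails marry)."""
    predecessors = defaultdict(set)
    for v1, v2 in happens_before:
        predecessors[v2].add(v1)
    return {(v2, v1) for v2, preds in predecessors.items()
            for v1 in preds if len(preds) == 1}

# Example with pairs mentioned in the text:
hb = {("marry", "divorce"), ("buy", "sell"), ("manufacture", "sell")}
print(derive_entailments(hb))     # {('divorce', 'marry')} -- nothing for sell
st = {("taint", "poison"), ("poison", "exterminate")}   # second pair hypothetical
print(transitive_closure(st))     # adds ('taint', 'exterminate')
```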
scaling webbased acquisition of entailment relations paraphrase recognition is a critical step for natural language interpretation accordingly many nlp applications would benefit from high coverage knowledge bases of paraphrases however the scalability of stateoftheart paraphrase acquisition approaches is still limited we present a fully unsupervised learning algorithm for webbased extraction an extended model of paraphrases we focus on increased scalability and generality with respect to prior work eventually aiming at a full scale knowledge base our current implementation of the algorithm takes as its input a verb lexicon and for each verb searches the web for related syntactic entailment templates experiments show promising results with respect to the ultimate goal achieving much better scalability than prior webbased methods modeling semantic variability in language has drawn a lot of attention in recent yearsmany applications like qa ir ie and machine translation have to recognize that the same meaning can be expressed in the text in a huge variety of surface formssubstantial research has been dedicated to acquiring paraphrase patterns which represent various forms in which a certain meaning can be expressedfollowing we observe that a somewhat more general notion needed for applications is that of entailment relations these are directional relations between two expressions where the meaning of one can be entailed from the meaning of the otherfor example x acquired y entails x owns ythese relations provide a broad framework for representing and recognizing semantic variability as proposed in for example if a qa system has to answer the question who owns overture and the corpus includes the phrase yahoo acquired overture the system can use the known entailment relation to conclude that this phrase really indicates the desired answermore examples of entailment relations acquired by our method can be found in table 1 to perform such inferences at a broad scale applications need to possess a large knowledge base of entailment patternswe estimate such a kb should contain from between a handful to a few dozens of relations per meaning which may sum to a few hundred thousands of relations for a broad domain given that a typical lexicon includes tens of thousands of wordsour research goal is to approach unsupervised acquisition of such a full scale kbwe focus on developing methods that acquire entailment relations from the web the largest available resourceto this end substantial improvements are needed in order to promote scalability relative to current webbased approachesin particular we address two major goals reducing dramatically the complexity of required auxiliary inputs thus enabling to apply the methods at larger scales and generalizing the types of structures that can be acquiredthe algorithms described in this paper were applied for acquiring entailment relations for verbbased expressionsthey successfully discovered several relations on average per each randomly selected expressionthis section provides a qualitative view of prior work emphasizing the perspective of aiming at a fullscale paraphrase resourceas there are still no standard benchmarks current quantitative results are not comparable in a consistent waythe major idea in paraphrase acquisition is often to find linguistic structures here termed templates that share the same anchorsanchors are lexical elements describing the context of a sentencetemplates that are extracted from different sentences and connect the same 
anchors in these sentences are assumed to paraphrase each otherfor example the sentences yahoo bought overture and yahoo acquired overture share the anchors xyahoo yoverture suggesting that the templates x buy y and x acquire y paraphrase each otheralgorithms for paraphrase acquisition address two problems finding matching anchors and identifying template structure as reviewed in the next two subsectionsthe prominent approach for paraphrase learning searches sentences that share common sets of multiple anchors assuming they describe roughly the same fact or eventto facilitate finding many matching sentences highly redundant comparable corpora have been usedthese include multiple translations of the same text and corresponding articles from multiple news sources while facilitating accuracy we assume that comparable corpora cannot be a sole resource due to their limited availabilityavoiding a comparable corpus developed statistical methods that match verb paraphrases within a regular corpustheir limited scale results obtaining several hundred verb paraphrases from a 15 million word corpus suggest that much larger corpora are requirednaturally the largest available corpus is the websince exhaustive processing of the web is not feasible and attempted bootstrapping approaches which resemble the mutual bootstrapping method for information extraction of these methods start with a provided known set of anchors for a target meaningfor example the known anchor set mozart 1756 is given as input in order to find paraphrases for the template x born in yweb searching is then used to find occurrences of the input anchor set resulting in new templates that are supposed to specify the same relation as the original one these new templates are then exploited to get new anchor sets which are subsequently processed as the initial mozart 1756eventually the overall procedure results in an iterative process able to induce templates from anchor sets and vice versathe limitation of this approach is the requirement for one input anchor set per target meaningpreparing such input for all possible meanings in broad domains would be a huge taskas will be explained below our method avoids this limitation by finding all anchor sets automatically in an unsupervised mannerfinally present a notably different approach that relies on matching separately single anchorsthey limit the allowed structure of templates only to paths in dependency parses connecting two anchorsthe algorithm constructs for each possible template two feature vectors representing its cooccurrence statistics with the two anchorstwo templates with similar vectors are suggested as paraphrases matching of single anchors relies on the general distributional similarity principle and unlike the other methods does not require redundancy of sets of multiple anchorsconsequently a much larger number of paraphrases can be found in a regular corpuslin and pantel report experiments for 9 templates in which their system extracted 10 correct inference rules on average per input template from 1gb of news datayet this method also suffers from certain limitations it identifies only templates with prespecified structures accuracy seems more limited due to the weaker notion of similarity and coverage is limited to the scope of an available corpusto conclude several approaches exhaustively process different types of corpora obtaining varying scales of outputon the other hand the web is a huge promising resource but current webbased methods suffer serious scalability 
constraintsparaphrasing approaches learn different kinds of template structuresinteresting algorithms are presented in they learn linear patterns within similar contexts represented as finite state automatathree classes of syntactic template learning approaches are presented in the literature learning ofpredicate argument templates learning of syntactic chains and learning of subtrees the last approach is the most general with respect to the template formhowever its processing time increases exponentially with the size of the templatesas a conclusion state of the art approaches still learn templates of limited form and size thus restricting generality of the learning processmotivated by prior experience we identify two major goals for scaling webbased acquisition of entailment relations covering the broadest possible range of meanings while requiring minimal input and keeping template structures as general as possibleto address the first goal we require as input only a phrasal lexicon of the relevant domain broad coverage lexicons are widely available or may be constructed using known term acquisition techniques making it a feasible and scalable input requirementwe then aim to acquire entailment relations that include any of the lexicons entriesthe second goal is addressed by a novel algorithm for extracting the most general templates being justified by the datafor each lexicon entry denoted a pivot our extraction method performs two phases extract promising anchor sets for that pivot and from sentences containing the anchor sets extract templates for which an entailment relation holds with the pivot examples for verb pivots are acquire fall to preventwe will use the pivot prevent for examples through this sectionbefore presenting the acquisition method we first define its outputa template is a dependency parsetree fragment with variable slots at some tree nodes an entailment relation between two templates t1 and t2 holds if the meaning of t2 can be inferred from the meaning of t1 in some contexts but not necessarily all under the same variable instantiationfor example x s prevent 0 y entails x s_ reduce y risk because the sentence aspirin reduces heart attack risk can be inferred from aspirin prevents a first heart attackour output consists of pairs of templates for which an entailment relation holdsthe goal of this phase is to find a substantial number of promising anchor sets for each pivota good anchorset should satisfy a proper balance between specificity and generalityon one hand an anchor set should correspond to a sufficiently specific setting so that entailment would hold between its different occurrenceson the other hand it should be sufficiently frequent to appear with different entailing templatesfinding good anchor sets based on just the input pivot is a hard taskmost methods identify good repeated anchors in retrospect that is after processing a full corpus while previous webbased methods require at least one good anchor set as inputgiven our minimal input we needed refined criteria that identify a priori the relatively few promising anchor sets within a sample of pivot occurrencesthe ase algorithm performs 4 main stepsstep creates a complete template called the pivot template and denoted tp for the input pivot denoted p variable slots are added for the major types of syntactic relations that interact with p based on its syntactic typethese slots enable us to later match tp with other templatesfor verbs we add slots for a subject and for an object or a modifier step 
constructs asample corpus denoted s for the pivot templatestep utilizes a web search engine to initialize s by retrieving sentences containing p the sentences are parsed by the minipar dependency parser keeping only sentences that contain the complete syntactic template tp step identifies phrases that are statistically associated with tp in s we test all nounphrases in s discarding phrases that are too common on the web such as desirethen we select the n phrases with highest tf idf score1these phrases have a strong collocation relationship with the pivot p and are likely to indicate topical occurrences of p for example the phrases patient and american dental association which indicate contexts of preventing health problems were selected for the pivot preventfi1here tf idf freqs log where freqs is the number of occurrences in s containing x n is the total number of web documents and freqw is the number of web documents containing x nally step expands s by querying the web with the both p and each of the associated phrases adding the retrieved sentences to s as in step step extracts candidate anchor sets for tpfrom each sentence in s we try to generate one candidate set containing noun phrases whose web frequency is lower than maxphrasefstep extracts slot anchors phrases that instantiate the slot variables of tpeach anchor is marked with the corresponding slotfor example the anchors antibioticssubj miscarriage obj were extracted from the sentence antibiotics in pregnancy prevent miscarriagestep tries to extend each candidate set with one additional context anchor in order to improve its specificitythis anchor is chosen as the highest tf idf scoring phrase in the sentence if it existsin the previous example pregnancy is selectedstep filters out bad candidate anchor sets by two different criteriastep maintains only candidates with absolute web frequency within a threshold range minsetf maxsetf to guarantee an appropriate specificitygenerality levelstep guarantees sufficient association between the candidate anchor set c and tp by estimating where freqw is web frequency and p is the pivotwe maintain only candidates for which this probability falls within a threshold range setminp setmaxphigher probability often corresponds to a strong linguistic collocation between the candidate and tp without any semantic entailmentlower probability indicates coincidental cooccurrence without a consistent semantic relationthe remaining candidates in s become the input anchorsets for the template extraction phase for example aspirinsubj heart attackobj for preventthe template extraction algorithm accepts as its input a list of anchor sets extracted from ase for each pivot templatethen te generates a set of syntactic templates which are supposed to maintain an entailment relationship with the initial pivot templatete performs three main steps described in the following subsections for each input anchor set te acquires from the web a sample corpus of sentences containing itfor example a sentence from the sample corpus for aspirin heart attack is aspirin stops heart attackall of the sample sentences are then parsed with minipar which generates from each sentence a syntactic directed acyclic graph representing the dependency structure of the sentenceeach vertex in this graph is labeled with a word and some morphological information each graph edge is labeled with the syntactic relation between the words it connectste then substitutes each slot anchor in the parse graphs with its corresponding slot variabletherefore 
aspirin stops heart attack will be transformed into x stop ythis way all the anchors for a certain slot are unified under the same variable name in all sentencesthe parsed sentences related to all of the anchor sets are subsequently merged into a single set of parse graphs s p1 p2 pn the core of te is a general structure learning algorithm that is applied to the set of parse graphs s resulting from the previous stepgsl extracts singlerooted syntactic dags which are named spanning templates since they must span at least over na slot variables and should also appear in at least nr sentences from s gsl learns maximal most general templates they are spanning templates which at the same time cannot be generalized by further reduction and cannot be further extended keeping the same generality levelin order to properly define the notion of maximal most general templates we introduce some formal definitions and notationsdefinition for a spanning template t we define a sentence set denoted with σ as the set of all parsed sentences in s containing t for each pair of templates t1 and t2 we use the notation t1 t2 to denote that t1 is included as a subgraph or is equal to t2we use the notation t1 t2 when such inclusion holds strictlywe define t as the set of all spanning templates in the sample s definition a spanning template t e t is maximal most general if and only if both of the following conditions hold condition a ensures that the extracted templates do not contain spanning substructures that are more general condition b ensures that the template cannot be further enlarged without reducing its sentence setgsl performs template extraction in two main steps build a compact graph representation of all the parse graphs from s extract templates from the compact representationa compact graph representation is an aggregate graph which joins all the sentence graphs from s ensuring that all identical spanning substructures from different sentences are merged into a single onetherefore each vertex v in the aggregate graph is either a copy of a corresponding vertex from a sentence graph pi or it represents the merging of several identically labeled vertices from different sentences in s the set of such sentences is defined as the sentence set of v and is represented through the set of index numbers of related sentences in the third tree of figure 2we will denote with gi the compact graph representation of the first i sentences in s the parse trees p1 and p2 of two sentences and their related compact representation g2 are shown in figure 2building the compact graph representation the compact graph representation is built incrementallythe algorithm starts with an empty aggregate graph g0 and then merges the sentence graphs from s one at a time into the aggregate structurelet us denote the current aggregate graph with gi_1 and let pi be the parse graph which will be merged nextnote that the sentence set of pi is a single element set w during each iteration a new graph is created as the union of both input graphs gi gi_1 you pithen the following merging procedure is performed on the elements of gi ated and added to githe new vertex takes the same label and holds a sentence set which is formed from the sentence set of vg by adding i to itstill with reference to figure 2 the generalized vertices in g2 are x y and stopthe algorithm connects the generalized vertex vnew g with all the vertices which are connected with vg and vpas an optimization step we merge only vertices and edges that are included in equal 
spanning templatesextracting the templates gsl extracts all maximal most general templates from the final compact representation gn using the following subalgorithm in figure 2 the maximal most general template in obj as a last step names and numbers are filtered out from the templatesmoreover te removes those templates which are very long or which appear with just one anchor set and in less than four sentencesfinally the templates are sorted first by the number of anchor sets with which each template appeared and then by the number of sentences in which they appearedwe evaluated the results of the tease algorithm on a random lexicon of verbal forms and then assessed its performance on the extracted data through humanbased judgmentsthe test set for human evaluation was generated by picking out 53 random verbs from the 1000 most frequent ones found in a subset of the reuters corpus2for each verb entry in the lexicon we provided the judges with the corresponding pivot template and the list of related candidate entailment templates found by the systemthe judges were asked to evaluate entailment for a total of 752 templates extracted for 53 pivot lexicon entries table 1 shows a sample of the evaluated templates all of them are clearly good and were judged as correct ones included in the evaluation test setconcerning the ase algorithm threshold parameters3 were set as phrasemaxf107 setminf102 setmaxf105 setminp0066 and setmaxp0666an upper limit of 30 was imposed on the number of possible anchor sets used for each pivotsince this last value turned out to be very conservative with respect to system coverage we subsequently attempted to relax it to 50 further postprocessing was necessary over extracted data in order to remove syntactic variations referring to the same candidate template three possible judgment categories have been considered correct if an entailment relationship in at least one direction holds between the judged template and the pivot template in some nonbizarre context incorrect if there is no reasonable context and variable instantiation in which entailment holds no evaluation if the judge cannot come to a definite conclusioneach of the three assessors issued judgments for the 752 different templatescorrect templates resulted to be 283 313 and 295 with respect to the three judgesno evaluations were 2 0 and 16 while the remaining templates were judged incorrectfor each verb we calculate yield as the absolute number of correct templates found and precision as the percentage of good templates out of all extracted templatesobtained precision is 4415 averaged over the 53 verbs and the 3 judgesconsidering low majority on judges the precision value is 4239average yield was 55 templates per verbthese figures may be compared with average yield of 101 and average precision of 503 for the 9 pivot templates of the comparison suggests that it is possible to obtain from the web a similar range of precision as was obtained from a clean news corpusit also indicates that there is potential for acquiring additional templates per pivot which would require further research on broadening efficiently the search for additional web data per pivotagreement among judges is measured by the kappa value which is 055 between j1 and j2 057 between j2 and j3 and 063 between j1 and j3such kappa values correspond to moderate agreement for the first two pairs and substantial agreement for the third onein general unanimous agreement among all of the three judges has been reported on 519 out of 752 templates which 
corresponds to 69our algorithm obtained encouraging results extracting a considerable amount of interesting templates and showing inherent capability of discovering complex semantic relationsconcerning overall coverage we managed to find correct templates for 86 of the verbs nonetheless presented results show a substantial margin of possible improvementin fact yield values which are our first concern are inherently dependent on the breadth of web search performed by the ase algorithmdue to computational time the maximal number of anchor sets processed for each verb was held back to 30 significantly reducing the amount of retrieved datain order to further investigate ase potential we subsequently performed some extended experiment trials raising the number of anchor sets per pivot to 50this time we randomly chose a subset of 10 verbs out of the less frequent ones in the original main experimentresults for these verbs in the main experiment were an average yield of 3 and an average precision of 4519in contrast the extended experiments on these verbs achieved a 65 yield and 5995 precision these results are indeed promising and the substantial growth in yield clearly indicates that the tease algorithms can be further improvedwe thus suggest that the feasibility of our approach displays the inherent scalability of the tease process and its potential to acquire a large entailment relation kb using a full scale lexicona further improvement direction relates to template ranking and filteringwhile in this paper we considered anchor sets to have equal weights we are also carrying out experiments with weights based on crosscorrelation between anchor setswe have described a scalable webbased approach for entailment relation acquisition which requires only a standard phrasal lexicon as inputthis minimal level of input is much simpler than required by earlier webbased approaches while succeeding to maintain good performancethis result shows that it is possible to identify useful anchor sets in a fully unsupervised mannerthe acquired templates demonstrate a broad range of semantic relations varying from synonymy to more complicated entailmentthese templates go beyond trivial paraphrases demonstrating the generality and viability of the presented approachfrom our current experiments we can expect to learn about 5 relations per lexicon entry at least for the more frequent entriesmoreover looking at the extended test we can extrapolate a notably larger yield by broadening the search spacetogether with the fact that we expect to find entailment relations for about 85 of a lexicon it is a significant step towards scalability indicating that we will be able to extract a large scale kb for a large scale lexiconin future work we aim to improve the yield by increasing the size of the samplecorpus in a qualitative way as well as precision using statistical methods such as supervised learning for better anchor set identification and crosscorrelation between different pivotswe also plan to support noun phrases as input in addition to verb phrasesfinally we would like to extend the learning task to discover the correct entailment direction between acquired templates completing the knowledge required by practical applicationslike learning the context for which entailment relations are valid is beyond the scope of this paperas stated we learn entailment relations holding for some but not necessarily all contextsin future work we also plan to find the valid contexts for entailment relationsthe authors would like to 
thank oren glickman for helpful discussions and assistance in the evaluation bernardo magnini for his scientific supervision at itcirst alessandro vallin and danilo giampiccolo for their help in developing the human based evaluation and prof yossi matias for supervising the first authorthis work was partially supported by the moreweb project financed by provincia autonoma di trentoit was also partly carried out within the framework of the itcirst university of haifa collaboration projectfor data visualization and analysis the authors intensively used the clark system developed at the bulgarian academy of sciences
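The ASE anchor-set filters described above can be sketched as follows. This is a minimal illustration under stated assumptions: doc_freq() is a hypothetical stand-in for a conjunctive web document-frequency query, N_WEB and TOP_N are invented, the garbled tf-idf footnote is reconstructed here as freq_S(x) * log(N / freq_W(x)), the threshold constants are the ones reported in the evaluation section, and the conditional-probability estimate is only approximated since its exact form is not legible in this extract.

```python
# Minimal sketch of the ASE anchor-set filtering described above.
import math
from collections import Counter

PHRASE_MAX_F = 1e7                    # discard anchor phrases more frequent than this
SET_MIN_F, SET_MAX_F = 1e2, 1e5       # web-frequency range for a candidate anchor set
SET_MIN_P, SET_MAX_P = 0.066, 0.666   # allowed P(anchor set | pivot) range
N_WEB = 1e9                           # assumed size of the indexed web (tf-idf only)
TOP_N = 20                            # assumed number of associated phrases to keep

def doc_freq(phrases):
    """Stand-in: number of web documents containing all of `phrases`
    (conjunctive query; exact query syntax depends on the search engine)."""
    raise NotImplementedError

def associated_phrases(sample_noun_phrases):
    """Keep the TOP_N noun phrases most associated with the pivot sample S."""
    counts = Counter(sample_noun_phrases)
    scored = {}
    for x, c in counts.items():
        df = doc_freq([x])
        if df < PHRASE_MAX_F:                              # drop overly common phrases
            scored[x] = c * math.log(N_WEB / max(df, 1))   # tf-idf (reconstructed)
    return sorted(scored, key=scored.get, reverse=True)[:TOP_N]

def keep_anchor_set(anchor_phrases, pivot):
    """Frequency-range and conditional-probability filters for one candidate set."""
    set_freq = doc_freq(anchor_phrases)
    if not (SET_MIN_F <= set_freq <= SET_MAX_F):
        return False                                       # wrong specificity/generality level
    p_set_given_pivot = doc_freq(anchor_phrases + [pivot]) / max(doc_freq([pivot]), 1)
    # neither a pure collocation (too high) nor a coincidence (too low)
    return SET_MIN_P <= p_set_given_pivot <= SET_MAX_P
```

With a real frequency backend, keep_anchor_set(["aspirin", "heart attack"], "prevent") would implement the specificity/generality balance that the anchor-set extraction phase above aims for.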
W04-3206
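The first template-extraction step described in the text above — substituting slot anchors with the variables X and Y in each parsed sentence and pooling the results — is illustrated below. Parses are simplified to sets of (head, relation, dependent) triples rather than Minipar graphs, and only the counting of a template's sentence set and the spanning condition are shown, not the maximal-most-general search of GSL.

```python
# Sketch of slot substitution and spanning-template support counting.
# Assumptions: parses are simplified to (head, relation, dependent) triples,
# templates are plain edge sets, and slot variables are limited to X and Y.

SLOT_VARS = {"X", "Y"}

def substitute_anchors(edges, anchors):
    """Replace slot-anchor words (e.g. {'aspirin': 'X', 'heart attack': 'Y'})
    by their slot variables, so 'aspirin stops heart attack' -> 'X stop Y'."""
    rename = lambda w: anchors.get(w, w)
    return {(rename(h), rel, rename(d)) for h, rel, d in edges}

def sentence_set(template_edges, parsed_sentences):
    """sigma(t): indices of pooled sentences whose parse contains template t."""
    return {i for i, sent in enumerate(parsed_sentences) if template_edges <= sent}

def spanning(template_edges, n_slots=2):
    """A spanning template must cover at least n_slots slot variables."""
    words = {w for h, _, d in template_edges for w in (h, d)}
    return len(words & SLOT_VARS) >= n_slots

# Toy example with the sentences used in the text:
s1 = substitute_anchors({("stop", "subj", "aspirin"), ("stop", "obj", "heart attack")},
                        {"aspirin": "X", "heart attack": "Y"})
s2 = substitute_anchors({("prevent", "subj", "antibiotics"), ("prevent", "obj", "miscarriage")},
                        {"antibiotics": "X", "miscarriage": "Y"})
pooled = [s1, s2]
t = {("stop", "subj", "X"), ("stop", "obj", "Y")}
print(spanning(t), sentence_set(t, pooled))   # True {0}
```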
scaling webbased acquisition of entailment relationsparaphrase recognition is a critical step for natural language interpretationaccordingly many nlp applications would benefit from high coverage knowledge bases of paraphraseshowever the scalability of stateoftheart paraphrase acquisition approaches is still limitedwe present a fully unsupervised learning algorithm for webbased extraction of entailment relations an extended model of paraphraseswe focus on increased scalability and generality with respect to prior work eventually aiming at a full scale knowledge baseour current implementation of the algorithm takes as its input a verb lexicon and for each verb searches the web for related syntactic entailment templatesexperiments show promising results with respect to the ultimate goal achieving much better scalability than prior webbased methodswe describe the tease method for extracting entailing relation templates from the web
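The compact graph representation used by GSL, as described in the text above, merges identically labelled vertices from different pooled parses while recording each vertex's sentence set. A simplified sketch under the same triple representation follows; it omits the optimisation that merges only vertices and edges included in equal spanning templates, and it does not perform the maximal-most-general template extraction itself.

```python
# Simplified sketch of building a compact graph representation from pooled
# parse graphs: identically labelled vertices from different sentences are
# merged, and each merged vertex/edge keeps the set of sentence indices it
# came from. The maximal-most-general extraction is not reproduced here.
from collections import defaultdict

def build_compact_graph(parsed_sentences):
    """parsed_sentences: list of sets of (head_label, relation, dep_label) edges.
    Returns sentence-sets keyed by vertex label and by labelled edge."""
    vertex_sents = defaultdict(set)   # label           -> {sentence indices}
    edge_sents = defaultdict(set)     # (head, rel, dep) -> {sentence indices}
    for i, edges in enumerate(parsed_sentences):
        for h, rel, d in edges:
            vertex_sents[h].add(i)
            vertex_sents[d].add(i)
            edge_sents[(h, rel, d)].add(i)
    return vertex_sents, edge_sents

def frequent_edges(edge_sents, min_sentences=2):
    """Edges supported by at least `min_sentences` pooled sentences; candidate
    building blocks for spanning templates."""
    return {e: s for e, s in edge_sents.items() if len(s) >= min_sentences}

# Toy example: "X stop Y" and "X prevent Y" style parses after slot substitution.
pooled = [
    {("stop", "subj", "X"), ("stop", "obj", "Y")},
    {("stop", "subj", "X"), ("stop", "obj", "Y"), ("stop", "mod", "always")},
    {("prevent", "subj", "X"), ("prevent", "obj", "Y")},
]
vertices, edges = build_compact_graph(pooled)
print(vertices["stop"])                # {0, 1}
print(frequent_edges(edges).keys())    # the two "stop" edges shared by sentences 0 and 1
```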
bilingual parsing with factored estimation using english to parse korean we describe how simple commonly understood statistical models such as statistical dependency parsers probabilistic contextfree grammars and wordtoword translation models can be effectively combined into a unified bilingual parser that jointly searches for the best english parse korean parse and word alignment where these hidden structures all constrain each other the model used for parsing is completely factored into the two parsers and the tm allowing separate parameter estimation we evaluate our bilingual parser on the penn korean treebank and against several baseline systems and show improvements parsing korean with very limited labeled data consider the problem of parsing a language l for which annotated resources like treebanks are scarcesuppose we have a small amount of text data with syntactic annotations and a fairly large corpus of parallel text for which the other language is not resourceimpoverishedhow might we exploit english parsers to improve syntactic analysis tools for this languageone idea is to project english analysis onto l data through wordaligned parallel textto do this we might use an english parser to analyze the english side of the parallel text and a wordalignment algorithm to induce word correspondencesby positing a coupling of english syntax with l syntax we can induce structure on the l side of the parallel text that is in some sense isomorphic to the english parsewe might take the projection idea a step farthera statistical english parser can tell us much more than the hypothesized best parseit can be used to find every parse admitted by a grammar and also scores of those parsessimilarly translation models which yield word alignments can be used in principle to score competing alignments and offer alternatives to a singlebest alignmentit might also be beneficial to include the predictions of an l parser trained on any available annotated l data however fewthis paper describes how simple commonly understood statistical modelsstatistical dependency parsers probabilistic contextfree grammars and word translation models can be effectively combined into a unified framework that jointly searches for the best english parse l parse and word alignment where these hidden structures are all constrained to be consistentthis inference task is carried out by a bilingual parserat present the model used for parsing is completely factored into the two parsers and the tm allowing separate parameter estimationfirst we discuss bilingual parsing and show how it can solve the problem of joint englishparse lparse and wordalignment inferencein 3 we describe parameter estimation for each of the factored models including novel applications of loglinear models to english dependency parsing and korean morphological analysis4 presents korean parsing results with various monolingual and bilingual algorithms including our bilingual parsing algorithmwe close by reviewing prior work in areas related to this paper the joint model used by our bilingual parser is an instance of a stochastic bilingual multitext grammar formally defined by melamed the 2mtg formalism generates two strings such that each syntactic constituentincluding individual wordsin one side of the bitext corresponds either to a constituent in the other side or to melamed defines bilexicalized mtg which is a synchronous extension of bilexical grammars such as those described in eisner and satta and applies the latters algorithmic speedups to l2mtgparsingour 
formalism is not a precise fit to either unlexicalized mtg or l2mtg since we posit lexical dependency structure only in one of the languages the primary rationale for this is that we are dealing with only a small quantity of labeled data in language l and therefore do not expect to be able to accurately estimate its lexical affinitiesfurther synchronous parsing is in practice computationally expensive and eliminating lexicalization on one side reduces the runtime of the parser from o to oour parsing algorithm is a simple transformation of melameds r2d parser that eliminates head information in all korean parser itemsthe model event space for our stochastic halfbilexicalized 2mtg consists of rewrite rules of the following two forms with english above and l below where uppercase symbols are nonterminals and lowercase symbols are words one approach to assigning a probability to such a rule is to make an independence assumption for example there are two powerful reasons to model the bilingual grammar in this factored wayfirst we know of no treealigned corpora from which bilingual rewrite probabilities could be estimated this rules out the possibility of supervised training of the joint rulessecond separating the probabilities allows separate estimation of the probabilitiesresulting in two wellunderstood parameter estimation tasks which can be carried out independently1 this factored modeling approach bears a strong resemblance to the factored monolingual parser of klein and manning which combined an english dependency model and an unlexicalized pcfgthe generative model used by klein and manning consisted of multiplying the two component models the model was therefore deficientwe go a step farther replacing the deficient generative model with a loglinear modelthe underlying parsing algorithm remains the same but the weights are no longer constrained to sum to onethe weights may be estimated using discriminative training or as if they were logprobabilities using smoothed maximum likelihood estimation because we use this model only for inference it is not necessary to compute a partition function for the combined loglinear modelin addition to the two monolingual syntax models we add a wordtoword translation model to the mixin this paper we use a translation model to induce only a single best word matching but in principle the translation model could be used to weight all possible wordword links and the parser would solve the joint alignmentparsing problem2 as a testbed for our experiments the penn korean treebank provides 5083 korean constituency trees along with english translations and their treesthe ktb also analyzes korean words into their component morphemes and morpheme tags which allowed us to train a morphological disambiguation modelto make the most of this small corpus we performed all our evaluations using fivefold crossvalidationdue to the computational expense of bilingual parsing we produced a subcorpus of the ktb limiting english sentence length to 10 words or 27 of the full datawe then randomized the order of sentences and divided the data into five equal test sets of 280 sentences each complementing each test set the remaining data were used for training sets of increasing size to simulate various levels of data scarcitywe now describe parameter estimation for the four component models that combine to make our full system our english syntax model is based on weighted bilexical dependenciesthe model predicts the generation of a child pair dependent upon its parent and the tag of 
the parents most recent child on the same side these events correspond quite closely to the parser described by eisners model c but instead of the rules receiving conditional probabilities we use a loglinear model and allow arbitrary weightsthe model does not predict pos tags it assumes they are given even in testnote that the dynamic program used for inference of bilexical parses is indifferent to the origin of the rule weights they could be logprobabilities or arbitrary numbers as in our modelthe parsing algorithm need not change to accommodate the new parameterizationin this model the probability of a pair is given by where 0 are the model parameters and f is a vector function such that fi is equal to the number of times a feature fires in parameter estimation consists of selecting weights 0 to maximize the conditional probability of the correct parses given observed sentences3 another important advantage of moving to loglinear models is the simple handling of data sparsenessthe feature templates used by our model are shown in table 2the first feature corresponds to the fullydescribed childgeneration event others are similar but less informativethese overlapping features offer a kind of backoff so that each childgeneration events weight receives a contribution from several granularities of descriptionfeature selection is done by simple thresholding if a feature is observed 5 times or more in the training set its weight is estimated otherwise its weight is locked at model formalism estimation role english syntax bilexical dependency discriminative estimation combines with korean grammar syntax for bilingual parsing korean morphology twosequence discriminative estimation best analysis used as input trigram model over a lattice to tm training and to parsing korean syntax pcfg smoothed mle combines with english syntax for bilingual parsing translation model ibm models 14 unsupervised estimation best analysis used as both directions input to bilingual parsing dency parsertx is a tag and wx is a wordp indicates the parent a the previous child and c the nextgenerated childd is the direction the last two templates correspond to stopping0if a feature is never seen in training data we give it the same weight as the minimumvalued feature from the training set to handle outofvocabulary words we treat any word seen for the first time in the final 300 sentences of the training corpus as oovthe model is smoothed using a gaussian prior with unit variance on every weightbecause the left and right children of a parent are independent of each other our model can be described as a weighted split head automaton grammar this allowed us to use eisner and sattas o parsing algorithm to speed up training4 this speedup could not however be applied to the bilingual parsing algorithm since a split parsing algorithm will preclude inference of certain configurations of word alignments that are allowed by a nonsplit parser we trained the parser on sentences of 15 words or fewer in the wsj treebank sections 01215 9949 dependency attachment accuracy was achieved on the training set and 7668 and 7500 were achieved on sections 22 and 23 respectivelyperformance on the english side of our ktb test set was 7182 this type of discriminative training has been applied to loglinear variants of hidden markov models and to lexicalfunctional grammar to our knowledge it has not been explored for contextfree models a review of discriminative approaches to parsing can be found in chiang a korean word typically consists of a head 
morpheme followed by a series of closedclass dependent morphemes such as case markers copula topicalizers and conjunctionssince most of the semantic content resides in the leading head morpheme we eliminate for word alignment all trailing morphemes which reduces the ktbs vocabulary size from 10052 to 3104existing morphological processing tools for many languages are often unweighted finitestate transducers that encode the possible analyses for a surface form wordone such tool klex is available for korean unfortunately while the unweighted fst describes the set of valid analyses it gives no way to choose among themwe treat this as a noisy channel korean morphemetag pairs are generated in sequence by some process then passed through a channel that turns them into korean words the channel is given by the fst but without any weightsto select the best output we model the source processwe model the sequence of morphemes and their tags as a loglinear trigram modeloverlapping trigram bigram and unigram features provide backoff information to deal with data sparseness for each training sentence we used the fstencoded morphological dictionary to construct a lattice of possible analysesthe lattice has a sausage form with all paths joining between each wordwe train the feature weights to maximize the weight of the correct path relative to all paths in the latticein contrast lafferty et al train to maximize the the probability of the tags given the wordsover training sentences maximize where ti is the correct tagging for sentence i mi is the correct morpheme sequencethere are a few complicationsfirst the coverage of the fst is of course not universal in fact it cannot analyze 466 of word types in the ktbwe tag such words as atomic common nouns second many of the analyses in the ktb are not admitted by the fst 2106 of correct analyses are not admitted by the fst 685 do not have an fst analysis matching in the first tag and morpheme 363 do not have an fst analysis matching the full tag sequence and 122 do not have an analysis matching the first tagthese do not include the 218 of tokens with no analysis at allwhen this happened in training we added the correct analysis to the latticeto perform inference on new data we construct a lattice from the fst and use a dynamic program to find the best path through the latticeunseen features are given the weight bmintable 4 shows performance on ambiguous tokens in training and test data because we are using small training sets parameter estimates for a lexicalized korean probabilistic grammar are likely to be highly unreliable due to sparse datatherefore we use an unlexicalized pcfgbecause the pos tags are given by the morphological analyzer the pcfg need not predict words only pos tagsrule probabilities were estimated with mlesince only the sentence nonterminal s was smoothed the grammar could parse any sequence of tags but was relatively sparse which kept bilingual runtime down6 when we combine the pcfg with the other models to do joint bilingual parsing we simply use the logs of the pcfg probabilities as if they were loglinear weightsa pcfg treated this way is a perfectly valid loglinear model the exponentials of its weights just happen to satisfy certain sumtoone constraintsin the spirit of joint optimization we might have also combined the korean morphology and syntax models into one inference taskwe did not do this largely out of concerns over computational expense this parser independent of the bilingual parser is evaluated in 4in our bilingual parser the 
english and korean parses are mediated through wordtoword translational correspondence linksunlike the syntax models the translation models were trained without the benefit of labeled datawe used the giza implementation of the ibm statistical translation models to obtain reliable word translation estimates we trained on a bilingual corpus in addition to the ktb training setthe foreign broadcast information service dataset contains about 99000 sentences of korean and 72000 of english translationfor our training we extracted a relatively small parallel corpus of about 19000 highconfidence sentence pairsas noted above koreans productive agglutinative morphology leads to sparse estimates of word frequencieswe therefore trained our translation models after replacing each korean word with its first morpheme stripped of its closedclass dependent morphemes as described in 32the size of the translation tables made optimal bilingual parsing prohibitive by exploding the number of possible analyseswe therefore resorted to using gizas hypothesized alignmentssince the ibm models only hypothesize onetomany alignments from target to source we trained using each side of the bitext as source and target in turnwe could then produce two kinds of alignment graphs by taking either the intersection or the union of the links in the two giza alignment graphsall words not in the resulting alignment graph are set to align to our bilingual parser deals only in onetoone alignments the intersection graph yields a mappingthe union graph yields a set of links which may permit different onetoone mappingsusing the union graph therefore allows for flexibility in the word alignments inferred by the bilingual parser but this comes at computational expense even with over 20000 sentence pairs of training data the hypothesized alignments are relatively sparsefor the intersection alignments an average of 23 of nonpunctuation korean words and 17 of nonpunctuation english words have a link to the other languagefor the union alignments this improves to 88 for korean and 22 for englisha starker measure of alignment sparsity is the accuracy of english dependency links projected onto koreanfollowing hwa et al we looked at dependency links in the true english parses from the ktb where both the dependent and the head were linked to words on the korean side using the intersection alignmentnote that hwa et al used not only the true english trees but also handproduced alignmentsif we hypothesize that if english words i and j are in a parentchild relationship then so are their linked korean words then we infer an incomplete dependency graph for the korean sentences whose precision is around 4953 but whose recall is standard deviations are shown over fivefold crossvalidationover 65 of word tokens are ambiguousthe accuracy of the first tag in each word affects the pcfg and the accuracy of the first morpheme affects the translation model an abysmal 25367having trained each part of the model we bring them together in a unified dynamic program to perform inference on the bilingual text as described in 2in order to experiment easily with different algorithms we implemented all the morphological disambiguation and parsing models in this paper in dyna a new language for weighted dynamic programming for parameter estimation we used the complementary dynamite tooljust as cky parsing starts with words in its chart the dynamic program chart for the bilingual parser is seeded with the links given in the hypothesized word alignmentall our current results 
are optimal under the model but as we scale up to more complex data we might introduce a heuristics or at the possible expense of optimality a beam search or pruning techniquesour agenda discipline is uniformcost search which guarantees that the first full parse discovered will be optimalif none of the weights are positivein our case we are maximizing sums of negative weights as if working with log probabilities8 when evaluating our parsing output against the test data from the ktb we do not claim credit for the single outermost bracketing or for unary productionssince unary productions do not translate well from language to language we collapse them to their lower nodeswe compare our bilingual parser to several baseline systemsthe first is the korean pcfg trained on the small ktb training sets as described in 33we also consider wus stochastic inversion transduction grammar as well as strictly left and rightbranching treeswe report the results of fivefold crossvalidation with the mean and standard deviation since it is unlexicalized the pcfg parses sequences of tags as output by the morphological analysis modelby contrast we can build translation tables for the sitg directly from surface wordsand thus not use any labeled training data at allor from the sequence of head morphemesexperiments showed however that the sitg using words consistently outperformed the sitg using morphemeswe also implemented wus treetransformation algorithm to turn full binarybranching sitg output into flatter treesfinally we can provide extra information to the sitg by giving it a set of english bracketings that it must respect when constructing the joint treeto get an upper bound on performance we used the true parses from the english side of the ktbonly the pcfg of course can be evaluated on labeled bracketing although labeled precision and recall on test data generally increase with more training data the slightly lower performance at the highest training set size may indicate overtraining of this simple modelunlabeled precision and recall show continued improvement with more korean training dataeven with help from the true english trees the unsupervised sitgs underperform pcfgs trained on as few as 32 sentences with the exception of unlabeled recall in one experimentit seems that even some small amount of knowledge of the language helps parsingcrossing brackets for the flattened sitg parses are understandably lowerthe output of our bilingual parser contains three types of constituents englishonly koreanonly and bilingualthe korean parse induced by the koreanonly and bilingual constituents is filtered so constituents with intermediate labels are eliminateda second filter we consider is to keep only the bilingual constituents corresponding to an english head words maximal spanthis filter will eliminate constituents whose english correspondent is a head word with some of its dependentssuch partial english constituents are byproducts of the parsing and do not correspond to the modeled syntaxwith good word alignments the english parser can help disambiguate korean phrase boundaries and overcome erroneous morphological analyses results without and with the second filter are shown in table 7because larger training datasets lead to larger pcfgs the grammar constant increasesour bilingual parser implementation is on the cusp of practicality when the grammar constant increased we were unable to parse longer sentencestherefore the results given for bilingual parsing are on reduced test sets where a length filter was 
applied sentences with e f t were removed for varying values of t while neither bilingual parser consistently beats the pcfg on its own they offer slight complementary improvements on small training datasets of 32 and 64 sentences the bilingual parser without the english head span filter gives a small recall improvement on average at similar precisionneither of these differences is significant when measured with a pairedsample ttestin contrast the parser with the english head span filter sacrifices significantly on recall for a small but significant gain in precision at the 001 levelcrossing brackets at all levels are significantly lower with the english head span filterwe can describe this effect as a filtering of korean constituents by the english model and word alignmentsconstituents that are not strongly evident on the english side are simply removedon small training datasets this effect is positive although good constituents are lost so that recall is poor compared to the pcfg precision and crossing brackets are improvedas one would expect as the amount of training data increases the advantage of using a bilingual parser vanishesthere is no benefit from falling back on the english parser and word alignments to help disambiguate the korean structuresince we have not pruned our search space in these experiments we can be confident that all variations are due to the influence of the translation and english syntax modelsour approach has this principal advantage the various morphology parsing and alignment components can be improved or replaced easily without needing to retrain the other modulesthe low dependency projection results in conjunction with our modest overall gains indicate that the alignmenttranslation model should receive the most attentionin all the bilingual experiments there is a small positive correlation for sentences at each length between the proportion of korean words aligned to english and measures ofparsing accuracyimproved english parserssuch as collins modelshave also been implemented in dyna the dynamic programming framework used here combining separately trained systems and then searching for an optimal solution is standard practice in statistical continuous speech recognition and statistical machine translation composition is even more of a staple in finitestate frameworks finally factored models involving parses have been used to guide searchcharniak et al combine separately trained parse production probabilities with translation probabilities to prune a parse forest hypothesized by the translation modelas discussed in 2 klein and manning guide their parsers search using a combination of separate unlexicalized pcfg and lexical dependency modelsthe extent to which assumptions about similarity of syntax across languages are empirically valid has received attention in a few pilot studiesfox has considered english and french and hwa et al investigate chinese and englishxia et al compare the rule templates of lexicalized tree adjoining grammars extracted from treebanks in english chinese and koreanin the context of machine translation dorr investigated divergences between two languages structuressome proposals have sidestepped the empirical issue entirelywu and alshawi et al used unsupervised learning on parallel text to induce syntactic analysis that was useful for their respective applications in phrasal translation extraction and speech translation though not necessarily similar to what a human annotator would selectnote a point of divergence of the sitg from our 
bilingual parsing system: the SITG allows only words, not higher structures, to match null in the other language, and thus requires that the trees in parallel sentences be isomorphic. Yamada and Knight introduced tree-to-string alignment on Japanese data, and Gildea performed tree-to-tree alignment on the Korean Treebank, allowing for non-isomorphic structures; he applied this to word-to-word alignment. Finally, inspired by these intuitive notions of translational correspondence, Cherry and Lin include dependency features in a word alignment model to improve non-syntactic baseline systems. In more formal work, Melamed proposes multitext grammars and algorithms for parsing them. Shieber and Schabes describe a synchronous tree adjoining grammar. While both of these formalisms require bilingual grammar rules, Eisner describes an algorithm for learning tree substitution grammars from unaligned trees. Working on the Penn Korean Treebank, Sarkar and Han made a single training/test split and used 91% of the sentences to train a morphological disambiguator and a lexicalized tree adjoining grammar based parsing system. For a monolingual approach to training a parser with scarce resources, see work that applies co-training and corrected co-training to bootstrapping an English parser starting with 1000 parsed training sentences.
[Table caption: the SITG system is evaluated on test data but is trained without labeled data; the SITG with English trees uses true treebank English parses to constrain the search and thus represents an upper bound. The table shows means and standard deviations for fivefold cross-validation; the best test results in each column are in bold.]
[Table caption: boldfaced numbers for the bilingual parsers indicate significant improvements on the PCFG baseline using the paired-sample t-test at the 0.01 level.]
Although this technique has interesting properties, our combined optimization should be more stable, since it does not involve iterative example selection. We have presented a novel technique for merging simple, separately trained models for Korean parsing, English dependency parsing, and word translation, and optimizing the joint result using dynamic programming. We showed small but significant improvements for Korean parsers trained on small amounts of labeled data. We would like to thank Elliott Drabek, Jason Eisner, Eric Goldlust, Philip Resnik, Charles Schafer, David Yarowsky, and the reviewers for their comments and assistance, and Chung-hye Han, Na-Rae Han, and Anoop Sarkar for their help with the Korean resources. This work was supported under a National Science Foundation Graduate Research Fellowship and a Fannie and John Hertz Foundation Fellowship.
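The word-alignment preprocessing this paper describes — running GIZA with each side of the bitext as source and target in turn, then taking the intersection or union of the two directional link sets, with unlinked words taken to align to NULL — reduces to simple set operations. The sketch below is an illustration under an assumed data format (0-based index pairs per sentence pair), not the authors' implementation:

```python
def combine_directional_alignments(e2f_links, f2e_links):
    """Combine two one-to-many GIZA-style alignments for ONE sentence pair.

    e2f_links, f2e_links: iterables of (english_index, foreign_index) pairs,
    one from each source/target direction (the second already flipped to (e, f)).
    Returns (intersection, union) as sets of (english_index, foreign_index).
    """
    e2f, f2e = set(e2f_links), set(f2e_links)
    return e2f & f2e, e2f | f2e


def null_aligned(n_english, n_foreign, links):
    """Words absent from the alignment graph are taken to align to NULL."""
    linked_e = {e for e, _ in links}
    linked_f = {f for _, f in links}
    return (sorted(set(range(n_english)) - linked_e),
            sorted(set(range(n_foreign)) - linked_f))


if __name__ == "__main__":
    # Hypothetical 4-word English / 3-word Korean sentence pair.
    e2f = [(0, 0), (1, 1), (2, 1)]   # English-as-source direction
    f2e = [(0, 0), (1, 1), (3, 2)]   # Korean-as-source direction, flipped to (e, f)
    inter, union = combine_directional_alignments(e2f, f2e)
    print("intersection:", sorted(inter))          # sparse one-to-one mapping
    print("union:", sorted(union))                 # denser, may permit several 1-1 mappings
    print("NULL-aligned (intersection):", null_aligned(4, 3, inter))
```

As the text notes, the intersection yields a sparse one-to-one mapping the bilingual parser can use directly, while the union leaves the parser some freedom in choosing among links, at extra computational cost.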
W04-3207
Bilingual parsing with factored estimation: using English to parse Korean. We describe how simple, commonly understood statistical models, such as statistical dependency parsers, probabilistic context-free grammars, and word-to-word translation models, can be effectively combined into a unified bilingual parser that jointly searches for the best English parse, Korean parse, and word alignment, where these hidden structures all constrain each other. The model used for parsing is completely factored into the two parsers and the TM, allowing separate parameter estimation. We evaluate our bilingual parser on the Penn Korean Treebank and against several baseline systems and show improvements parsing Korean with very limited labeled data. We proposed to merge an English parser, a word alignment model, and a Korean PCFG parser trained from a small number of Korean parse trees under a unified log-linear model.
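The inference described in the body of this paper relies on a uniform-cost agenda discipline: items are popped in order of increasing cost (negated log-probabilities, so never negative), which guarantees that the first complete analysis popped is optimal. The generic sketch below illustrates only that search discipline; it is not the Dyna program the authors used, and the toy item structure and goal test are invented for the example:

```python
import heapq

def uniform_cost_search(start_items, expand, is_goal):
    """Generic uniform-cost agenda over items with non-negative costs.

    start_items : iterable of (cost, item) pairs
    expand(item, cost) : yields (new_cost, new_item) pairs with new_cost >= cost
    is_goal(item) : True for complete analyses
    Because costs are negated log-probabilities (never negative), the first
    goal item popped from the agenda is guaranteed to be optimal.
    """
    agenda = list(start_items)
    heapq.heapify(agenda)
    done = set()
    while agenda:
        cost, item = heapq.heappop(agenda)
        if item in done:
            continue                  # a cheaper copy was already processed
        done.add(item)
        if is_goal(item):
            return cost, item         # optimal by the uniform-cost guarantee
        for new_cost, new_item in expand(item, cost):
            if new_item not in done:
                heapq.heappush(agenda, (new_cost, new_item))
    return None

if __name__ == "__main__":
    import math
    # Toy weighted graph: nodes are integers, edge weights are probabilities.
    edges = {0: [(1, 0.5), (2, 0.4)], 1: [(3, 0.9)], 2: [(3, 0.2), (4, 0.1)], 3: [(4, 0.9)]}
    expand = lambda node, cost: [(cost - math.log(p), nxt) for nxt, p in edges.get(node, [])]
    print(uniform_cost_search([(0.0, 0)], expand, lambda node: node == 4))
```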
mining verynonparallel corpora parallel sentence and lexicon extraction via bootstrapping and them we present a method capable of extracting parallel sentences from far more disparate verynonparallel corpora than previous comparable corpora methods by exploiting bootstrapping on top of ibm model 4 them step 1 of our method like previous methods uses similarity measures to find matching documents in a corpus first and then extracts parallel sentences as well as new word translations from these documents but unlike previous methods we extend this with an iterative bootstrapping framework based on the principle of which claims documents found to contain one pair of parallel sentences must contain others even if the documents are judged to be of low similarity we rematch documents based on extracted sentence pairs and refine the mining process iteratively until convergence this novel principle allows us to add parallel sentences from documents to the baseline set experimental results show that our proposed method is nearly 50 more effective than the baseline method without iteration we also show that our method is effective in boosting the performance of the ibm model 4 them lexical learner as the latter though stronger than model 1 used in previous work does not perform well on data from verynonparallel corpus parallel sentences are important resources for training and improving statistical machine translation and crosslingual information retrieval systemsvarious methods have been previously proposed to extract parallel sentences from multilingual corporasome of them are described in detail in the challenge of these tasks varies by the degree of parallelness of the input multilingual documentsfigure1parallel sentence and lexicon extraction via bootstrapping and them the most challenging task is to extract bilingual sentences and lexicon from verynonparallel datarecent work on extracting parallel sentences from comparable data and others on extracting paraphrasing sentences from monolingual corpora are based on the findtopicextractsentence principle which claims that parallel sentences only exist in document pairs with high similaritythey all use lexical information to match documents first before extracting sentences from these documentshowever the nonparallel corpora used so far in the previous work tend to be quite comparablezhao and vogel used a corpus of chinese and english versions of news stories from the xinhua news agency with roughly similar sentence order of contentthis corpus can be more accurately described as noisy parallel corpusbarzilay and elhadad mined paraphrasing sentences from weather reportsmunteanu et al used news articles published within the same 5day windowall these corpora have documents in the same matching topicsthey can be described as ontopic documentsin fact both zhao and vogel and barzilay and elhadad assume similar sentence orders and applied dynamic programming in their workin our work we try to find parallel sentences from far more disparate verynonparallel corpora than in any previous worksince many more multilingual texts available today contain documents that do not have matching documents in the other language we propose finding more parallel sentences from offtopic documents as well as ontopic documentsan example is the tdt corpus which is an aggregation of multiple news sources from different time periodswe suggest the findonegetmore principle which claims that as long as two documents are found to contain one pair of parallel sentence they must 
contain others as wellbased on this principle we propose an effective bootstrapping method to accomplish our task we also apply the ibm model 4 them lexical learning to find unknown word translations from the extracted parallel sentences from our systemthe ibm models are commonly used for word alignment in statistical mt systemsthis them method differs from some previous work which used a seedword lexicon to extract new word translations or word senses from comparable corpora there have been conflicting definitions of the term comparable corpora in the research communityin this paper we contrast and analyze different bilingual corpora ranging from the parallel noisy parallel comparable to verynonparallel corporaa parallel corpus is a sentencealigned corpus containing bilingual translations of the same documentthe hong kong laws corpus is a parallel corpus with manually aligned sentences and is used as a parallel sentence resource for statistical machine translation systemsthere are 313659 sentence pairs in chinese and englishalignment of parallel sentences from this type of database has been the focus of research throughout the last decade and can be accomplished by many offtheshelf publicly available alignment toolsa noisy parallel corpus sometimes also called a comparable corpus contains nonaligned sentences that are nevertheless mostly bilingual translations of the same document extracted bilingual word senses lexicon and parallel sentence pairs from such corporaa corpus such as hong kong news contains documents that are in fact rough translations of each other focused on the same thematic topics with some insertions and deletions of paragraphsanother type of comparable corpus is one that contains nonsentencealigned nontranslated bilingual documents that are topicalignedfor example newspaper articles from two sources in different languages within the same window of published dates can constitute a comparable corpusrapp grefenstette fung and lo and kaji derived bilingual lexicons or word senses from such corporamunteanu et al constructed a comparable corpus of arabic and english news stories by matching the publishing dates of the articlesfinally a verynonparallel corpus is one that contains far more disparate verynonparallel bilingual documents that could either be on the same topic or not the tdt3 corpus is such a corpusit contains transcriptions of various news stories from radio broadcasting or tv news report from 19982000 in english and chinesein this corpus there are about 7500 chinese and 12400 english documents covering more around 60 different topicsamong these 1200 chinese and 4500 english documents are manually marked as being intopicthe remaining documents are marked as offtopic as they are either only weakly relevant to a topic or irrelevant to all topics in the existing documentsfrom the intopic documents most are found to have high similaritya few of the chinese and english passages are almost translations of each othernevertheless the existence of a considerable amount of offtopic document gives rise to more variety of sentences in terms of content and structureoverall the tdt 3 corpus contains 110000 chinese sentences and 290000 english sentencessome of the bilingual sentences are translations of each other while some others are bilingual paraphrasesour proposed method is a first approach that can extract bilingual sentence pairs from this type of verynonparallel corpusto quantify the parallelness or comparability of bilingual corpora we propose using a lexical matching 
score computed from the bilingual word pairs occurring in the bilingual sentence pairsmatching bilingual sentence pairs are extracted from different corpora using existing and the proposed methodswe then identify bilingual word pairs that appear in the matched sentence pairs by using a bilingual lexicon the lexical matching score is then defined as the sum of the mutual information score of a known set of word pairs that appear in the corpus where f is the cooccurrence frequency of bilexicon pair in the matched sentence pairs f and f are the occurrence frequencies of chinese word wc and english word we in the bilingual corpus parallel corpus a comparable noisy parallel corpus and a verynonparallel corpus we can see that the more parallel or comparable the corpus the higher the overall lexical matching score isit is well known that existing work on sentence alignment from parallel corpus makes use of one or multiple of the following principles more words that are translations of each other tend to be translations themselvesconversely the context sentences of translated word pairs are similarfor noisy parallel corpora sentence alignment is based on embedded content wordsthe word alignment principles used in previous work are as follows different sentence alignment algorithms based on the above principles can be found in manning and schűtze somers wu and veronis these methods have also been applied recently in a sentence alignment shared task at naacl 20031we have also learned that as bilingual corpora become less parallel it is better to rely on lexical information rather than sentence length and position informationfor comparable corpora the alignment principle made in previous work is as follows with high similarity scores findtopicextractsentence we take a step further and propose a new principle for our taskexisting algorithms such as zhao and vogel barzilay and elhadad munteanu et al for extracting parallel or paraphrasing sentences from comparable documents are based on the findtopicextractsentence principle which looks for document pairs with high similarities and then look for parallel sentences only from these documentsbased on our proposed findonegetmore principle we suggest that there are other dissimilar documents that might contain more parallel sentenceswe can iterate this whole process for improved results using a bootstrapping methodfigure 2 outlines the algorithm in more detailin the following sections 5155 we describe the document preprocessing step followed by the four subsequent iterative steps of our algorithmthe documents are word segmented with the language data consortium chineseenglish dictionary 20then the chinese document is glossed using all the dictionary entrieswhen a chinese word has multiple possible translations in english it is disambiguated by a method extended from this initial step is based on the same findtopicextractsentence principle as in earlier worksthe aim of this step is to roughly match the chineseenglish documents pairs that have the same topic in order to extract parallel sentences from themsimilar to previous work comparability is defined by cosine similarity between document vectorsboth the glossed chinese document and english are represented in word vectors with term weightswe evaluated different combinations of term weighting of each word in the corpus term frequency inverse document frequency tfidf and the product of a function of tf and idfthe documents here are sentenceswe find that using idf alone gives the best sentence pair rankthis 
is probably due to the fact that frequencies of bilingual word pairs are not comparable in a verynonparallel corpuspairwise similarities are calculated for all possible chineseenglish document pairs and bilingual documents with similarities above a certain threshold are considered to be comparablefor verynonparallel corpora this documentmatching step also serves as topic alignmentagain based on the findtopicextractsentence principle we extract parallel sentences from the matched english and chinese documentseach sentence is again represented as word vectorsfor each extracted document pair pairwise cosine similarities are calculated for all possible chineseenglish sentence pairssentence pairs above a set threshold are considered parallel and extracted from the documentssentence similarity is based on the number of words in the two sentences that are translations of each otherthe better our bilingual lexicon is the more accurate the sentence similarity will bein the following section we discuss how to find new word translations54them lexical learning from matched sentence pairs this step updates the bilingual lexicon according to the intermediate results of parallel sentence extractionnew bilingual word pairs are learned from the extracted sentence pairs based on an them learning methodwe use the giza implementation of the ibm statistical translation lexicon model 4 for this purposethis model is based on the conditional probability of a source word being generated by the target word in the other language based on them estimation from aligned sentenceszhao and vogel showed that this model lends itself to adaptation and can provide better vocabulary coverage and better sentence alignment probability estimationin our work we use this model on the intermediate results of parallel sentence extraction ie on a set of aligned sentence pairs that may or may not truly correspond to each otherwe found that sentence pairs with high alignment scores are not necessarily more similar than othersthis might be due to the fact that them estimation at each intermediate step is not reliable since we only have a small amount of aligned sentences that are truly parallelthe them learner is therefore weak when applied to bilingual sentences from verynonparallel corpuswe decided to try using parallel corpora to initialize the them estimation as in zhao and vogel the results are discussed in section 6this step augments the earlier matched documents by the findonegetmore principlefrom the set of aligned sentence pairs we look for other documents judged to be dissimilar in the first step that contain one or more of these sentence pairswe further find other documents that are similar to each of the monolingual documents foundthis new set of documents is likely to be offtopic yet contains segments that are ontopicfollowing our new alignment principle we believe that these documents might still contain more parallel sentence candidates for subsequent iterationsthe algorithm then iterates to refine document matching and parallel sentence extractionthe ibm model parameters including sentence alignment score and word alignment scores are computed in each iterationthe parameter values eventually stay unchanged and the set of extracted bilingual sentence pairs also converges to a fixed sizethe system then stops and gives the last set of bilingual sentence pairs as the final outputwe evaluate our algorithm on a verynonparallel corpus of tdt3 data which contains various news stories transcription of radio broadcasting or tv news 
report from 19982000 in english and chinese channelswe compare the results of our proposed method against a baseline method that is based on the conventional findtopicextractsentence principle onlywe investigate the performance of the ibm model 4 them lexical learner on data from verynonparallel corpus and evaluate how our method can boost its performancethe results are described in the following subsectionssince previous works were carried out on different corpora in different language pairs we cannot directly compare our method against themhowever we implement a baseline method that follows the same findtopicextractsentence principle as in earlier workthe baseline method shares the same preprocessing document matching and sentence matching steps with our proposed methodhowever it does not iterate to update the comparable document set the parallel sentence set or the bilingual lexiconhuman evaluators manually check whether the matched sentence pairs are indeed parallelthe precision of the parallel sentences extracted is 428 for the top 2500 pairs ranked by sentence similarity scoresthere are 110000 chinese sentences and 290000 english sentences in tdt3 which lead to more than 30 billion possible sentence pairsfew of the sentence pairs turn out to be exact translations of each other but many are bilingual paraphrasesfor example in the following extracted sentence pair the english sentence has the extra phrase under the agreement which is missing from the chinese sentence the precision of parallel sentences extraction is 657 for the top 2500 pairs using our method which has a 50 improvement over the baselinein addition we also found that the precision of parallel sentence pair extraction increases steadily over each iteration until convergencein this section we discuss experimental results that lead to the claim that our proposed method can boost a weak ibm model 4 them lexical learner631them lexical learning is weak on bilingual sentences from verynonparallel corpora we compare the performances of the ibm model 4 them lexical learning on parallel data and verynonparallel data by looking at a common set of source words and their topn translation candidates extractedwe found that the ibm model 4 them learning performs much worse on tdt3 datafigure 3 shows that the them learner performs about 30 worse on average on the tdt3 datasince the ibm model parameters can be better estimated if the input sentences are more parallel we have tried to add parallel sentences to the extracted sentence pairs in each iteration step as proposed by zhao and vogel however our experiments showed that adding parallel corpus gives no improvement on the final outputthis is likely due to the parallel corpus is not in the same domain as the tdt corpus and there are simply not enough parallel sentences extracted at each step for the reliable estimation of model parameters64bootstrapping is significantly more useful than new word translations for mining parallel sentences it is important for us to gauge the effects of the two main ideas in our algorithm bootstrapping and them lexicon learning on the extraction parallel sentences from verynonparallel corporathe baseline experiment shows that without iteration the performance is at 428we carried out another set of experiment of using bootstrapping where the bilingual lexicon is not updated in each iterationthe bilingual sentence extraction accuracy of the top 2500 sentence pairs in this case dropped to 652 with only 1 relative degradationbased on the above we conclude 
that them lexical learning has little effect on the overall parallel sentence extraction outputthis is probably due to the fact that whereas them does find new word translations this has little effect on the overall glossing of the chinese document since such new words are rareprevious work on extracting bilingual or monolingual sentence pairs from comparable corpora has only been applied to documents that are within the same topic or have very similar publication datesone principle for previous methods is findtopicextractsentence which claims that parallel or similar sentences can only be found in document pairs with high similaritywe propose a new findonegetmore principle which claims that document pairs that contain at least one pair of matched sentences must contain others even if these document pairs do not have high similarity scoresbased on this we propose a novel bootstrapping method that successfully extracts parallel sentences from a far more disparate and verynonparallel corpus than reported in previous workthis verynonparallel corpus tdt3 data includes documents that are offtopic ie documents with no corresponding topic in the other languagethis is a completely unsupervised methodevaluation results show that our approach achieves 657 accuracy and a 50 relative improvement from baselinethis shows that the proposed method is promisingwe also find that the ibm model 4 lexical learner is weak on data from verynonparallel corpus and that its performance can be boosted by our multilevel bootstrapping method whereas using parallel corpus for adaptation is not nearly as usefulin contrast figure 3 shows that when we apply bootstrapping to the them lexical learner the bilingual lexicon extraction accuracy is improved by 20 on the average evaluated on topn translation candidates of the same source words showing that our proposed method can boost a weak them lexical learner even on data from verynonparallel corpusin addition we compare and contrast a number of bilingual corpora ranging from the parallel to comparable and to verynonparallel corporathe parallelness of each type of corpus is quantified by a lexical matching score calculated for the bilexicon pair distributed in the aligned bilingual sentence pairswe show that this scores increases as the parallelness or comparability of the corpus increasesfinally we would like to suggest that bootstrapping can in the future be used in conjunction with other sentence or word alignment learning methods to provide better mining resultsfor example methods for learning a classifier to determine sentence parallelness such as that proposed by munteanu et al can be incorporated into our bootstrapping framework
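The lexical matching score used above to quantify how parallel a corpus is sums a mutual-information-style term over known bilexicon pairs that appear in the matched sentence pairs. The sketch below is one plausible reading of that description; the exact normalization of the frequencies is not fully specified in the text, so the probability estimates here are an assumption:

```python
import math

def lexical_matching_score(pair_counts, zh_counts, en_counts, bilexicon, n_pairs):
    """Sum a pointwise-mutual-information-style term over known lexicon pairs.

    pair_counts[(wc, we)] : co-occurrence count of the pair in matched sentence pairs
    zh_counts[wc], en_counts[we] : occurrence counts of each word in the corpus
    bilexicon : set of known (chinese_word, english_word) translation pairs
    n_pairs : number of matched sentence pairs (assumed normalizer; the paper's
              exact normalization is not given in the text)
    """
    score = 0.0
    for (wc, we) in bilexicon:
        joint = pair_counts.get((wc, we), 0)
        if joint == 0:
            continue                                  # pair never co-occurs: contributes nothing
        p_joint = joint / n_pairs
        p_wc = zh_counts[wc] / n_pairs
        p_we = en_counts[we] / n_pairs
        score += math.log(p_joint / (p_wc * p_we))    # PMI of the bilexicon pair
    return score
```

On this reading, a more parallel corpus yields more and stronger co-occurrences of known translation pairs and hence a higher score, which matches the ordering over parallel, comparable, and very-non-parallel corpora reported above.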
W04-3208
Mining very-non-parallel corpora: parallel sentence and lexicon extraction via bootstrapping and EM. We present a method capable of extracting parallel sentences from far more disparate very-non-parallel corpora than previous comparable-corpora methods, by exploiting bootstrapping on top of IBM Model 4 EM. Step 1 of our method, like previous methods, uses similarity measures to find matching documents in a corpus first, and then extracts parallel sentences as well as new word translations from these documents. But unlike previous methods, we extend this with an iterative bootstrapping framework based on the principle of find-one-get-more, which claims that documents found to contain one pair of parallel sentences must contain others, even if the documents are judged to be of low similarity. We re-match documents based on extracted sentence pairs and refine the mining process iteratively until convergence. This novel find-one-get-more principle allows us to add more parallel sentences from dissimilar documents to the baseline set. Experimental results show that our proposed method is nearly 50% more effective than the baseline method without iteration. We also show that our method is effective in boosting the performance of the IBM Model 4 EM lexical learner, as the latter, though stronger than Model 1 used in previous work, does not perform well on data from a very-non-parallel corpus.
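The iterative process this summary describes — match documents, extract sentence pairs, learn new lexical translations with EM, then re-match documents that share an extracted sentence pair (find-one-get-more), repeating until convergence — can be written as a short driver loop. The function below is purely schematic: the callables it takes (match_documents, extract_sentence_pairs, update_lexicon, find_one_get_more) are hypothetical stand-ins for the cosine-similarity and IBM Model 4 EM components discussed in the paper, not real APIs:

```python
def bootstrap_parallel_sentences(zh_docs, en_docs, seed_lexicon, match_documents,
                                 extract_sentence_pairs, update_lexicon,
                                 find_one_get_more, max_iterations=10):
    """Schematic find-one-get-more bootstrapping loop (not the authors' code).

    Hypothetical component signatures assumed here:
      match_documents(zh_docs, en_docs, lexicon)          -> set of document pairs
      extract_sentence_pairs(doc_pairs, lexicon)          -> set of (zh_sent, en_sent)
      update_lexicon(sentence_pairs, lexicon)             -> updated bilingual lexicon (EM step)
      find_one_get_more(sentence_pairs, zh_docs, en_docs) -> extra document pairs from
                                                             documents sharing an extracted pair
    """
    lexicon = seed_lexicon                    # e.g. an existing bilingual dictionary
    sentence_pairs = set()
    for _ in range(max_iterations):
        doc_pairs = match_documents(zh_docs, en_docs, lexicon)
        doc_pairs |= find_one_get_more(sentence_pairs, zh_docs, en_docs)
        new_pairs = extract_sentence_pairs(doc_pairs, lexicon)
        if new_pairs <= sentence_pairs:       # converged: nothing new was extracted
            return sentence_pairs, lexicon
        sentence_pairs |= new_pairs
        lexicon = update_lexicon(sentence_pairs, lexicon)
    return sentence_pairs, lexicon
```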
calibrating features for semantic role labeling this paper takes a critical look at the features used in the semantic role tagging literature and show that the information in the input generally a syntactic parse tree has yet to be fully exploited we propose an additional set of features and our experiments show that these features lead to fairly significant improvements in the tasks we performed we further show that different features are needed for different subtasks finally we show that by using a maximum entropy classifier and fewer features we achieved results comparable with the best previously reported results obtained with svm models we believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the stateoftheart in semantic analysis there has been growing interest in domainindependent semantic analysis fed off recent efforts in semantic annotationthe availability of semantically annotated corpora such as the proposition banks and framenet have enabled the development of a rapidly growing list of statistical semantic analyzers the shared task of the conll2004 is devoted to semantic role labeling most of these systems generally take as input a syntactic parse tree and use the syntactic information as features to tag the syntactic constituents with semantic role labelsalthough these systems have shown great promise we demonstrate that the features used in previous work have not fully exploited the information that a parse tree providesin this paper we prepose an additional set of features and show that these features lead to fairly significant improvements in the tasks we performedthis paper is organized as followsin the next section we briefly describe the annotation of the proposition bank the data for our automatic semantic role labeling experimentssection 3 describes the architecture of our systemwe take a critical look at the previously used features against each subtask and propose a new set of features in section 4section 5 presents experimental results that show the effectiveness of these new features and a comparison with previous resultswe conclude in section 62 the propbank and semantic role labeling the propbank adds a layer of semantic annotation to the treebank ii to capture generalizations that are not adequately represented in the treebank parse treesfor example in both john broke the window into a million pieces yesterday and the window broke into a million pieces yesterday the window plays the same role with regard to the verb break in both sentences even though they occur in different syntactic positionsthe propbank annotation captures this regularity by assigning a semantic role label to each argument of the verb independently of its syntactic positionthis means a fixed set of roles are specified for each verb and a different label is assigned to each rolein propbank annotations these roles are labeled with a sequence of integers starting with 01 and prefixed with argfor example the verb break has four such numbered arguments argo the breaker ar cl thing broken arc instrument and arcs piecesit is worth pointing out that even though the same numbers are used to label the semantic roles of all verbs these roles can only be interpreted in a verbspecific there are some exceptions mannerthat is an argument marked with the same number egarg may not share any semantic similarities for different verbsin addition to the numbered arguments which are considered to be core to a verb there are also elements 
that are less closely related to the verbthis roughly parallels the argumentadjunct dichotomy but the distinction may not be drawn along the same lines as in the theoretic linguistics literaturethese adjunctlike elements are labeled ar gm followed by a secondary tag indicating the type of adjunctfor example yesterday in those abovementioned sentences is not specific to the verb break and instead it applies to a wide variety of verbstherefore it will be marked as ar gm followed by a secondary tag tmp indicating the temporal nature of this constituentthe secondary tags are effectively a global classification of adjunctlike elementsthere are 12 secondary tags for arcms in the proposition bank dir loc mnr tmp ext rec prd prp dis adv mod nesome verbs require different sets of arguments for different senses and accurately characterizing the semantic roles of their arguments necessitates first distinguishing these sensesfor example the verb quotpassquot takes three arguments legislative body bill and law when it means quotvote and passquot while it takes only two arguments entity moving ahead and entity falling behind when it means quotovertakequoteach sense of this verb is likely to be realized in a set of distinct subcategorization frames and is therefore called a framesetsemantic role tagging there are different ways to formulate the semantic role tagging task based on the annotation of the propbank depending on what type information one wants to learn automaticallyfor comparison purposes we ignore the frameset information for now following the practice of gildea and palmer and othersfor each verb we will predict the core arguments argo5 as well as the secondary tags for arcmsthe total tagset will 2modals and negation markers are clearly not adjunctsthey are included because they are critical to the interpretation of the events be argo5 arga3 argm x secondary tagsthere are also constituents that are not semantic arguments to a given verb and we will label such constituents nullsemantic role tagging is thus an one of n classification taskalthough it is conceivable that one can simply treat this as a multicategory classification problem there are at least two reasons why such a simple approach will not work effectivelyone is that for a given verb the majority of the constituents in a syntactic tree are not its semantic argumentswhen negative samples overwhelm positive samples the current machinelearning algorithms will not be effectivethe second reason which is more subtle is that information that is effective in separating arguments from null elements may not be as effective in distinguishing different types of arguments and vice versa as we will show in our experimentsbased on these considerations we will adopt a threestage architecture stage 1 to save training time we use a simple algorithm to filter out constituents that are clearly not semantic arguments to the predicate in questionstage 2 we then classify the candidates derived from the first stage as either semantic arguments or nonargumentsstage 3 finally we run a multicategory classifier to classify the constituents that are labeled as arguments into one of the classes plus nullstep 1 designate the predicate as the current node and collect its sisters unless its sisters are coordinated with the predicateif a sister is a pp also collect its immediate children each argument of the verbthere are again two experiment conditionsin the first experiment the constituents that are arguments to a verb is already known and the task is only to 
assign the correct semantic role label to the constituents. In the second experiment, this same task is performed on the output of the argument identification task presented in Table 1. The same experiments are repeated using automatic parses produced by the Collins parser. The results are presented in Table 2: row 1 presents results for all arguments when functional tags of the ArgMs are predicted, row 2 presents results for all arguments when functional tags are ignored, and row 3 presents results when only the core arguments are calculated. [Table legend: accuracy / f-score; GS = gold standard, CP = Collins parser.] Feature performance: Table 3 shows the performance of the new features. The baseline system uses the original features proposed in earlier work, and each row shows the improvement over the baseline when that feature is added to the baseline features. The results are on known and unknown constituents, respectively, using gold-standard treebank parses. It is clear that the syntactic frame feature results in the most improvement over the baseline, with the head of the PP parent feature being a close second. It is also worth noting that although the feature combining position and voice results in an improvement when the constituents are known, it actually results in a small loss when the constituents are unknown. This indicates that a slight change in the classification task can change a feature's performance. The last three features are drawn from prior work, and they also result in an improvement in performance. Rapid progress has been made in semantic role labeling since the PropBank annotation first became available in 2002. The progress can be attributed to better modeling techniques, more relevant features, and, in a small measure, cleaner annotation. The first system trained on the PropBank is by Gildea and Palmer, who reported 82.8% accuracy on gold-standard parses when the constituents that are semantic arguments are given, and 67.6% and 53.6% using gold-standard and automatic parses, respectively, when the constituents for the arguments have to be identified first. Since then, various degrees of improvement have been reported. As far as we know, the best results so far are reported for a system in which a wide range of features, including features extracted from named entities, verb clusters and verb senses, temporal cue words, and dynamic context, are tested with an SVM classifier. That system achieved an accuracy of 93.0% on known constituents and 89.4% on unknown constituents using gold-standard parses. They did not report results that use automatic parses with this version of the data, but using a previous version of the data they reported an f-score of 79.4% using automatic parses. By carefully designing features that can all be directly extracted from the treebank parse trees, our system achieved very comparable results using a maxent classifier and a much smaller feature set: 92.95% on known constituents, 88.51% on unknown constituents, and 76.21% when the Collins parser is used. The results on known constituents are almost identical, and the larger difference when automatic parses are used could be attributed to the different parsers, as we used output from an earlier version of the Collins parser. This paper takes a critical look at the features used in the semantic role tagging literature and shows that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features, and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that using a maximum entropy classifier and fewer features, we achieved results that are comparable to the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the state of the art in semantic analysis. We also believe that the features we proposed here are to a large extent complementary to those proposed in a recent work by Pradhan et al., and we intend to incorporate them into our system. We would like to thank Scott Cotton for providing the PropBank API, which greatly simplifies the implementation of our system. This work is funded in part by the DOD via grant MDA90402C0412 and in part by the NSF ITR via grant 13013034541984xxxx20001070.
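Stage 1 of the three-stage architecture described above prunes constituents that are clearly not arguments; the only step quoted in the text is Step 1 (collect the predicate's sisters unless they are coordinated with the predicate, and for a PP sister also collect its immediate children). The sketch below implements just that quoted step over a toy tree representation; the Node class and the crude coordination test are assumptions made for illustration, not the authors' code:

```python
class Node:
    """Toy constituent-tree node (illustration only)."""
    def __init__(self, label, children=None, parent=None):
        self.label, self.children, self.parent = label, children or [], parent
        for child in self.children:
            child.parent = self

def collect_candidates(predicate):
    """Step 1 of the pruning heuristic quoted above: collect the predicate's
    sisters (unless they are coordinated with the predicate); for a PP sister,
    also collect its immediate children."""
    candidates = []
    parent = predicate.parent
    if parent is None:
        return candidates
    if any(c.label == "CC" for c in parent.children):   # crude coordination test
        return candidates
    for sister in parent.children:
        if sister is predicate:
            continue
        candidates.append(sister)
        if sister.label == "PP":
            candidates.extend(sister.children)
    return candidates

if __name__ == "__main__":
    np_obj = Node("NP")
    pp = Node("PP", [Node("IN"), Node("NP")])
    verb = Node("VBD")
    vp = Node("VP", [verb, np_obj, pp])
    print([c.label for c in collect_candidates(verb)])   # ['NP', 'PP', 'IN', 'NP']
```

Restricting candidates this way addresses the class-imbalance problem noted above, since most constituents in a tree are not arguments of a given predicate.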
W04-3212
Calibrating features for semantic role labeling. This paper takes a critical look at the features used in the semantic role tagging literature and shows that the information in the input, generally a syntactic parse tree, has yet to be fully exploited. We propose an additional set of features, and our experiments show that these features lead to fairly significant improvements in the tasks we performed. We further show that different features are needed for different subtasks. Finally, we show that by using a maximum entropy classifier and fewer features, we achieved results comparable with the best previously reported results obtained with SVM models. We believe this is a clear indication that developing features that capture the right kind of information is crucial to advancing the state of the art in semantic analysis.
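Of the features discussed in this paper, the syntactic frame feature is reported to give the largest gain, but its exact definition is not spelled out in the text above. The sketch below shows one plausible instantiation — the sequence of the predicate's sibling labels with the predicate marked as "v" and the constituent being classified upper-cased — and should be read as a hedged guess, not the paper's definition:

```python
def syntactic_frame(sibling_labels, predicate_index, target_index):
    """Build a syntactic-frame-style feature string from the predicate's siblings.

    sibling_labels : constituent labels in surface order, e.g. ['NP', 'VBD', 'PP', 'NP']
    predicate_index : position of the predicate in that list
    target_index : position of the constituent being classified
    This is one plausible variant: all labels are lower-cased except the target,
    and the predicate is rendered as 'v'.
    """
    parts = []
    for i, label in enumerate(sibling_labels):
        if i == predicate_index:
            parts.append("v")
        elif i == target_index:
            parts.append(label.upper())
        else:
            parts.append(label.lower())
    return "_".join(parts)


if __name__ == "__main__":
    # Toy flat rendering of "[The window] broke [into a million pieces] [yesterday]".
    labels = ["NP", "VBD", "PP", "NP"]
    print(syntactic_frame(labels, predicate_index=1, target_index=0))  # NP_v_pp_np
    print(syntactic_frame(labels, predicate_index=1, target_index=2))  # np_v_PP_np
```

The same sentence yields a different string depending on which constituent is being classified, which is what lets a feature of this kind distinguish, for example, the subject NP from a PP argument in the same frame.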
unsupervised semantic role labeling we present an unsupervised method for labelling the arguments of verbs with their semantic roles our bootstrapping algorithm makes initial unambiguous role assignments and then iteratively updates the probability model on which future assignments are based a novel aspect of our approach is the use of verb slot and noun class information as the basis for backing off in our probability model we achieve 5065 reduction in the error rate over an informed baseline indicating the potential of our approach for a task that has heretofore relied on large amounts of manually generated training data semantic annotation of text corpora is needed to support tasks such as information extraction and questionanswering in particular labelling the semantic roles of the arguments of a verb as in and provides crucial information about the relations among event participantsbecause of the importance of this task a number of recent methods have been proposed for automatic semantic role labelling these supervised methods are limited by their reliance on the manually roletagged corpora of framenet or propbank as training data which are expensive to produce are limited in size and may not be representativewe have developed a novel method of unsupervised semantic role labelling that avoids the need for expensive manual labelling of text and enables the use of a large representative corpusto achieve this we take a bootstrapping approach which initially makes only the role assignments that are unambiguous according to a verb lexiconwe then iteratively create a probability model based on the currently annotated semantic roles use this probability model to assign roles that are deemed to have sufficient evidence and add the newly labelled arguments to our annotated setas we iterate we gradually both grow the size of the annotated set and relax the evidence thresholds for the probability model until all arguments have been assigned rolesto our knowledge this is the first unsupervised semantic role labelling system applied to general semantic roles in a domaingeneral corpusin a similar vein of work riloff and colleagues used bootstrapping to learn case frames for verbs but their approach has been applied in very narrow topic domains with topicspecific rolesin other work gildea has explored unsupervised methods to discover roleslot mappings for verbs but not to apply this knowledge to label text with rolesour approach also differs from earlier work in its novel use of classes of information in backing off to less specific role probabilities if warranted we base our decisions on the probability of a role given the verb the syntactic slot and the noun occurring in that slotfor example the assignment to the first argument of sentence above may be based on experiencer subject when backing off from this probability we use statistics over more general classes of information such as conditioning over the semantic class of the verb instead of the verb itselffor this example psychological state verbsour approach yields a very simple probability model which emphasizes classbased generalizationsthe first step in our algorithm is to use the verb lexicon to determine the argument slots and the roles available for themin section 2 we discuss the lexicon we use and our initial steps of syntactic frame matching and unambiguous role assignmentthis unambiguous data is leveraged by using those role assignments as the basis for the initial estimates for the probability model described in section 3section 4 
presents the algorithm which brings these two components together iteratively updating the probability estimates as more and more data is labelledin section 5 we describe details of the materials and methods used for the experiments presented in section 6our results show a large improvement over an informed baselinethis kind of unsupervised approach to role labelling is quite new and we conclude with a discussion of limitations and ongoing work in section 7previous work has divided the semantic role labelling task into the identification of the arguments to be labelled and the tagging of each argument with a role our algorithm addresses both these stepsalso the unsupervised nature of the approach highlights an intermediate step of determining the set of possible roles for each argumentbecause we need to constrain the role set as much as possible and cannot draw on extensive training data this latter step takes on greater significance in our workwe first describe the lexicon that specifies the syntactic arguments and possible roles for the verbs and then discuss our process of argument and role set identificationin semantic role labelling a lexicon is used which lists the possible roles for each syntactic argument of each predicatesupervised approaches to this task have thus far used the predicate lexicon of framenet or the verb lexicon of propbank since each has an associated labelled corpus for trainingwe instead make use of verbnet a manually developed hierarchical verb lexicon based on the verb classification of levin for each of 191 verb classes including around 3000 verbs in total verbnet specifies the syntactic frames along with the semantic role assigned to each slot of a framethroughout the paper we use the term frame to refer to a syntactic framethe set of syntactic arguments of a verbpossibly labelled with roles as exemplified in the verbnet entry in table 1while framenet uses semantic roles specific to a particular situation and propbank uses roles specific to a verb verbnet uses an intermediate level of thematic roles these general thematic roles are commonly assumed in linguistic theory and have some advantages in terms of capturing commonalities of argument relations across a wide range of predicatesit is worth noting that although there are fewer of these thematic roles than the more situationspecific roles of framenet the role labelling task is not necessarily easier there may be more data per role but possibly less discriminating data since each role applies to more general relations found very little difference in performancewe devise a frame matching procedure that uses the verb lexicon to determine for each instance of a verb the argument slots and their possible thematic rolesthe potential argument slots are subject object indirect object and ppobject where the latter is specialized by the individual preposition1 given chunked sentences with our verbs the frame matcher uses verbnet both to restrict the list of candidate roles for each slot and to eliminate some of the pp slots that are likely not argumentsto initialize the candidate roles precisely we only choose roles from frames in the verbs lexical entry that are the best syntactic matches with the chunker outputwe align the slots of each frame with the chunked slots and compute the portion frame of frame slots that can be mapped to a chunked slot and the portion chunks of chunked slots that can be mapped to the framethe score for each frame is computed by frame chunks and only frames having the highest score 
contribute candidate roles to the chunked slotsan example scoring is shown in table 2this frame matching step is very restrictive and greatly reduces potential role ambiguitymany syntactic slots receive only a single candidate role providing the initial unambiguous data for our bootstrapping algorithmsome slots receive no candidate roles which is an error for argument slots but which is correct for adjunctsthe reduction of candidate roles in general is very helpful in lightening the load on the probability model but note that it may also because the correct role to be omittedin future work we plan to explore other possible methods of selecting roles from the frames such as choosing candidates from all frames or setting a threshold value on the matching scoreonce slots are initialized as above our algorithm uses an iteratively updated probability model for role labellingthe probability model predicts the role for a slot given certain conditioning informationwe use a backoff approach with three levels of specificity of probabilitiesif a candidate role fails to meet the threshold of evidence for a given level we backoff to the next levelfor any given slot we use the most specific level that reaches the evidence threshold for any of the candidateswe only use information at a single level to compare candidates for a single slotwe assume the probability of a role for a slot is independent of other slots we do not ensure a consistent role assignment across an instance of a verbour most specific probability uses the exact combination of verb slot and noun filling that slot yielding 2 2we use only the head noun of potential arguments not the full np in our probability modelour combination of slot plus head word provides similar information to that captured by the features of gildea and jurafsky or thompson et al for our first backoff level we introduce a novel way to generalize over the verb slot and noun information of here we use a linear interpolation of three probabilities each of which drops one source of conditioning information from the most specific probability and generalizes a second source of conditioning information to a classbased conditioning eventspecifically we use the following probability formula where is slot class is noun class is verb class and the individual probabilities are equally weighted note that all three component probabilities make use of the verb or its class informationin the noun component is dropped and the slot is generalized to the appropriate slot classin the slot component is dropped and the noun is generalized to the appropriate noun classalthough it may seem counterintuitive to drop the slot this helps us capture generalizations over alternations in which the same semantic argument may appear in different syntactic slots in again the noun component is dropped but in this case the verb is generalized to the appropriate verb classeach type of class is described in the following subsectionthe last backoff level simply uses the probability of the role given the slot class the backoff model is summarized in figure 1we use maximum likelihood estimates for each of the probability formulasfor slots true generalization to a class only occurs for the prepositional slots all of which are mapped to a single pp slot classall other slots subject object and indirect objecteach form their own singleton slot classthus differs from by dropping the noun and by treating all prepositional slots as the same slotthis formula allows us to generalize over a slot regardless of the 
particular noun and preposition if there is one used in the instanceclasses of nouns in the model are given by the wordnet hierarchydetermining the appropriate level of generalization for a noun is an open problem currently we use a cut through wordnet including all the top categories except for the category entity the latter because of its generality is replaced in the cut by its immediate children given a noun argument all of its ancestors that appear in this cut are used as the class for the noununknown words placed in a separate categorythis yields a noun classification system that is very coarse and that does not distinguish between senses but which is simple and computationally feasible thus captures consistent relations between a verb and a class of nouns regardless of the slot in which the noun occursverb classes have been shown to be very important in capturing generalizations across verb behaviour in computational systems in semantic role labelling using verbnet they are particularly relevant since the classes are based on a commonality of rolelabelled syntactic frames the class of a verb in our model is its verbnet class that is compatible with the current framewhen multiple classes are compatible we apportion the counts uniformly among themfor probability then we generalize over all verbs in a class of the target verb giving us much more extensive data over relevant role assignments to a particular slotwe have described the frame matcher that produces a set of slots with candidate role lists and our backoff probability modelall that remains is to specify the parameters that guide the iterative use of the probability model to assign rolesthe evidence count for each of the conditional probabilities refers to the number of times we have observed the conjunction of its conditioning eventsfor example for this is the number of times the particular combination of verb slot and noun have been observedfor a probability to be used its evidence count must reach a given threshold the goodness of a role assignment is determined by taking the log of the ratio between the probabilities of the top two candidates for a slot a role is only assigned if the log likelihood ratio is defined and meets a threshold in this case the candidate role with highest probability is assigned to the slotin the algorithm the log ratio threshold is initially set high and gradually reduced until it reaches 0in the case of remaining ties we assign the role for which is highestbecause our evidence count and log ratio restrictions may not be met even when we have a very good candidate for a slot we reduce the evidence count threshold to the minimum value of 1 when the log ratio threshold reaches 13 by this point we assume competitor candidates have been given sufficient opportunity to amass the relevant countsalgorithm 1 shows the bootstrapping algorithmalgorithm 1 bootstrapping algorithm 1 perform frame matching to determine the slots to be labelled along with their candidate lists of roles2 let be the set of annotated slots let be the set of unannotated slots initially all slotslet be the set of newly annotated slots 3 add to each slot whose role assignment is unambiguouswhose candidate list has one elementset to and set to compute the probability model using counts over the items in add to all slots in for which at least two candidates meet the evidence count threshold for a given probability level and the log ratio between the two highest probability candidates meets the log ratio thresholdfor each slot in assign 
the highest probability roleset to and set to decrement the log ratio thresholdfor the initial set of experiments we chose 54 target verbs from three toplevel verbnet classes preparing263 transfer mesg371 and contribute132we looked for classes that contained a large number of medium to high frequency verbs displaying a variety of interesting properties such as having ambiguous semantic roles given certain syntactic constructions or having ambiguous semantic role assignments that could be distinguished by knowledge of verb classfrom the set of target verbs we derived an extended verb set that comprises all of the original target verbs as well as any verb that shares a class with one of those target verbsthis gives us a set of 1159 verbs to observe in total and increases the likelihood that some verb class information is available for each of the possible classes of the target verbsobserving the entire extended set also provides more data for our probability estimators that do not use verb class informationwe have made several changes to the semantic roles as given by verbnetfirst selectional restrictions such as animate are removed since our coarse model of noun class does not allow us to reliably determine whether such restrictions are metsecond a few semantic distinctions that are made in verbnet appeared to be too finegrained to capture so we map these to a more coarsegrained subset of the verbnet rolesfor instance the role actor is merged with agent and patient with themewe are left with a set of 16 roles agent amount attribute beneficiary because destination experiencer instrument location material predicate recipient source stimulus theme timeof these 13 actually occur in our target verb classesour corpus consists of a random selection of 20 of the sentences in the british national corpus this corpus is processed by the chunker of abney from whose output we can identify the probable head words of verb arguments with some degree of errorfor instance distant subjects are often not found and pps identified as arguments are often adjunctsto reduce the number of adjuncts we ignore dates and any pps that are not known to introduce an argument to one of the verbs in our extended setwe extracted two sets of sentences a validation set consisting of 5 random examples of each target verb and a test set consisting of 10 random examples of each target verbthe data sets were chunked as above and the role for each potential argument slot was labelled by two human annotators choosing from the simplified role set allowed by each verb according to verbneta slot could also be labelled as an adjunct or as bad agreement between the two annotators was high yielding a kappa statistic of 083after performing the labelling task individually the annotators reconciled their responses to yield a set of human judgements used for evaluationin our development experiments we tried an evidence count threshold of either the mean or median over all counts of a particular conjunction of conditioning eventsthe more lenient median setting worked slightly better on the validation set and was retained for our test experimentswe also experimented with initial starting values of 2 3 and 8 for the log likelihood ratio thresholdan initial setting of 8 showed an improvement in performance as lower values enabled too many early role assignments so we used the value of 8 in our test experimentsin all experiments a decrement of 5 was used to gradually reduce the log likelihood ratio thresholdof over 960k slots we extracted from the 
corpus 120k occurred with one of 54 target verbsof these our validation data consisted of 278 slots and our test data of 554 slotswe focus on the analysis of test data the pattern on the validation data was nearly identical in all respectsthe target slots fall into several categories depending on the human judgements argument slots adjunct slots and bad slots we report detailed analysis over the slots identified as argumentswe also report overall accuracy if adjunct and bad slots are included in the slots to be labelledthis comparison is similar to that made by gildea and jurafsky and others either using arguments as delimited in the framenet corpus or having to automatically locate argument boundaries4 furthermore we report results over individual slot classes as well as over all slotswe report results after the unambiguous data is assigned and at the end of the algorithm when no more slots can be labelledat either of these steps it is possible for some slots to have been assigned and some to remain unassignedrather than performing a simple precisionrecall analysis we report a finer grained elaboration that gives a more precise picture of the resultsfor the assigned slots we report percent correct and percent incorrectfor the unassigned slots we report percent possible and percent impossible all these percent figures are out of all argument slots and out of all slots see table 3correctness is determined by the human judgements on the chunked slots as reported aboveusing our notion of slot class we compare our results to a baseline that assigns all slots the role with the highest probability for that slot class when using general thematic roles this is a more informed baseline than as used in other workwe are using a very different verb lexicon corpus and human standard than in previous researchthe closest work is that of gildea and jurafsky which maps framenet roles to a set of 18 thematic roles very similar to our roles and also operates on a subset of the bnc we mention the performance of their method where appropriate belowhowever our results are compared to human annotation of chunked data while theirs are compared to manually annotated full sentencesour percentage correct values therefore do not take into account argument constituents that are simply missed by the chunkertable 3 summarizes our resultsin this section we focus on argument slots as identified by our human judges there are a number of things to notefirst our performance on these slots is very high 901 correct at the end of the algorithm with 70 incorrect and delimited arguments others train as well as test only on such argumentsin our approach all previously annotated slots are used in the iterative training of the probability modelthus even when we report results on argument slots only adjunct and bad slots may have induced errors in their labelling only 29 left unassignedthis is a 56 reduction in error rate over the baselinesecond we see that even after the initial unambiguous role assignment step the algorithm achieves close to the baseline percent correctfurthermore over 96 of the initially assigned roles are correctthis means that much of the work in narrowing down the candidate lists is actually being preformed during frame matchingit is noteworthy that such a simple method of choosing the initial candidates can be so useful and it would seem that even supervised methods might benefit from employing such an explicit use of the lexicon to narrow down role candidates for a slotafter unambiguous role assignment about 
21 of the test data remains unassigned of these 116 slots 100 have a nonnull candidate listthese 100 are assigned by our iterative probability model so we are especially interested in the results on themwe find that 76 of these 100 are assigned correctly and 24 are assigned incorrectly yielding a 76 accuracy for the probability model portion of our algorithm on identified argument slotsmoreover we also find that all specificity levels of the probability model are employed in making these decisionsabout a third of the decisions are made by each levelthis indicates that while there is sufficient data in many cases to warrant using the exact probability formula the classbased generalizations we propose prove to be very useful to the algorithmas a point of comparison the supervised method of gildea and jurafsky achieved 821 accuracy on identified arguments using general thematic roleshowever they had a larger and more varied target set consisting of 1462 predicates from 67 framenet frames which makes their task harder than ourswe are aware that our test set is small compared to supervised approaches which have a large amount of labelled data availablehowever our almost identical results across the validation and test sets indicates consistent behaviour that may generalize to a larger test set at least on similar classes of verbswhen using general thematic roles with a small set of verb classes the probability used for the baseline works very well for subjects and objects indeed when we examine each of the slot classes individually we find that for subjects and objects the percent correct achieved by the algorithm is indistinguishable from the baseline for pp objects on the other hand the baseline is only around 11 correct while we achieve 785 correct a 76 reduction in error rateclearly when more roles are available even becomes a weak predictor5 we could just assign the default role for subjects and objects when using general thematic roles but we think this is too simplisticfirst when we broaden our range of verb classes subjects and objects will have more possible rolesas we have seen with pps when more roles are available the performance of a default role degradessecond although we achieve the same correctness as the baseline our algorithm does not simply assign the dominant role in these casessome subjects are assigned theme while some objects are assigned recipient or sourcethese roles would never be possible in these slots if a default assignment were followedwe also consider our performance given frame matching and chunking errors which can lead to adjuncts or even bad constituents being labelledonly arguments should be labelled while nonarguments should remain unlabelledof 98 slots judged to be adjuncts 19 erroneously are given labelsincluding the adjunct slots our percent correct goes from 901 to 887of the 20 bad slots 12 were labelledincluding these correctness is reduced slightly further to 872 as shown in the second panel of results in table 3the error rate reduction here of 65 is higher than on arguments only because the baseline always labels adjuncts and bad slots achieved 636 accuracy when having to identify arguments for thematic roles though note again that this is on a much larger and more general test setalso although we take into account errors on identified chunks that are not arguments we are are not counting chunker errors of missing argumentsas others have shown semantic role labelling is more accurate with better preprocessing of the datahowever we also think our 
algorithm may be extendable to deal with many of the adjunct cases we observedoften adjuncts express time or location while not argument roles these do express generalizable semantic relationsin future work we plan to explore the notion of expanding our frame matching step to go beyond verbnet by initializing potential adjuncts with appropriate rolesusing an unsupervised algorithm for semantic role labelling we have achieved 90 correct on identified arguments well over an informed baseline of 77 and have achieved 87 correct on all slots on pp objects our conservative role assignment shows promise at leaving adjuncts unlabelledhowever pp objects also have the lowest performance more work is required on our frame matching approach to determine appropriate roles for pp objects given the specification in the lexicon which often overconstrains the allowable prepositions for a slotalthough these results are promising they are only a first step in demonstrating the potential of the approachwe need to test more verbs from a wider variety of verb classes to determine the generalizability of our findingsusing framenet would also have the advantage of providing large amounts of labelled test data for our evaluationwe also hope to integrate some processing of adjunct roles rather than limiting ourselves to the specified argumentsa unique aspect of our method is the probability model which is novel in its generalizations over verb slot and noun classes for role labellinghowever these have room for improvementour noun classes are coarse and prepositions clearly have the potential to be divided into more informative subclasses such as spatial or time relationsour ongoing work is investigating better class models to make the backoff process even more effectivewe gratefully acknowledge the support of nserc of canadawe also thank martha palmer for providing us with the verbnet data eric joanis for help with the chunker vivian tsang and ryan north for helpful discussion and two anonymous reviewers
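The labelling procedure described above — unambiguous assignments from frame matching first, then iterative assignment whenever the log ratio between the two best candidate roles clears a gradually lowered threshold, with the probability model re-estimated from everything labelled so far — can be summarized in the following sketch. The slot representation, the estimate_probs callback, and the policy of lowering the threshold only when a pass makes no new assignments are illustrative assumptions; the reported initial threshold of 8 and the decrement written above as 5 (plausibly 0.5 before formatting loss) are left as parameters.

```python
from math import log

def bootstrap_role_labeling(slots, candidate_roles, estimate_probs,
                            init_threshold=8.0, decrement=0.5):
    """Iteratively assign semantic roles to slots (hedged sketch).

    slots           -- iterable of slot identifiers (e.g. (verb, slot_name, head_noun) tuples)
    candidate_roles -- dict: slot -> list of roles allowed by the lexicon after frame matching
    estimate_probs  -- callable that, given the current assignments, returns prob(slot, role)
    """
    assignments = {}

    # Step 1: unambiguous slots -- exactly one candidate role after frame matching.
    for slot in slots:
        if len(candidate_roles[slot]) == 1:
            assignments[slot] = candidate_roles[slot][0]

    threshold = init_threshold
    while threshold > 0:
        prob = estimate_probs(assignments)      # re-estimate on everything labelled so far
        progress = False
        for slot in slots:
            if slot in assignments or not candidate_roles[slot]:
                continue                         # already labelled, or null candidate list
            ranked = sorted(candidate_roles[slot],
                            key=lambda r: prob(slot, r), reverse=True)
            best, second = prob(slot, ranked[0]), prob(slot, ranked[1])
            # assign only when the log ratio of the two best candidates clears the threshold
            if best > 0 and second > 0 and log(best / second) >= threshold:
                assignments[slot] = ranked[0]
                progress = True
            elif best > 0 and second == 0:       # infinite ratio: always confident enough
                assignments[slot] = ranked[0]
                progress = True
        if not progress:
            threshold -= decrement               # relax the confidence requirement
    return assignments
```

Slots whose candidates all receive zero probability stay unassigned, so the loop terminates once every remaining slot is either unassignable or below the final threshold.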
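The probability model itself is characterized above only by its specificity levels, so the following is a hedged sketch of one plausible realization: relative-frequency estimates at a most-specific level, a class-based level, and a slot-class level, with an evidence-count threshold (the median count in the experiments above) deciding when to back off. The exact conditioning events and the verb_class and noun_class mappings are assumptions, not the published model.

```python
from collections import Counter

class BackoffRoleModel:
    """Relative-frequency estimates of P(role | context) at several specificity levels."""

    def __init__(self, verb_class, noun_class, min_evidence=1):
        self.verb_class = verb_class      # dict: verb -> verb class (assumed available)
        self.noun_class = noun_class      # dict: head noun -> coarse noun class (assumed)
        self.min_evidence = min_evidence  # e.g. the median count over conditioning events
        self.context_counts = [Counter(), Counter(), Counter()]
        self.joint_counts = [Counter(), Counter(), Counter()]

    def _contexts(self, verb, slot, noun):
        return [
            (verb, slot, noun),                                            # most specific
            (self.verb_class.get(verb), slot, self.noun_class.get(noun)),  # class-based backoff
            (slot,),                                                       # slot class alone
        ]

    def update(self, verb, slot, noun, role):
        for level, ctx in enumerate(self._contexts(verb, slot, noun)):
            self.context_counts[level][ctx] += 1
            self.joint_counts[level][(ctx, role)] += 1

    def prob(self, verb, slot, noun, role):
        # Use the most specific level whose conditioning event has enough evidence.
        for level, ctx in enumerate(self._contexts(verb, slot, noun)):
            if self.context_counts[level][ctx] >= self.min_evidence:
                return self.joint_counts[level][(ctx, role)] / self.context_counts[level][ctx]
        return 0.0
```

An instance of this class (or any object exposing the same prob signature) is what the estimate_probs hook in the previous sketch would return after counting the current assignments.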
W04-3213
unsupervised semantic role labeling. we present an unsupervised method for labelling the arguments of verbs with their semantic roles. our bootstrapping algorithm makes initial unambiguous role assignments and then iteratively updates the probability model on which future assignments are based. a novel aspect of our approach is the use of verb, slot, and noun class information as the basis for backing off in our probability model. we achieve 50-65% reduction in the error rate over an informed baseline, indicating the potential of our approach for a task that has heretofore relied on large amounts of manually generated training data. we present an unsupervised method for labeling the arguments of verbs with their semantic roles. we perform unsupervised semantic role labeling by using handcrafted verb lexicons to replace supervised semantic role training data.
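As a concrete companion to the evaluation reported above — percent correct and incorrect over assigned slots, percent possible and impossible over unassigned slots, each computed out of all argument slots and again out of all slots — the bookkeeping reduces to a few counters. The dictionary shapes below are hypothetical.

```python
def evaluation_breakdown(slots, assignments, gold, candidate_roles):
    """Percentages of assigned-correct, assigned-incorrect, unassigned-possible, and
    unassigned-impossible slots; 'possible' means the gold role is still on the
    slot's candidate list."""
    correct = incorrect = possible = impossible = 0
    for slot in slots:
        if slot in assignments:
            if assignments[slot] == gold[slot]:
                correct += 1
            else:
                incorrect += 1
        elif gold[slot] in candidate_roles.get(slot, []):
            possible += 1
        else:
            impossible += 1
    n = len(slots) or 1
    return {name: 100.0 * count / n
            for name, count in [("correct", correct), ("incorrect", incorrect),
                                ("possible", possible), ("impossible", impossible)]}
```

Calling this once with the argument slots only and once with all slots (arguments, adjuncts, and bad chunks) yields the two panels of figures discussed above.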
monolingual machine translation for paraphrase generation we apply statistical machine translation tools to generate novel paraphrases of input sentences in the same language the system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the world wide web alignment error rate is measured to gauge the quality of the resulting corpus a monotone phrasal decoder generates contextual replacements human evaluation shows that this system outperforms baseline paraphrase generation techniques and in a departure from previous work offers better coverage and scalability than the current bestofbreed paraphrasing approaches the ability to categorize distinct word sequences as meaning the same thing is vital to applications as diverse as search summarization dialog and question answeringrecent research has treated paraphrase acquisition and generation as a machine learning problem we approach this problem as one of statistical machine translation within the noisy channel model of brown et al that is we seek to identify the optimal paraphrase t of a sentence s by finding t and s being sentences in the same languagewe describe and evaluate an smtbased paraphrase generation system that utilizes a monotone phrasal decoder to generate meaningpreserving paraphrases across multiple domainsby adopting at the outset a paradigm geared toward generating sentences this approach overcomes many problems encountered by taskspecific approachesin particular we show that smt techniques can be extended to paraphrase given sufficient monolingual parallel data1 we show that a huge corpus of comparable and alignable sentence pairs can be culled from readymade topicaltemporal clusters of news articles gathered on a daily basis from thousands of sources on the world wide web thereby permitting the system to operate outside the narrow domains typical of existing systemsuntil recently efforts in paraphrase were not strongly focused on generation and relied primarily on narrow data sourcesone data source has been multiple translations of classic literary works pang et al obtain parallel monolingual texts from a set of 100 multiplytranslated news articleswhile translationbased approaches to obtaining data do address the problem of how to identify two strings as meaning the same thing they are limited in scalability owing to the difficulty of obtaining large quantities of multiplytranslated source documentsother researchers have sought to identify patterns in large unannotated monolingual corporalin pantel derive inference rules by parsing text fragments and extracting semantically similar pathsshinyama et al identify dependency paths in two collections of newspaper articlesin each case however the information extracted is limited to a small set of patternsbarzilay lee exploit the metainformation implicit in dual collections of newswire articles but focus on learning sentencelevel patterns that provide a basis for generationmultisequence alignment is used to identify sentences that share formal propertiesthis yields a set of clusters each characterized by a word lattice that captures ngrambased structural similarities between sentenceslattices are in turn mapped to templates that can be used to produce novel transforms of input sentencestheir methodology provides striking results within a limited domain characterized by a high frequency of stereotypical sentence typeshowever as we show below the approach may be of limited generality even within the training domainour 
training corpus like those of shinyama et al and barzilay lee consists of different news stories reporting the same eventwhile previous work with comparable news corpora has been limited to just two news sources we set out to harness the ongoing explosion in internet news coveragethousands of news sources worldwide are competing to cover the same stories in real timedespite different authorship these stories cover the same events and therefore have significant content overlap especially in reports of the basic factsin other cases news agencies introduce minor edits into a single original ap or reuters storywe believe that our work constitutes the first to attempt to exploit these massively multiple data sources for paraphrase learning and generationwe began by identifying sets of preclustered urls that point to news articles on the web gathered from publicly available sites such as httpnewsyahoocom httpnewsgooglecom and httpuknewsbotmsncomtheir clustering algorithms appear to consider the full text of each news article in addition to temporal cues to produce sets of topicallytemporally related articlesstory content is captured by downloading the html and isolating the textual contenta supervised hmm was trained to distinguish story content from surrounding advertisements etc2 over the course of about 8 months we collected 11162 clusters comprising 177095 articles and averaging 158 articles per clusterthe quality of 2 we handtagged 1150 articles to indicate which portions of the text were story content and which were advertisements image captions or other unwanted materialwe evaluated several classifiers on a 7030 test train split and found that an hmm trained on a handful of features was most effective in identifying content lines these clusters is generally goodimpressionistically discrete events like sudden disasters business announcements and deaths tend to yield tightly focused clusters while ongoing stories like the sars crisis tend to produce very large and unfocused clustersto extract likely paraphrase sentence pairs from these clusters we used edit distance over words comparing all sentences pairwise within a cluster to find the minimal number of word insertions and deletions transforming the first sentence into the secondeach sentence was normalized to lower case and the pairs were filtered to reject a total of 139k nonidentical sentence pairs were obtainedmean levenshtein distance was 517 mean sentence length was 186 wordsto this corpus we applied the word alignment algorithms available in giza a freely available implementation of ibm models 15 and the hmm alignment along with various improvements and modifications motivated by experimentation by och ney in order to capture the manytomany alignments that identify correspondences between idioms and other phrasal chunks we align in the forward direction and again in the backward direction heuristically recombining each unidirectional word alignment into a single bidirectional alignment figure 1 shows an example of a monolingual alignment produced by gizaeach line represents a unidirectional link directionality is indicated by a tick mark on the target side of the linkwe held out a set of news clusters from our training data and extracted a set of 250 sentence pairs for blind evaluationrandomly extracted on the basis of an edit distance of 5 n 20 the goldstandard sentence pairs were checked by an independent human evaluator to ensure that they contained paraphrases before they were hand wordalignedto evaluate the alignments we 
adhered to the standards established in melamed and och ney following och neys methodology two annotators each created an initial annotation for each dataset subcategorizing alignments as either sure or possible differences were highlighted and the annotators were asked to review their choices on these differencesfinally we combined the two annotations into a single gold standard if both annotators agreed that an alignment should be sure then the alignment was marked as sure in the goldstandard otherwise the alignment was marked as possibleto compute precision recall and alignment error rate for the twin datasets we used exactly the formulae listed in och ney let a be the set of alignments in the comparison s be the set of sure alignments in the gold standard and p be the union of the sure and possible alignments in the gold standardthen we have measured in terms of aer4 final interrater agreement between the two annotators on the 250 sentences was 931table 1 shows the results of evaluating alignment after trainng the giza modelalthough the overall aer of 1158 is higher than the best bilingual mt systems the training data is inherently noisy having more in common with analogous corpora than conventional mt parallel corpora in that the paraphrases are not constrained by the source text structurethe identical word aer of 1057 is unsurprising given that the domain is unrestricted and the alignment algorithm does not employ direct string matching to leverage word identity5 the nonidentical word aer of 2088 may appear problematic in a system that aims to generate paraphrases as we shall see however this turns out not to be the caseablation experiments not described here indicate that additional data will improve aerrecent work in smt has shown that simple phrasebased mt systems can outperform more sophisticated wordbased systems therefore we adopt a phrasal decoder patterned closely after that of vogel et al we view the source and target sentences s and t as word sequences s1sm and t1tna word alignment a of s and t can be expressed as a function from each of the source and target tokens to a unique cept isomorphically a cept represents an aligned subset of the source and target tokensthen for a given sentence pair and word alignment we define a phrase pair as a subset of the cepts in which both the source and target tokens are contiguous6 we gathered all phrase pairs occurring in at least one aligned sentence somewhere in our training corpus into a single replacement databasethis database of lexicalized phrase pairs termed phrasal replacements serves as the backbone of our channel modelas in we assigned probabilities to these phrasal replacements via ibm model 1in more detail we first gathered lexical translation probabilities of the form p by running five iterations of model 1 on the training corpusthis allows for computing the probability of a sequence of source words s given a sequence of target words t as the sum over all possible alignments of the model 1 probabilities provides a more detailed derivation of this identityalthough simple this approach has proven effective in smt for several reasonsfirst and foremost phrasal scoring by model 1 avoids the sparsity problems associated with estimating each phrasal replacement probability with mle secondly it appears to boost translation quality in more sophisticated translation systems by inducing lexical triggering collocations and other noncompositional phrases receive a higher probability as a whole than they would as independent single word 
replacementsone further simplification was madegiven that our domain is restricted to the generation of monolingual paraphrase interesting output can be produced without tackling the difficult problem of interphrase reordering7 therefore along the lines of tillmann et al we rely on only monotone phrasal alignments although we do allow intraphrasal reorderingwhile this means certain common structural alternations cannot be generated we are still able to express a broad range of phenomena pings to be both unwieldy in practice and very often indicative of poor a word alignment7 even in the realm of mt such an assumption can produce competitive results in addition we were hesitant to incur the exponential increase in running time associated with those movement models in the tradition of brown el al especially since these offset models fail to capture important linguistic generalizations our channel model then is determined solely by the phrasal replacements involvedwe first assume a monotone decomposition of the sentence pair into phrase pairs and the probability p is then defined as the product of the each phrasal replacement probabilitythe target language model was a trigram model using interpolated kneserney smoothing trained over all 14 million sentences in our news corpusto generate paraphrases of a given input a standard smt decoding approach was used this is described in more detail belowprior to decoding however the input sentence underwent preprocessing text was lowercased tokenized and a few classes of namedentities were identified using regular expressionsto begin the decoding process we first constructed a lattice of all possible paraphrases of the source sentence based on our phrasal translation databasefigure 2 presents an examplethe lattice was realized as a set of s 1 vertices v0vs and a set of edges between those vertices each edge was labeled with a sequence of words and a real numberthus a edge connecting vertex vi to vj labeled with the sequence of words w1wk and the real number p indicates that the source words si1 to sj can be replaced by words w1wk with probability p our replacement database was stored as a trie with words as edges hence populating the lattice takes worst case o timefinally since source and target languages are identical we added an identity mapping for each source word si an edge from vi1 to vi with label si and a uniform probability youthis allows for handling unseen wordsa high you value permits more conservative paraphraseswe found the optimal path through the lattice as scored by the product of the replacement model and the trigram language modelthis algorithm reduces easily to the viterbi algorithm such a dynamic programming approach guarantees an efficient optimal search where n is the maximal target length and k is the maximal number of replacements for any wordin addition fast algorithms exist for computing the nbest lists over a lattice finally the resultant paraphrases were cleaned up in a postprocessing phase to ensure output was not trivially distinguishable from other systems during human evaluationall generic named entity tokens were reinstantiated with their source values and case was restored using a model like that used in vita et al barzilay lee have released a common dataset that provides a basis for comparing different paraphrase generation systemsit consists of 59 sentences regarding acts of violence in the middle eastthese are accompanied by paraphrases generated by their multisequence alignment system and a baseline employing 
wordnet along with human judgments for each output by 23 ratersthe msa wordnet baseline was created by selecting a subset of the words in each test sentenceproportional to the number of words replaced by msa in the same sentenceand replacing each with an arbitrary word from its most frequent wordnet synsetsince our smt approach depends quite heavily on a target language model we presented an alternate wordnet baseline using a target language model8 in combination with the language model described in section 34 we used a very simple replacement model each appropriately inflected member of the most frequent synset was proposed as a possible replacement with uniform probabilitythis was intended to isolate the contribution of the language model from the replacement modelgiven that our alignments while aggregated into phrases are fundamentally wordaligned one question that arises is whether the information we learn is different in character than that learned from much simpler techniquesto explore this hypothesis we introduced an additional baseline that used statistical clustering to produce an automated unsupervised synonym list again with a trigram language modelwe used standard bigram clustering techniques to produce 4096 clusters of our 65225 vocabulary itemswe have experimented with several methods for extracting a parallel sentencealigned corpus from news clusters using word alignment error rate or aer as an evaluation metrica brief summary of these experiments is provided in table 1to evaluate the quality of generation we followed the lead of barzilay lee we started with the 59 sentences and corresponding paraphrases from msa and wordnet since the size of this data set made it difficult to obtain statistically significant results we also included 141 randomly selected sentences from heldout clusterswe then produced paraphrases with each of the following systems and compared them with msa and wn for the sake of consistency we did not use the judgments provided by barzilay and lee instead we had two raters judge whether the output from each system was a paraphrase of the input sentencethe raters were presented with an input sentence and an output paraphrase from each system in random order to prevent bias toward any particular judgmentsince on our first pass we found interrater agreement to be somewhat low we asked the raters to make a second pass of judgments on those where they disagreed this significantly improved agreement the results of this final evaluation are summarized in table 2table 2 shows that pr can produce rewordings that are evaluated as plausible paraphrases more frequently than those generated by either baseline techniques or msathe wordnet baseline performs quite poorly even in combination with a trigram language model the language model does not contribute significantly to resolving lexical selectionthe performance of cl is likewise abysmalagain a language model does nothing to helpthe poor performance of these synonymbased techniques indicates that they have little value except as a baselinethe pr model generates plausible paraphrases for the overwhelming majority of test sentences indicating that even the relatively high aer for nonidentical words is not an obstacle to successful generationmoreover pr was able to generate a paraphrase for all 200 sentences the correlation between acceptability and pr sentence rank validates both the ranking algorithm and the evaluation methodologyin table 2 the pr model scores significantly better than msa in terms of the percentage 
of paraphrase candidates accepted by ratersmoreover pr generates at least five distinct paraphrases for each test sentencesuch perfect coverage on this dataset is perhaps fortuitous but is nonetheless indicative of scalabilityby contrast barzilay lee report being able to generate paraphrases for only 59 out of 484 sentences in their training set a total of 12one potential concern is that pr paraphrases usually involve simple substitutions of words and short phrases whereas msa outputs more complex paraphrases this is reflected in table 3 which provides a breakdown of four dimensions of interest as provided by one of our independent evaluatorssome 47 of msa paraphrases involve significant reordering such as an activepassive alternation whereas the monotone pr decoder precludes anything other than minor transpositions within phrasal replacementsshould these facts be interpreted to mean that msa with its more dramatic rewrites is ultimately more ambitious than prwe believe that the opposite is truea close look at msa suggests that it is similar in spirit to examplebased machine translation techniques that rely on pairing entire sentences in source and target languages with the translation step limited to local adjustments of the target sentence when an input sentence closely matches a template results can be stunninghowever msa achieves its richness of substitution at the cost of generalityinspection reveals that 15 of the 59 msa paraphrases or 254 are based on a single highfrequency domainspecific template unless one is prepared to assume that similar templates can be found for most sentence types scalability and domain extensibility appear beyond the reach of msain addition since msa templates pair entire sentences the technique can produce semantically different output when there is a mismatch in information content among template training sentencesconsider the third and fourth rows of table 3 which indicate the extent of embellishment and lossiness found in msa paraphrases and the topranked pr paraphrasesparticularly noteworthy is the lossiness of msa seen in row 4figure 3 illustrates a case where the msa paraphrase yields a significant reduction in information while pr is more conservative in its replacementswhile the substitutions obtained by the pr model remain for the present relatively modest they are not trivialchanging a single content word is a legitimate form of paraphrase and the ability to paraphrase across an arbitrarily large sentence set and arbitrary domains is a desideratum of paraphrase researchwe have demonstrated that the smtmotivated pr method is capable of generating acceptable paraphrases for the overwhelming majority of sentences in a broad domainmuch work obviously remains to be doneour results remain constrained by data sparsity despite the large initial training setsone major agenda item therefore will be acquisition of larger data setsin addition to obtaining greater absolute quantities of data in the form of clustered articles we also seek to extract aligned sentence pairs that instantiate a richer set of phenomenarelying on edit distance to identify likely paraphrases has the unfortunate result of excluding interesting sentence pairs that are similar in meaning though different in formfor example we are currently experimenting with data extracted from the first two sentences in each article which by journalistic convention tend to summarize content while noisier than the edit distance data initial results suggest that these can be a rich source of information 
about larger phrasal substitutions and syntactic reorderingalthough we have not attempted to address the issue of paraphrase identification here we are currently exploring machine learning techniques based in part on features of document structure and other linguistic features that should allow us to bootstrap initial alignments to develop more datathis will we hope eventually allow us to address such issues as paraphrase identification for irto exploit richer data sets we will also seek to address the monotone limitation of our decoder that further limits the complexity of our paraphrase outputwe will be experimenting with more sophisticated decoder models designed to handle reordering and mappings to discontinuous elementswe also plan to pursue better metrics for paraphrase evaluationwe presented a novel approach to the problem of generating sentencelevel paraphrases in a broad semantic domainwe accomplished this by using methods from the field of smt which is oriented toward learning and generating exactly the sorts of alternations encountered in monolingual paraphrasewe showed that this approach can be used to generate paraphrases that are preferred by humans to sentencelevel paraphrases produced by other techniqueswhile the alternations our system produces are currently limited in character the field of smt offers a host of possible enhancementsincluding reordering modelsaffording a natural path for future improvementsa second important contribution of this work is a method for building and tracking the quality of large alignable monolingual corpora from structured news data on the webin the past the lack of such a data source has hampered paraphrase research our approach removes this obstaclewe are grateful to mo corstonoliver jeff stevenson amy muia and orin hargraves of the butler hill group for their work in annotating the data used in the experimentsthis paper has also benefited from discussions with ken church mark johnson and steve richardsonwe greatly appreciate the careful comments of three anonymous reviewerswe remain however solely responsible for this content
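The precision, recall, and alignment error rate formulas taken from Och and Ney appear to have been dropped during text extraction above; since the text states the published formulas were used exactly, here is the standard definition in terms of the system alignment A, the sure gold links S, and the sure-plus-possible gold links P.

```python
def alignment_error_rate(A, S, P):
    """Och & Ney alignment metrics over sets of alignment links.

    A -- links proposed by the system
    S -- sure links in the gold standard
    P -- union of sure and possible gold links (S is a subset of P)
    precision = |A & P| / |A|,  recall = |A & S| / |S|,
    AER = 1 - (|A & S| + |A & P|) / (|A| + |S|)
    """
    precision = len(A & P) / len(A)
    recall = len(A & S) / len(S)
    aer = 1.0 - (len(A & S) + len(A & P)) / (len(A) + len(S))
    return precision, recall, aer

# Toy example with links encoded as (source_index, target_index) pairs:
S = {(0, 0), (1, 1)}
P = S | {(2, 1)}
A = {(0, 0), (2, 1), (3, 2)}
print(alignment_error_rate(A, S, P))  # (0.666..., 0.5, 0.4)
```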
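A minimal sketch of the monotone lattice decoding described above: phrasal replacement edges and per-word identity edges are laid over the source positions, and the best path is found left to right under the combination (here, a sum of log scores) of the replacement model and a language model. The phrase_table and lm_logprob shapes and the identity score are assumptions, and the sketch keeps only one best hypothesis per source position — an exact trigram-LM Viterbi would also carry the last two output words in the state, as would an n-best extension.

```python
import math

def decode_paraphrase(source, phrase_table, lm_logprob, identity_logprob=-2.0):
    """Monotone lattice decoding sketch.

    source       -- list of source tokens
    phrase_table -- dict: source phrase (tuple of tokens) -> list of (replacement, log prob)
    lm_logprob   -- callable(prev_output_words, next_words) -> language model log score
    """
    n = len(source)
    best = [(-math.inf, [])] * (n + 1)   # best[i]: (score, output words) covering source[:i]
    best[0] = (0.0, [])
    for i in range(n):
        score_i, out_i = best[i]
        if score_i == -math.inf:
            continue
        # identity edge: copy the single source word (handles unseen words)
        edges = [((source[i],), identity_logprob, i + 1)]
        # phrasal replacement edges spanning source[i:j]
        for j in range(i + 1, n + 1):
            for repl, logp in phrase_table.get(tuple(source[i:j]), []):
                edges.append((tuple(repl), logp, j))
        for repl, logp, j in edges:
            score = score_i + logp + lm_logprob(out_i, repl)
            if score > best[j][0]:
                best[j] = (score, out_i + list(repl))
    return best[n][1]
```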
W04-3219
monolingual machine translation for paraphrase generation. we apply statistical machine translation tools to generate novel paraphrases of input sentences in the same language. the system is trained on large volumes of sentence pairs automatically extracted from clustered news articles available on the world wide web. alignment error rate is measured to gauge the quality of the resulting corpus. a monotone phrasal decoder generates contextual replacements. human evaluation shows that this system outperforms baseline paraphrase generation techniques and, in a departure from previous work, offers better coverage and scalability than the current best-of-breed paraphrasing approaches. we built a paraphrase generation model from a monolingual comparable corpus based on a statistical machine translation framework, where the language model assesses the grammaticality of the translations, i.e. generated expressions. we present an end-to-end paraphrasing system inspired by phrase-based machine translation that can both acquire paraphrases and use them to generate new strings.
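The sentence-pair extraction step described above — lowercase the sentences within a news cluster, compare them pairwise, and keep non-identical pairs whose word-level insertion/deletion distance passes a filter — is easy to make concrete. The keep predicate is left to the caller because the exact thresholds (the garbled "edit distance of 5 n 20" fragment above) and rejection rules are not fully recoverable from the text.

```python
from itertools import combinations

def word_edit_distance(a, b):
    # Minimal number of word insertions and deletions turning a into b (no substitutions).
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def candidate_pairs(cluster_sentences, keep):
    """Compare all sentences in one cluster pairwise and keep likely paraphrase pairs."""
    pairs = []
    for s1, s2 in combinations(cluster_sentences, 2):
        w1, w2 = s1.lower().split(), s2.lower().split()
        if w1 == w2:
            continue  # identical sentences carry no paraphrase information
        d = word_edit_distance(w1, w2)
        if keep(d, w1, w2):
            pairs.append((s1, s2, d))
    return pairs
```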
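Phrasal replacements are scored above with IBM Model 1, whose sum over all word alignments collapses to a product over source words of sums over target words. The sketch below assumes the conventional null word and drops the length-dependent constant; whether the cited system includes those details is not stated, so treat them as assumptions, with lex_prob standing for the p(s|t) table obtained from a few EM iterations over the aligned corpus.

```python
import math

def model1_phrase_logprob(source_phrase, target_phrase, lex_prob, null_word="<NULL>"):
    """Model 1 score of a phrasal replacement: sum over source words of
    log( sum over target words (plus NULL) of p(s | t) )."""
    targets = list(target_phrase) + [null_word]
    logp = 0.0
    for s in source_phrase:
        total = sum(lex_prob(s, t) for t in targets)
        logp += math.log(total) if total > 0 else float("-inf")
    return logp
```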
applying conditional random fields to japanese morphological analysis this paper presents japanese morphological analysis based on conditional random fields previous work in crfs assumed that observation sequence boundaries were fixed however word boundaries are not clear in japanese and hence a straightforward application of crfs is not possible we show how crfs can be applied to situations where word boundary ambiguity exists crfs offer a solution to the longstanding problems in corpusbased or statistical japanese morphological analysis first flexible feature designs for hierarchical tagsets become possible second influences of label and length bias are minimized we experiment crfs on the standard testbed corpus used for japanese morphological analysis and evaluate our results using the same experimental dataset as the hmms and memms previously reported in this task our results confirm that crfs not only solve the longstanding problems but also improve the performance over hmms and memms conditional random fields applied to sequential labeling problems are conditional models trained to discriminate the correct sequence from all other candidate sequences without making independence assumption for featuresthey are considered to be the stateoftheart framework to dateempirical successes with crfs have been reported recently in partofspeech tagging shallow parsing named entity recognition chinese word segmentation and information extraction previous applications with crfs assumed that observation sequence boundaries are fixed and the main focus was to predict label sequence however word boundaries are not clear in nonsegmented languagesone has to identify word segmentation as well as to predict partofspeech in morphological analysis of nonsegmented languagesin this paper we show how crfs can be applied to situations where word boundary ambiguity existscrfs offer a solution to the problems in japanese morphological analysis with hidden markov models or with maximum entropy markov models first as hmms are generative it is hard to employ overlapping features stemmed from hierarchical tagsets and nonindependent features of the inputs such as surrounding words word suffixes and character typesthese features have usually been ignored in hmms despite their effectiveness in unknown word guessingsecond as mentioned in the literature memms could evade neither from label bias nor from length bias easy sequences with low entropy are likely to be selected during decoding in memmsthe consequence is serious especially in japanese morphological analysis due to hierarchical tagsets as well as word boundary ambiguitythe key advantage of crfs is their flexibility to include a variety of features while avoiding these biasin what follows we describe our motivations of applying crfs to japanese morphological analysis then crfs and their parameter estimation are provided finally we discuss experimental results and give conclusions with possible future directions word boundary ambiguity cannot be ignored when dealing with nonsegmented languagesa simple approach would be to let a character be a token so that boundary ambiguity never occur input however bi tagging is not a standard method in 20year history of corpusbased japanese morphological analysisthis is because bi tagging cannot directly reflect lexicons which contain prior knowledge about word segmentationwe cannot ignore a lexicon since over 90 accuracy can be achieved even using the longest prefix matching with the lexiconmoreover bi tagging produces a number 
of redundant candidates which makes the decoding speed slowertraditionally in japanese morphological analysis we assume that a lexicon which lists a pair of a word and its corresponding partofspeech is availablethe lexicon gives a tractable way to build a lattice from an input sentencea lattice represents all candidate paths or all candidate sequences of tokens where each token denotes a word with its partofspeech 1figure 1 shows an example where a total of 6 candidate paths are encoded and the optimal path is marked with bold typeas we see the set of labels to predict and the set of states in the lattice are different unlike english partofspeech tagging that word boundary ambiguity does not existformally the task of japanese morphological analysis can be defined as followslet x be an input unsegmented sentencelet y be a path a sequence of tokens where each token is a pair of word wi and its partofspeech tiin other words y where y is the number of tokens in the path ylet y be a set of candidate paths in a lattice built from the input sentence x and a lexiconthe goal is to select a correct path yˆ from all candidate paths in the ythe distinct property of japanese morphological analysis is that the number of tokens y varies since the set of labels and the set of states are not the samejapanese partofspeech tagsets used in the two major japanese morphological analyzers chasen2 and juman3 take the form of a hierarchical structurefor example ipa tagset4 used in chasen consists of three categories partofspeech conjugation form and conjugate type the cform and ctype are assigned only to words that conjugate such as verbs and adjectivesthe partofspeech has at most four levels of subcategoriesthe top level has 15 different categories such as noun verb etcnoun is subdivided into common noun proper noun and so onproper noun is again subdivided into person organization or place etcthe bottom level can be thought as the word level with which we can completely discriminate all words as different posif we distinguish each branch of the hierarchical tree as a different label the total number amounts to about 500 which is much larger than the typical english pos tagset such as penn treebankthe major effort has been devoted how to interpolate each level of the hierarchical structure as well as to exploit atomic features such as word suffixes and character typesif we only use the bottom level we suffer from the data sparseness problemon the other hand if we use the top level we lack in granularity of pos to capture fine differencesfor instance some suffixes appear after names and are helpful to detect words with name posin addition the conjugation form must be distinguished appearing only in the succeeding position in a bigram since it is dominated by the word appearing in the nextasahara et al extended hmms so as to incorporate 1 positionwise grouping 2 wordlevel statistics and 3 smoothing of word and pos level statistics however the proposed method failed to capture nonindependent features such as suffixes and character types and selected smoothing parameters in an adhoc wayit is known that maximum entropy markov models or other discriminative models with independently trained nextstate classifiers potentially suffer from the label bias and length biasin japanese morphological analysis they are extremely serious problemsthis is because as shown in figure 1 the branching variance is considerably high and the number of tokens varies according to the output pathan example of the label bias is illustrated in 
figure 2 where the path is searched by sequential combinations of maximum entropy models ie if memms learn the correct path ad with independently trained maximum entropy models the path be will have a higher probability and then be selected in decodingthis is because the token b has only the single outgoing token e and the transition probability for be is always 10generally speaking the complexities of transitions vary according to the tokens and the transition probabilities with lowentropy will be estimated high in decodingthis problem occurs because the training is performed only using the correct path ignoring all other transitionsmoreover we cannot ignore the influence of the length bias eitherby the length bias we mean that short paths consisting of a small number of tokens are preferred to long patheven if the transition probability of each token is small the total probability of the path will be amplified when the path is short 2length bias occurs in japanese morphological analysis because the number of output tokens y varies by use of prior lexiconsuchimoto et al attempted a variant of memms for japanese morphological analysis with a number of features including suffixes and character types although the performance of unknown words were improved that of known words degraded due to the label and length biaswrong segmentation had been reported in sentences which are analyzed correctly by naive rulebased or hmmsbased analyzersconditional random fields overcome the problems described in section 22crfs are discriminative models and can thus capture many correlated features of the inputsthis allows flexible feature designs for hierarchical tagsetscrfs have a single exponential model for the joint probability of the entire paths given the input sentence while memms consist of a sequential combination of exponential models each of which estimates a conditional probability of next tokens given the current statethis minimizes the influences of the label and length biasas explained in section 21 there is word boundary ambiguity in japanese and we choose to use a lattice instead of bi taggingthis implies that the set of labels and the set of states are different and the number of tokens y varies according to a pathin order to accomodate this we define crfs for japanese morphological analysis as the conditional probability of an output path y given an input sequence x where zx is a normalization factor over all candidate paths ie fk is an arbitrary feature function over ith token hwi tii and its previous token hwi1 ti1i 5 λk is a learned weight or parameter associated with feature function fknote that our formulation of crfs is different from the widelyused formulations the previous applications of crfs assign a conditional probability for a label sequence y y1 yt given an input sequence x x1 xt as in our formulation crfs deal with word boundary ambiguitythus the the size of output sequence t is not fixed through all candidates y ythe index i is not tied with the input x as in the original crfs but unique to the output y yhere we introduce the global feature vecthe global feature vector p can also be represented as p zx1 expthe most probable path yˆ for the input sentence x is then given by to maximize lλ we have to maximize the difference between the inner product of the correct path a f and those of all other candidates a f y ycrfs is thus trained to discriminate the correct path from all other candidates which reduces the influences of the label and length bias in encodingat the optimal 
point the firstderivative of the loglikelihood becomes 0 thus where ok ej fk is the count of feature k observed in the training data t and ek ej epfk is the expectation of feature k over the model distribution p and t the expectation can efficiently be calculated using a variant of the forwardbackward algorithm where fk is an abbreviation for fk b is a set of all bigram sequences observed in the lattice for x and α and β are the forwardbackward costs given by the following recursive definitions which can be found with the viterbi algorithman interesting note is that the decoding process of crfs can be reduced into a simple linear combinations over all global features where lt and rt denote a set of tokens each of which connects to the token hw ti from the left and the right respectivelynote that initial costs of two virtual tokens α and β are set to be 1a normalization constant is then given by zx αwe attempt two types of regularizations in order to avoid overfittingthey are a gaussian prior and a laplacian prior crfs are trained using the standard maximum likelihood estimation ie maximizing the loglikelihood lλ of a given training set t hxjyjin j1 below we refer to crfs with l1norm and l2norm regularization as l1crfs and l2crfs respectivelythe parameter c e r is a hyperparameter of crfs determined by a cross validationl1crfs can be reformulated into the constrained optimization problem below by letting ak ak ak at the optimal point the following karushkuhuntucker conditions satisfy ak c 12 0 ak c 12 0 and c 12these conditions mean that both ak and ak are set to be 0 when c 12a nonzero weight is assigned to ak only when c 12l2crfs in contrast give the optimal solution when δlλ δλk c ak 0omitting the proof 0 can be shown and l2crfs thus give a nonsparse solution where all ak have nonzero weightsthe relationship between two reguralizations have been studied in machine learning community reported that l1regularizer should be chosen for a problem where most of given features are irrelevanton the other hand l2regularizer should be chosen when most of given features are relevantan advantage of l1based regularizer is that it often leads to sparse solutions where most of ak are exactly 0the features assigned zero weight are thought as irrelevant features to classificationsthe l2based regularizer also seen in svms produces a nonsparse solution where all of ak have nonzero weightsall features are used with l2crfsthe optimal solutions of l2crfs can be obtained by using traditional iterative scaling algorithms or more efficient quasinewton methods for l1crfs constrained optimizers can be usedwe use two widelyused japanese annotated corpora in the research community kyoto university corpus ver 20 and rwcp text corpus for our experiments on crfsnote that each corpus has a different pos tagset and details are summarized in table 1one of the advantages of crfs is that they are flexible enough to capture many correlated features including overlapping and nonindependent featureswe thus use as many features as possible which could not be used in hmmstable 2 summarizes the set of feature templates used in the kc datathe templates for rwcp are essentially the same as those of kc except for the maximum level of pos subcatgeorieswordlevel templates are employed when the words are lexicalized ie those that belong to particle auxiliary verb or suffix6for an unknown word length of the word up to 2 suffixesprefixes and character types are used as the featureswe use all features observed in the lattice without any 
cutoff thresholdstable 1 also includes the number of features in both data setswe evaluate performance with the standard fscore defined as follows where recall of correct tokens of tokens in test corpus of correct tokens p recision of tokens in system output in the evaluations of fscores three criteria of correctness are used seg top and all the hyperparameters c for l1crfs and l2crfs are selected by crossvalidationexperiments are implemented in c and executed on linux with xeon 28 ghz dual processors and 40 gbyte of main memorytables 3 and 4 show experimental results using kc and rwcp respectivelythe three fscores for our crfs and a baseline bigram hmms are listedin table 3 the results of a variant of maximum entropy markov models and a rulebased analyzer are also shownto make a fare comparison we use exactly the same data as in table 4 the result of an extended hidden markov models trained and tested with the same corpus is also shownehmms is applied to the current implementation of chasendetails of ehmms are described in section 432we directly evaluated the difference of these systems using mcnemars testsince there are no standard methods to evaluate the significance of f scores we convert the outputs into the characterbased bi labels and then employ a mcnemars paired test on the labeling disagreementsthis evaluation was also used in the results of mcnemars test suggest that l2crfs is significantly better than other systems including l1crfs8the overall results support our empirical success of morphological analysis based on crfsuchimoto el al proposed a variant of memms trained with a number of features although they improved the accuracy for unknown words they fail to segment some sentences which are correctly segmented with hmms or rulebased analyzersfigure 3 illustrates the sentences which are incorrectly segmented by uchimotos memmsthe correct paths are indicated by bold boxesuchimoto et al concluded that these errors were caused by nonstandard entries in the lexiconin figure 3 ロマ ンは and ない心 are unusual spellings and they are normally written as ロマン派 and 内心 respectivelyhowever we conjecture that these errors are caused by the influence of the length biasto support our claim these sentences are correctly segmented by crfs hmms and rulebased analyzers using the same lexicon as by the length bias short paths are preferred to long pathsthus single token ロマンは or ない心 is likely to be selected compared to multiple tokens ロマン は or な い 心moreover ロマン and ロマンは have exactly the same pos and transition probabilities of these tokens become almost equalconsequentially there is no choice but to select a short path in order to maximize the whole sentence probabilitytable 5 summarizes the number of errors in hmms crfs and memms using the kc data settwo types of errors lerror and serror are given in this table lerror means that a system incorrectly outputs a longer token than the correct token respectivelyby length bias long tokens are preferred to short tokensthus larger number of lerrors implies that the result is highly influenced by the length biaswhile the relative rates of lerror and serror are almost the same in hmms and crfs the number of lerrors with memms amounts to 416 which is 70 of total errors and is even larger than that of naive hmms this result supports our claim that memms is not sufficient to be applied to japanese morphological analysis where the length bias is inevitableasahara et al extended the original hmms by 1 positionwise grouping of pos tags 2 wordlevel statistics and 3 
smoothing of word and pos level statistics all of these techniques are designed to capture hierarchical structures of pos tagsetsfor instance in the positionwise grouping optimal levels of pos hierarchies are changed according to the contextsbest hierarchies for each context are selected by handcrafted rules or automatic errordriven procedurescrfs can realize such extensions naturally and straightforwardlyin crfs positionwise grouping and wordpos smoothing are simply integrated into a design of feature functionsparameters λk for each feature are automatically configured by general maximum likelihood estimationas shown in table 2 we can employ a number of templates to capture pos hierarchiesfurthermore some overlapping features can be used which was not possible in the extended hmmsl2crfs perform slightly better than l1crfs which indicates that most of given features are relevant to both of two datasetsthe numbers of active features used in l1crfs are much smaller than those in l2crfs 580032 vs l1crfs 90163 101757 l1crfs are worth being examined if there are some practical constraints in this paper we present how conditional random fields can be applied to japanese morphological analysis in which word boundary ambiguity existsby virtue of crfs 1 a number of correlated features for hierarchical tagsets can be incorporated which was not possible in hmms and 2 influences of label and length bias are minimized which caused errors in memmswe compare results between crfs memms and hmms in two japanese annotated corpora and crfs outperform the other approachesalthough we discuss japanese morphological analysis the proposed approach can be applicable to other nonsegmented languages such as chinese or thaithere exist some phenomena which cannot be analyzed only with bigram features in japanese morphological analysisto improve accuracy trigram or more general ngram features would be usefulcrfs have capability of handling such featureshowever the numbers of features and nodes in the lattice increase exponentially as longer contexts are capturedto deal with longer contexts we need a practical feature selection which effectively trades between accuracy and efficiencyfor this challenge mccallum proposes an interesting research avenue to explore we would like to thank kiyotaka uchimoto and masayuki asahara who explained the details of their japanese morphological analyzers
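For the lattice-based formulation described above, the lexicon is what turns an unsegmented sentence into a tractable set of candidate tokens. The sketch below enumerates every dictionary word (with each of its POS tags) starting at every character position, falling back to a single-character unknown token; the maximum word length and the placeholder unknown-word handling are simplifications rather than what the analyzers above actually do.

```python
def build_lattice(sentence, lexicon, max_word_len=8):
    """Enumerate candidate tokens (word, POS, end position) for every starting position.

    lexicon maps a surface string to the collection of POS tags it can take.
    """
    n = len(sentence)
    lattice = [[] for _ in range(n)]   # lattice[i] = tokens starting at character i
    for i in range(n):
        for j in range(i + 1, min(i + max_word_len, n) + 1):
            surface = sentence[i:j]
            for pos in lexicon.get(surface, ()):
                lattice[i].append((surface, pos, j))     # token spans characters i..j-1
        if not lattice[i]:
            lattice[i].append((sentence[i], "UNK", i + 1))  # crude unknown-word fallback
    return lattice
```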
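Decoding then reduces, as noted above, to a linear score over the global features, so the normalization constant never has to be computed at test time. Below is a hedged Viterbi sketch over such a lattice; the feature extractor, the weight dictionary, and the BOS handling are illustrative, and real implementations index features as integers rather than strings.

```python
def viterbi_decode(lattice, sentence_length, weights, features):
    """Find the path maximizing sum_k lambda_k F_k(y, x) over the candidate lattice.

    lattice  -- as built above: lattice[i] lists (word, pos, end) tokens starting at i
    weights  -- dict: feature name -> learned weight lambda_k
    features -- callable(prev_token, token) -> iterable of feature names firing on the bigram
    """
    BOS = ("<BOS>", "<BOS>")
    # best[i] maps a token ending at position i to (score, (start_of_token, previous_token))
    best = [dict() for _ in range(sentence_length + 1)]
    best[0][BOS] = (0.0, None)
    for i in range(sentence_length):
        for prev, (prev_score, _) in best[i].items():
            for word, pos, end in lattice[i]:
                tok = (word, pos)
                score = prev_score + sum(weights.get(f, 0.0) for f in features(prev, tok))
                if tok not in best[end] or score > best[end][tok][0]:
                    best[end][tok] = (score, (i, prev))
    if not best[sentence_length]:
        return []
    # trace back from the best token covering the whole sentence
    tok = max(best[sentence_length], key=lambda t: best[sentence_length][t][0])
    path, pos_i = [], sentence_length
    while tok != BOS:
        path.append(tok)
        pos_i, tok = best[pos_i][tok][1]
    return list(reversed(path))
```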
W04-3230
applying conditional random fields to japanese morphological analysis. this paper presents japanese morphological analysis based on conditional random fields (crfs). previous work in crfs assumed that observation sequence boundaries were fixed. however, word boundaries are not clear in japanese, and hence a straightforward application of crfs is not possible. we show how crfs can be applied to situations where word boundary ambiguity exists. crfs offer a solution to the longstanding problems in corpus-based or statistical japanese morphological analysis. first, flexible feature designs for hierarchical tagsets become possible. second, influences of label and length bias are minimized. we experiment crfs on the standard testbed corpus used for japanese morphological analysis and evaluate our results using the same experimental dataset as the hmms and memms previously reported in this task. our results confirm that crfs not only solve the longstanding problems but also improve the performance over hmms and memms. we studied japanese word segmentation and pos tagging using conditional random fields and rule-based unknown word processing.
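For training, the gradient described above is the observed feature counts on the annotated path minus their expectations under the model, with the expectations obtained by a forward-backward recursion over the same lattice. The sketch below works in the probability domain for readability (a real implementation would use log-space or scaled recursions) and leaves the graph and feature encodings as assumptions.

```python
import math
from collections import defaultdict

def sentence_gradient(edges, topo_order, gold_features, weights):
    """Observed-minus-expected feature counts for one sentence.

    edges         -- dict: lattice node -> list of (next_node, feature_names) pairs; the
                     graph is assumed to run from a virtual "BOS" node to a virtual "EOS" node
    topo_order    -- nodes ordered so every edge goes from an earlier to a later node
    gold_features -- dict of feature counts on the annotated path (the observed counts O_k)
    weights       -- current parameters lambda_k as a dict
    """
    def edge_weight(feats):
        return math.exp(sum(weights.get(f, 0.0) for f in feats))

    alpha, beta = defaultdict(float), defaultdict(float)
    alpha["BOS"] = 1.0
    for u in topo_order:                     # forward pass
        for v, feats in edges.get(u, []):
            alpha[v] += alpha[u] * edge_weight(feats)
    beta["EOS"] = 1.0
    for u in reversed(topo_order):           # backward pass
        for v, feats in edges.get(u, []):
            beta[u] += edge_weight(feats) * beta[v]
    Z = alpha["EOS"]                         # normalization constant for this sentence

    expected = defaultdict(float)            # expected counts E_k under the model
    for u in topo_order:
        for v, feats in edges.get(u, []):
            p_edge = alpha[u] * edge_weight(feats) * beta[v] / Z
            for f in feats:
                expected[f] += p_edge

    return {f: gold_features.get(f, 0.0) - expected[f]
            for f in set(gold_features) | set(expected)}, Z
```

Summing these per-sentence gradients and subtracting the regularization term — lambda_k / C for the L2 penalty discussed above, with the L1 case handled via the constrained formulation — gives the quantity passed to a quasi-Newton optimizer; the exact placement of the constant C was lost in extraction, so the convention used here is only one common choice.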
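Both the evaluation above and the word-segmentation experiments that follow report recall, precision, and their harmonic mean F over words. The exact formulas were garbled in extraction, but they are the standard ones, with a word counted as correct only when its character span matches a gold word exactly; a small sketch:

```python
def segmentation_prf(gold_sentences, predicted_sentences):
    """Word-segmentation precision, recall, and F-measure.

    Both arguments are lists of segmented sentences (each a list of words); words are
    compared as (start, end) character offsets so that only exact spans count as correct.
    """
    def spans(words):
        out, pos = set(), 0
        for w in words:
            out.add((pos, pos + len(w)))
            pos += len(w)
        return out

    correct = gold_total = pred_total = 0
    for gold, pred in zip(gold_sentences, predicted_sentences):
        g, p = spans(gold), spans(pred)
        correct += len(g & p)
        gold_total += len(g)
        pred_total += len(p)
    recall = correct / gold_total
    precision = correct / pred_total
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f
```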
chinese partofspeech tagging oneatatime or allatonce wordbased or characterbased chinese partofspeech tagging assigns one pos tag to each word in a chinese sentence however since words are not demarcated in a chinese sentence chinese pos tagging requires word segmentation as a prerequisite we could perform chinese pos tagging strictly after word segmentation approach or perform both word segmentation and pos tagging in a combined single step simultaneously also we could choose to assign pos tags on a wordbyword basis making use of word features in the surrounding context or on a characterbycharacter basis with character features this paper presents an indepth study on such issues of processing architecture and feature representation for chinese pos tagging within a maximum entropy framework we found that while the allatonce characterbased approach is the best the oneatatime characterbased approach is a worthwhile compromise performing only slightly worse in terms of accuracy but taking shorter time to train and run as part of our investigation we also built a stateoftheart chinese word segmenter which outperforms the best sighan 2003 word segmenters in the closed track on 3 out of 4 test corpora most corpusbased language processing research has focused on the english languagetheoretically we should be able to just port corpusbased machine learning techniques across different languages since the techniques are largely language independenthowever in practice the special characteristics of different languages introduce complicationsfor chinese in particular words are not demarcated in a chinese sentenceas such we need to perform word segmentation before we can proceed with other tasks such as partofspeech tagging and parsing since one pos tag is assigned to each chinese word and the leaves of a parse tree for a chinese sentence are wordsto build a chinese pos tagger the following questions naturally arise this paper presents an indepth study on such issues of processing architecture and feature representation for chinese pos tagging within a maximum entropy frameworkwe analyze the performance of the different approaches in our attempt to find the best approachto our knowledge our work is the first to systematically investigate such issues in chinese pos taggingas a first step in our investigation we built a chinese word segmenter capable of performing word segmentation without using pos tag informationsince errors in word segmentation will propagate to the subsequent pos tagging phase in the oneatatime approach in order for our study to give relevant findings it is important that the word segmenter we use gives stateoftheart accuracythe word segmenter we built is similar to the maximum entropy word segmenter of our word segmenter uses a maximum entropy framework and is trained on manually segmented sentencesit classifies each chinese character given the features derived from its surrounding contexteach character can be assigned one of 4 possible boundary tags b for a character that begins a word and is followed by another character m for a character that occurs in the middle of a word e for a character that ends a word and s for a character that occurs as a singlecharacter wordbesides implementing a subset of the features described in we also came up with three additional types of features below which improved the accuracy of word segmentationthe default feature boundary tag feature of the previous character and boundary tag feature of the character two before the current character used in 
were dropped from our word segmenter as they did not improve word segmentation accuracy in our experimentsin the following feature templates used in our word segmenter c refers to a chinese character while w refers to a chinese wordtemplates refer to a context of five characters c0 denotes the current character cn for example given the character sequence 新华社 记者 when considering the character 社 template results in the following features c_2 新 c_1 华 c0 社 c1 记 c2 者 to be set to 1 template results in the features w0 c0 this feature captures the word context in which the current character is foundfor example the character 社 within the word 新华社 will have the feature w0 c0新华社_社 set to 1this feature helps in recognizing seen wordspu a punctuation symbol is usually a good indication of a word boundarythis feature checks whether the current character is a punctuation symbol ttttt this feature is especially helpful in predicting the word segmentation of dates and numbers whose exact characters may not have been seen in the training textfour type classes are defined numbers represent class 1 dates represent class 2 english letters represent class 3 and other characters represent class 4for example when considering the character 年 in the character sequence 九〇年代w the feature t t 11243 will be set to 1 during testing the probability of a boundary tag sequence assignment t1 tn given a character sequence c1 cn is determined by using the maximum entropy classifier to compute the probability that a boundary tag ti is assigned to each individual character ciif we were to just assign each character the boundary tag with the highest probability it is possible that the classifier produces a sequence of invalid tags to eliminate such possibilities we implemented a dynamic programming algorithm which considers only valid boundary tag sequences given an input character sequenceat each character position i the algorithm considers each last word candidate ending at position i and consisting of k characters in length to determine the boundary tag assignment to the last word w with k characters the first character of w is assigned boundary tag b the last character of w is assigned tag e and the intervening characters are assigned tag min this way the dynamic programming algorithm only considers valid tag sequences and we are also able to make use of the w0 c0 feature during testingafter word segmentation is done by the maximum entropy classifier a postprocessing step is applied to correct inconsistently segmented words made up of 3 or more charactersa word w is defined to be inconsistently segmented if the concatenation of 2 to 6 consecutive words elsewhere in the segmented output document matches w in the postprocessing step the segmentation of the characters of these consecutive words is changed so that they are segmented as a single wordto illustrate if the concatenation of 2 consecutive words 巴赛 罗纳 in the segmented output document matches another word 巴赛罗纳 then 巴赛 罗纳 will be resegmented as 巴赛罗纳 to evaluate the accuracy of our word segmenter we carried out 10fold cross validation on the 250kword penn chinese treebank version 30the java opennlp maximum entropy package from sourceforge1 was used in our implementation and training was done with a feature cutoff of 2 and 100 iterationsthe accuracy of word segmentation is measured by recall precision and fmeasure recall is the proportion of correctly segmented words in the goldstandard segmentation and precision is the proportion of correctly segmented words in word 
segmenters outputfigure 1 gives the word segmentation fmeasure of our word segmenter based on 10fold cv on the 250kword ctbour word segmenter achieves an average fmeasure of 951this accuracy compares favorably with which reported 946 word segmentation fmeasure using his full parser without additional lexical features and about 9492 word segmentation fmeasure using only word boundaries information no pos tags or constituent labels but with lexical features derived from a 58kentry word listthe average training time taken to train on 90 of the 250kword ctb was 12 minutes while testing on 10 of ctb took about 1 minutethe running times reported in this paper were all obtained on an intel xeon 24ghz computer with 2gb ramfigure 1 ctb 10fold cv word segmentation fmeasure for our word segmenter as further evaluation we tested our word segmenter on all the 4 test corpora hong kong cityu and peking university of the closed track of the 2003 aclsighansponsored first international chinese word segmentation bakeoff for each of the 4 corpora we trained our word segmenter on only the official released training data of that corpustraining was conducted with feature cutoff of 2 and 100 iterations except for the as corpus where we used cutoff 3 since the as training corpus was too big to train with cutoff 2figure 2 shows our word segmenters fmeasure compared to those reported by all the 2003 sighan participants in the four closed tracks our word segmenter achieved higher fmeasure than the best reported fmeasure in the sighan bakeoff on the asc hkc and pkc corpusfor ctbc due to the exceptionally high outofvocabulary rate of the test data our word segmenters fmeasure ranked in the third position used additional named entity knowledgedata in their word segmenterwe also compared the fmeasure of our word segmenter on ctbo the open category of the ctb corpus where participants were free to use any available resources and were not restricted to only the official released training data of ctbon this ctbo task we used as additional training data the as training corpus provided by sighan after converting the as training corpus to gb encodingwe found that with this additional as training data added to the original 3 last ranked participant of sighan ctb with fmeasure 732 is not shown in figure 2 due to space constraint official released ctb training data of sighan our word segmenter achieved an fmeasure of 922 higher than the best reported fmeasure in the ctb open taskwith sufficient training data our word segmenter can perform very wellin our evaluation we also found that the additional features we introduced in section 22 and the postprocessing step consistently improved average word segmentation fmeasure when evaluated on the 4 sighan test corpora in the closed trackthe additional features improved fmeasure by an average of about 04 and the postprocessing step added on top of the use of all features further improved fmeasure by 03 now that we have successfully built a stateoftheart chinese word segmenter we are ready to explore issues of processing architecture and feature representation for chinese pos taggingan english pos tagger based on maximum entropy modeling was built by as a first attempt we investigated whether simply porting the method used by for english pos tagging would work equally well for chineseapplying it in the context of chinese pos tagging ratnaparkhis method assumes that words are presegmented and it assigns pos tags on a wordbyword basis making use of word features in the surrounding 
contextthis gives rise to a oneatatime wordbased pos taggernote that in a oneatatime approach the wordsegmented input sentence given to the pos tagger may contain word segmentation errors which can lower the pos tagging accuracythe following feature templates were chosenw refers to a word while pos refers to the pos tag assignedthe feature pu checks if all characters in the current word are punctuation charactersfeature encodes the class of characters that constitute the surrounding words of the word segmenter in section 21four type classes are defined a word is of class 1 if it is a number class 2 if the word is made up of only numeric characters followed by h 月or 年 class 3 when the word is made up of only english characters the testing procedure is similar to the beam search algorithm of which tags each word one by one and maintains as it sees a new word the n most probable pos tag sequence candidates up to that point in the sentencefor our experiment we have chosen n to be 3the 250kword ctb corpus tagged with 32 different pos tags was employed in our evaluation of pos taggers in this studywe ran 10fold cv on the ctb corpus using our word segmenters output for each of the 10 runs as the input sentences to the pos taggerpos tagging accuracy is simply calculated as figure 3 pos tagging accuracy using oneatatime wordbased pos tagger the pos tagging accuracy is plotted in figure 3the average pos tagging accuracy achieved for the 10 experiments was only 841 far lower than the 96 achievable by english pos taggers on the english penn treebank tag setthe average training time was 25 minutes while testing took about 20 secondsas an experiment we also conducted pos tagging using only the features and in section 31 similar to and we obtained an average pos tagging accuracy of 831 for that set of featuresthe features that worked well for english pos tagging did not seem to apply to chinese in the maximum entropy frameworklanguage differences between chinese and english have no doubt made the direct porting of an english pos tagging method to chinese ineffectivesince oneatatime wordbased pos tagging did not yield good accuracy we proceeded to investigate other combinations of processing architecture and feature representationwe observed that character features were successfully used to build our word segmenter and that of similarly character features were used to build a maximum entropy chinese parser by where his parser could perform word segmentation pos tagging and parsing in an integrated unified approachwe hypothesized that assigning pos tags on a characterbycharacter basis making use of character features in the surrounding context may yield good accuracyso we next investigate such a oneatatime characterbased pos taggerthe features that were used for our word segmenter in section 21 were yet again applied with two additional features and to aid pos tag prediction this feature refers to the pos tag of the previous character before the current wordfor example in the character sequence 74 al ma when considering the character a the feature pos pn is set to 1 c2w0 pos for the same example given above when considering the character a the feature pospos p_pn is set to 1 the testing algorithm is similar to that described in section 32 except that the probability of a word being assigned a pos tag t is estimated by the product of the probability of its individual characters being assigned the same pos tag t for example when estimating the probability of wtwu being tagged nr we find the product of the 
probability of wt being tagged nr w being tagged nr and you being tagged nrthat is we enforce the constraint that all characters within a segmented word in the presegmented input sentence must have the same pos tag10fold cv for ctb is repeated for this pos taggerfigure 4 shows the detailed pos tagging accuracywith a oneatatime characterbased pos tagger the average pos tagging accuracy improved to 917 76 higher than that achieved by the oneatatime wordbased pos taggerthe average training timing was 55 minutes while testing took about 50 secondsfigure 4 pos tagging accuracy using oneatatime characterbased pos tagger when a paired ttest was carried out to compare characterbased and wordbased oneatatime approaches the characterbased approach was found to be significantly better than the wordbased approach at the level of significance 001assuming a oneatatime processing architecture chinese pos tagging using a characterbased approach gives higher accuracy compared to a wordbased approachencouraged by the success of character features we next explored whether a change in processing architecture from oneatatime to allatonce while still retaining the use of character features could give further improvement to pos tagging accuracyin this approach both word segmentation and pos tagging will be performed in a combined single step simultaneouslyeach character is assigned both a boundary tag and a pos tag for example b_nn thus given 4 possible boundary tags and 32 unique pos tags present in the training corpus each character can potentially be assigned one of classesthe features we used are identical to those employed in the characterbased pos tagger described in section 41 except that features and are replaced with those listed belowin the following templates b refers to the boundary tag assignedfor example given the character sequence v al ma when considering the character a template results in the feature bposs_pn to be set to 1note that this approach is essentially that used by since his parser performs both word segmentation and pos tagging in one unified approachthe features we used are similar to his tag features except that we did not use features with three consecutive characters since we found that the use of these features did not improve accuracywe also added additional features beam search algorithm is used with n 3 during the testing phase10fold cv on ctb was carried out again using unsegmented test sentences as input to the programfigure 5 shows the word segmentation fmeasure while figure 6 shows the pos tagging accuracy achieved by this approachwith an allatonce characterbased approach an average word segmentation fmeasure of 952 and an average pos tagging accuracy of 919 was achievedthe average training timing was 3 hours while testing took about 20 minutesthere is a slight improvement in word segmentation and pos tagging accuracy using this approach compared to the oneatatime characterbased approachwhen a paired ttest was carried out at the level of significance 001 the allatonce approach was found to be significantly better than the oneatatime approach for pos tagging accuracy although the difference was insignificant for word segmentationhowever the time required for training and testing is increased significantly for the allatonce approachwhen efficiency is a major consideration or if high quality handsegmented text is available the oneatatime characterbased approach could indeed be a worthwhile compromise performing only slightly worse than the allatonce approachtable 1 summarizes 
the methods investigated in this papertotal testing time includes both word segmentation and pos tagging on 10 of ctb datanote that an allatonce wordbased approach is not applicable as word segmentation requires character features to determine the word boundarieswordbased or characterbasedthe findings that a characterbased approach is better than a wordbased approach for chinese pos tagging is not too surprisingunlike in english where each english letter by itself does not possess any meaning many chinese characters have well defined meaningsfor example the single chinese character 0 means knowand when a character appears as part of a word the word derives part of its meaning from the component charactersfor example 0v means knowledge 3u0 means ignorant 08 means wellknown etcin addition since the outofvocabulary rate for chinese words is much higher than the oov rate for chinese characters in the presence of an unknown word using the component characters in the word to help predict the correct pos tag is a good heuristiconeatatime or allatoncethe allatonce approach which considers all aspects of available information in an integrated unified compared with the allatonce characterbased approach previously proposed framework can make better informed decisions but incurs a higher computational costmuch previous research on chinese language processing focused on word segmentation relatively less work has been done on chinese pos taggingkwong and tsou discussed the implications of pos ambiguity in chinese and the possible approaches to tackle this problem when tagging a corpus for nlp taskszhou and su investigated an approach to build a chinese analyzer that integrated word segmentation pos tagging and parsing based on a hidden markov modeljing et al focused on chinese named entity recognition considering issues like characterbased versus wordbased approachesto our knowledge our work is the first to systematically investigate issues of processing architecture and feature representation for chinese pos taggingour maximum entropy word segmenter is similar to that of but the additional features we used and the postprocessing step gave improved word segmentation accuracythe research most similar to ours is luo presented a maximum entropy characterbased parser which as a consequence of parsing also performed word segmentation and pos taggingthe allatonce characterbased approach reported in this paper is essentially the approach proposed by luowhile our investigation reveals that such an approach gives good accuracy our findings however indicate that a oneatatime characterbased approach to pos tagging gave quite comparable accuracy with the benefit of incurring much reduced computational costlanguage differences between english and chinese have made direct porting of an english pos tagging method to chinese ineffectivein chinese individual characters encode information that aids in pos taggingusing a characterbased approach for chinese pos tagging is more effective than a wordbased approachour study has also revealed that the oneatatime characterbased approach gives relatively good pos tagging accuracy with a much improved training and testing timethis research is partially supported by a research grant r252000125112 from national university of singapore academic research fund
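Two ingredients of the word segmenter described above are simple enough to sketch directly: the conversion of manually segmented text into per-character b/m/e/s boundary tags, and the character type classes (numbers 1, dates 2, English letters 3, other 4) behind the T(-2)...T(2) feature. The sketch below is my own reading of that description, not the released system, and the digit and date character sets are illustrative assumptions.

```python
# A minimal sketch (my own reading, not the released system) of two pieces of
# the maximum entropy word segmenter: (1) converting segmented training text
# into per-character b/m/e/s boundary tags, and (2) the character type classes
# used in the T(-2)...T(2) feature. The digit/date character sets below are
# illustrative assumptions.

CN_DIGITS = set("0123456789〇一二三四五六七八九十百千万")
DATE_CHARS = set("日月年")

def boundary_tags(words):
    """Map segmented words to (character, tag) pairs, tags in {b, m, e, s}."""
    pairs = []
    for w in words:
        if len(w) == 1:
            pairs.append((w, "s"))                    # single-character word
        else:
            pairs.append((w[0], "b"))                 # begins a word
            pairs.extend((c, "m") for c in w[1:-1])   # middle of a word
            pairs.append((w[-1], "e"))                # ends a word
    return pairs

def char_type(c):
    if c in CN_DIGITS:
        return "1"        # number
    if c in DATE_CHARS:
        return "2"        # date character
    if c.isascii() and c.isalpha():
        return "3"        # English letter
    return "4"            # everything else

def type_feature(chars, i):
    """The T(-2)T(-1)T(0)T(1)T(2) feature for the character at position i."""
    window = []
    for off in range(-2, 3):
        j = i + off
        window.append(char_type(chars[j]) if 0 <= j < len(chars) else "4")
    return "T=" + "".join(window)

if __name__ == "__main__":
    print(boundary_tags(["新华社", "记者"]))
    # [('新', 'b'), ('华', 'm'), ('社', 'e'), ('记', 'b'), ('者', 'e')]
    print(type_feature(list("九〇年代"), 2))   # type-class feature for 年
```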
W04-3236
chinese partofspeech tagging oneatatime or allatonce wordbased or characterbasedchinese partofspeech tagging assigns one pos tag to each word in a chinese sentencehowever since words are not demarcated in a chinese sentence chinese pos tagging requires word segmentation as a prerequisitewe could perform chinese pos tagging strictly after word segmentation or perform both word segmentation and pos tagging in a combined single step simultaneously also we could choose to assign pos tags on a wordbyword basis making use of word features in the surrounding context or on a characterbycharacter basis with character features this paper presents an indepth study on such issues of processing architecture and feature representation for chinese pos tagging within a maximum entropy frameworkwe found that while the allatonce characterbased approach is the best the oneatatime characterbased approach is a worthwhile compromise performing only slightly worse in terms of accuracy but taking shorter time to train and runas part of our investigation we also built a stateoftheart chinese word segmenter which outperforms the best sighan 2003 word segmenters in the closed track on 3 out of 4 test corpora
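One detail of the character-based tagging summarized above that lends itself to a short sketch is the scoring constraint: every character of a pre-segmented word must receive the same POS tag, and the word-level probability of a tag is the product of the character-level probabilities. The character classifier is abstracted into a function argument here, and the toy distribution in the demo is invented purely to make the sketch runnable; the full system additionally uses beam search over tag histories.

```python
# A minimal sketch of the scoring constraint used by the one-at-a-time
# character-based POS tagger: all characters of a segmented word share one POS
# tag, and the word-level probability of tag t is the product of the
# character-level probabilities of t. The character classifier is assumed to
# be given; the toy distribution below is invented for the demo only.

import math

def word_tag_score(word, tag, char_prob):
    """log P(tag | word) under the product-of-characters constraint.
    char_prob(char, tag) -> probability from the character-level classifier."""
    return sum(math.log(char_prob(c, tag)) for c in word)

def tag_segmented_sentence(words, tagset, char_prob):
    return [(w, max(tagset, key=lambda t: word_tag_score(w, t, char_prob)))
            for w in words]

if __name__ == "__main__":
    def toy_char_prob(c, t):        # invented numbers, for illustration only
        return {"NR": 0.5, "NN": 0.3, "VV": 0.2}[t]
    print(tag_segmented_sentence(["新华社", "记者"], ["NR", "NN", "VV"],
                                 toy_char_prob))
```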
adaptation of maximum entropy capitalizer little data can help a lot a novel technique for maximum a posteriori adaptation of maximum entropy and maximum entropy markov models is presented the technique is applied to the problem of recovering the correct capitalization of uniformly cased text a background capitalizer trained on 20mwds of wall street journal text from 1987 is adapted to two broadcast news test sets one containing abc primetime live text and the other npr morning newscnn morning edition text from 1996 the indomain performance of the wsj capitalizer is 45 better than that of the 1gram baseline when evaluated on a test set drawn from wsj 1994 when evaluating on the mismatched outofdomain test data the 1gram baseline is outperformed by 60 the improvement brought by the adaptation technique using a very small amount of matched bn data 2570kwds is about 2025 relative overall automatic capitalization error rate 14is achieved on bn data automatic capitalization is a practically relevant problem speech recognition output needs to be capitalized also modern word processors perform capitalization among other text proofing algorithms such as spelling correction and grammar checkingcapitalization can be also used as a preprocessing step in named entity extraction or machine translationwe study the impact of using increasing amounts of training data as well as using a small amount of adaptation data on this simple problem that is well suited to datadriven approaches since vast amounts of training data are easily obtainable by simply wiping the case information in textas in previous approaches the problem is framed as an instance of the class of sequence labeling problemsa case frequently encountered in practice is that of using mismatched outofdomain in this particular case we used broadcast news test datafor example one may wish to use a capitalization engine developed on newswire text for email or office documentsthis typically affects negatively the performance of a given model and more sophisticated models tend to be more brittlein the capitalization case we have studied the relative performance improvement of the memm capitalizer over the 1gram baseline drops from indomain wsj performance of 45 to 3540 when used on the slightly mismatched bn datain order to take advantage of the adaptation data in our scenario a maximum aposteriori adaptation technique for maximum entropy models is developedthe adaptation procedure proves to be quite effective in further reducing the capitalization error of the wsj memm capitalizer on bn test datait is also quite general and could improve performance of maxent models in any scenario where model adaptation is desirablea further relative improvement of about 20 is obtained by adapting the wsj model to broadcast news textoverall the memm capitalizer adapted to bn data achieves 60 relative improvement in accuracy over the 1gram baselinethe paper is organized as follows the next section frames automatic capitalization as a sequence labeling problem presents previous approaches as well as the widespread and highly suboptimal 1gram capitalization technique that is used as a baseline in most experiments in this work and othersthe memm sequence labeling technique is briefly reviewed in section 3section 4 describes the map adaptation technique used for the capitalization of outofdomain textthe detailed mathematical derivation is presented in appendix athe experimental results are presented in section 5 followed by conclusions and suggestions for future 
workautomatic capitalization can be seen as a sequence tagging problem each lowercase word receives a tag that describes its capitalization formsimilar to the work in we tag each word in a sentence with one of the tags for training a given capitalizer one needs to convert running text into uniform case text accompanied by the above capitalization tagsfor example primetime continues on abc period now comma from los angeles comma diane sawyer period the text is assumed to be already segmented into sentencesany sequence labeling algorithm can then be trained for tagging lowercase word sequences with capitalization tagsat test time the uniform case text to be capitalized is first segmented into sentences1 after which each sentence is taggeda widespread algorithm used for capitalization is the 1gram tagger for every word in a given vocabulary use the most frequent tag encountered in a large amount of training dataas a special case for automatic capitalization the most frequent tag for the first word in a sentence is overridden by cap thus capitalizing on the fact that the first word in a sentence is most likely capitalized2due to its popularity both our work and that of uses the 1gram capitalizer as a baselinethe work in indicates that the same 1gram algorithm is used in microsoft word 2000 and is consequently used as a baseline for evaluating the performance of their algorithm as wellwe share the approach to capitalization as sequence tagging with that of in their approach a language model is built on pairs and then used to disambiguate over all possible tag assignments to a sentence using dynamic programming techniquesthe same idea is explored in in the larger context of automatic punctuation generation and capitalization from speech recognition outputa second approach they consider for capitalization is the use a rulebased tagger as described by which they show to outperform the case sensitive language modeling approach and be quite robust to speech recognition errors and punctuation generation errorsdeparting from their work our approach builds on a standard technique for sequence tagging namely memms which has been successfully applied to partofspeech tagging the memm approach models the tag sequence t conditionally on the word sequence w which has a few substantial advantages over the 1gram tagging approach more recently certain drawbacks of memm models have been addressed by the conditional random field approach which slightly outperforms memms on a standard partofspeech tagging taskin a similar vein the work of explores the use of discriminatively trained hmms for sequence labeling problems a fair baseline for such cases that is often overlooked in favor of the inadequate maximum likelihood hmmsthe work on adapting the memm model parameters using map smoothing builds on the gaussian prior model used for smoothing maxent models as presented in we are not aware of any previous work on map adaptation of maxent models using a prior be it gaussian or a different one such as the exponential prior of although we do not have a formal derivation the adaptation technique should easily extend to the crf scenarioa final remark contrasts rulebased approaches to sequence tagging such as with the probabilistic approach taken in having a weight on each feature in the maxent model and a sound probabilistic model allows for a principled way of adapting the model to a new domain performing such adaptation in a rulebased model is unclear if at all possiblea simple approach to sequence labeling is the 
maximum entropy markov modelthe model assigns a probability p to any possible tag sequence model is builtthe approach we took is the one in which uses xi wi wi1 wi1 ti1 ti2we note that the probability model is causal in the sequencing of tags which allows for efficient algorithms that search for the most likely tag sequence t arg maxt p as well as ensures a properly normalized conditional probability model pthe probability p is modeled using a maximum entropy modelthe next section briefly describes the training procedure for details the reader is referred to the sufficient statistics that are extracted from the training data are tuples the tag assigned in context xi wi wi1 wi1 ti1 ti2 and denotes the count with which this event has been observed in the training databy way of example the event associated with the first word in the example in section 2 is mxc 1 currentwordprimetime previouswordbdw nextwordcontinues t1bdw t12bdwbdw prefix1p prefix2pr prefix3pri suffix1e suffix2me suffix3ime the maximum entropy probability model p uses features which are indicator functions of the type assuming a set of features f whose cardinality is f the probability assignment is made according to where a a1 af e rf is the set of realvalued model parameterswe used a simple count cutoff feature selection algorithm which counts the number of occurrences of all features in a predefined set after which it discards the features whose count is less than a prespecified thresholdthe parameter of the feature selection algorithm is the threshold value a value of 0 will keep all features encountered in the training datathe model parameters a are estimated such that the model assigns maximum loglikelihood to the training data subject to a gaussian prior centered at 0 a n that ensures smoothing as shown in and rederived in appendix a for the nonzero mean case the update equations are in our experiments the variances are tied to σi σ whose value is determined by line search on development data such that it yields the best tagging accuracyin the adaptation scenario we already have a maxent model trained on the background data and we wish to make best use of the adaptation data by balancing the twoa simple way to accomplish this is to use map adaptation using a prior distribution on the model parametersa gaussian prior for the model parameters a has been previously used in for smoothing maxent modelsthe prior has 0 mean and diagonal covariance a nin the adaptation scenario the prior distribution used is centered at the parameter values a0 estimated from the background data instead of 0 a nthe regularized loglikelihood of the adaptation training data becomes the adaptation is performed in stages fadapt fbackground4 introduced in the model receive 0 weightthe resulting model is thus equivalent with the background model train the model such that the regularized loglikelihood of the adaptation training data is maximizedthe prior mean is set at a0 abackground 0 denotes concatenation between the parameter vector for the background model and a 0valued vector of length fadapt fbackground corresponding to the weights for the new featuresas shown in appendix a the update equations are very similar to the 0mean case the effect of the prior is to keep the model parameters λi close to the background onesthe cost of moving away from the mean for each feature fi is specified by the magnitude of the variance σi a small variance σi will keep the weight λi close to its mean a large variance σi will make the regularized loglikelihood 
insensitive to the prior on λi allowing the use of the best value λi for modeling the adaptation dataanother observation is that not only the features observed in the adaptation data get updated even if epfi 0 the weight λi for feature fi will still get updated if the feature fi triggers for a context x encountered in the adaptation data and some predicted value y not necessarily present in the adaptation data in context xin our experiments the variances were tied to σi σ whose value was determined by line search on development data drawn from the adaptation datathe common variance σ will thus balance optimally the loglikelihood of the adaptation data with the a0 mean values obtained from the background dataother tying schemes are possible separate values could be used for the fadapt fbackground and fbackground feature sets respectivelywe did not experiment with various tying schemes although this is a promising research directionanother possibility to adapt the background model is to do minimum kl divergence training between the background exponential model b assumed fixed and an exponential model a built using the fbackground you fadapt feature setit can be shown that if we smooth the a model with a gaussian prior on the feature weights that is centered at 0 following the approach in for smoothing maximum entropy models then the mindiv update equations for estimating a on the adaptation data are identical to the map adaptation procedure we proposed5however we wish to point out that the equivalence holds only if the feature set for the new model a is fbackground you fadaptthe straightforward application of mindiv training by using only the fadapt feature set for a will not result in an equivalent procedure to oursin fact the difference in performance between this latter approach and ours could be quite large since the cardinality of fbackground is typically several orders of magnitude larger than that of fadapt and our approach also updates the weights corresponding to features in fbackground fadaptfurther experiments are needed to compare the performance of the two approachesthe baseline 1gram and the background memm capitalizer were trained on various amounts of wsj data from 1987 files ws87_001126the indomain test data used was file ws94_000 as for the adaptation experiments two different sets of bn data were used whose sizes are summarized in table 1 we have proceeded building both 1gram and memm capitalizers using various amounts of background training datathe model sizes for the 1gram and memm capitalizer are presented in table 2count cutoff feature selection has been used for the memm capitalizer with the threshold set at 5 so the memm model size is a function of the training datathe 1gram capitalizer used a vocabulary of the most likely 100k wds derived from the training datawe first evaluated the indomain and outofdomain relative performance of the 1gram and the memm capitalizers as a function of the amount of training datathe results are presented in table 3the memm capitalizer performs about 45 better domain and outofdomain data for various amounts of training data than the 1gram one when trained and evaluated on wall street journal textthe relative performance improvement of the memm capitalizer over the 1gram baseline drops to 3540 when using outofdomain broadcast news databoth models benefit from using more training datawe have then adapted the best memm model built on 20mwds on the two bn data sets and compared performance against the 1gram and the unadapted memm 
modelsthere are a number of parameters to be tuned on development datatable 4 presents the variation in model size with different count cutoff values for the feature selection procedure on the adaptation dataas can be seen very few features are added to the background modeltable 5 presents the variation in loglikelihood and capitalization accuracy on the cnn adaptation training and development data respectivelythe adaptation procedure was found cutoff threshold used for feature selection on cnntrn adaptation data the entry corresponding to the cutoff threshold of 106 represents the number of features in the background model to be insensitive to the number of reestimation iterations and more surprisingly to the number of features added to the background model from the adaptation data as shown in 5the most sensitive parameter is the prior variance σ2 as shown in figure 1 its value is chosen to maximize classification accuracy on development dataas expected low values of σ2 result in no adaptation at all whereas high values of σ2 fit the training data very well and result in a dramatic increase of training data loglikelihood and accuracies approaching 100 count cutoff and σ2 variance values loglikelihood and accuracy on adaptation data cnntrn as well as accuracy on heldout data cnndev the background model results are the entries corresponding to the cutoff threshold of finally table 6 presents the results on test data for 1gram background and adapted memmas can be seen the background memm outperforms the 1gram model on both bn test sets by about 3540 relativeadaptation improves performance even further by another 2025 relativeoverall the adapted models achieve 60 relative reduction in capitalization error over the 1gram baseline on both bn test setsan intuitively satisfying result is the fact that the crosstest set performance is worse than the adapted onethe memm tagger is very effective in reducing both indomain and outofdomain capitalization error by 3545 relative over a 1gram capitalization modelwe have also presented a general technique for adapting maxent probability modelsit was shown to be very effective in adapting a background memm capitalization model improving the accuracy by 2025 relativean overall 5060 reduction in capitalization error over the standard 1gram baseline is achieveda surprising result is that the adaptation performance gain is not due to adding more domainspecific features but rather making better use of the background features for modeling the indomain dataas expected adding more background training data improves performance but a very small amount of domain specific data also helps significantly if one can make use of it in an effective waythe there is no data like more data ruleofthumb could be amended by especially if it is the right dataas future work we plan to investigate the best way to blend increasing amounts of lessspecific background training data with specific indomain data for this and other problemsanother interesting research direction is to explore the usefulness of the map adaptation of maxent models for other problems among which we wish to include language modeling partofspeech tagging parsing machine translation information extraction text routingspecial thanks to adwait ratnaparkhi for making available the code for his memm tagger and maxent trainer
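The core of the adaptation technique described above is a Gaussian prior on the maximum entropy weights centered at the background model's parameters. Below is a small numpy sketch of that idea under simplifying assumptions: dense feature vectors and plain gradient ascent instead of the paper's iterative-scaling-style updates, with the variance tied to a single value and no line search. It illustrates the regularized objective, not the authors' trainer.

```python
# A minimal numpy sketch (an illustration under simplified assumptions, not the
# paper's trainer) of MAP adaptation of a maximum entropy classifier: the
# adaptation data is fit under a Gaussian prior centered at the background
# model's weights, i.e. a penalty of sum_i (lambda_i - lambda_i^0)^2 / (2 sigma^2).

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def map_adapt(X, y, w_background, sigma2=1.0, lr=0.1, iters=200):
    """X: (n, d) feature matrix, y: (n,) labels in {0..K-1},
    w_background: (d, K) weights of the background maxent model."""
    n, d = X.shape
    K = w_background.shape[1]
    w = w_background.copy()
    Y = np.eye(K)[y]                                 # one-hot targets
    for _ in range(iters):
        P = softmax(X @ w)                           # model probabilities
        grad_ll = X.T @ (Y - P) / n                  # log-likelihood gradient
        grad_prior = -(w - w_background) / (n * sigma2)  # pull toward background
        w += lr * (grad_ll + grad_prior)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 5))
    y = (X[:, 0] > 0).astype(int)
    w0 = rng.normal(scale=0.1, size=(5, 2))          # stand-in background weights
    w_adapted = map_adapt(X, y, w0, sigma2=0.5)
    print(np.round(w_adapted - w0, 3))               # how far adaptation moved the weights
```

A small sigma2 keeps the adapted weights close to the background model, while a large sigma2 lets the adaptation data dominate, mirroring the behaviour reported in the experiments.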
W04-3237
adaptation of maximum entropy capitalizer little data can help a lota novel technique for maximum a posteriori adaptation of maximum entropy and maximum entropy markov models is presentedthe technique is applied to the problem of automatically capitalizing uniformly cased textautomatic capitalization is a practically relevant problem speech recognition output needs to be capitalized also modern word processors perform capitalization among other text proofing algorithms such as spelling correction and grammar checkingcapitalization can be also used as a preprocessing step in named entity extraction or machine translationa background capitalizer trained on 20 m words of wall street journal text from 1987 is adapted to two broadcast news test sets one containing abc primetime live text and the other npr morning newscnn morning edition text from 1996the indomain performance of the wsj capitalizer is 45 better relative to the 1gram baseline when evaluated on a test set drawn from wsj 1994 when evaluating on the mismatched outofdomain test data the 1gram baseline is outperformed by 60 relative the improvement brought by the adaptation technique using a very small amount of matched bn data 2570k words is about 2025 relativeoverall automatic capitalization error rate of 14 is achieved on bn datathe performance gain obtained by employing our adaptation technique using a tiny amount of outofdomain training data on top of the background data is striking as little as 014 m words of indomain data brings more improvement than using 10 times more background training data we proposed method for transfer learning in maximum entropy models involves modifying the mu of this gaussian priorwe use the parameters of the source domain maximum entropy classifier as the means of a gaussian prior when training a new model on the target data
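For comparison, the 1-gram capitalizer used as the baseline above is easy to sketch: each word is assigned its most frequent capitalization tag from the training data, and the first word of every sentence is overridden to cap. Tag names other than cap (loc for lowercase, pnc for punctuation) are assumptions made for the sketch, since the full tag inventory is not listed in the text.

```python
# A minimal sketch of the 1-gram capitalizer baseline: every word gets its most
# frequent capitalization tag from training data, and the first word of each
# sentence is overridden to cap. Tag names other than cap (loc, pnc) are
# assumptions for this sketch.

from collections import Counter, defaultdict

def train_unigram_capitalizer(tagged_sentences):
    """tagged_sentences: iterable of [(lowercased_word, tag), ...] lists."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def capitalize(words, model, default_tag="loc"):
    tags = [model.get(w, default_tag) for w in words]
    if tags:
        tags[0] = "cap"                 # sentence-initial override
    return list(zip(words, tags))

if __name__ == "__main__":
    training = [[("now", "loc"), (",", "pnc"), ("from", "loc"),
                 ("los", "cap"), ("angeles", "cap"), (",", "pnc"),
                 ("diane", "cap"), ("sawyer", "cap"), (".", "pnc")]]
    model = train_unigram_capitalizer(training)
    print(capitalize(["diane", "sawyer", "reported", "from", "los", "angeles"],
                     model))
```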
a boosting algorithm for classification of semistructured text the focus of research in text classification has expanded from simple topic identification to more challenging tasks such as opinionmodality identification unfortunately the latter goals exceed the ability of the traditional bagofword representation approach and a richer more structural representation is required accordingly learning algorithms must be created that can handle the structures observed in texts in this paper we propose a boosting algorithm that captures substructures embedded in texts the proposal consists of i decision stumps that use subtrees as features and ii the boosting algorithm which employs the subtreebased decision stumps as weak learners we also discuss the relation between our algorithm and svms with tree kernel two experiments on opinionmodality classification confirm that subtree features are important text classification plays an important role in organizing the online texts available on the world wide web internet news and emailsuntil recently a number of machine learning algorithms have been applied to this problem and have been proven successful in many domains in the traditional text classification tasks one has to identify predefined text topics such as politics finance sports or entertainmentfor learning algorithms to identify these topics a text is usually represented as a bagofwords where a text is regarded as a multiset of words and the word order or syntactic relations appearing in the original text is ignoredeven though the bagofwords representation is naive and does not convey the meaning of the original text reasonable accuracy can be obtainedthis is because each word occurring in the text is highly relevant to the predefined topics to be identifiedat present ntt communication science laboratories 24 hikaridai seikacho soraku kyoto 6190237 japan takucslabkeclnttcojp given that a number of successes have been reported in the field of traditional text classification the focus of recent research has expanded from simple topic identification to more challenging tasks such as opinionmodality identificationexample includes categorization of customer emails and reviews by types of claims modalities or subjectivities for the latter the traditional bagofwords representation is not sufficient and a richer structural representation is requireda straightforward way to extend the traditional bagofwords representation is to heuristically add new types of features to the original bagofwords features such as fixedlength ngrams or fixedlength syntactic relations these adhoc solutions might give us reasonable performance however they are highly taskdependent and require careful design to create the optimal feature set for each taskgenerally speaking by using text processing systems a text can be converted into a semistructured text annotated with partsofspeech basephrase information or syntactic relationsthis information is useful in identifying opinions or modalities contained in the textwe think that it is more useful to propose a learning algorithm that can automatically capture relevant structural information observed in text rather than to heuristically add this information as new featuresfrom these points of view this paper proposes a classification algorithm that captures substructures embedded in textto simplify the problem we first assume that a text to be classified is represented as a labeled ordered tree which is a general data structure and a simple abstraction of textnote that word sequence 
basephrase annotation dependency tree and an xml document can be modeled as a labeled ordered treethe algorithm proposed here has the following characteristics i it performs learning and classification using structural information of text ii it uses a set of all subtrees for the feature set without any constraints iii even though the size of the candidate feature set becomes quite large it automatically selects a compact and relevant feature set based on boostingthis paper is organized as followsfirst we describe the details of our boosting algorithm in which the subtreebased decision stumps are applied as weak learnerssecond we show an implementation issue related to constructing an efficient learning algorithmwe also discuss the relation between our algorithm and svms with tree kernel two experiments on the opinion and modality classification tasks are employed to confirm that subtree features are importantwe first assume that a text to be classified is represented as a labeled ordered treethe focused problem can be formalized as a general problem called the tree classification problemthe tree classification problem is to induce a mapping f x 1 from given training examples t hxi yiili1 where xi x is a labeled ordered tree and yi 1 is a class label associated with each training data the important characteristic is that the input example xi is represented not as a numerical feature vector but a labeled ordered treelet us introduce a labeled ordered tree its definition and notations firstwe denote the number of nodes in t as tfigure 1 shows an example of a labeled ordered tree and its subtree and nonsubtreedecision stumps are simple classifiers where the final decision is made by only a single hypothesis or featureboostexter uses wordbased decision stumps for topicbased text classificationto classify trees we here extend the decision stump definition as followsdefinition 3 decision stumps for trees let t and x be labeled ordered trees and y be a class label a decision stump classifier for trees is given by the parameter for classification is the tuple ht yi hereafter referred to as the rule of the decision stumpsthe decision stumps are trained to find rule hˆt ˆyi that minimizes the error rate for the given training data t hxi yiili1 in this paper we will use gain instead of error rate for claritythe decision stumps classifiers for trees are too inaccurate to be applied to real applications since the final decision relies on the existence of a single treehowever accuracies can be boosted by the boosting algorithm boosting repeatedly calls a given weak learner to finally produce hypothesis f which is a linear combination of k hypotheses produced by the prior weak learners ie a weak learner is built at each iteration k with different distributions or weights d the weights are calculated in such a way that hard examples are focused on more than easier examplesto use the decision stumps as the weak learner of boosting we redefine the gain function as follows there exist many boosting algorithm variants however the original and the best known algorithm is adaboost we here use arcgv instead of adaboost since arcgv asymptotically maximizes the margin and shows faster convergence to the optimal solution than adaboostin this section we introduce an efficient and practical algorithm to find the optimal rule from given training datathis problem is formally defined as followsproblem 1 find optimal rule let t be training data where xi is a labeled ordered tree yi e 1 is a class label associated with xi 
and di is a normalized weight assigned to xigiven t find the optimal rule that maximizes the gain ie the most naive and exhaustive method in which we first enumerate all subtrees f and then calculate the gains for all subtrees is usually impractical since the number of subtrees is exponential to its sizewe thus adopt an alternative strategy to avoid such exhaustive enumerationthe method to find the optimal rule is modeled as a variant of the branchandbound algorithm and is summarized in the following strategies we will describe these steps more precisely in the following subsectionsabe and zaki independently proposed an efficient method rightmostextension to enumerate all subtrees from a given tree first the algorithm starts with a set of trees consisting of single nodes and then expands a given tree of size by attaching a new node to this tree to obtain trees of size k however it would be inefficient to expand nodes at arbitrary positions of the tree as duplicated enumeration is inevitablethe algorithm rightmost extension avoids such duplicated enumerations by restricting the position of attachmentwe here give the definition of rightmost extension to describe this restriction in detaildefinition 4 rightmost extension let t and t be labeled ordered treeswe say t is a rightmost extension of t if and only if t and t satisfy the following three conditions consider figure 2 which illustrates example tree t with the labels drawn from the set g a b cfor the sake of convenience each node in this figure has its original number the rightmostpath of the tree t is and occurs at positions 1 4 and 6 respectivelythe set of rightmost extended trees is then enumerated by simply adding a single node to a node on the rightmost pathsince there are three nodes on the rightmost path and the size of the label set is 3 a total of 9 trees are enumerated from the original tree t note that rightmost extension preserves the prefix ordering of nodes in t by repeating the process of rightmostextension recursively we can create a search space in which all trees drawn from the set g are enumeratedfigure 3 shows a snapshot of such a search spacerightmost extension defines a canonical search space in which one can enumerate all subtrees from a given set of treeswe here consider an upper bound of the gain that allows subspace pruning in this canonical search spacethe following theorem an extension of morhishita gives a convenient way of computing a tight upper bound on gain for any supertree t of t we can efficiently prune the search space spanned by right most extension using the upper bound of gain youduring the traverse of the subtree lattice built by the recursive process of rightmost extension we always maintain the temporally suboptimal gain τ among all gains calculated previouslyif µ 02when using the bagofwords feature no significant differences in accuracy are observed between boosting and svmswhen structural information is used in training and classification boosting performs slightly better than svms with tree kernelthe differences are significant when we use dependency features in the mod tasksvms show worse performance depending on tasks and categories when a convolution kernel is applied to sparse data kernel dot products between almost the same instances become much larger than those between different instancesthis is because the number of common features between similar instances exponentially increases with sizethis sometimes leads to overfitting in training where a test instance very close to an instance 
in training data is correctly classified and other instances are classified as a default classthis problem can be tackled by several heuristic approaches i employing a decay factor to reduce the weights of large substructures ii substituting kernel dot products for the gaussian function to smooth the original kernel dot products these approaches may achieve better accuracy however they yield neither the fast classification nor the interpretable feature space targeted by this papermoreover we cannot give a fair comparison in terms of the same feature spacethe selection of optimal hyperparameters such as decay factors in the first approach and smoothing parameters in the second approach is also left to as an open questionwe employed a mcnemars paired test on the labeling disagreementsunderlined results indicate that there is a significant difference against the baseline if there is a statistical difference between boosting and svms with the same feature representation better results are asteriskedin the previous section we described the merits of our boosting algorithmwe experimentally verified these merits from the results of the phs taskas illustrated in section 4 our method can automatically select relevant and compact features from a number of feature candidatesin the phs task a total 1793 features were selected while the set sizes of distinct unigram bigram and trigram appearing in the data were 4211 24206 and 43658 respectivelyeven though all subtrees are used as feature candidates boosting selects a small and highly relevant subset of featureswhen we explicitly enumerate the subtrees used in tree kernel the number of active features might amount to ten thousand or moretable 2 shows examples of extracted support features t and weight wt in in the phs taskafeatures including the word にくい in general にくい is an adjective expressing negative opinionsmost of features including にくい are assigned a negative weight however only one feature 切れに くい has a positive weightthis result strongly reflects the domain knowledge phs bfeatures including the word 使う 使う is a neutral expression for opinion classificationshowever the weight varies according to the surrounding context 1 使い たい positive 2 使い やす い positive 3 使い やす か った negative 4 の ほうが 使い やすい negativec features including the word 充電 features reflecting the domain knowledge are extracted 1 充電 時間 が 短い positive 2 充電 時間 長い negativethese features are interesting since we cannot determine the correct label by using just the bagofwords features such as recharge short or long alonetable 3 illustrates an example of actual classificationfor the input sentence 液晶が大きくて 綺麗 見やすい the system outputs the features applied to this classification along with their weights wtthis information allows us to analyze how the system classifies the input sentence in a category and what kind of features are used in the classificationwe cannot perform these analyses with tree kernel since it defines their feature space implicitlythe testing speed of our boosting algorithm is much higher than that of svms with tree kernelin the phs task the speeds of boosting and svms are 0531 sec5741 instances and 25542 sec5741 instances respectively 6we can say that boosting is about 480 times faster than svms with tree kerneleven though the potential size of search space is huge the pruning criterion proposed in this paper effectively prunes the search spacethe pruning conditions in fig4 are fulfilled with more than 90 probabititythe training speed of our method is 1384 sec5741 instances when 
we set k 60 000 it takes only 0023 sec to invoke the weak learner find optimal rulein this paper we focused on an algorithm for the classification of semistructured text in which a sentence is represented as a labeled ordered tree7our proposal consists of i decision stumps that use subtrees as features and ii boosting algorithm in which the subtreebased decision stumps are applied as weak learnerstwo experiments on opinionmodality classification tasks confirmed that subtree features are importantone natural extension is to adopt confidence rated predictions to the subtreebased weak learnersthis extension is also found in boostexter and shows better performance than binaryvalued learnersin our experiments ngram features showed comparable performance to dependency featureswe would like to apply our method to other applications where instances are represented in a tree and their subtrees play an important role in classifications and information extraction
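To make the weak learner above concrete, here is a simplified sketch of subtree-based decision stumps inside a boosting loop. Subtree containment is abstracted into a user-supplied predicate, candidate subtrees are enumerated exhaustively rather than by the rightmost-extension search with pruning, and the weight update follows plain AdaBoost rather than arc-gv; all of these are deliberate simplifications for illustration, not the authors' implementation.

```python
# A simplified sketch of subtree-based decision stumps as weak learners in
# boosting. Containment is a user-supplied predicate, candidates are enumerated
# exhaustively (no rightmost-extension search or pruning), and the update is
# plain AdaBoost rather than arc-gv.

import math

def stump_predict(x, t, y, contains):
    """h(x) = y if subtree t occurs in tree x, else -y."""
    return y if contains(x, t) else -y

def best_stump(examples, weights, candidates, contains):
    """Pick (t, y) maximizing gain(t, y) = sum_i d_i * y_i * h(x_i)."""
    best, best_gain = None, float("-inf")
    for t in candidates:
        for y in (+1, -1):
            gain = sum(d * yi * stump_predict(xi, t, y, contains)
                       for (xi, yi), d in zip(examples, weights))
            if gain > best_gain:
                best, best_gain = (t, y), gain
    return best, best_gain

def boost(examples, candidates, contains, rounds=5):
    n = len(examples)
    weights = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        (t, y), gain = best_stump(examples, weights, candidates, contains)
        eps = max(1e-9, (1.0 - gain) / 2.0)          # weighted error from gain
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        ensemble.append((alpha, t, y))
        # reweight so that misclassified examples get larger weights
        weights = [d * math.exp(-alpha * yi * stump_predict(xi, t, y, contains))
                   for (xi, yi), d in zip(examples, weights)]
        z = sum(weights)
        weights = [d / z for d in weights]
    return ensemble

def classify(x, ensemble, contains):
    score = sum(a * stump_predict(x, t, y, contains) for a, t, y in ensemble)
    return 1 if score >= 0 else -1

if __name__ == "__main__":
    # toy setting: "trees" are sets of node labels, containment is a subset test
    contains = lambda x, t: t <= x
    data = [({"a", "b"}, +1), ({"a", "c"}, +1), ({"c", "d"}, -1), ({"d"}, -1)]
    cands = [frozenset(s) for s in ({"a"}, {"b"}, {"c"}, {"d"}, {"a", "b"})]
    ens = boost(data, cands, contains, rounds=3)
    print(classify({"a", "d"}, ens, contains))
```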
W04-3239
a boosting algorithm for classification of semistructured textthe focus of research in text classification has expanded from simple topic identification to more challenging tasks such as opinionmodality identificationunfortunately the latter goals exceed the ability of the traditional bagofword representation approach and a richer more structural representation is requiredaccordingly learning algorithms must be created that can handle the structures observed in textsin this paper we propose a boosting algorithm that captures substructures embedded in textsthe proposal consists of i decision stumps that use subtrees as features and ii the boosting algorithm which employs the subtreebased decision stumps as weak learnerswe also discuss the relation between our algorithm and svms with tree kerneltwo experiments on opinionmodality classification confirm that subtree features are importantwe adopt the bact learning algorithm to effectively learn subtrees useful for both antecedent identification and zero pronoun detection
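The candidate enumeration step in the boosting paper above relies on rightmost extension: a tree of size k is grown only by attaching a single new node to a node on its rightmost path, which avoids duplicate enumeration. A minimal sketch follows, with trees represented as nested [label, children] lists, a representation chosen here for convenience rather than taken from the paper.

```python
# A minimal sketch of rightmost extension: every tree of size k+1 is generated
# from a tree of size k by attaching one new node to a node on the rightmost
# path. Trees are nested [label, children] lists (a representation chosen for
# this sketch, not taken from the paper).

import copy

def rightmost_extensions(tree, labels):
    """Return all trees obtained by attaching a single new node (one per label
    in `labels`) to some node on the rightmost path of `tree`."""
    # collect the nodes on the rightmost path as child-index paths from the root
    positions, node = [[]], tree
    while node[1]:
        positions.append(positions[-1] + [len(node[1]) - 1])
        node = node[1][-1]
    extensions = []
    for pos in positions:
        for label in labels:
            new_tree = copy.deepcopy(tree)
            target = new_tree
            for idx in pos:
                target = target[1][idx]
            target[1].append([label, []])   # the new node becomes the new rightmost leaf
            extensions.append(new_tree)
    return extensions

if __name__ == "__main__":
    # root a with children b and c, where c has a child b: the rightmost path
    # has three nodes, so with labels {a, b, c} we get 3 x 3 = 9 extensions,
    # matching the count given in the paper's example
    t = ["a", [["b", []], ["c", [["b", []]]]]]
    print(len(rightmost_extensions(t, ["a", "b", "c"])))   # 9
```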
lexpagerank prestige in multidocument text summarization multidocument extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudosentence we are now considering an approach for computing sentence importance based on the concept of eigenvector centrality that we call lexpagerank in this model a sentence connectivity matrix is constructed based on cosine similarity if the cosine similarity between two sentences exceeds a particular predefined threshold a corresponding edge is added to the connectivity matrix we provide an evaluation of our method on duc 2004 data the results show that our approach outperforms centroidbased summarization and is quite successful compared to other summarization systems text summarization is the process of automatically creating a compressed version of a given text that provides useful information for the userin this paper we focus on multidocument generic text summarization where the goal is to produce a summary of multiple documents about the same but unspecified topicour summarization approach is to assess the centrality of each sentence in a cluster and include the most important ones in the summaryin section 2 we present centroidbased summarization a wellknown method for judging sentence centralitythen we introduce two new measures for centrality degree and lexpagerank inspired from the prestige concept in social networks and based on our new approachwe compare our new methods and centroidbased summarization using a featurebased generic summarization toolkit mead and show that new features outperform centroid in most of the casestest data for our experiments is taken from document understanding conferences 2004 summarization evaluation to compare our system also with other stateoftheart summarization systemsextractive summarization produces summaries by choosing a subset of the sentences in the original documentsthis process can be viewed as choosing the most central sentences in a cluster that give the necessary and enough amount of information related to the main theme of the clustercentrality of a sentence is often defined in terms of the centrality of the words that it containsa common way of assessing word centrality is to look at the centroidthe centroid of a cluster is a pseudodocument which consists of words that have frequencyidf scores above a predefined thresholdin centroidbased summarization the sentences that contain more words from the centroid of the cluster are considered as centralformally the centroid score of a sentence is the cosine of the angle between the centroid vector of the whole cluster and the individual centroid of the sentencethis is a measure of how close the sentence is to the centroid of the clustercentroidbased summarization has given promising results in the past in this section we propose a new method to measure sentence centrality based on prestige in social networks which has also inspired many ideas in the computer networks and information retrievala cluster of documents can be viewed as a network of sentences that are related to each othersome sentences are more similar to each other while some others may share only a little information with the rest of the sentenceswe hypothesize that the sentences that are similar to many of the other sentences in a cluster are more central to the topicthere are two points to clarify in 
this definition of centralityfirst is how to define similarity between two sentencessecond is how to compute the overall prestige of a sentence given its similarity to other sentencesfor the similarity metric we use cosinea cluster may be represented by a cosine similarity matrix where each entry in the matrix is the similarity between the corresponding sentence pairfigure 1 shows a subset of a cluster used in duc 2004 and the corresponding cosine similarity matrixsentence id d s indicates the th sentence in the th documentin the following sections we discuss two methods to compute sentence prestige using this matrixin a cluster of related documents many of the sentences are expected to be somewhat similar to each other since they are all about the same topicthis can be seen in figure 1 where the majority of the values in the similarity matrix are nonzerosince we are interested in significant similarities we can eliminate some low values in this matrix by defining a threshold so that the cluster can be viewed as an graph where each sentence of the cluster is a node and significantly similar sentences are connected to each otherfigure 2 shows the graphs that correspond to the adjacency matrix derived by assuming the pair of sentences that have a similarity above and respectively in figure 1 are similar to each otherwe define degree centrality as the degree of each node in the similarity graphas seen in table 1 the choice of cosine threshold dramatically influences the interpretation of centralitytoo low thresholds may mistakenly take weak similarities into consideration while too high thresholds may lose much of the similarity relations in a clusterwhen computing degree centrality we have treated each edge as a vote to determine the overall prestige value of each nodethis is a totally democratic method where each vote counts the samehowever this may have a negative effect in the quality of the summaries in some cases where several unwanted sentences vote for each and raise their prestigesas an extreme example consider a noisy cluster where all the documents are related to each other but only one of them is about a somewhat different topicobviously we would not want any of the sentences in the unrelated document to be included in a generic summary of the clusterhowever assume that the unrelated document contains some sentences that are very prestigious considering only the votes in that documentthese sentences will get artificially high centrality scores by the local votes from a specific set of sentencesthis situation can be avoided by considering where the votes come from and taking the prestige of the voting node into account in weighting each voteour approach is inspired by a similar idea used in computing web page prestigesone of the most successful applications of prestige is pagerank the underlying technology behind the google search enginepagerank is a method proposed for assigning a prestige score to each page in the web independent of a specific queryin pagerank the score of a page is determined depending on the number of pages that link to that page as well as the individual scores of the linking pagesmore formally the pagerank of a page is given as follows pr where are pages that link to c is the number of outgoing links from page and is the damping factor which can be set between and this recursively defined value can be computed by forming the binary adjacency matrix of the web where if there is a link from page to page normalizing this matrix so that row sums equal to and 
finding the principal eigenvector of the normalized matrixpagerank for th page equals to the th entry in the eigenvectorprincipal eigenvector of a matrix can be computed with a simple iterative power methodthis method can be directly applied to the cosine similarity graph to find the most prestigious sentences in a documentwe use pagerank to weight each vote so that a vote that comes from a more prestigious sentence has a greater value in the centrality of a sentencenote that unlike the original pagerank method the graph is undirected since cosine similarity is a symmetric relationhowever today sunday that iraq refuses to back down from its decision to stop cooperating with disarmament inspectors before its demands are met2 d2s1 iraqi vice president taha yassin ramadan announced today thursday that iraq rejects cooperating with the united nations except on the issue of lifting the blockade imposed upon it since the year 19903 d2s2 ramadan told reporters in baghdad that iraq cannot deal positively with whoever represents the security council unless there was a clear stance on the issue of lifting the blockade off of it4 d2s3 baghdad had decided late last october to completely cease cooperating with the inspectors of the united nations special commission in charge of disarming iraqs weapons and whose work became very limited since the fifth of august and announced it will not resume its cooperation with the commission even if it were subjected to a military operation5 d3s1 the russian foreign minister igor ivanov warned today wednesday against using force against iraq which will destroy according to him seven years of difficult diplomatic work and will complicate the regional situation in the area6 d3s2 ivanov contended that carrying out air strikes against iraq who refuses to cooperate with the united nations inspectors will end the tremendous work achieved by the international group during the past seven years and will complicate the situation in the region 7 d3s3 nevertheless ivanov stressed that baghdad must resume working with the special commission in charge of disarming the iraqi weapons of mass destruction 8 d4s1 the special representative of the united nations secretarygeneral in baghdad prakash shah announced today wednesday after meeting with the iraqi deputy prime minister tariq aziz that iraq refuses to back down from its decision to cut off cooperation with the disarmament inspectors9 d5s1 british prime minister tony blair said today sunday that the crisis between the international community and iraq did not end and that britain is still ready prepared and able to strike iraq 10 d5s2 in a gathering with the press held at the prime ministers office blair contended that the crisis with iraq will not end until iraq has absolutely and unconditionally respected its commitments towards the united nations11 d5s3 a spokesman for tony blair had indicated that the british prime minister gave permission to british air force tornado planes stationed in kuwait to join the aerial bombardment against iraq this does not make any difference in the computation of the principal eigenvectorwe call this new measure of sentence similarity lexical pagerank or lexpageranktable 3 shows the lexpagerank scores for the graphs in figure 2 setting the damping factor to for comparison centroid score for each sentence is also shown in the tableall the numbers are normalized so that the highest ranked sentence gets the score it is obvious from the figures that threshold choice affects the lexpagerank rankings 
of some sentencesthe graphbased centrality approach we have introduced has several advantages over centroidfirst of all it accounts for information subsumption among sentencesif the information content of a sentence subsumes another sentence in a cluster it is naturally preferred to include the one that contains more information in the summarythe degree of a node in the cosine similarity graph is an indication of how much common information the sentence has with other sentencessentence d4s1 in figure 1 gets the highest score since it almost subsumes the information in the first two sentences of the cluster and has some common information with othersanother advantage is that it prevents unnaturally high idf scores from boosting up the score of a sentence that is unrelated to the topicalthough the frequency of the words are taken into account while computing the centroid score a sentence that contains many rare words with high idf values may get a high centroid score even if the words do not occur elsewhere in the clusterwe used duc 2004 data in our experimentsthere are 2 generic summarization tasks in duc 2004 which are appropriate for the purpose of testing our new feature lexpageranktask 2 involves summarization of 50 tdt english clustersthe goal of task 4 is to produce summaries of machine translation output of 24 arabic tdt documentsfor evaluation we used the new automatic summary evaluation metric rouge1 which was used for the first time in duc 2004rouge is a recallbased metric for fixedlength summaries which is based on ngram cooccurenceit reports separate scores for 1 2 3 and 4gram and also for longest common subsequence cooccurencesamong these different scores unigrambased rouge score has been shown to agree with human judgements most we show three of the rouge metrics in our experiment results rouge1 rouge2 and rougew there are 8 different human judges for duc 2004 task 2 and 4 for duc 2004 task 4however a subset of exactly 4 different human judges produced model summaries for any given clusterrouge requires a limit on the length of the summaries to be able to make a fair evaluationto stick with the duc 2004 specifications and to be able to compare our system with human summaries and as well as with other duc participants we produced 665byte summaries for each cluster and computed rouge scores against human summariesmead2 is a publicly available toolkit for extractive multidocument summarizationalthough it comes as a centroidbased summarization system by default its feature set can be extended to implement other methodsthe mead summarizer consists of three componentsduring the first step the feature extractor each sentence in the input document is converted into a feature vector using the userdefined featuressecond the feature vector is converted to a scalar value using the combinerat the last stage known as the reranker the scores for sentences included in related pairs are adjusted upwards or downwards based on the type of relation between the sentences in the pairreranker penalizes the sentences that are similar to the sentences already included in the summary so that a better information coverage is achievedthree default features that comes with the mead distribution are centroid position and lengthposition is the normalized value of the position of a sentence in the document such that the first sentence of a document gets the maximum position value of 1 and the last sentence gets the value 0length is not a real feature score but a cutoff value that ignores the sentences 
shorter than the given thresholdseveral rerankers are implemented in meadwe observed the best results with maximal marginal relevance reranker and the default reranker of the system based on crosssentence informational subsumption all of our experiments shown in section 43 use csis rerankera mead policy is a combination of three components the command lines for all features the formula for converting the feature vector to a scalar and the command line for the rerankera sample policy might be the one shown in figure 4this example indicates the three default mead features and our new lexpagerank feature used in our experimentsour lexpagerank implementation requires the cosine similarity threshold in the example as an argumenteach number next to a feature name shows the relative weight of that feature the reranker in the example is a wordbased mmr reranker with a cosine similarity threshold 05we implemented the degree and lexpagerank methods and integrated into the mead system as new featureswe normalize each feature so that the sentence with the maximum score gets the value 1we ran mead with several policies with different feature weights and combinations of featureswe fixed length cutoff at 9 and the weight of the position feature at 1 in all of the policieswe did not try a weight higher than 20 for any of the features since our earlier observations on mead showed that too high feature weights results in poor summariestable 2 and table 3 show the rouge scores we have got in the experiments with using lexpagerank degree and centroid in tasks 2 and 4 respectively sorted by rouge1 scoreslprxty indicates a policy in which the weight for lexpagerank is and is used as thresholddegreexty is similar except that degree of a node in the similarity graph is used instead of its lexpagerank scorefinally cx shows a policy with centroid weight we also include two baselines for each data setrandom indicates a method where we have picked random sentences from the cluster to produce a summarywe have performed five random runs for each data setthe results in the tables are for the median runssecond baseline shown as leadbased in the tables is using only the position feature without any centrality methodthis is tantamount to producing leadbased summaries which is a widely used and very challenging baseline in the text summarization community the top scores we have got in all data sets come from our new methodsthe results provide strong evidence that degree and lexpagerank are better than centroid in multidocument generic text summarizationhowever it is hard to say that degree and lexpagerank are significantly different from each otherthis is an indication that degree may already be a good enough measure to assess the centrality of a node in the similarity graphconsidering the relatively low complexity of degree centrality it still serves as a plausible alternative when one needs a simple implementationcomputation of degree can be done on the fly as a side product of lexpagerank just before the power method is applied on the similarity graphanother interesting observation in the results is the effect of thresholdmost of the top rouge scores belong to the runs with the threshold and the runs with threshold are worse than the others most of the timethis is due to the information loss in the similarity graphs as we move to higher thresholds as discussed in section 3as a comparison with the other summarization systems we present the official scores for the top five duc 2004 participants and the human summaries in 
Table 4 and Table 5 for tasks 2 and 4, respectively. Our top few results for each task are either better than, or statistically indistinguishable from, the best system in the official runs, considering the 95% confidence interval. We have presented a novel approach to defining sentence centrality based on graph-based prestige scoring of sentences. Constructing the similarity graph of sentences provides a better view of the important sentences than the centroid approach, which is prone to overgeneralization of the information in a document cluster. We have introduced two different methods, degree and LexPageRank, for computing prestige in similarity graphs. The results of applying these methods to extractive summarization are quite promising. Even the simplest approach we have taken, degree centrality, is a good enough heuristic to perform better than lead-based and centroid-based summaries.
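To make the graph construction and degree centrality described above concrete, here is a minimal Python sketch. It is not the authors' implementation: the function names, the plain term-frequency bag-of-words cosine (the paper's centroid-related machinery uses tf-idf weighting), and the example threshold of 0.2 are assumptions made for illustration only.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[w] * b[w] for w in common)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def similarity_graph(sentences, threshold=0.2):
    """Undirected graph: an edge links sentence pairs whose cosine exceeds the threshold."""
    bows = [Counter(s.lower().split()) for s in sentences]
    n = len(sentences)
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(bows[i], bows[j]) > threshold:
                graph[i].add(j)
                graph[j].add(i)
    return graph

def degree_centrality(graph):
    """Degree of each node in the thresholded similarity graph."""
    return {i: len(neighbors) for i, neighbors in graph.items()}
```

Building such graphs at several thresholds reproduces the trade-off noted above: a threshold that is too low admits weak similarities, while one that is too high discards most of the similarity structure in the cluster.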
W04-3247
LexPageRank: Prestige in Multi-Document Text Summarization. Multi-document extractive summarization relies on the concept of sentence centrality to identify the most important sentences in a document. Centrality is typically defined in terms of the presence of particular important words or in terms of similarity to a centroid pseudo-sentence. We consider an approach for computing sentence importance based on the concept of eigenvector centrality that we call LexPageRank. In this model, a sentence connectivity matrix is constructed based on cosine similarity; if the cosine similarity between two sentences exceeds a particular predefined threshold, a corresponding edge is added to the connectivity matrix. We provide an evaluation of our method on DUC 2004 data. The results show that our approach outperforms centroid-based summarization and is quite successful compared to other summarization systems. We propose LexPageRank, an approach for computing sentence importance based on the concept of eigenvector centrality.
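Complementing the degree-centrality sketch above, the eigenvector-centrality score can be approximated with a simple power iteration over the same undirected similarity graph. This is an illustrative sketch rather than the authors' code: the function name, the convergence tolerance, and the probability-normalized teleport term (1 - d)/N are my assumptions (the formula quoted in the text uses an unnormalized 1 - d term, which only rescales the scores).

```python
def lexpagerank(graph, damping=0.85, eps=1e-4, max_iter=100):
    """Power iteration on an undirected similarity graph.
    graph: {node: set(neighbors)}, as built by similarity_graph above."""
    n = len(graph)
    scores = {i: 1.0 / n for i in graph}
    for _ in range(max_iter):
        # Each neighbor splits its current score evenly among its own neighbors.
        new = {
            i: (1.0 - damping) / n
               + damping * sum(scores[j] / len(graph[j]) for j in graph[i])
            for i in graph
        }
        if max(abs(new[i] - scores[i]) for i in graph) < eps:
            return new
        scores = new
    return scores
```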
statistical significance tests for machine translation evaluation if two translation systems differ differ in performance on a test set can we trust that this indicates a difference in true system quality to answer this question we describe bootstrap resampling methods to compute statistical significance of test results and validate them on the concrete example of the even for small test sizes of only 300 sentences our methods may give us assurances that test result differences are real recently the field of machine translation has been changed by the emergence both of effective statistical methods to automatically train machine translation systems from translated text sources and of reliable automatic evaluation methodsmachine translation systems can now be built and evaluated from black box tools and parallel corpora with no human involvement at allthe evaluation of machine translation systems has changed dramatically in the last few yearsinstead of reporting human judgment of translation quality researchers now rely on automatic measures most notably the bleu score which measures ngram overlap with reference translationssince it has been shown that the bleu score correlates with human judgment an improvement in bleu is taken as evidence for improvement in translation qualitybuilding the tools for any translation system involves many iterations of changes and performance testingit is important to have a method at hand that gives us assurances that the observed increase in the test score on a test set reflects true improvement in system qualityin other words we need to be able to gauge if the increase in score is statistically significantsince complex metrics such as bleu do not lend themselves to an analytical technique for assessing statistical significance we propose bootstrap resampling methodswe also provide empirical evidence that the estimated significance levels are accurate by comparing different systems on a large number of test sets of various sizesin this paper after providing some background we will examine some properties of the widely used bleu metric discuss experimental design introduce bootstrap resampling methods for statistical significance estimation and report on experimental results that demonstrate the accuracy of the methodsstatistical machine translation was introduced by work at ibm brown et al 1990 1993currently the most successful such systems employ socalled phrasebased methods that translate input text by translating sequences of words at a time och 2002 zens et al 2002 koehn et al 2003 vogel et al 2003 tillmann 2003 phrasebased machine translation systems make use of a language model trained for the target language and a translation model trained from a parallel corpusthe translation model is typically broken down into several components eg reordering model phrase translation model and word translation modelto adequately evaluate the quality of any translation is difficult since it is not entirely clear what the focus of the evaluation should besurely a good translation has to adequately capture the meaning of the foreign originalhowever pinning down all the nuances is hard and often differences in emphasis are introduced based on the interpretation of the translatorat the same time it is desirable to have fluent output that can be read easilythese two goals adequacy and fluency are the main criteria in machine translation evaluationhuman judges may be asked to evaluate the adequacy and fluency of translation output but this is a laborious and expensive 
taskpapineni et al 2002 addressed the evaluation problem by introducing an automatic scoring metric called bleu which allowed the automatic calculation of translation qualitythe system output is compared against a reference translation of the same source textformally the bleu metric is computed as followsgiven the precision of ngrams of size up to the length of the test set in words and the length of the reference translation in words the effectiveness of the bleu metric has been demonstrated by showing that it correlates with human judgmentlet us highlight two properties of the bleu metric the reliance on higher ngrams and the brevity penalty bpfirst look at table 1six different systems are compared here while the unigram precision of the three systems hovers around 60 the difference in 4gram precision is much largerthe finnish system has only roughly half of the 4gram precision of the spanish system this is the because for the relative large distance in overall bleu 1higher ngrams measure not only syntactic cohesion and semantic adequacy of the output but also give discriminatory power to the metricthe other property worth noting is the strong influence of the brevity penaltysince bleu is a precision based method the brevity penalty assures that a system does not only translate fragments of the test set of which it is confident resulting in high precisionis has become common practice to include a word penalty component in statistical machine translation system that gives bias to either longer or shorter outputthis is especially relevant for the bleu score that harshly penalizes translation output that is too shortto illustrate this point see figure 1bleu scores for both spanish and portuguese system drop off when a large word penalty is introduced into the translation model forcing shorter outputthis is not the case for a similar metric gtm an ngram precisionrecall metric proposed by melamed et al 2003 that does not have an explicit brevity penaltythe bleu metric also works with multiple reference translationshowever we often do not have the luxury of having multiple translations of the same source materialfortunately it has not been shown so far that having only a single reference translation causes serious problemswhile bleu has become the most popular metric for machine translation evaluation some of its shortcomings have become apparent it does not work on single sentences since 4gram precision is often 0it is also hard to interpretwhat a bleu score of 289 means is not intuitive and depends eg on the number of reference translation usedsome researchers have recently used relative human bleu scores by comparing machine bleu scores with high quality human translation scoreshowever the resulting numbers are unrealistically highin this section we describe the experimental framework of our workwe also report on a number of preliminary experiments that give us some intuition on variance of test scores on different test setswe carry out experiments using a phrasebased statistical machine translation system koehn et al 2003 koehn 2004we train this system on the europarl corpus a parallel corpus in 11 european languages of 2030 million words per language koehn 2002since the focus of this paper is the comparison of the performance of different systems we need a set of translation systemshere we resort to a trick instead of comparing different machine translation methods trained on the same training data we train the same machine translation method on different parallel copora language pairs 
with english as the target language and any of the 10 other languages as the source languagethen we assemble a test set that is sentence aligned across all 11 languagesduring evaluation the say spanishenglish and danishenglish system each translate a sentence that correspond to the same english reference translationhence we can compare how well the english output of the spanishenglish translation system matches the reference sentence vs how well the english output of the portugueseenglish matches the reference sentencewe would like to stress that comparing performance of a method that translates text from different languages instead of using different translation methods on the same input text is irrelevant for the purpose of this paperwe are comparing output of different systems in either caseas an alternative we may also use a single language pair and different systems say by using different parameter settingsbut we feel that this would leave us with many arbitrary choices that we would like to avoid for the sake of clarity of our argumentwe already reported some bleu scores earlierthese were computed on a 30000 sentence test set such a huge test set is very uncommon2 since translating such a large amount of text is computationally very expensiveit may take hours or even days which does not contribute to a fast turnaround system development processtherefore we would like to be able to work with smaller test setsthe tradeoff between fast testing and having meaningful results is at the very heart of this paper the statistical significance tests we propose give us the means to assess the significance of test resultslet us start with the following experiment we break up the test set into 100 blocks of 300 consecutive sentences each translate each block and compute the bleu score for each blockwe plot in figure 2 the bleu scores for each of the 100 blocks for the spanish systemthe bleu scores for the 100 blocks vary from 21 to 37many factors influence why some sentences are easier to translate than others unknown words sentence length degree of syntactic and semantic divergence and how the input and reference translation were generated in case of europarl they may be both translations from a third languagefactors that influence translation difficulty may be clusteredfor instance the original language or the topic and style usually stays the same over many sentences in sequencewhen we collect only 300 sentences in sequence certain therefore better test sets of 300 sentences may be constructed by sampling these sentences from different parts of the corpusin an effort to collect better test sets we now place into test set the sentences let us call this a broad sampleagain we measure translation quality on each of these 100 broad samplesfigure 3 shows that the bleu scores are now closer together ranging from 27 to 31obtaining a bleu score for a translation system on a given test set is not very meaningful by itselfwe want to use the metric to compare different translation systemsin a third preliminary experiment we compared for each of the 300 broad samples the bleu score for the spanish system against the bleu score for the danish systemresults are in figure 4the spanish system is better by up to 4in only one sample sample no10 the danish system outperforms the spanish system by 007let us stress in conclusion at this point that when working with small test sets it is important to obtain a representative sample as much as this is possibletranslation quality of neighboring sentences correlates 
positively therefore we want to chose test sentences from different parts of a larger setthe purpose of experimental testing is to assess the true translation quality of a system on text from a certain domainhowever even with the qualifier from a certain domain this is an abstract concept since it has to be computed on all possible sentences in that domainin practice we will always just be able to measure the performance of the system on a specific samplefrom this test result we would like to conclude what the true translation performance isstatistical significance is an estimate of the degree to which the true translation quality lies within a confidence interval around the measurement on the test setsa commonly used level of reliability of the result is 95 also written as called plevellet us explore this notion on a simpler metric of translation quality than bleua common metric in older machine translation papers is a score from 1 to 5 for each sentence or even simpler a human judge deemed a translation either as correct or wrongif say 100 sentence translations are evaluated and 30 are found correct what can we say about the true translation quality of the systemour best guess is 30 but that may be a few percent offhow much off is the question to be answered by statistical significance testsgiven a set of sentences we can compute the sample mean and variance of the individual sentence scores what we are really interested in however is the true mean let us assume that the sentence scores are distributed according to the normal distributionthis implies that a sentence score is independent from other sentence scoressince we do not know the true mean and variance we can not model the distribution of sentence scores with the normal distributionhowever we can use students tdistribution which approximates the normal distribution for large see figure 5 for an illustration given the sample mean and sample variance we estimate the probability distribution for true translation qualitywe are now interested in a confidence interaround the mean sentence scorethe true translation quality the factor depends on the desired plevel of statistical significance and the sample sizesee table 2 for typical valueswe described the standard method to compute statistical significance for some machine translation evaluation metricsunfortunately this method to compute confidence intervals does not work for the bleu metric since the bleu metric is not the mean of single sentence scoresrecall that we want to answer the following question given a test result of bleu we want to compute with a confidence that the true bleu score lies in an interval instead of using an analytical method to compute confidence intervals for the bleu score we resort to a randomized method called bootstrap resamplingbootstrap resampling has a long tradition in the field of statistics refer to efron and tibshirani 1994 for a general introduction and press et al 2002 for a typical implementationsome recent papers on statistical machine translation hint on the use of bootstrap resampling for assessing statistical significance of test results germann 2003 och 2003 kumar and byrne 2004 but do not lay out their methodologythe intuition behind bootstrap resampling goes as follows assume that we can only test translation performance on a test set of sentencesthese 300 sentences are randomly drawn from the worldgiven a test set we can compute a bleu scorethen we draw a second test set of 300 sentences and compute its bleu scoreif we do this repeatedly 
we get a number of different bleu scores not unlike the bleu scores displayed in figure 3if we do this for a large number of test sets we can sort the corresponding bleu scoresif we drop the top 25 and bottom 25 of bleu scores we have the remaining 95 of bleu scores in an interval the law of large numbers dictates that with an increasingly large number of bleu scores the interval approaches the 95 confidence interval for scores of test sets of size 300of course having to translate and score sets of 300 sentences repeatedly does not save anything in terms of computational translation cost and the need for a large set of potential sentenceswe therefore take the following leap instead of the selected the 300 sentences in each test set from an infinite set of test sentences we draw them from the same set of 300 sentences with replacementlet us clearly state this assumption assumption estimating the confidence interval from a large number of test sets with test sentences drawn from a set of test sentences with replacement is as good as estimating the confidence interval for test sets size from a large number of test sets with test sentences drawn from an infinite set of test sentencesthe benefit of this assumption is clear we only have to translate sentenceswe will now provide empirical evidence that we can make this assumption true bleu on 100 broad samples of 300 sentences assuming the 30000 sentence bleu as true score three mistakes are made on test set no10 81 and 88 fidence intervals computed for 100 test sets of 300 sentencesfor each of the 100 test sets we compute the confidence interval as described before we repeatedly generate bleu scores on test sets of 300 sentences drawn from this one test set of 300 sentenceswe use the 1000 bleu scores to estimate the confidence intervalwe drop the top 25 and bottom 25 bleu scores so that the displayed interval ranges from the 26th best bleu score to the 975th bestfor the first test set we obtain a confidence interval of 265307 for the second test set a confidence interval of 275330 and so onwe do not know the true bleu score for this system which has to be computed on a nearinfinite number of test sentencesbut the bleu score computed on 30000 test sentences is as good as any it is as you recall from table 1 289for all but three test set this neartrue test score lies within the estimated confidence intervalloosely speaking the 95 confidence level is actually 97 correctas stated earlier the value of scoring metrics comes from being able to compare the quality of different translation systemstypically we want to compare two systemswe translate the same test set with the two systems and measure the translation quality using an evaluation metricone system will fare better than the other with some difference in scorecan we conclude that the better scoring system is truly betterif the differences in score are small we intuitively have some doubt ples we make a 95 statistically significant conclusion that the spanish system is better than the danish create new virtual test sets by drawing sentences with replacement from the collectionfor each set we compute the evaluation metric score for both systemswe note which system performs betterif say one system outperforms the other system 95 of the time we draw the conclusion that it is better with 95 statistical significancewe call this method paired bootstrap resampling since we compare a pair of systemswe carry out experiments using this method using the bleu score to compare the spanishenglish system with 
the danishenglish systemresults are displayed in figure 7we estimate statistical significance for 100 different test sets with 300 sentences each for 65 samples we draw the conclusion the spanish system is better than the danish with at least 95 statistical significancerecall that the bleu score difference on the huge 30000 sentence test set is 20 in this particular case a small 300 sentence test set is often sufficient to detect the superiority of one of the systems with statistical significanceif the true difference in translation performance is closer we are less likely to draw conclusions from such a small test setsee figure 8 which compares the portuguese and the danish systemtheir bleu score difference on the huge 30000 sentence test set ples we make a 95 statistically significant conclusion that the portuguese system is better than the danish and 1 conclusion to the opposite is only 05using paired bootstrap resampling we drawn 13 conclusions 12 correct conclusions that the portuguese system is better and one wrong conclusion that the danish system is betterthat we draw one wrong conclusion is unfortunate but should not come as a surprise when we talk about 95 statistical singificance levelsat this level 1 conclusion in 20 is expected to be wrongwe introduced two methods using bootstrap resamplingone method estimates bounds for the true performance level of a systemthe other method paired bootstrap resampling estimates how confidently we can draw the conclusion from a test result that one system outperforms anotherwe would now like to provide experimental evidence that these estimates are indeed correct at the specified level of statistical significancewe carried out a large number of experiments to estimate statistical significance for system comparisonswe compared 9 different pairs of systems with different test set sizesdetailed results can be found in table 3the test set sizes vary from 100 sentences to 3000 sentencesas described earlier the systems translate aligned test sentences from different source languages into englishfirst we want to answer the questions how often can we draw conclusions with 95 statistical significancehow often are we correcteven for small test sets of size 300 sentences we can reliably draw the right conclusion if the true bleu score difference is at level and accuracy of the conclusions least 23note that we make no general statements about relation of test set size and statistical significance bleu score differences this particular finding is specific to our test scenario and depends highly on how similar the systems areonly one conclusion is wrong for test sets of size 300 the already mentioned conclusion that the danish system outperforms the portuguese systemfor test sets with only 100 sentences we observer more false conclusions which suggests that this might be too small for a test setwhile the 95 statistical significance level is the most commonly used for historical reasons we want to validate as well the accuracy of the bootstrap resampling method at different statistical significance levelstable 4 displays our findingsfor each conclusion we check into what statistical significance range it falls eg 90949then we check for all conclusions with an interval how many are correct ie consistent with the conclusion drawn from the much larger 30000 sentence test setthe numbers suggest that the method is fairly accurate and errs on the side of cautionfor instance when we conclude a statistical significance level of 90949 we are actually drawing the 
right conclusion 95% of the time. Having a trusted experimental framework is essential for drawing conclusions on the effects of system changes (for instance: do not test on train, do not use the same test set repeatedly, etc.). We stressed the importance of assembling test sets from different parts of a larger pool of sentences. We discussed some properties of the widely used BLEU score, especially the effect of the brevity penalty and the role of larger n-grams. (Table: system comparisons and different sample sizes; 12-1 means 12 correct and 1 wrong conclusion; the 30,000 test sentences are divided into 300, 100, 50, and 10 samples, each of size 100, 300, 600, and 3000 sentences, respectively.) One important element of a solid experimental framework is a statistical significance test that allows us to judge if a change in score that comes from a change in the system truly reflects a change in overall translation quality. We applied bootstrap resampling to machine translation evaluation and described methods to compute statistical significance intervals and levels for machine translation evaluation metrics. We described how to compute statistical significance intervals for metrics such as BLEU for small test sets using bootstrap resampling methods. We provided empirical evidence that the computed intervals are accurate. Aided by the proposed methods, we hope that it becomes common practice in published machine translation research to report on the statistical significance of test results.
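A minimal sketch of the percentile bootstrap described above follows, under the assumption that a corpus-level scoring function (for example, corpus BLEU) is supplied by the caller; the function name, the 1000-sample default, and the percentile cut are illustrative choices rather than the paper's exact implementation.

```python
import random

def bootstrap_interval(hyps, refs, metric, n_samples=1000, confidence=0.95):
    """Percentile-bootstrap confidence interval for a corpus-level metric.
    hyps, refs: parallel lists of system outputs and reference translations.
    metric: callable(hyps, refs) -> float, e.g. a corpus-level BLEU scorer."""
    n = len(hyps)
    scores = []
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]          # draw with replacement
        scores.append(metric([hyps[i] for i in idx], [refs[i] for i in idx]))
    scores.sort()
    cut = int(((1.0 - confidence) / 2.0) * n_samples)           # e.g. 25 of 1000 per tail
    return scores[cut], scores[-cut - 1]
```

With 1000 resamples and a 95% level, the lowest and highest 25 scores are dropped, mirroring the procedure of reporting the interval from the 26th-best to the 975th-best score.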
W04-3250
Statistical Significance Tests for Machine Translation Evaluation. Automatic evaluation metrics for machine translation systems, such as BLEU, METEOR, and the related NIST metric, are becoming increasingly important in MT research and development. This paper presents a significance-test-driven comparison of n-gram-based automatic MT evaluation metrics. Statistical significance tests use bootstrapping methods to estimate the reliability of automatic machine translation evaluations. Based on this reliability estimation, we study the characteristics of different MT evaluation metrics and how to construct reliable and efficient evaluation suites.
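The system-comparison test described in this paper, paired bootstrap resampling, can be sketched in the same style. Again the function name and defaults are assumptions, the metric callable is supplied by the caller, and ties are counted for neither system, which is a simplification.

```python
import random

def paired_bootstrap(hyps_a, hyps_b, refs, metric, n_samples=1000):
    """Fraction of resampled test sets on which system A outscores system B.
    A value of, say, 0.95 is read as A being better at the 95% significance level."""
    n = len(refs)
    wins_a = 0
    for _ in range(n_samples):
        idx = [random.randrange(n) for _ in range(n)]
        sample_refs = [refs[i] for i in idx]
        score_a = metric([hyps_a[i] for i in idx], sample_refs)
        score_b = metric([hyps_b[i] for i in idx], sample_refs)
        if score_a > score_b:
            wins_a += 1
    return wins_a / n_samples
```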
textrank bringing order into texts in this paper we introduce textrank a graphbased ranking model for text processing and show how this model can be successfully used in natural language applications in particular we propose two innovative unsupervised methods for keyword and sentence extraction and show that the results obtained compare favorably with previously published results on established benchmarks graphbased ranking algorithms like kleinbergs hits algorithm or googles pagerank have been successfully used in citation analysis social networks and the analysis of the linkstructure of the world wide webarguably these algorithms can be singled out as key elements of the paradigmshift triggered in the field of web search technology by providing a web page ranking mechanism that relies on the collective knowledge of web architects rather than individual content analysis of web pagesin short a graphbased ranking algorithm is a way of deciding on the importance of a vertex within a graph by taking into account global information recursively computed from the entire graph rather than relying only on local vertexspecific informationapplying a similar line of thinking to lexical or semantic graphs extracted from natural language documents results in a graphbased ranking model that can be applied to a variety of natural language processing applications where knowledge drawn from an entire text is used in making local rankingselection decisionssuch textoriented ranking methods can be applied to tasks ranging from automated extraction of keyphrases to extractive summarization and word sense disambiguation in this paper we introduce the textrank graphbased ranking model for graphs extracted from natural language textswe investigate and evaluate the application of textrank to two language processing tasks consisting of unsupervised keyword and sentence extraction and show that the results obtained with textrank are competitive with stateoftheart systems developed in these areasgraphbased ranking algorithms are essentially a way of deciding the importance of a vertex within a graph based on global information recursively drawn from the entire graphthe basic idea implemented by a graphbased ranking model is that of voting or recommendationwhen one vertex links to another one it is basically casting a vote for that other vertexthe higher the number of votes that are cast for a vertex the higher the importance of the vertexmoreover the importance of the vertex casting the vote determines how important the vote itself is and this information is also taken into account by the ranking modelhence the score associated with a vertex is determined based on the votes that are cast for it and the score of the vertices casting these votesformally let be a directed graph with the set of vertices and set of edges where is a subset of for a given vertex let be the set of vertices that point to it and let be the set of vertices that vertex points to the score of a vertex is defined as follows where is a damping factor that can be set between 0 and 1 which has the role of integrating into the model the probability of jumping from a given vertex to another random vertex in the graphin the context of web surfing this graphbased ranking algorithm implements the random surfer model where a user clicks on links at random with a probability and jumps to a completely new page with probability the factor is usually set to 085 and this is the value we are also using in our implementationstarting from arbitrary values 
assigned to each node in the graph the computation iterates until convergence below a given threshold is achieved 1after running the algorithm a score is associated with each vertex which represents the importance of the vertex within the graphnotice that the final values obtained after textrank runs to completion are not affected by the choice of the initial value only the number of iterations to convergence may be differentit is important to notice that although the textrank applications described in this paper rely on an algorithm derived from googles pagerank other graphbased ranking algorithms such as eghits or positional function can be easily integrated into the textrank model although traditionally applied on directed graphs a recursive graphbased ranking algorithm can be also applied to undirected graphs in which case the outdegree of a vertex is equal to the indegree of the vertexfor loosely connected graphs with the number of edges proportional with the number of vertices undirected graphs tend to have more gradual convergence curvesfigure 1 plots the convergence curves for a randomly generated graph with 250 vertices and 250 edges for a convergence threshold of 00001as the connectivity of the graph increases convergence is usually achieved after fewer iterations and the convergence curves for directed and undirected graphs practically overlapin the context of web surfing it is unusual for a page to include multiple or partial links to another page and hence the original pagerank definition for graphbased ranking is assuming unweighted graphshowever in our model the graphs are build from natural language texts and may include multiple or partial links between the units that are extracted from textit may be therefore useful to indicate and incorporate into the model the strength of the connection between two vertices and as a weight added to the corresponding edge that connects the two verticesconsequently we introduce a new formula for graphbased ranking that takes into account edge weights when computing the score associated with a vertex in the graphnotice that a similar formula can be defined to integrate vertex weightsfigure 1 plots the convergence curves for the same sample graph from section 21 with random weights in the interval 010 added to the edgeswhile the final vertex scores differ significantly as compared to their unweighted alternatives the number of iterations to convergence and the shape of the convergence curves is almost identical for weighted and unweighted graphsto enable the application of graphbased ranking algorithms to natural language texts we have to build a graph that represents the text and interconnects words or other text entities with meaningful relationsdepending on the application at hand text units of various sizes and characteristics can be added as vertices in the graph eg words collocations entire sentences or otherssimilarly it is the application that dictates the type of relations that are used to draw connections between any two such vertices eg lexical or semantic relations contextual overlap etcregardless of the type and characteristics of the elements added to the graph the application of graphbased ranking algorithms to natural language texts consists of the following main steps in the following we investigate and evaluate the application of textrank to two natural language processing tasks involving ranking of text units a keyword extraction task consisting of the selection of keyphrases representative for a given text and a sentence 
extraction task consisting of the identification of the most important sentences in a text which can be used to build extractive summariesthe task of a keyword extraction application is to automatically identify in a text a set of terms that best describe the documentsuch keywords may constitute useful entries for building an automatic index for a document collection can be used to classify a text or may serve as a concise summary for a given documentmoreover a system for automatic identification of important terms in a text can be used for the problem of terminology extraction and construction of domainspecific dictionariesthe simplest possible approach is perhaps to use a frequency criterion to select the important keywords in a documenthowever this method was generally found to lead to poor results and consequently other methods were exploredthe stateoftheart in this area is currently represented by supervised learning methods where a system is trained to recognize keywords in a text based on lexical and syntactic featuresthis approach was first suggested in where parametrized heuristic rules are combined with a genetic algorithm into a system for keyphrase extraction genex that automatically identifies keywords in a documenta different learning algorithm was used in where a naive bayes learning scheme is applied on the document collection with improved results observed on the same data set as used in neither turney nor frank report on the recall of their systems but only on precision a 290 precision is achieved with genex for five keyphrases extracted per document and 183 precision achieved with kea for fifteen keyphrases per documentmore recently applies a supervised learning system to keyword extraction from abstracts using a combination of lexical and syntactic features proved to improve significantly over previously published resultsas hulth suggests keyword extraction from abstracts is more widely applicable than from full texts since many documents on the internet are not available as fulltexts but only as abstractsin her work hulth experiments with the approach proposed in and a new approach that integrates part of speech information into the learning process and shows that the accuracy of the system is almost doubled by adding linguistic knowledge to the term representationin this section we report on our experiments in keyword extraction using textrank and show that the graphbased ranking model outperforms the best published results in this problemsimilar to we are evaluating our algorithm on keyword extraction from abstracts mainly for the purpose of allowing for a direct comparison with the results she reports with her keyphrase extraction systemnotice that the size of the text is not a limitation imposed by our system and similar results are expected with textrank applied on fulltextsthe expected end result for this application is a set of words or phrases that are representative for a given natural language textthe units to be ranked are therefore sequences of one or more lexical units extracted from text and these represent the vertices that are added to the text graphany relation that can be defined between two lexical units is a potentially useful connection that can be added between two such verticeswe are using a cooccurrence relation controlled by the distance between word occurrences two vertices are connected if their corresponding lexical units cooccur within a window of maximum words where can be set anywhere from 2 to 10 wordscooccurrence links express relations 
between syntactic elements and similar to the semantic links found useful for the task of word sense disambiguation they represent cohesion indicators for a given textthe vertices added to the graph can be restricted with syntactic filters which select only lexical units of a certain part of speechone can for instance consider only nouns and verbs for addition to the graph and consequently draw potential edges based only on relations that can be established between nouns and verbswe experimented with various syntactic filters including all open class words nouns and verbs only etc with best results observed for nouns and adjectives only as detailed in section 32the textrank keyword extraction algorithm is fully unsupervised and proceeds as followsfirst compatibility of systems of linear constraints over the set of natural numberscriteria of compatibility of a system of linear diophantine equations strict inequations and nonstrict inequations are consideredupper bounds for components of a minimal set of solutions and algorithms of construction of minimal generating sets of solutions for all types of systems are giventhese criteria and the corresponding algorithms for constructing a minimal supporting set of solutions can be used in solving all the considered types systems and systems of mixed types the text is tokenized and annotated with part of speech tags a preprocessing step required to enable the application of syntactic filtersto avoid excessive growth of the graph size by adding all possible combinations of sequences consisting of more than one lexical unit we consider only single words as candidates for addition to the graph with multiword keywords being eventually reconstructed in the postprocessing phasenext all lexical units that pass the syntactic filter are added to the graph and an edge is added between those lexical units that cooccur within a window of wordsafter the graph is constructed the score associated with each vertex is set to an initial value of 1 and the ranking algorithm described in section 2 is run on the graph for several iterations until it converges usually for 2030 iterations at a threshold of 00001once a final score is obtained for each vertex in the graph vertices are sorted in reversed order of their score and the top vertices in the ranking are retained for postprocessingwhile may be set to any fixed value usually ranging from 5 to 20 keywords limits the number of keywords extracted with his genex system to five we are using a more flexible approach which decides the number of keywords based on the size of the textfor the data used in our experiments which consists of relatively short abstracts is set to a third of the number of vertices in the graphduring postprocessing all lexical units selected as potential keywords by the textrank algorithm are marked in the text and sequences of adjacent keywords are collapsed into a multiword keywordfor instance in the text matlab code for plotting ambiguity functions if both matlab and code are selected as potential keywords by textrank since they are adjacent they are collapsed into one single keyword matlab codefigure 2 shows a sample graph built for an abstract from our test collectionwhile the size of the abstracts ranges from 50 to 350 words with an average size of 120 words we have deliberately selected a very small abstract for the purpose of illustrationfor this example the lexical units found to have higher importance by the textrank algorithm are numbers inequations linear diophantine upper bounds strict 
notice that this ranking is different than the one rendered by simple word frequenciesfor the same text a frequency approach provides the following topranked lexical units systems types solutions minimal linear inequations algorithms all other lexical units have a frequency of 1 and therefore cannot be ranked but only listedthe data set used in the experiments is a collection of 500 abstracts from the inspec database and the corresponding manually assigned keywordsthis is the same test data set as used in the keyword extraction experiments reported in the inspec abstracts are from journal papers from computer science and information technologyeach abstract comes with two sets of keywords assigned by professional indexers controlled keywords restricted to a given thesaurus and uncontrolled keywords freely assigned by the indexerswe follow the evaluation approach from and use the uncontrolled set of keywordsin her experiments hulth is using a total of 2000 abstracts divided into 1000 for training 500 for development and 500 for test2since our approach is completely unsupervised no trainingdevelopment data is required and we are only using the test docu2many thanks to anette hulth for allowing us to run our algorithm on the data set used in her keyword extraction experiments and for making available the trainingtestdevelopment data split ments for evaluation purposesthe results are evaluated using precision recall and fmeasurenotice that the maximum recall that can be achieved on this collection is less than 100 since indexers were not limited to keyword extraction as our system is but they were also allowed to perform keyword generation which eventually results in keywords that do not explicitly appear in the textfor comparison purposes we are using the results of the stateoftheart keyword extraction system reported in shortly her system consists of a supervised learning scheme that attempts to learn how to best extract keywords from a document by looking at a set of four features that are determined for each candidate keyword withindocument frequency collection frequency relative position of the first occurrence sequence of part of speech tagsthese features are extracted from both training and test data for all candidate keywords where a candidate keyword can be ngrams npchunks patterns the learning system is a rule induction system with baggingour system consists of the textrank approach described in section 31 with a cooccurrence windowsize set to two three five or ten wordstable 1 lists the results obtained with textrank and the best results reported in for each method the table lists the total number of keywords assigned the mean number of keywords per abstract the total number of correct keywords as evaluated against the set of keywords assigned by professional indexers and the mean number of correct keywordsthe table also lists precision recall and fmeasurediscussiontextrank achieves the highest precision and fmeasure across all systems although the recall is not as high as in supervised methods possibly due the limitation imposed by our approach on the number of keywords selected which is not made in the supervised systemaa larger window does not seem to help on the contrary the larger the window the lower the precision probably explained by the fact that a relation between words that are further apart is not strong enough to define a connection in the text graphexperiments were performed with various syntactic filters including all open class words nouns and adjectives and nouns 
only and the best performance was achieved with the filter that selects nouns and adjectives onlywe have also experimented with a setting where no part of speech information was added to the text and all words except a predefined list of stopwords were added to the graphthe results with this setting were significantly lower than the systems that consider part of speech information which corroborates with previous observations that linguistic information helps the process of keyword extraction experiments were also performed with directed graphs where a direction was set following the natural flow of the text eg one candidate keyword recommends the candidate keyword that follows in the text keeping the restraint imposed by the cooccurrence relationwe have also tried the reversed direction where a lexical unit points to a previous token in the texttable 1 includes the results obtained with directed graphs for a cooccurrence window of 2regardless of the direction chosen for the arcs results obtained with directed graphs are worse than results obtained with undirected graphs which suggests that despite a natural flow in running text there is no natural direction that can be established between cooccurring wordsoverall our textrank system leads to an fmeasure higher than any of the previously proposed systemsnotice that textrank is completely unsupervised and unlike other supervised systems it relies exclusively on information drawn from the text itself which makes it easily portable to other text collections domains and languagesthe other textrank application that we investigate consists of sentence extraction for automatic summarizationin a way the problem of sentence extraction can be regarded as similar to keyword extraction since both applications aim at identifying sequences that are more representative for the given textin keyword extraction the candidate text units consist of words or phrases whereas in sentence extraction we deal with entire sentencestextrank turns out to be well suited for this type of applications since it allows for a ranking over text units that is recursively computed based on information drawn from the entire textto apply textrank we first need to build a graph associated with the text where the graph vertices are representative for the units to be rankedfor the task of sentence extraction the goal is to rank entire sentences and therefore a vertex is added to the graph for each sentence in the textthe cooccurrence relation used for keyword extraction cannot be applied here since the text units in consideration are significantly larger than one or few words and cooccurrence is not a meaningful relation for such large contextsinstead we are defining a different relation which determines a connection between two sentences if there is a similarity relation between them where similarity is measured as a function of their content overlapsuch a relation between two sentences can be seen as a process of recommendation a sentence that addresses certain concepts in a text gives the reader a recommendation to refer to other sentences in the text that address the same concepts and therefore a link can be drawn between any two such sentences that share common contentthe overlap of two sentences can be determined simply as the number of common tokens between the lexical representations of the two sentences or it can be run through syntactic filters which only count words of a certain syntactic category eg all open class words nouns and verbs etcmoreover to avoid promoting 
long sentences we are using a normalization factor and divide the content overlap textrank extractive summary hurricane gilbert swept toward the dominican republic sunday and the civil de fense alerted its heavily populated south coast to prepare for high winds heavy rains and high seasthe national hurricane center in miami reported its position at 2 am sunday at latitude 161 north longitude 675 west about 140 miles south of ponce puerto rico and 200 miles southeast of santo domingothe national weather service in san juan puerto rico said gilbert was moving westward at 15 mph with a quotbroad area of cloudiness and heavy weatherquot rotating around the center of the stormstrong winds associated with gilbert brought coastal flooding strong southeast winds and up to 12 feet to puerto ricos south coastmanual abstract i hurricane gilbert is moving toward the dominican republic where the residents of the south coast especially the barahona province have been alerted to prepare for heavy rains and high wind and seastropical storm gilbert formed in the eastern carribean and became a hurricane on saturday nightby 2 am sunday it was about 200 miles southeast of santo domingo and moving westward at 15 mph with winds of 75 mphflooding is expected in puerto rico and in the virgin islandsthe second hurricane of the season florence is now over the southern united states and down graded to a tropical stormmanual abstract ii tropical storm gilbert in the eastern carribean strenghtened into a hurricane saturday nightthe national hurricane center in miami reported its position at 2 am sunday to be about 140 miles south of puerto rico and 200 miles southeast of santo domingoit is moving westward at 15 mph with a broad area of cloudiness and heavy weather with sustained winds of 75 mph gusting to 92 mphthe dominican republics civil defense alerted that countrys heavily populated south coast and the national weather service in san juan puerto rico issued a flood watch for puerto rico and the virgin islands until at least 6 pm sunday of two sentences with the length of each sentenceformally given two sentences and with a sentence being represented by the set of words that appear in the sentence the similarity of and is defined as other sentence similarity measures such as string kernels cosine similarity longest common subsequence etc are also possible and we are currently evaluating their impact on the summarization performancethe resulting graph is highly connected with a weight associated with each edge indicating the strength of the connections established between various sentence pairs in the textthe text is therefore represented as a weighted graph and consequently we are using the weighted graphbased ranking formula introduced in section 22after the ranking algorithm is run on the graph sentences are sorted in reversed order of their score and the top ranked sentences are selected for inclusion in the summaryfigure 3 shows a text sample and the associated weighted graph constructed for this textthe figure also shows sample weights attached to the edges connected to vertex 94 and the final textrank score computed for each sentencethe sentences with the highest rank are selected for inclusion in the abstractfor this sample article the sentences with ids 9 15 16 18 are extracted resulting in a summary of about 100 words which according to automatic evaluation measures is ranked the second among summaries produced by 15 other systems we evaluate the textrank sentence extraction algorithm on a singledocument 
summarization task using 567 news articles provided during the document understanding evaluations 2002 for each article textrank generates an 100words summary the task undertaken by other systems participating in this single document summarization taskfor evaluation we are using the rouge evaluation toolkit which is a method based on ngram statistics found to be highly correlated with human evaluations two manually produced reference summaries are provided and used in the evaluation process5fifteen different systems participated in this task and we compare the performance of textrank with the top five performing systems as well as with the baseline proposed by the duc evaluators consisting of a 100word summary constructed by taking the first sentences in each articletable 2 shows the results obtained on this data set of 567 news articles including the results for textrank the baseline and the results of the top five performing systems in the duc 2002 single document summarization task textrank top 5 duc 2002 systems and baselineevaluation takes into account all words stemmed words stemmed words and no stopwordsdiscussiontextrank succeeds in identifying the most important sentences in a text based on information exclusively drawn from the text itselfunlike other supervised systems which attempt to learn what makes a good summary by training on collections of summaries built for other articles textrank is fully unsupervised and relies only on the given text to derive an extractive summary which represents a summarization model closer to what humans are doing when producing an abstract for a given documentnotice that textrank goes beyond the sentence connectivity in a textfor instance sentence 15 in the example provided in figure 3 would not be identified as important based on the number of connections it has with other vertices in the graph but it is identified as important by textrank another important aspect of textrank is that it gives a ranking over all sentences in a text which means that it can be easily adapted to extracting very short summaries setting of rouge which was found to have the highest correlation with human judgments at a confidence level of 95only the first 100 words in each summary are considered sentence or longer more explicative summaries consisting of more than 100 wordswe are also investigating combinations of keyphrase and sentence extraction techniques as a method for building shortlong summariesfinally another advantage of textrank over previously proposed methods for building extractive summaries is the fact that it does not require training corpora which makes it easily adaptable to other languages or domainsintuitively textrank works well because it does not only rely on the local context of a text unit but rather it takes into account information recursively drawn from the entire text through the graphs it builds on texts textrank identifies connections between various entities in a text and implements the concept of recommendationa text unit recommends other related text units and the strength of the recommendation is recursively computed based on the importance of the units making the recommendationfor instance in the keyphrase extraction application cooccurring words recommend each other as important and it is the common context that enables the identification of connections between words in textin the process of identifying important sentences in a text a sentence recommends another sentence that addresses similar concepts as being useful for the overall 
understanding of the textthe sentences that are highly recommended by other sentences in the text are likely to be more informative for the given text and will be therefore given a higher scorean analogy can be also drawn with pageranks random surfer model where a user surfs the web by following links from any given web pagein the context of text modeling textrank implements what we refer to as text surfing which relates to the concept of text cohesion from a certain concept in a text we are likely to follow links to connected concepts that is concepts that have a relation with the current concept this also relates to the knitting phenomenon facts associated with words are shared in different parts of the discourse and such relationships serve to knit the discourse togetherthrough its iterative mechanism textrank goes beyond simple graph connectivity and it is able to score text units based also on the importance of other text units they link tothe text units selected by textrank for a given application are the ones most recommended by related text units in the text with preference given to the recommendations made by most influential ones ie the ones that are in turn highly recommended by other related unitsthe underlying hypothesis is that in a cohesive text fragment related text units tend to form a web of connections that approximates the model humans build about a given context in the process of discourse understandingin this paper we introduced textrank a graphbased ranking model for text processing and show how it can be successfully used for natural language applicationsin particular we proposed and evaluated two innovative unsupervised approaches for keyword and sentence extraction and showed that the accuracy achieved by textrank in these applications is competitive with that of previously proposed stateoftheart algorithmsan important aspect of textrank is that it does not require deep linguistic knowledge nor domain or language specific annotated corpora which makes it highly portable to other domains genres or languages
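The sentence-similarity formula referred to in the sentence-extraction discussion above appears to have been lost during text extraction. Based on the surrounding description (content overlap divided by the length of each sentence) and the standard TextRank formulation, it can be written as Similarity(S_i, S_j) = |{w_k : w_k in S_i and w_k in S_j}| / (log|S_i| + log|S_j|). The sketch below renders the sentence-extraction procedure with this measure; the log-length normalization and the damping factor of 0.85 follow the original TextRank formulation and are assumptions with respect to the extracted text.

# Minimal sketch of TextRank sentence extraction as described above: vertices are
# sentences, edge weights are content-overlap similarities, and the weighted ranking
# WS(Vi) = (1 - d) + d * sum_j [ w_ji / sum_k w_jk ] * WS(Vj) is iterated to convergence.
import math
from collections import defaultdict

def sentence_similarity(s1, s2):
    """Tokenized sentences; word overlap normalized by the log of each sentence's length."""
    w1, w2 = set(s1), set(s2)
    if len(w1) < 2 or len(w2) < 2:
        return 0.0
    return len(w1 & w2) / (math.log(len(w1)) + math.log(len(w2)))

def textrank_sentences(sentences, damping=0.85, tol=1e-4, max_iter=50, top_k=5):
    """sentences: list of tokenized sentences; returns indices of the top-ranked ones."""
    n = len(sentences)
    if n == 0:
        return []
    # Build the weighted, undirected sentence graph.
    weight = defaultdict(dict)
    for i in range(n):
        for j in range(i + 1, n):
            s = sentence_similarity(sentences[i], sentences[j])
            if s > 0:
                weight[i][j] = weight[j][i] = s
    out_sum = {i: sum(weight[i].values()) for i in weight}

    # Iterate the weighted graph-based ranking until convergence.
    scores = {i: 1.0 for i in range(n)}
    for _ in range(max_iter):
        new = {}
        for i in range(n):
            rank = sum(weight[j][i] / out_sum[j] * scores[j] for j in weight.get(i, {}))
            new[i] = (1 - damping) + damping * rank
        converged = max(abs(new[i] - scores[i]) for i in range(n)) < tol
        scores = new
        if converged:
            break

    # Sort sentences in reversed order of their score and keep the top ones.
    return sorted(range(n), key=scores.get, reverse=True)[:top_k]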
W04-3252
textrank bringing order into texts. in this paper we introduce textrank a graphbased ranking model for text processing and show how this model can be successfully used in natural language applications. in particular we propose two innovative unsupervised methods for keyword and sentence extraction and show that the results obtained compare favorably with previously published results on established benchmarks. we propose textrank which is one of the most wellknown graph based approaches to key phrase extraction. we propose the textrank model to rank key words based on the cooccurrence links between words
sentiment analysis using support vector machines with diverse information sources this paper introduces an approach to sentiment analysis which uses support vector machines to bring together diverse sources of potentially pertinent information including several favorability measures for phrases and adjectives and where available knowledge of the topic of the text models using the features introduced are further combined with unigram models which have been shown to be effective in the past and lemmatized versions of the unigram models experiments on movie review data from epinionscom demonstrate that hybrid svms which combine unigramstyle featurebased svms with those based on realvalued favorability measures obtain superior performance producing the best results yet published using this data further experiments using a feature set enriched with topic information on a smaller dataset of music reviews handannotated for topic are also reported the results of which suggest that incorporating topic information into such models may also yield improvement recently an increasing amount of research has been devoted to investigating methods of recognizing favorable and unfavorable sentiments towards specific subjects within natural language textsareas of application for such analysis are numerous and varied ranging from newsgroup flame filtering and informative augmentation of search engine responses to analysis of public opinion trends and customer feedbackfor many of these tasks classifying the tone of the communication as generally positive or negative is an important stepthere are a number of challenging aspects of this taskopinions in natural language are very often expressed in subtle and complex ways presenting challenges which may not be easily addressed by simple text categorization approaches such as ngram or keyword identification approachesalthough such approaches have been employed effectively there appears to remain considerable room for improvementmoving beyond these approaches can involve addressing the task at several levelsrecognizing the semantic impact of words or phrases is a challenging task in itself but in many cases the overarching sentiment of a text is not the same as that of decontextualized snippetsnegative reviews may contain many apparently positive phrases even while maintaining a strongly negative tone and the opposite is also commonthis paper introduces an approach to classifying texts as positive or negative using support vector machines a wellknown and powerful tool for classification of vectors of realvalued features the present approach emphasizes the use of a variety of diverse information sources and svms provide the ideal tool to bring these sources togetherwe describe the methods used to assign values to selected words and phrases and we introduce a method of bringing them together to create a model for the classification of textsin addition several classes of features based upon the proximity of the topic with phrases which have been assigned favorability values are described in order to take further advantage of situations in which the topic of the text may be explicitly identifiedthe results of a variety of experiments are presented using both data which is not topic annotated and data which has been hand annotated for topicin the case of the former the present approach is shown to yield better performance than previous models on the same datain the case of the latter results indicate that our approach may allow for further improvements to be gained given 
knowledge of the topic of the texta continual challenge in the task of sentiment analysis of a text is to home in on those aspects of the text which are in some way representative of the tone of the whole textin the past work has been done in the area of characterizing words and phrases according to their emotive tone but in many domains of text the values of individual phrases may bear little relation to the overall sentiment expressed by the textpang et al s treatment of the task as analogous to topicclassification underscores the difference between the two taskssources of misleading phrases include what pang et al refer to as thwarted expectations narrative where emotive effect is attained by emphasizing the contrast between what the reviewer expected and the actual experiencefor example in the record review data used in the present experiments the sentence how could they not be the most unimaginative bleak whiny emo band since occurs in one of the most highly rated reviews describing the reviewers initial misgivings about the record under review based on its packaging followed immediately by i do not knowbut it is nothing like you would imaginenot even almost clearly the strongly positive sentiment conveyed by these four sentences is much different from what we would expect from the sum of its partslikewise another exceptionally highly rated review contains the quote this was a completely different band defeated miserable and exhausted absolutely but not hopeless they had somehow managed to succeed where every other band in their shoes had failed other rhetorical devices which tend to widen the gap in emotional tone between what is said locally in phrases and what is meant globally in the text include the drawing of contrasts between the reviewed entity and other entities sarcasm understatement and digressions all of which are used in abundance in many discourse domainsthe motivation of the present research has been to incorporate methods of measuring the favorability content of phrases into a general classification tool for textshere the term semantic orientation refers to a real number measure of the positive or negative sentiment expressed by a word or phrasein the present work the approach taken by turney is used to derive such values for selected phrases in the textthis approach is simple and surprisingly effectivemoreover is not restricted to words of a particular part of speech nor even restricted to single words but can be used with multiple word phrasesin general two word phrases conforming to particular partofspeech templates representing possible descriptive combinations are usedthe phrase patterns used by turney can be seen in figure 1in some cases the present approach deviates from this utilizing values derived from single wordsfor the purposes of this paper these phrases will be referred to as value phrases since they will be the sources of so valuesonce the desired value phrases have been extracted from the text each one is assigned an so valuethe so of a phrase is determined based upon the phrases pointwise mutual information with the words excellent and poorpmi is defined by church and hanks as follows where is the probability that and cooccurthe so for a is the difference between its pmi with the word excellent and its pmi with the word poor the probabilities are estimated by querying the altavista advanced search engine1 for countsthe search engines near operator representing occurrences of the two queried words within ten words of each other in a text is used to 
define cooccurrencethe final so equation is intuitively this yields values above zero for phrases with greater pmi with the word excellent and below zero for greater pmi with poora so value of zero would indicate a completely neutral semantic orientationfurther feature types are derived using the method of kamps and marx of using wordnet relationships to derive three values pertinent to the emotive meaning of adjectivesthe three values correspond to the potency activity and the evaluative factors introduced in charles osgoods theory of semantic differentiation these values are derived by measuring the relative minimal path length in wordnet between the adjective in question and the pair of words appropriate for the given factorin the case of the evaluative factor for example the comparison is between the mpl between the adjective and good and the mpl between the adjective and badonly adjectives connected by synonymy to each of the opposites are consideredthe method results in a list of 5410 adjectives each of which is given a value for each of the three factors referred to as eva pot and actfor the purposes of this research each of these factors values are averaged over all the adjectives in a text yielding three realvalued feature values for the text which will be added to the svm modelour approach shares the intuition of natsukawa and yi that sentiment expressed with regard to a particular subject can best be identified with reference to the subject itselfcollecting emotive content from a text overall can only give the most general indication of the sentiment of that text towards the specific subjectnevertheless in the present work it is assumed that the pertinent analysis will occur at the text levelthe key is to find a way to incorporate pertinent semantic orientation values derived from phrases into a model of textsour approach seeks to employ semantic orientation values from a variety of different sources and use them to create a feature space which can be separated into classes using an svmin some application domains it is known in advance what the topic is toward which sentiment is to be evaluatedthe present approach allows for the incorporation of features which exploit this knowledge where availablethis is done by creating several classes of features based upon the semantic orientation values of phrases given their position in relation to the topic of the textalthough in opinionbased texts there is generally a single primary subject about which the opinion is favorable or unfavorable it would seem that secondary subjects may also be useful to identifythe primary subject of a book review for example is a bookhowever the reviews overall attitude to the author may also be enlightening although it is not necessarily identical to the attitude towards the booklikewise in a product review the attitude towards the company which manufactures the product may be pertinentit is an open question whether such secondary topic information would be beneficial or harmful to the modeling taskthe approach described in this paper allows such secondary information to be incorporated where availablein the second of the two datasets used in the present experiments texts were annotated by hand using the open ontology forge annotation tool in each record review references to the record being reviewed were tagged as this work and references to the artist under review were tagged as this artistwith these entities tagged a number of classes of features may be extracted representing various relationships 
between topic entities and value phrases similar to those described in section 31the classes looked at in this work are as follows turney value the average value of all value phrases so values for the textclassification by this feature alone is not the equivalent of turneys approach since the present approach involves retraining in a supervised modelin sentence with this work the average value of all value phrases which occur in the same sentence as a reference to the work being reviewedfollowing this work the average value of all value phrases which follow a reference to the work being reviewed directly or separated only by the copula or a prepositionpreceding this work the average value of all value phrases which precede a reference to the work being reviewed directly or separated only by the copula or a prepositionin sentence with this artist as above but with reference to the artistfollowing this artist as above but with reference to the artistpreceding this artist as above but with reference to the artistthe features used which make use of adjectives with wordnet derived osgood values include the following textwide eva the average eva value of all adjectives in a texttextwide pot the average pot value of all adjectives in a texttextwide act the average act value of all adjectives in a texttopicsentence eva the average eva value of all adjectives which share a sentence with the topic of the texttopicsentence pot the average pot value of all adjectives which share a sentence with the topic of the texttopicsentence act the average act value of all adjectives which share a sentence with the topic of the textthe grouping of these classes should reflect some common degree of reliability of features within a given class but due to data sparseness what might have been more natural class groupingsfor example including valuephrase preposition topicentity as a distinct classoften had to be conflated in order to get features with enough occurrences to be representativefor each of these classes a value may be derived for a textrepresenting each text as a vector of these realvalued features forms the basis for the svm modelin the case of data for which no explicit topic information is available only the turney value is used from the first list and the textwide eva pot and act values from the second lista resultant feature vector representing a text may be composed of a combination of boolean unigramstyle features and realvalued favorability measures in the form of the osgood values and the pmi derived valuessvms are a machine learning classification technique which use a function called a kernel to map a space of data points in which the data is not linearly separable onto a new space in which it is with allowances for erroneous classificationfor a tutorial on svms and details of their formulation we refer the reader to burges and cristiani and shawetailor a detailed treatment of these models application to text classification may be found in joachims first value phrases were extracted and their values were derived using the method described in section 31after this supervised learning was performed using these values as featuresin training data reviews corresponding to a below average rating were classed as negative and those with an above average rating were classed as positivethe first dataset consisted of a total of 1380 epinionscom movie reviews approximately half positive and half negativethis is the same dataset as was presented in pang et alin order to compare results as directly as possible 
we report results of 3fold cross validation following pang et allikewise we include punctuation as tokens and normalize the feature values for text lengthto lend further support to the conclusions we also report results for 10fold cross validation experimentson this dataset the feature sets investigated include various combinations of the turney value the three textwide osgood values and word token unigrams or lemmatized unigrams2 the second dataset consists of 100 record reviews from the pitchfork media online record review publication3 topicannotated by handin addition to the features employed with the first dataset this dataset allows the use those features described in 33 which make use of topic information namely the broader pmi derived so values and the topicsentence osgood valuesdue to the relatively small size of this dataset test suites were created using 100 20 10 and 5fold cross validation to maximize the amount of data available for training and the accuracy of the resultstext length normalization appeared to harm performance on this dataset and so the models reported here for this dataset were not normalized for lengthsvms were built using kudos tinysvm software implementation4 several kernel types kernel parameters and optimization parameters were investigated but no appreciable and consistent benefits were gained by deviating from the the default linear kernel with all parameter values set to their default so only these results are reported here with the exception of the turney valuesonly model on the pitchfork datasetthis singlefeatured model caused segmentation faults on some partitions with the linear kernel and so the results for this model only seen in figure 3 were obtained using a polynomial kernel with parameter set to 2 and the constraints violation penalty set at 2 several hybrid svm models were further tested using the results from the previously described models as featuresin these models the feature values for each event represent the distance from the dividing hyperplane for each constituent modelthe accuracy value represents the percentage of test texts which were classified correctly by the modelresults on the first dataset without topic information are shown in figure 2the results for 3fold cross validation show how the present feature sets compare with the best performing svm reported in pang et al in general the addition of osgood values does not seem to yield improvement in any of the modelsthe turney values appear more helpful which is not surprising given their superior performance alonein the case of the svm with only a single turney value accuracy is already at 683 reports that simply averaging these values on the same data yields 658 accuracythe osgood values are considerably less reliable yielding only 562 accuracy on their ownlemmas outperform unigrams in all experiments and in fact the simple lemma models outperform even those augmented with the turney and osgood features in the experiments on the epinions datathe contribution of these new feature types is most pronounced when they are used to train a separate svm and the two svms are combined in a hybrid svmthe best results are obtained using such hybrid svms which yield scores of 846 accuracy on the 3fold experiments and 860 accuracy on the 10fold experimentsin the second set of experiments again inclusion of osgood features shows no evidence of yielding any improvement in modeling when other features are present indeed as in the previous experiments there are some cases in which these features 
may be harming performancethe pmi values on the other hand appear to yield consistent improvementfurthermore on both the 20 and 100fold test suites the inclusion of all pmi values with lemmas outperforms the use of only the turney values suggesting that the incorporation of the available topic relations is helpfulalthough there is not enough data here to be certain of trends it is intuitive that the broader pmi values similarly to the unigrams would particularly benefit from increased training data due to their specificity and therefore their relative sparsenessonce again the information appears to be most fruitfully combined by building svms representing semantic values and lemmas separately and combining them in a single hybrid svmthe average score over the four nfold cross validation experiments for the hybrid svm is 865 whereas the average score for the secondbest performing model incoporating all semantic value features and lemmas is 85the simple lemmas model obtains an average score of 84 and the simple unigrams model obtains 7975the main development presented here is the incorporation of several new information sources as features into svms which previously relied entirely on the effective but limited bag of words approachthe ability of svms to handle realvalued features makes this possible and the information sources introduced in the work turney and kamps and marx provide sensible places to startthe intuition that topic relations and proximity should also yield gains also appears to be borne out in the present experimentsthe various sources of information appear to be best combined by integrating several distinct svmsat the level of the phrasal so assignment it would seem that some improvement could be gained by adding domain context to the altavista searchmanyperhaps mostterms favorability content depends to some extent on their contextas turney notes unpredictable is generally positive when describing a movie plot and negative when describing an automobile or a politicianlikewise such terms as devastating might be generally negative but in the context of music or art may imply an emotional engagement which is usually seen as positivelikewise although excellent and poor as the poles in assessing this value seems somewhat arbitrary cursory experiments in adjusting the search have thus far supported turneys conclusion that the former are the appropriate terms to use for this taskone problem with limiting the domain by adding topicrelated word constraints to the query is that the resultant hit count is greatly diminished canceling out any potential gainit is to be hoped that in the future as search engines continue to improve and the internet continues to grow more possibilities will open up in this regardit also seems likely that the topicrelations aspect of the present research only scratches the surface of what should be possiblethere is still considerable room for improvement in performancethe present models may also be further expanded with features representing other information sources which may include other types of semantic annotation or features based on more sophisticated grammatical or dependency relations or on zone informationin any case it is hoped that the present work may help to indicate how various information sources pertinent to the task may be brought togetherthe method introduced in this paper allows several methods of assigning semantic values to phrases and words within a text to be exploited in a more useful way than was previously possible by 
incorporating them as features for svm modeling and for explicit topic information to be utilized when available by features incorporating such valuescombinations of svms using these features in conjunction with svms based on unigrams and lemmatized unigrams are shown to outperform models which do not use these information sourcesthe approach presented here is flexible and suggests promising avenues of further investigation
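Two equations referenced in the sentiment-analysis text above (the PMI definition of Church and Hanks and the final SO equation) appear to have been dropped during extraction. Following Turney's formulation, which the surrounding description matches, they can be reconstructed as:

\mathrm{PMI}(w_1, w_2) = \log_2 \frac{p(w_1 \,\&\, w_2)}{p(w_1)\, p(w_2)}

\mathrm{SO}(\mathit{phrase}) = \mathrm{PMI}(\mathit{phrase}, \text{excellent}) - \mathrm{PMI}(\mathit{phrase}, \text{poor})

and, when the probabilities are estimated from search-engine hit counts with the NEAR operator, this reduces to

\mathrm{SO}(\mathit{phrase}) = \log_2 \frac{\mathrm{hits}(\mathit{phrase}\ \mathrm{NEAR}\ \text{excellent}) \cdot \mathrm{hits}(\text{poor})}{\mathrm{hits}(\mathit{phrase}\ \mathrm{NEAR}\ \text{poor}) \cdot \mathrm{hits}(\text{excellent})}

A value of zero then corresponds to the completely neutral orientation mentioned above.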
W04-3253
sentiment analysis using support vector machines with diverse information sources. this paper introduces an approach to sentiment analysis which uses support vector machines to bring together diverse sources of potentially pertinent information including several favorability measures for phrases and adjectives and where available knowledge of the topic of the text. models using the features introduced are further combined with unigram models which have been shown to be effective in the past and lemmatized versions of the unigram models. experiments on movie review data from the internet movie database demonstrate that hybrid svms which combine unigramstyle featurebased svms with those based on realvalued favorability measures obtain superior performance producing the best results yet published using this data. further experiments using a feature set enriched with topic information on a smaller dataset of music reviews handannotated for topic are also reported the results of which suggest that incorporating topic information into such models may also yield improvement
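As a concrete illustration of the hybrid SVMs described in the experiments above, where each text is represented to a second-level SVM by its distances from the separating hyperplanes of the constituent models, here is a minimal sketch. The use of scikit-learn is a stand-in assumption (the system itself used Kudo's TinySVM), and training the meta-model on the base models' training-set distances is a simplification of the experimental setup.

# Sketch of the hybrid SVM: train one SVM per feature view (e.g. lemma unigrams vs.
# real-valued favorability features), then use each model's signed distance from its
# hyperplane as a feature for a second-level SVM.
import numpy as np
from sklearn.svm import SVC

def train_hybrid(views_train, y_train):
    """views_train: list of 2-D feature matrices, one per constituent model."""
    base_models = [SVC(kernel="linear").fit(X, y_train) for X in views_train]
    # Each text is represented by its distances to the base hyperplanes.
    meta_train = np.column_stack(
        [m.decision_function(X) for m, X in zip(base_models, views_train)]
    )
    meta_model = SVC(kernel="linear").fit(meta_train, y_train)
    return base_models, meta_model

def predict_hybrid(base_models, meta_model, views_test):
    meta_test = np.column_stack(
        [m.decision_function(X) for m, X in zip(base_models, views_test)]
    )
    return meta_model.predict(meta_test)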
a statistical semantic parser that integrates syntax and semantics we introduce a learning semantic parserscissor that maps naturallanguage sentences to a detailed formal meaning representation language it first usesan integrated statistical parser to pro duce a semantically augmented parse tree in which each nonterminal node has both a syntactic and a semantic label a compositionalsemantics procedure is then used to map the augmented parse tree into a final meaning representation we evaluate the system in two domains a naturallanguage database interface and an interpreter for coaching instructions in robotic soccer we present experimentalresults demonstrating that scissor produces more accurate semantic representa tions than several previous approaches most recent work in learning for semantic parsinghas focused on shallowanalysis such as seman tic role labeling in thispaper we address the more ambitious task of learning to map sentences to a complete formal meaning representation language we consider two mrls that can be directly used to perform useful complex tasksthe first is a prologbased language used in a previouslydeveloped corpus of queries to a database on yous geography the second mrl is a coaching language for robotic soccer developed for the robocup coach competition in which ai researchers compete to provide effective instructions to a coachable team of agents in a simulated soccer domain we present an approach based on a statisti cal parser that generates a semantically augmentedparse tree in which each internal node includes both a syntactic and semantic labelwe aug ment collinsheaddriven model 2 to incorporate a semantic label on each internalnodeby integrating syntactic and semantic inter pretation into a single statistical model and finding the globally most likely parse an accurate combined syntacticsemantic analysis can be obtainedonce a sapt is generated an additional step is required totranslate it into a final formal meaning representa tion our approach is implemented in a system called scissor training the system requires sentences an notated with both goldstandard sapts and mrs we present experimental results on corpora for bothgeographydatabase querying and robocup coaching demonstrating that scissor produces more accurate semantic representations than several previ ous approaches based on symbolic learning we used two mrls in our experiments clang and geoquerythey capture the meaning of linguistic utterances in their domain in a formal language9 21 clang the robocup coach languagerobocup is an interna tional ai research initiative using robotic soccer as its primary domainin the coach competition teams of agents compete on a simulated soccer field and receive advice from a team coach in a formal language called clangin clang tactics and behaviors are expressed in terms of ifthen rulesas described in its grammar consists of 37 nonterminal symbols and 133 productionsbelow is a sample rule with its english gloss if the ball is in our penalty area all our players except player 4 should stay in our half22 geoquery a db query languagegeoquery is a logical query language for a small database of yous geography containing about 800 factsthis domain was originally chosen to testcorpusbased semantic parsing due to the avail ability of a handbuilt naturallanguage interface geobase supplied with turbo prolog 20 the geoquery language consists of prolog queries augmented with several metapredicates below is a sample query with its english gloss answerloc consta how many cities 
are there in the usthis section describes our basic framework for semantic parsing which is based on a fairly standard approach to compositional semantics first a statistical parseris used to construct a sapt that captures the se mantic interpretation of individual words and the basic predicateargument structure of the sentencenext a recursive procedure is used to composition ally construct an mr for each node in the sapt from the semantic label of the node and the mrs has2 vpbowner player the ball nnplayer cdunum npnull nnnull vbbowner sbowner npplayer dtnull prpteam our figure 1 an sapt for a simple clang sentencefunctionbuildmr input the root node n of a sapt predicateargument knowledge k for the mrlnotation x mr is the mr of node x output n mr c i the ith child node of n 1 i n c h getsemantichead see section 3 c h mr buildmr for each other child c i where i 6 h c i mr buildmr composemr see section 3 n mr c h mr figure 2 computing an mr from a saptof its childrensyntactic structure provides information of how the parts should be composedambiguities arise in both syntactic structure and the semantic interpretation of words and phrasesby in tegrating syntax and semantics in a single statisticalparser that produces an sapt we can use both se mantic information to resolve syntactic ambiguitiesand syntactic information to resolve semantic ambi guitiesin a sapt each internal node in the parse tree is annotated with a semantic labelfigure 1 showsthe sapt for a simple sentence in the clang do mainthe semantic labels which are shown afterdashes are concepts in the domainsome type con cepts do not take arguments like team and unum some concepts which we refer to as predicates take an ordered list of argumentslike player and bowner the predicateargument knowledge k specifies for each predicate the semantic constraints on its argumentscon straints are specified in terms of the concepts that can fill each argument such as player and bownera special semantic label nullis used for nodes that do not correspond to any con cept in the domainfigure 2 shows the basic algorithm for build ing an mr from an saptfigure 3 illustrates the 10 player the ball n3bownern7player n2null null null n4player n5team our n6unum 2 n1bowner has n8bowner figure 3 mrs constructed for each sapt nodeconstruction of the mr for the sapt in figure 1nodes are numbered in the order in which the con struction of their mrs are completedthe first step getsemantichead determines which of anodes children is its semantic head based on hav ing a matching semantic labelin the example node n3 is determined to be the semantic head of the sentence since its semantic label bowner matchesn8s semantic labelnext the mr of the seman tic head is constructed recursivelythe semantichead of n3 is clearly n1since n1 is a partofspeech node its semantic label directly de termines its mr which becomes bowneronce the mr for the head is constructed the mr of all other children are computed recursivelyand composemr assigns their mrs to fill the arguments in the heads mr to construct the com plete mr for the nodeargument constraints areused to determine the appropriate filler for each ar gumentsince n2 has a null label the mr of n3 also becomes bownerwhen computing the mr for n7 n4 is determined to be the head with the mr playercomposemr then assigns n5s mr to fill the team argument and n6s mr to fill the unum argument to construct n7s complete mr playerthis mr in turn is composed withthe mr for n3 to yield the final mr for the sen tence bownerfor mrls such as 
clang whose syntax doesnot strictly follow a nested set of predicates and ar guments some final minor syntactic adjustment of the final mr may be neededin the example thefinal mr is in the fol lowing discussion we ignore the difference between these twothere are a few complications left which re quire special handling when generating mrslike coordination anaphora resolution and noncompositionality exceptionsdue to space limitations we do not present the straightforward tech niques we used to handle themthis section discusses how sentences for training scissor were manually annotated with sapts sentences were parsed by collinsheaddriven model 2 to generate an initial syntactic parse treethe trees were then manually corrected and each node augmented with a semantic labelfirst semantic labels for individual words called semantic tags are added to the pos nodes in thetreethe tag null is used for words that have no cor responding conceptsome concepts are conveyedby phrases like has the ballfor bowner in the pre vious exampleonly one word is labeled with the concept the syntactic head word is preferredduring parsing the other words in thephrase will provide context for determining the se mantic label of the head wordlabels are added to the remaining nodes in abottomup mannerfor each node one of its chil dren is chosen as the semantic head from which it will inherit its labelthe semantic head is chosen as the child whose semantic label can take the mrs of the other children as argumentsthis step wasdone mostly automatically but required some man ual corrections to account for unusual casesin order for composemr to be able to construct the mr for a node the argument constraints for its semantic head must identify a unique concept to fill each argumenthowever some predicates take multiple arguments of the same type such as pointnum which is a kind of point that represents a field coordinate in clangin this case extra nodes are inserted in the treewith new type concepts that are unique for each ar gumentan example is shown in figure 4 in whichthe additional type concepts num1 and num2 are in troducedagain during parsing context will be used to determine the correct type for a given wordthe point label of the root node of figure 4 is the concept that includes all kinds of points in clangonce a predicate has all of its arguments filled we 11 05 rrbrrbnull lrb01 cdnum cdnum lrbpointnum prnpoint cdnum1 cdnum2figure 4 adding new types to disambiguate argu mentsuse the most general clang label for its concept this generality avoids sparse data problems during training51 collins headdriven model 2collinsheaddriven model 2 is a generative lexi calized model of statistical parsingin the following section we follow the notation in each nonterminal x in the tree is a syntactic label which is lexicalized by annotating it with a word w and a pos tag t syn thus we write a nonterminal as x where x is a syntactic label and x hw t syn i x is then what is generated by the generative modeleach production lhs rhs in the pcfg is in the form p l n l 1 hr 1 r m where h is the headchild of the phrase which in herits the headword h from its parent p l 1 l n and r 1 r m are left and right modifiers of h sparse data makes the direct estimation ofp infeasibletherefore it is decom posed into several steps first generating the head then the right modifiers from the head outward then the left modifiers in the same waysyntactic subcategorization frames lc and rc for the leftand right modifiers respectively are generated be fore 
the generation of the modifierssubcat framesrepresent knowledge about subcategorization preferencesthe final probability of a production is com posed from the following probabilities 1the probability of choosing a head constituentlabel h p h 2the probabilities of choosing the left and rightsubcat frames lc and rc p l and p are has2our player the prpteam nnplayer cdunum nnnulldtnull npplayer vpbowner npnull ball sbowner vbbowner figure 5 a lexicalized sapt3the probabilities of generating the left and right modifiers q i1m1 p are jhp hi
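The probability terms at the end of the passage above are garbled in extraction. In Collins' head-driven Model 2, which this section summarizes, the probability of a production P(h) -> L_n(l_n) ... L_1(l_1) H(h) R_1(r_1) ... R_m(r_m) is decomposed approximately as follows (the distance features of the full model are omitted here, and P_lc, P_rc denote the subcat-frame probabilities):

P\big(L_n(l_n)\ldots L_1(l_1)\,H(h)\,R_1(r_1)\ldots R_m(r_m) \mid P(h)\big) \approx P_h(H \mid P, h)\cdot P_{lc}(LC \mid P, H, h)\cdot P_{rc}(RC \mid P, H, h)\cdot \prod_{i=1}^{n+1} P_l\big(L_i(l_i) \mid P, H, h, LC\big)\cdot \prod_{i=1}^{m+1} P_r\big(R_i(r_i) \mid P, H, h, RC\big)

where L_{n+1} and R_{m+1} are STOP symbols marking the end of modifier generation on each side.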
W05-0602
a statistical semantic parser that integrates syntax and semantics. we introduce a learning semantic parser scissor that maps naturallanguage sentences to a detailed formal meaning representation language. it first uses an integrated statistical parser to produce a semantically augmented parse tree in which each nonterminal node has both a syntactic and a semantic label. a compositionalsemantics procedure is then used to map the augmented parse tree into a final meaning representation. we evaluate the system in two domains a naturallanguage database interface and an interpreter for coaching instructions in robotic soccer. we present experimental results demonstrating that scissor produces more accurate semantic representations than several previous approaches. we introduced an approach scissor where the composition of meaning representations is guided by syntax
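The BuildMR procedure shown in the parser description above (Figure 2 of the paper) is difficult to read after extraction, so a re-rendering in Python may help. The MR and node classes, the fallback in the head-selection helper, and the example predicate-argument knowledge are illustrative assumptions, not the authors' code.

# Re-rendering of the BuildMR procedure described above: recursively build a meaning
# representation (MR) for each SAPT node by composing the children's MRs into the
# arguments of the semantic head's MR.
class MR:
    def __init__(self, concept, arg_types=()):
        self.concept = concept
        self.arg_types = list(arg_types)        # concepts allowed in each argument slot
        self.args = [None] * len(self.arg_types)

    def fill(self, arg):
        # Put `arg` into the first empty, type-compatible slot.
        for i, t in enumerate(self.arg_types):
            if self.args[i] is None and arg.concept == t:
                self.args[i] = arg
                return True
        return False

    def __repr__(self):
        inner = ", ".join(repr(a) for a in self.args if a is not None)
        return f"{self.concept}({inner})" if inner else self.concept


class Node:
    def __init__(self, sem_label, children=()):
        self.sem_label = sem_label              # semantic label of this SAPT node
        self.children = list(children)


def semantic_head(node):
    # The child whose semantic label matches the parent's (it passes its label upward).
    for c in node.children:
        if c.sem_label == node.sem_label:
            return c
    return node.children[0]                     # illustrative fallback only


def build_mr(node, K):
    """K maps each predicate concept to the list of concepts filling its arguments."""
    if not node.children:                       # leaf: the semantic label determines the MR
        return MR(node.sem_label, K.get(node.sem_label, ()))
    head = semantic_head(node)
    mr = build_mr(head, K)
    for c in node.children:                     # composeMR: fill the head's arguments
        if c is head or c.sem_label == "null":
            continue
        mr.fill(build_mr(c, K))
    return mr


# Example predicate-argument knowledge and the SAPT for "our player 2 has the ball".
K = {"bowner": ["player"], "player": ["team", "unum"]}
tree = Node("bowner", [
    Node("player", [Node("team"), Node("player"), Node("unum")]),
    Node("bowner", [Node("bowner"), Node("null", [Node("null"), Node("null")])]),
])
print(build_mr(tree, K))   # -> bowner(player(team, unum)), the concept skeleton of bowner(player(our, 2))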
generalized inference with multiple semantic role labeling systems result with joint inference on the development set overall results on the development and test sets are shown in table 1 table 2 shows the results of individual systems and the improvement gained by the joint inference on the development set 4 conclusions we present an implementation of srl system which composed of four stages1 pruning 2 argument identification 3 argument classification and 4 inference the inference provides a natural way to take the output of multiple argument classifiers and combines them into a coherent predicateargument output significant improvement in overall srl performance through this inference is illustrated acknowledgments we are grateful to dash optimization for the free academic use of xpressmp this research is sup our srl system consists of four stages pruning argument identification argument classification and inferencein particular the goal of pruning and argument identification is to identify argument candidates for a given verb predicatethe system only classifies the argument candidates into their types during the argument classification stagelinguistic and structural constraints are incorporated in the inference stage to resolve inconsistent global predictionsthe inference stage can take as its input the output of the argument classification of a single system or of multiple systemswe explain the inference for multiple systems in sec2only the constituents in the parse tree are considered as argument candidatesin addition our system exploits the heuristic introduced by to filter out very unlikely constituentsthe heuristic is a recursive process starting from the verb whose arguments are to be identifiedit first returns the siblings of the verb then it moves to the parent of the verb and collects the siblings againthe process goes on until it reaches the rootin addition if a constituent is a pp its children are also collectedcandidates consisting of only a single punctuation mark are not consideredthis heuristic works well with the correct parse treeshowever one of the errors by automatic parsers is due to incorrect pp attachment leading to missing argumentsto attempt to fix this we consider as arguments the combination of any consecutive np and pp and the split of np and pp inside the np that was chosen by the previous heuristicsthe argument identification stage utilizes binary classification to identify whether a candidate is an argument or notwe train and apply the binary classifiers on the constituents supplied by the pruning stagemost of the features used in our system are standard features which include this stage assigns the final argument labels to the argument candidates supplied from the previous stagea multiclass classifier is trained to classify the types of the arguments supplied by the argument identification stageto reduce the excessive candidates mistakenly output by the previous stage the classifier can also classify the argument as null to discard the argumentthe features used here are the same as those used in the argument identification stage with the following additional featuresthe purpose of this stage is to incorporate some prior linguistic and structural knowledge such as arguments do not overlap or each verb takes at most one argument of each type this knowledge is used to resolve any inconsistencies of argument classification in order to generate final legitimate predictionswe use the inference process introduced by the process is formulated as an integer linear 
programming problem that takes as inputs the confidences over each type of the arguments supplied by the argument classifierthe output is the optimal solution that maximizes the linear sum of the confidence scores subject to the constraints that encode the domain knowledgeformally speaking the argument classifier attempts to assign labels to a set of arguments s1m indexed from 1 to m each argument si can take any label from a set of argument labels p and the indexed set of arguments can take a set of labels c1m e pmif we assume that the argument classifier returns an estimated conditional probability distribution prob then given a sentence the inference procedure seeks an global assignment that maximizes the following objective function subject to linguistic and structural constraintsin other words this objective function reflects the expected number of correct argument predictions subject to the constraintsthe constraints are encoded as the followingsthe inference process allows a natural way to combine the outputs from multiple argument classifiersspecifically given k argument classifiers which perform classification on k argument sets s1 skthe inference process aims to optimize the objective function traders say unable to cool the selling panic in both stocks and futures where probj is the probability output by system jnote that all systems may not output with the same set of argument candidates due to the pruning and argument identificationfor the systems that do not output for any candidate we assign the probability with a prior to this phantom candidatein particular the probability of the null class is set to be 06 based on empirical tests and the probabilities of the other classes are set proportionally to their occurrence frequencies in the training datafor example figure 1 shows the two candidate sets for a fragment of a sentence traders say unable to cool the selling panic in both stocks and futures in this example system a has two argument candidates a1 traders and a4 the selling panic in both stocks and futures system b has three argument candidates b1 traders b2 the selling panic and b3 in both stocks and futuresthe phantom candidates are created for a2 a3 and before of which probability is set to the priorspecifically for this implementation we first train two srl systems that use collins parser and charniaks parser respectivelyin fact these two parsers have noticeably different outputin evaluation we run the system that was trained with charniaks parser 5 times with the top5 parse trees output by charniaks parser1together we have six different outputs per predicateper each parse tree output we ran the first three stages namely pruning argument identification and argument classificationthen a joint inference stage is used to resolve the inconsistency of the output of argument classification in these systemsthe learning algorithm used is a variation of the winnow update rule incorporated in snow a multiclass classifier that is tailored for large scale learning taskssnow learns a sparse network of linear functions in which the targets are represented as linear functions over a common feature spaceit improves the basic winnow multiplicative update rule with a regularization term which has the effect of trying to separate the data with a large margin separator and voted weight vector softmax function is used to convert raw activation to conditional probabilitiesif there are n classes and the raw activation of class i is acti the posterior estimation for class i is each in summary 
training used both full and partial syntactic information as described in section 1in training snows default parameters were used with the exception of the separator thickness 15 the use of average weight vector and 5 training cyclesthe parameters are optimized on the development settraining for each system took about 6 hoursthe evaluation on both test sets which included running overall results on the development and test sets are shown in table 1table 2 shows the results of individual systems and the improvement gained by the joint inference on the development setwe present an implementation of srl system which composed of four stages1 pruning 2 argument identification 3 argument classification and 4 inferencethe inference provides a natural way to take the output of multiple argument classifiers and combines them into a coherent predicateargument outputsignificant improvement in overall srl performance through this inference is illustratedwe are grateful to dash optimization for the free academic use of xpressmpthis research is supported by ardas aquaint program dois reflex program and an onr muri award
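Two formulas in the inference description above appear to have been dropped during extraction: the objective function being maximized and the softmax conversion of raw activations. From the surrounding description (the linear sum of confidence scores over the k systems' argument sets, subject to the constraints listed), they can be reconstructed approximately as

\hat{c}_{1..M} \;=\; \arg\max_{c_{1..M} \in \mathcal{P}^{M}} \; \sum_{j=1}^{k} \sum_{S_i \in S^{j}} \mathrm{prob}_j(S_i = c_i)

(with k = 1 this reduces to the single-system objective), and

\mathrm{prob}_i \;=\; \frac{e^{act_i}}{\sum_{j=1}^{n} e^{act_j}}

for the softmax estimate of the posterior of class i given raw activations act_1, ..., act_n.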
W05-0625
generalized inference with multiple semantic role labeling systems. we present an approach to semantic role labeling that takes the output of multiple argument classifiers and combines them into a coherent predicateargument output by solving an optimization problem. the optimization stage which is solved via integer linear programming takes into account both the recommendation of the classifiers and a set of problem specific constraints and is thus used both to clean the classification results and to ensure structural integrity of the final role labeling. we illustrate a significant improvement in overall srl performance through this inference. we adopted the outputs of multiple srl systems and combined them into a coherent predicate argument output by solving an optimization problem
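To make the inference stage described above concrete, here is a minimal integer-linear-programming sketch. The open-source PuLP solver is a stand-in (the system itself used Xpress-MP), and the constraints shown (one label per candidate, no duplicate core argument types, no overlapping non-null arguments) are only an illustrative subset of those the system encodes.

# Minimal ILP sketch of the inference stage: choose one label per argument candidate so
# that the summed classifier confidences are maximized subject to structural constraints.
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

def srl_inference(scores, spans, labels, unique_labels=("A0", "A1", "A2", "A3", "A4")):
    """scores[i][c]: confidence that candidate i has label c (labels must include "NULL");
    spans[i] = (start, end) token span of candidate i."""
    m = len(scores)
    prob = LpProblem("srl_inference", LpMaximize)
    x = {(i, c): LpVariable(f"x_{i}_{c}", cat=LpBinary)
         for i in range(m) for c in labels}

    # Objective: expected number of correct argument predictions.
    prob += lpSum(scores[i][c] * x[(i, c)] for i in range(m) for c in labels)

    # Each candidate receives exactly one label (possibly NULL).
    for i in range(m):
        prob += lpSum(x[(i, c)] for c in labels) == 1

    # No duplicate core argument types.
    for c in unique_labels:
        if c in labels:
            prob += lpSum(x[(i, c)] for i in range(m)) <= 1

    # Overlapping candidates cannot both be realized as (non-NULL) arguments.
    for i in range(m):
        for j in range(i + 1, m):
            if spans[i][0] <= spans[j][1] and spans[j][0] <= spans[i][1]:
                prob += x[(i, "NULL")] + x[(j, "NULL")] >= 1

    prob.solve()
    return [next(c for c in labels if x[(i, c)].value() > 0.5) for i in range(m)]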
syntactic features for evaluation of machine translation automatic evaluation of machine translabased on computing similarity between system output and human reference translations has revolutionized the development of mt systems we explore the use of syntactic information including constituent labels and headmodifier dependencies in computing similarity between output and reference our results show that adding syntactic information to the evaluation metric improves both sentencelevel and corpuslevel correlation with human judgments evaluation has long been a stumbling block in the development of machine translation systems due to the simple fact that there are many correct translations for a given sentencehuman evaluation of system output is costly in both time and money leading to the rise of automatic evaluation metrics in recent yearsthe most commonly used automatic evaluation metrics bleu and nist are based on the assumption that the closer a machine translation is to a professional human translation the better it is for every hypothesis bleu computes the fraction of ngrams which also appear in the reference sentences as well as a brevity penaltynist uses a similar strategy to bleu but further considers that ngrams with different frequency should be treated differently in the evaluationit introduces the notion of information weights which indicate that rarely occurring ngrams count more than those frequently occurring ones in the evaluation bleu and nist have been shown to correlate closely with human judgments in ranking mt systems with different qualities in the 2003 johns hopkins workshop on speech and language engineering experiments on mt evaluation showed that bleu and nist do not correlate well with human judgments at the sentence level even when they correlate well over large test sets kulesza and shieber use a machine learning approach to improve the correlation at the sentence leveltheir method based on the assumption that higher classification accuracy in discriminating human from machinegenerated translations will yield closer correlation with human judgments uses support vector machine based learning to weight multiple metrics such as bleu nist and wer the svm is trained for differentiating the mt hypothesis and the professional human translations and then the distance from the hypothesiss metric vector to the hyperplane of the trained svm is taken as the final score for the hypothesiswhile the machine learning approach improves correlation with human judgments all the metrics discussed are based on the same type of information ngram subsequences of the hypothesis translationsthis type of feature cannot capture the grammaticality of the sentence in part because they do not take into account sentencelevel informationfor example a sentence can achieve an excellent bleu score without containing a verbas mt systems improve the shortcomings of ngram based evaluation are becoming more apparentstateoftheart mt output often contains roughly the correct words and concepts but does not form a coherent sentenceoften the intended meaning can be inferred often it cannotevidence that we are reaching the limits of ngram based evaluation was provided by charniak et al who found that a syntaxbased language model improved the fluency and semantic accuracy of their system but lowered their bleu scorewith the progress of mt research in recent years we are not satisfied with the getting correct words in the translations we also expect them to be wellformed and more readablethis presents new 
challenges to mt evaluationas discussed above the existing wordbased metrics can not give a clear evaluation for the hypothesis fluencyfor example in the bleu metric the overlapping fractions of ngrams with more than one word are considered as a kind of metric for the fluency of the hypothesisconsider the following simple exampleif we use bleu to evaluate the two sentences hypothesis 2 has two bigrams a dog and i had which are also found in the reference and hypothesis 1 has no bigrams in common with the referencethus hypothesis 2 will get a higher score than hypothesis 1the result is obviously incorrecthowever if we evaluate their fluency based on the syntactic similarity with the reference we will get our desired resultsfigure 1 shows syntactic trees for the example sentences from which we can see that hypothesis 1 has exactly the same syntactic structure with the reference while hypothesis 2 has a very different onethus the evaluation of fluency can be transformed as computing the syntactic similarity of the hypothesis and the referencesthis paper develops a number of syntactically motivated evaluation metrics computed by automatically parsing both reference and hypothesis sentencesour experiments measure how well these metrics correlate with human judgments both for individual sentences and over a large test set translated by mt systems of varying qualityin order to give a clear and direct evaluation for the fluency of a sentence syntax trees are used to generate metrics based on the similarity of the mt hypothesiss tree and those of the referenceswe cannot expect that the whole syntax tree of the hypothesis can always be found in the references thus our approach is to be based on the fractions of the subtrees which also appear in the reference syntax treesthis idea is intuitively derived from bleu but with the consideration of the sparse subtrees which lead to zero fractions we average the fractions in the arithmetic mean instead of the geometric mean used in bleuthen for each hypothesis the fractions of subtrees with different depths are calculated and their arithmetic mean is computed as the syntax tree based metric which we denote as subtree metric stm where d is the maximum depth of subtrees considered count denotes the number of times subtree t appears in the candidates syntax tree and countclip denotes the clipped number of times t appears in the references syntax treesclipped here means that for a given subtree the count computed from the hypothesis syntax tree can not exceed the maximum number of times the subtree occurs in any single references syntax treea simple example with one hypothesis and one reference is shown in figure 2setting the maximum depth to 3 we go through the hypothesis syntax tree and compute the fraction of subtrees with different depthsfor the 1depth subtrees we get s np vp pron v np which also appear in the reference syntax treesince pron only occurs once in the reference its clipped count should be 1 rather than 2then we get 6 out of 7 for the 1depth subtreesfor the 2depth subtrees we get snp vp nppron and vpv np which also appear in the reference syntax treefor the same reason the subtree nppron can only be counted oncethen we get 3 out of 4 for the 2depth subtreesimilarly the fraction of 3depth subtrees is 1 out of 2therefore the final score of stm is 30702while the subtree overlap metric defined above considers only subtrees of a fixed depth subtrees of other configurations may be important for discriminating good hypothesesfor example we may want to 
look for the subtree to find sentences with transitive verbs while ignoring the internal structure of the subject noun phrasein order to include subtrees of all configurations in our metric we turn to convolution kernels on our treesusing h to denote the vector of counts of all subtrees found in tree x for two trees t1 and t2 the inner product h h counts the number of matching pairs of subtrees of t1 and t2collins and duffy describe a method for efficiently computing this dot product without explicitly computing the vectors h which have dimensionality exponential in the size of the original treein order to derive a similarity measure ranging from zero to one we use the cosine of the vectors h we can compute the cosine similarity using the kernel method without ever computing the entire of vector of counts h our kernelbased subtree metric tkm is then defined as the maximum of the cosine measure over the references the advantage of using the tree kernel is that it can capture the similarity of subtrees of different shapes the weak point is that it can only use the reference trees one by one while stm can use them simultaneouslythe dot product also weights individual features differently than our other measures which compute overlap in the same way as does bleufor example if the same subtree occurs 10 times in both the hypothesis and the reference this contributes a term of 100 to the dot product rather than 10 in the clipped count used by bleu and by our subtree metric stmdependency trees consist of trees of headmodifier relations with a word at each node rather than just at the leavesdependency trees were found to correspond better across translation pairs than constituent trees by fox and form the basis of the machine translation systems of alshawi et al and lin we derived dependency trees from the constituent trees by applying the deterministic headword extraction rules used by the parser of collins for the example of the reference syntax tree in figure 2 the whole tree with the root s represents a sentence and the subtree npart n represents a noun phrasethen for every node in the syntax tree we can determine its headword by its syntactic structure from the subtree npart n for example the headword selection rules chose the headword of np to be word corresponding to the pos n in the subtree and the other child which corresponds to art is the modifier for the headwordthe dependency tree then is a kind of structure constituted by headwords and every subtree represents the modifier information for its root headwordfor example the dependency tree of the sentence i have a red pen is shown as belowthe dependency tree contains both the lexical and syntactic information which inspires us to use it for the mt evaluationnoticing that in a dependent tree the child nodes are the modifier of its parent we propose a dependencytree based metric by extracting the headwords chains from both the hypothesis and the reference dependency treesa headword chain is a sequence of words which corresponds to a path in the dependency treetake the dependency tree in figure 2 as the example the 2word headword chains include have i have pen pen a and pen redbefore using the headword chains we need to extract them out of the dependency treesfigure 3 gives an algorithm which recursively extracts the headword chains in a dependency tree from short to longhaving the headword chains the headword chain based metric is computed in a manner similar to bleu but using ngrams of dependency chains rather than ngrams in the linear 
order of the sentencefor every hypothesis the fractions of headword chains which also appear in the reference dependency trees are averaged as the final scoreusing hwcm to denote the headword chain based metric it is computed as follows where d is chosen as the maximum length chain consideredwe may also wish to consider dependency relations over more than two words that are contiguous but not in a single ancestor chain in the dependency treefor this reason the two methods described in section 31 are used to compute the similarity of dependency trees between the mt hypothesis and its references and the corresponding metrics are denoted dstm for dependency subtree metric and dtkm for dependency tree kernel metricour testing data contains two partsone part is a set of 665 english sentences generated by a chineseenglish mt systemand for each mt hypothesis three reference translations are associated with itinput dependency tree t maximum length n of the headword chain output headword chains from length 1 to n the human judgments on a scale of 1 to 5 were collected at the 2003 johns hopkins speech and language summer workshop which tells the overall quality of the mt hypothesesthe translations were generated by the alignment template system of och this testing set is called jhu testing set in this paperthe other set of testing data is from mt evaluation workshop at acl05three sets of human translations are selected as the references and the outputs of seven mt systems are used for testing the performance of our syntactic metricseach set of mt translations contains 929 english sentences each of which is associated with human judgments for its fluency and adequacythe fluency and adequacy scores both range from 1 to 5our syntactic metrics are motivated by a desire to better capture grammaticality in mt evaluation and thus we are most interested in how well they correlate with human judgments of sentences fluency rather than the adequacy of the translationto do this the syntactic metrics parser as well as bleu were used to evaluate hypotheses in the test set from acl05 mt workshop which provides both fluency and adequacy scores for each sentence and their pearson coefficients of correlation with the human fluency scores were computedfor bleu and hwcm in order to avoid assigning zero scores to individual sentences when precision for ngrams of a particular length is zero we replace it with an epsilon value of 103we choose e14 and e15 as two representative mt systems in the acl05 mt workshop data set which have relatively high human scores and low human scores respectivelythe results are shown in table 1 and table 2 with every metric indexed by the maximum ngram length or subtree depththe last row of the each table shows the treekernelbased measures which have no depth parameter to adjust but implicitly consider all depthsthe results show that in both systems our syntactic metrics all achieve a better performance in the correlation with human judgments of fluencywe also notice that with the increasing of the maximum length of ngrams the correlation of bleu with human judgments does not necessarily increase but decreases in most casesthis is contrary to the argument in bleu which says that longer ngrams better represent the sentences fluency than the shorter onesthe problem can be explained by the limitation of the reference translationsin our experiments every hypothesis is evaluated by referring to three human translationssince the three human translations can only cover a small set of possible 
translations with the increasing of ngram length more and more correct ngrams might not be found in the references so that the fraction of longer ngrams turns to be less reliable than the short ones and hurts the final scoresin the the corpuslevel evaluation of a mt system the sparse data problem will be less serious than in the sentencelevel evaluation since the overlapping ngrams of all the sentences and their references will be summed upso in the traditional bleu algorithm used for corpuslevel evaluation a maximum ngram of length 4 or 5 is usually useda similar trend can be found in syntax tree and dependency tree based metrics but the decreasing ratios are much lower than bleu which indicates that the syntactic metrics are less affected by the sparse data problemthe poor performance of treekernel based metrics also confirms our arguments on the sparse data problem since the kernel measures implicitly consider the overlapping ratios of the subtrees of all shapes and thus will be very much affected by the sparse data problemthough our syntactic metrics are proposed for evaluating the sentences fluency we are curious how well they do in the overall evaluation of sentencesthus we also computed each metrics correlation with human overall judgments in e14 e15 and jhu testing setthe overall human score for each sentence in e14 and e15 is computed by summing up its fluency score and adequacy scorethe results are shown in table 3 table 4 and table 5we can see that the syntactic metrics achieve competitive correlations in the test among which hwcm based on headword chains gives better performances in evaluation of e14 and e15 and a slightly worse performance in jhu testing set than bleujust as with the fluency evaluation hwcm and other syntactic metrics present more stable performance as the ngrams length increaseswhile sentencelevel evaluation is useful if we are interested in a confidence measure on mt outputs corpus level evaluation is more useful for comparing mt systems and guiding their developmentdoes higher sentencelevel correlation necessarily indicate higher correlation in corpuslevel evaluationto answer this question we used our syntactic metrics and bleu to evaluate all the humanscored mt systems in the acl05 mt workshop test set and computed the correlation with human overall judgmentsthe human judgments for an mt system are estimated by summing up each sentences human overall scoretable 6 shows the results indexed by different ngrams and tree depthswe can see that the corpuslevel correlation and the sentencelevel correlation do not always correspondfor example the kernel dependency subtree metric achieves a very good performance in corpuslevel evaluation but it has a poor performance in sentencelevel evaluationsentencelevel correlation reflects the relative qualities of different hypotheses in a mt system which does not indicate any information for the relative qualities of different systemsif we uniformly decrease or increase every hypothesiss automatic score in a mt system the sentencelevel correlation with human judgments will remain the same but the corpuslevel correlation will be changedso we might possibly get inconsistent corpuslevel and sentencelevel correlationsfrom the results we can see that with the increase of ngrams length the performance of bleu and hwcm will first increase up to length 5 and then starts decreasing where the optimal ngram length of 5 corresponds to our usual setting for bleu algorithmthis shows that corpuslevel evaluation compared with the 
sentencelevel evaluation is much less sensitive to the sparse data problem and thus leaves more space for making use of comprehensive evaluation metricswe speculate this is why the kernel dependency subtree metric achieves the best performance among all the metricswe can also see that hwcm and dstm beat bleu in most cases and exhibit more stable performancean example hypothesis which was assigned a high score by hwcm but a low score by bleu is shown in table 7in this particular sentence the common headmodifier relations aboard plane and plane the caused a high headword chain overlap but did not appear as common ngrams counted by bleuthe hypothesis is missing the word fifth but was nonetheless assigned a high score by human judgesthis is probably due to its fluency which hwcm seems to capture better than bleuthis paper introduces several syntaxbased metrics for the evaluation of mt which we find to be particularly useful for predicting a hypothesiss fluencythe syntactic metrics except the kernel based ones all outperform bleu in sentencelevel fluency evaluationfor the overall evaluation of sentences for fluency and adequacy the metric based on headword chain performs better than bleu in both sentencelevel and corpuslevel correlation with human judgmentsthe kernel based metrics though poor in sentencelevel evaluation achieve the best results in corpuslevel evaluation where sparse data are less of a barrierour syntaxbased measures require the existence of a parser for the language in question however it is worth noting that a parser is required for the target language only as all our measures of similarity are defined across hypotheses and references in the same languageour results in particular for the primarily structural stm may be surprising in light of the fact that the parser is not designed to handle illformed or ungrammatical sentences such as those produced by machine translation systemsmodern statistical parsers have been tuned to discriminate good structures from bad rather than good sentences from badindeed in some recent work on reranking machine translation hypotheses parserproduced structures were not found to provide helpful information as a parser is likely to assign a goodlooking structure to even a lousy input hypothesishowever there is an important distinction between the use of parsers in reranking and evaluation in the present work we are looking for similarities between pairs of parse trees rather than at features of a single treethis means that the syntaxbased evaluation measures can succeed even when the tree structure for a poor hypothesis looks reasonable on its own as long as it is sufficiently distinct from the structures used in the referenceswe speculate that by discriminatively training weights for the individual subtrees and headword chains used by the syntaxbased metrics further improvements in evaluation accuracy are possibleacknowledgments we are very grateful to alex kulesza for assistance with the jhu datathis work was partially supported by nsf itr iis09325646 and nsf itr iis0428020
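The subtree metric (STM) in this paper averages, over depths 1 to D, the clipped fraction of hypothesis subtrees that also occur in the reference parse trees. The sketch below assumes parse trees are given as (label, [children]) tuples with surface words dropped, and one particular reading of a depth-d subtree (the top d levels of labels under a node); the toy trees and the resulting score are illustrative only and may differ in detail from the paper's exact extraction.

```python
from collections import Counter

# Parse trees as (label, [children]); POS tags are the leaves and surface
# words are omitted, since STM only compares label structure. The two toy
# trees loosely follow the paper's "I have a (red) pen" example.
hyp = ("S", [("NP", [("PRON", [])]),
             ("VP", [("V", []),
                     ("NP", [("ART", []), ("N", [])])])])
ref = ("S", [("NP", [("PRON", [])]),
             ("VP", [("V", []),
                     ("NP", [("ART", []), ("ADJ", []), ("N", [])])])])

def nodes(tree):
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def height(tree):
    return 1 if not tree[1] else 1 + max(height(c) for c in tree[1])

def top(tree, d):
    """The top d levels of labels of the tree rooted here, as a hashable key."""
    label, children = tree
    if d == 1 or not children:
        return label
    return (label, tuple(top(c, d - 1) for c in children))

def subtrees(tree, d):
    """Counts of depth-d subtrees (one per node at least d levels deep)."""
    return Counter(top(n, d) for n in nodes(tree) if height(n) >= d)

def stm(hyp, ref, max_depth=3):
    score = 0.0
    for d in range(1, max_depth + 1):
        h, r = subtrees(hyp, d), subtrees(ref, d)
        total = sum(h.values())
        clipped = sum(min(c, r[s]) for s, c in h.items())  # clip by reference counts
        score += clipped / total if total else 0.0
    return score / max_depth

print(round(stm(hyp, ref), 3))   # 0.75 for these toy trees (8/8, 3/4, 1/2 averaged)
```

With several references, the clipping would be taken against the maximum count in any single reference tree, mirroring BLEU's clipped n-gram counts.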
W05-0904
syntactic features for evaluation of machine translation. automatic evaluation of machine translation based on computing ngram similarity between system output and human reference translations has revolutionized the development of mt systems. we explore the use of syntactic information including constituent labels and headmodifier dependencies in computing similarity between output and reference. our results show that adding syntactic information to the evaluation metric improves both sentencelevel and corpuslevel correlation with human judgments. we measure the syntactic similarity between mt output and reference translation. we used syntactic structure and dependency information to go beyond the surface level matching
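The headword chain metric (HWCM) from the same paper scores a hypothesis by the clipped fraction of its dependency-tree headword chains (downward paths of 1 to D words) that also appear in the reference dependency trees, averaged over chain lengths. The sketch below assumes dependency trees are given as (word, [dependents]) nestings; the reference tree follows the paper's "I have a red pen" example while the hypothesis tree is invented, and the epsilon smoothing and head-extraction rules used in the paper are omitted.

```python
from collections import Counter

# Dependency trees as (word, [dependents]). ref_tree is the paper's
# "I have a red pen" example; hyp_tree is an invented hypothesis in which
# "red" attaches to the wrong head.
ref_tree = ("have", [("i", []), ("pen", [("a", []), ("red", [])])])
hyp_tree = ("have", [("i", []), ("pen", [("a", [])]), ("red", [])])

def paths_from(node, max_len):
    """Headword chains starting at this node, up to max_len words long."""
    word, deps = node
    paths = [(word,)]
    if max_len > 1:
        for d in deps:
            paths += [(word,) + p for p in paths_from(d, max_len - 1)]
    return paths

def chain_counts(tree, max_len):
    counts, stack = Counter(), [tree]
    while stack:
        node = stack.pop()
        counts.update(paths_from(node, max_len))
        stack.extend(node[1])
    return counts

def hwcm(hyp, ref, max_len=2):
    h, r = chain_counts(hyp, max_len), chain_counts(ref, max_len)
    score = 0.0
    for d in range(1, max_len + 1):
        h_d = {c: n for c, n in h.items() if len(c) == d}
        total = sum(h_d.values())
        clipped = sum(min(n, r[c]) for c, n in h_d.items())
        score += clipped / total if total else 0.0
    return score / max_len

print(round(hwcm(hyp_tree, ref_tree), 3))  # 0.875: 5/5 one-word chains, 3/4 two-word chains
```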
meteor an automatic metric for mt evaluation with improved correlation with human judgments evaluation such metrics are still of great value and utility in order to be both effective and useful an automatic metric for mt evaluation has to satisfy several basic criteria the primary and most intuitive requirement is that the metric have very high correlation with quantified human notions of mt quality furthermore a good metric should be as sensitive as possible to differences in mt quality between different systems and between different versions of the same system the metric should be consistent reliable and general needless to say satisfying all of the above criteria is extremely difficult and all of the metrics that have been proposed so far fall short of adequately addressing most if not all of these requirements nevertheless when appropriately quantified and converted into concrete test measures such requirements can set an overall standard by which different mt evaluation metrics can be compared and evaluated in this paper we describe meteor an automatic metric for mt evaluation which we have been developing meteor was designed to explicitly address several observed weaknesses in ibm bleu metric it is based on an explicit wordtoword matching between the mt output being evaluated and one or more reference translations our current matching supports not only matching between words that are identical in the two strings being compared but can also match words that are simple morphological variants of each other and words that are synonyms of each other we envision ways in which this strict matching can be further expanded in the future and describe these at the end of the paper each possible matching is scored based on a combination of several features these currently include unigramprecision unigramrecall and a direct measure of how outoforder the words of the mt output are with respect to the reference the score assigned to each individual sentence of mt output is derived from the best scoring match among all matches over all reference translations the maximalscoring matching is then also used in order to calculate an aggregate score for the mt system over the entire test set section 2 describes the metric in detail and provides a full example of the matching and scoring in previous work we compared meteor with ibm bleu metric and it is derived nist metric using several empirical evaluation methods that have been proposed in the recent literature as concrete means to assess the level of correlation of automatic metrics and human judgments we demonstrated that meteor has significantly improved correlation with human judgments furthermore our results demonstrated that recall plays a more important role than precision in obtaining highlevels of correlation with human judgments the previous analysis focused on correlation with human judgments at the system level in this paper we focus our attention on improving correlation between meteor score and human judgments at the segment levelhighlevels of correlation at the segment level are important because they are likely to yield a metric that is sensitive to minor differences between systems and to minor differences between different versions of the same system furthermore current levels of correlation at the sentence level are still rather low offering a very significant space for improvement the results reported in this paper demonstrate that all of the individual components included within meteor contribute to improved correlation with human 
judgments in particular meteor is shown to have statistically significant better correlation compared to unigramprecision unigramrecall and the harmonic fi combination of the two we are currently in the process of exploring several further enhancements to the current meteor metric which we believe have the potential to significantly further improve the sensitivity of the metric and its level of correlation with human judgments our work on these directions is described in further detail in section 4the main principle behind ibm bleu metric is the measurement of the overlap in unigrams and higher order ngrams of words between a translation being evaluated and a set of one or more reference translations the main component of bleu is ngram precision the proportion of the matched ngrams out of the total number of ngrams in the evaluated translationprecision is calculated separately for each ngram order and the precisions are combined via a geometric averaging bleu does not take recall into account directly recall the proportion of the matched ngrams out of the total number of ngrams in the reference translation is extremely important for assessing the quality of mt output as it reflects to what degree the translation covers the entire content of the translated sentence bleu does not use recall because the notion of recall is unclear when matching simultaneously against a set of reference translations to compensate for recall bleu uses a brevity penalty which penalizes translations for being quottoo shortquot the nist metric is conceptually similar to bleu in most aspects including the weaknesses discussed below bleu and nist suffer from several weaknesses which we attempt to address explicitly in our proposed meteor metric the lack of recall we believe that the fixed brevity penalty in bleu does not adequately compensate for the lack of recall our experimental results strongly support this claim use of higher order ngrams higher order ngrams are used in bleu as an indirect measure of a translation level of grammatical wellformedness we believe an explicit measure for the level of grammaticality can better account for the importance of grammaticality as a factor in the mt metric and result in better correlation with human judgments of translation quality lack of explicit wordmatching between translation and reference ngram counts do not require an explicit wordtoword matching but this can result in counting incorrect quotmatchesquot particularly for common function words use of geometric averaging of ngrams geometric averaging results in a score of quotzeroquot whenever one of the component ngram scores is zero consequently bleu scores at the sentence level can be meaningless although bleu was intended to be used only for aggregate counts over an entire testset scores at the sentence level can be useful indicators of the quality of the metric in experiments we conducted a modified version of bleu that uses equalweight arithmetic averaging of ngram scores was found to have better correlation with human judgments meteor was designed to explicitly address the weaknesses in bleu identified above it evaluates a translation by computing a score based on explicit wordtoword matches between the translation and a reference translation if more than one reference translation is available the given translation is scored against each reference independently and the best score is reported this is discussed in more detail later in this section given a pair of translations to be compared meteor creates an 
alignment between the two strings we define an alignment as a mapping between unigrams such that every unigram in each string maps to zero or one unigram in the other string and to no unigrams in the same string thus in a given alignment a single unigram in one string cannot map to more than one unigram in the other string this alignment is incrementally produced through a series of stages each stage consisting of two distinct phases in the first phase an external module lists all the possible unigram mappings between the two strings thus for example if the word quotcomputerquot occurs once in the system translation and twice in the reference translation the external module lists two possible unigram mappings one mapping the occurrence of quotcomputerquot in the system translation to the first occurrence of quotcomputerquot in the reference translation and another mapping it to the second occurrence different modules map unigrams based on different criteria the quotexactquot module maps two unigrams if they are exactly the same the quotporter stemquot module maps two unigrams if they are the same after they are stemmed using the porter stemmer the quotwn synonymyquot module maps two unigrams if they are synonyms of each other in the second phase of each stage the largest subset of these unigram mappings is selected such that the resulting set constitutes an alignment as defined above if more than one subset constitutes an alignment and also has the same cardinality as the largest set meteor selects that set that has the least number of unigram mapping crosses intuitively if the two strings are typed out on two rows one above the other and lines are drawn connecting unigrams that are mapped to each other each line crossing is counted as a quotunigram mapping crossquot formally two unigram mappings and are said to cross if and only if the following formula evaluates to a negative number where pos is the numeric position of the unigram tx in the system translation string and pos is the numeric position of the unigram ry in the reference string for a given alignment every pair of unigram mappings is evaluated as a cross or not and the alignment with the least total crosses is selected in this second phase note that these two phases together constitute a variation of the algorithm presented in each stage only maps unigrams that have not been mapped to any unigram in any of the preceding stages thus the order in which the stages are run imposes different priorities on the mapping modules employed by the different stages that is if the first stage employs the quotexactquot mapping module and the second stage employs the quotporter stemquot module meteor is effectively preferring to first map two unigrams based on their surface forms and performing the stemming only if the surface forms do not match note that meteor is flexible in terms of the number of stages the actual external mapping module used for each stage and the order in which the stages are run by default the first stage uses the quotexactquot mapping module the second the quotporter stemquot module and the third the quotwn synonymyquot mo dule in section 4 we evaluate each of these configurations of meteor once all the stages have been run and a final alignment has been produced between the system translation and the reference translation the meteor score for this pair of translations is computed as follows first unigram precision is computed as the ratio of the number of unigrams in the system translation that are mapped to the total 
number of unigrams in the system translation similarly unigram recall is computed as the ratio of the number of unigrams in the system translation that are mapped to the total number of unigrams in the reference translation next we compute fmean by combining the precision and recall via a harmonicmean that places most of the weight on recall we use a harmonic mean of p and 9r the resulting formula used is precision recall and fmean are based on unigram matches to take into account longer matches meteor computes a penalty for a given alignment as follows first all the unigrams in the system translation that are mapped to unigrams in the reference translation are grouped into the fewest possible number of chunks such that the unigrams in each chunk are in adjacent positions in the system translation and are also mapped to unigrams that are in adjacent positions in the reference translation thus the longer the ngrams the fewer the chunks and in the extreme case where the entire system translation string matches the reference translation there is only one chunk in the other extreme if there are no bigram or longer matches there are as many chunks as there are unigram matches the penalty is then computed through the following formula unigrams matched _ for example if the system translation was quotthe president spoke to the audiencequot and the reference translation was quotthe president then spoke to the audiencequot there are two chunks quotthe presidentquot and quotspoke to the audiencequot observe that the penalty increases as the number of chunks increases to a maximum of 05as the number of chunks goes to 1 penalty decreases and its lower bound is decided by the number of unigrams matched the parameters if this penalty function were determined based on some experimentation with deveopment data but have not yet been trained to be optimal finally the meteor score for the given alignment is computed as follows data sets the mt evaluation systems analyzed in this paper only evaluate english sentences produced by translation systems by comparing them to english reference sentences this has the effect of reducing the fmean by the maximum of 50 if there are no bigram or longer matches for a single system translation meteor computes the above score for each reference translation and then reports the best score as the score for the translation the overall meteor score for a system is calculated based on aggregate statistics accumulated over the entire test set similarly to the way this is done in bleu we calculate aggregate precision aggregate recall an aggregate penalty and then combine them using the same formula used for scoring individual segmentswe evaluated the meteor metric and compared its performance with bleu and nist on the darpatides 2003 arabictoenglish and chinesetoenglish mt evaluation data released through the ldc as a part of the workshop on intrinsic and extrinsic evaluation measures for mt andor summarization at the annual meeting of the association of computational linguistics the chinese data set consists of 920 sentences while the arabic data set consists of 664 sentences each sentence has four reference translations furthermore for 7 systems on the chinese data and 6 on the arabic data every sentence translation has been assessed by two separate human judges and assigned an adequacy and a fluency score each such score ranges from one to five for this paper we computed a combined score for each translation by averaging the adequacy and fluency scores of the two judges for that 
translation we also computed an average system score for each translation system by averaging the combined score for all the translations produced by that system when the recall is weighed more heavily than precision and when a penalty is levied for fragmented matches as mentioned in the previous section our main goal in this paper is to evaluate meteor and its components on their translationbytranslation level correlation with human judgment towards this end in the rest of this paper our evaluation methodology is as follows for each system we compute the meteor score for every translation produced by the system and then compute the correlation between these individual scores and the human assessments for the same translations thus we get a single pearson r value for each system for which we have human assessments finally we average the r values of all the systems for each of the two language data sets to arrive at the overall average correlation for the chinese dataset and the arabic dataset this number ranges between 10 to 10 we compare the correlation between human assessments and meteor scores produced above with that between human assessments and precision recall and fmean scores to show the advantage of the various components in the meteor scoring function finally we run meteor using different mapping modules and compute the correlation as described above for each configuration to show the effect of each unigram mapping mechanism we computed sentence by sentence correlation between meteor scores and human assessments for each translation for every system tables 2 and 3 show the pearson r correlation values for each system as well as the average correlation value per language dataset we computed translation by translation correlations between human assessments and other metrics besides the meteor score namely precision recall and fmean tables 4 and 5 show the correlations for the various scores we observe that recall by itself correlates with human assessment much better than precision and that combining the two using the fmean formula the exact matching improves correlations some further improvement in correlation is produced by adding the synonymy mo dule described above results in further improvement by penalizing the fmean score using the chunk count we get some further marginal improvement in correlation to observe the effect of various unigram mapping modules on the correlation between the meteor score and human assessments we ran meteor with different sequences of stages with different mapping modules in them in the first experiment we ran meteor with only one stage that used the quotexactquot mapping mo dule this module matches unigrams only if their surface forms match in the second experiment we ran meteor with two stages the first using the quotexactquot mapping module and the second the quotporterquot mapping mo dule the porter mapping module matches two unigrams to each other if they are identical after being passed through the porter stemmer in the third experiment we replaced the porter mapping module with the wnstem mapping mo dule this module maps two unigrams to each other if they share the same base form in wordnet this can be thought of as a different kind of stemmer the difference from the porter stemmer is that the word stems are actual words when stemmed through wordnet in this manner in the last experiment we ran meteor with three stages the first two using the exact and the porter modules and the third the wnsynonymy mapping mo dule this module maps two 
unigrams together if at least one sense of each word belongs to the same synset in wordnet intuitively this implies that at least one sense of each of the two words represent the same concept this can be thought of as a poorman synonymy detection algorithm that does not disambiguate the words being tested for synonymy note that the meteor scores used to compute correlations in the other tables used exactly this sequence of stages tables 6 and 7 show the correlations between meteor scores produced in each of these experiments and human assessments for both the arabic and the chinese datasets on both data sets adding either stemming modules to simply using one problem with conducting correlation experiments with human assessment scores at the sentence level is that the human scores are noisy that is the levels of agreement between human judges on the actual sentence level assessment scores is not extremely high to partially address this issue the human assessment scores were normalized by a group at the mitre corporation to see the effect of this noise on the correlation we computed the correlation between the meteor score and both the raw human assessments as well as the normalized human assessments table 8 shows that indeed meteor scores correlate better with normalized human assessments in other words the noise in the human assessments hurts the correlations between automatic scores and human assessmentsthe meteor metric we described and evaluated in this paper while already demonstrating great promise is still relatively simple and naive we are in the process of enhancing the metric and our experimentation in several directions train the penalty and score formulas on data the formulas for penalty and meteor score were manually crafted based on empirical tests on a separate set of development data however we plan to optimize the formulas by training them on a separate data set and choosing that formula that best correlates with human assessments on the training data use semantic relatedness to map unigrams so far we have experimented with exact mapping stemmed mapping and synonymy mapping between unigrams our next step is to experiment with different measures of semantic relatedness to match unigrams that have a related meaning but are not quite synonyms of each other more effective use of multiple reference translations our current metric uses multiple reference translations in a weak way we compare the translation with each reference separately and select the reference with the best match this was necessary in order to incorporate recall in our metric which we have shown to be highly advantageous as our matching approach improves the need for multiple references for the metric may in fact diminish nevertheless we are exploring ways in which to improve our matching against multiple references recent work by provides the mechanism for producing semantically meaningful additional quotsyntheticquot references from a small set of real references we plan to explore whether using such synthetic references can improve the performance of our metric weigh matches produced by different modules differently our current multistage approach prefers metric imposes a priority on the different matching mo dules however once all the stages have been run unigrams mapped through different mapping modules are treated the same another approach to treating different mappings differently is to apply different weights to the mappings produced by different mapping mo dules thus quotcomputerquot may match 
quotcomputerquot with a score of 1 quotcomputersquot with a score of 08 and quotworkstationquot with a score of 03 as future work we plan to develop a version of meteor that uses such weighting schemeswe acknowledge kenji sagae and shyamsundar jayaraman for their work on the meteor system we also wish to thank john henderson and william morgan from mitre for providing us with the normalized human judgment scores used for this work
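A compact sketch of the METEOR scoring just described, restricted to the exact-match stage: words are aligned greedily here (the full metric instead selects the largest alignment with fewest crossings, falling back to stemming and synonymy), then precision, recall, the recall-weighted Fmean, and the fragmentation penalty 0.5 * (chunks / matches)^3 are combined. The example sentences are the ones the paper uses to illustrate chunk counting.

```python
def meteor_exact(hypothesis, reference):
    """METEOR restricted to the exact-match stage, with a greedy alignment."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()

    # Greedy word alignment: each reference position is used at most once.
    used, alignment = set(), []
    for i, w in enumerate(hyp):
        for j, r in enumerate(ref):
            if j not in used and w == r:
                alignment.append((i, j))
                used.add(j)
                break

    matched = len(alignment)
    if matched == 0:
        return 0.0
    precision = matched / len(hyp)
    recall = matched / len(ref)
    fmean = 10 * precision * recall / (recall + 9 * precision)  # recall-weighted

    # Chunks: maximal runs of matches that are adjacent in both strings.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if i2 != i1 + 1 or j2 != j1 + 1:
            chunks += 1

    penalty = 0.5 * (chunks / matched) ** 3
    return fmean * (1 - penalty)

hyp = "the president spoke to the audience"
ref = "the president then spoke to the audience"
print(round(meteor_exact(hyp, ref), 3))   # ~0.853: 6 matches grouped into 2 chunks
```

Against multiple references, the full metric scores the hypothesis against each reference separately and keeps the best score, as described above.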
W05-0909
meteor an automatic metric for mt evaluation with improved correlation with human judgments. we describe meteor an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machineproduced translation and humanproduced reference translations. unigrams can be matched based on their surface forms stemmed forms and meanings furthermore meteor can be easily extended to include more advanced matching strategies. once all generalized unigram matches between the two strings have been found meteor computes a score for this matching using a combination of unigramprecision unigramrecall and a measure of fragmentation that is designed to directly capture how wellordered the matched words in the machine translation are in relation to the reference. we evaluate meteor by measuring the correlation between the metric scores and human judgments of translation quality. we compute the pearson r correlation value between its scores and human quality assessments of the ldc tides 2003 arabictoenglish and chinesetoenglish datasets. we perform segmentbysegment correlation and show that meteor gets an r correlation value of 0.347 on the arabic data and 0.331 on the chinese data. this is shown to be an improvement on using simply unigramprecision unigramrecall and their harmonic f1 combination. we also perform experiments to show the relative contributions of the various mapping modules
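The staged unigram mapping at the heart of METEOR can be illustrated as follows: each stage only sees words left unmatched by earlier stages. The suffix-stripping "stemmer" and tiny synonym table below are toy stand-ins for the Porter stemmer and WordNet modules the real system uses, and the greedy pairing skips METEOR's fewest-crossings tie-breaking.

```python
# Toy staged unigram matching in the spirit of METEOR: each stage only
# considers words left unmatched by earlier stages. naive_stem and
# TOY_SYNONYMS stand in for the Porter stemmer and WordNet synsets.

def naive_stem(word):
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

TOY_SYNONYMS = {("talk", "speak"), ("speak", "talk")}

def stage_match(hyp_left, ref_left, same):
    """Greedily pair remaining hypothesis/reference words accepted by `same`."""
    pairs = []
    for h in list(hyp_left):
        for r in list(ref_left):
            if same(h, r):
                pairs.append((h, r))
                hyp_left.remove(h)
                ref_left.remove(r)
                break
    return pairs

def align(hyp_words, ref_words):
    hyp_left, ref_left = list(hyp_words), list(ref_words)
    stages = [
        lambda h, r: h == r,                                          # exact
        lambda h, r: naive_stem(h) == naive_stem(r),                  # stemmed
        lambda h, r: (naive_stem(h), naive_stem(r)) in TOY_SYNONYMS,  # synonymy
    ]
    alignment = []
    for same in stages:
        alignment += stage_match(hyp_left, ref_left, same)
    return alignment

print(align("the president talks to reporters".split(),
            "the president speaks to the reporters".split()))
# Exact matches come first, then ("talks", "speaks") via the synonym stage.
```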
measuring the semantic similarity of texts this paper presents a knowledgebased method for measuring the semanticsimilarity of texts while there is a large body of previous work focused on finding the semantic similarity of concepts and words the application of these wordoriented methods to text similarity has not been yet explored in this paper we introduce a method that combines wordtoword similarity metrics into a texttotext metric and we show that this method outperforms the traditional text similarity metrics based on lexical matching measures of text similarity have been used for a long time in applications in natural language processing and related areasone of the earliest applications of text similarity is perhaps the vectorial model in information retrieval where the document most relevant to an input query is determined by ranking documents in a collection in reversed order of their similarity to the given query text similarity has been also used for relevance feedback and text classification word sense disambiguation and more recently for extractive summarization and methods for automatic evaluation of machine translation or text summarization the typical approach to finding the similarity between two text segments is to use a simple lexical matching method and produce a similarity score based on the number of lexical units that occur in both input segmentsimprovements to this simple method have considered stemming stopword removal partofspeech tagging longest subsequence matching as well as various weighting and normalization factors while successful to a certain degree these lexical matching similarity methods fail to identify the semantic similarity of textsfor instance there is an obvious similarity between the text segments i own a dog and i have an animal but most of the current text similarity metrics will fail in identifying any kind of connection between these textsthe only exception to this trend is perhaps the latent semantic analysis method which represents an improvement over earlier attempts to use measures of semantic similarity for information retrieval lsa aims to find similar terms in large text collections and measure similarity between texts by including these additional related wordshowever to date lsa has not been used on a large scale due to the complexity and computational cost associated with the algorithm and perhaps also due to the blackbox effect that does not allow for any deep insights into why some terms are selected as similar during the singular value decomposition processin this paper we explore a knowledgebased method for measuring the semantic similarity of textswhile there are several methods previously proposed for finding the semantic similarity of words to our knowledge the application of these wordoriented methods to text similarity has not been yet exploredwe introduce an algorithm that combines the wordtoword similarity metrics into a texttotext semantic similarity metric and we show that this method outperforms the simpler lexical matching similarity approach as measured in a paraphrase identification applicationgiven two input text segments we want to automatically derive a score that indicates their similarity at semantic level thus going beyond the simple lexical matching methods traditionally used for this taskalthough we acknowledge the fact that a comprehensive metric of text semantic similarity should take into account the relations between words as well as the role played by the various entities involved in the interactions 
described by each of the two texts we take a first rough cut at this problem and attempt to model the semantic similarity of texts as a function of the semantic similarity of the component wordswe do this by combining metrics of wordtoword similarity and language models into a formula that is a potentially good indicator of the semantic similarity of the two input textsthere is a relatively large number of wordtoword similarity metrics that were previously proposed in the literature ranging from distanceoriented measures computed on semantic networks to metrics based on models of distributional similarity learned from large text collectionsfrom these we chose to focus our attention on six different metrics selected mainly for their observed performance in natural language processing applications eg malapropism detection and word sense disambiguation and for their relatively high computational efficiencywe conduct our evaluation using the following word similarity metrics leacock chodorow lesk wu palmer resnik lin and jiang conrathnote that all these metrics are defined between concepts rather than words but they can be easily turned into a wordtoword similarity metric by selecting for any given pair of words those two meanings that lead to the highest concepttoconcept similaritywe use the wordnetbased implementation of these metrics as available in the wordnetsimilarity package we provide below a short description for each of these six metricsthe leacock chodorow similarity is determined as where length is the length of the shortest path between two concepts using nodecounting and d is the maximum depth of the taxonomythe lesk similarity of two concepts is defined as a function of the overlap between the corresponding definitions as provided by a dictionaryit is based on an algorithm proposed in as a solution for word sense disambiguationthe wu and palmer similarity metric measures the depth of the two concepts in the wordnet taxonomy and the depth of the least common subsumer and combines these figures into a similarity score the measure introduced by resnik returns the information content of the lcs of two concepts where ic is defined as and p is the probability of encountering an instance of concept c in a large corpusthe next measure we use in our experiments is the metric introduced by lin which builds on resniks measure of similarity and adds a normalization factor consisting of the information content of the two input concepts finally the last similarity metric we consider is jiang conrath which returns a score determined by in addition to the semantic similarity of words we also want to take into account the specificity of words so that we can give a higher weight to a semantic matching identified between two very specific words and give less importance to the similarity score measured between generic concepts while the specificity of words is already measured to some extent by their depth in the semantic hierarchy we are reinforcing this factor with a corpusbased measure of word specificity based on distributional information learned from large corporalanguage models are frequently used in natural language processing applications to account for the distribution of words in languagewhile word frequency does not always constitute a good measure of word importance the distribution of words across an entire collection can be a good indicator of the specificity of the wordsterms that occur in a few documents with high frequency contain a greater amount of discriminatory ability while 
terms that occur in numerous documents across a collection with a high frequency have inherently less meaning to a documentwe determine the specificity of a word using the inverse document frequency introduced in which is defined as the total number of documents in the corpus divided by the total number of documents that include that wordin the experiments reported in this paper we use the british national corpus to derive the document frequency counts but other corpora could be used to the same effectprovided a measure of semantic similarity between words and an indication of the word specificity we combine them into a measure of text semantic similarity by pairing up those words that are found to be most similar to each other and weighting their similarity with the corresponding specificity scorewe define a directional measure of similarity which indicates the semantic similarity of a text segment ti with respect to a text segment tjthis definition provides us with the flexibility we need to handle applications where the directional knowledge is useful and at the same time it gives us the means to handle bidirectional similarity through a simple combination of two unidirectional metricsfor a given pair of text segments we start by creating sets of openclass words with a separate set created for nouns verbs adjectives and adverbsin addition we also create a set for cardinals since numbers can also play an important role in the understanding of a textnext we try to determine pairs of similar words across the sets corresponding to the same openclass in the two text segmentsfor nouns and verbs we use a measure of semantic similarity based on wordnet while for the other word classes we apply lexical matching1for each noun in the set of nouns belonging to one of the text segments we try to identify the noun in the other text segment that has the highest semantic similarity according to one of the six measures of similarity described in section 21if this similarity measure results in a score greater than 0 then the word is added to the set of similar words for the corresponding word class wspo32the remaining word classes adjectives adverbs and cardinals are checked for lexical similarity with their counterparts and included in the corresponding word class set if a match is foundthe similarity between the input text segments ti and tj is then determined using a scoring function that combines the wordtoword similarities and the word specificity this score which has a value between 0 and 1 is a measure of the directional similarity in this case computed with respect to tithe scores from both directions can be combined into a bidirectional similarity using a simple average function sim simti simtj 2 text segment 1 the jurors were taken into the courtroom in groups of 40 and asked to fill out a questionnairewe illustrate the application of the text similarity measure with an examplegiven two text segments as shown in figure 1 we want to determine a score that reflects their semantic similarityfor illustration purposes we restrict our attention to one measure of wordtoword similarity the wu palmer metricfirst the text segments are tokenized partofspeech tagged and the words are inserted into their corresponding word class setsthe sets obtained for the given text segments are illustrated in figure 1starting with each of the two text segments and for each word in its word class sets we determine the most similar word from the corresponding set in the other text segmentas mentioned earlier we seek a 
wordnetbased semantic similarity for nouns and verbs and only lexical matching for adjectives adverbs and cardinalsthe word semantic similarity scores computed starting with the first text segment are shown in table 3text 1 text 2 maxsim idf jurors jurors 100 580 courtroom jurors 030 523 questionnaire questionnaire 100 357 groups questionnaire 029 085 were were 100 009 taken asked 100 028 asked asked 100 045 fill complete 086 129 out 0 006 40 0 139 next we use equation 7 and determine the semantic similarity of the two text segments with respect to text 1 as 06702 and with respect to text 2 as 07202finally the two figures are combined into a bidirectional measure of similarity calculated as 06952 based on equation 8although there are a few words that occur in both text segments there are also words that are not identical but closely related eg courtroom found similar to juror or fill which is related to completeunlike traditional similarity measures based on lexical matching our metric takes into account the semantic similarity of these words resulting in a more precise measure of text similarityto test the effectiveness of the text semantic similarity metric we use this measure to automatically identify if two text segments are paraphrases of each otherwe use the microsoft paraphrase corpus consisting of 4076 training pairs and 1725 test pairs and determine the number of correctly identified paraphrase pairs in the corpus using the text semantic similarity measure as the only indicator of paraphrasingin addition we also evaluate the measure using the pascal corpus consisting of 1380 testhypothesis pairs with a directional entailment for each of the two data sets we conduct two evaluations under two different settings an unsupervised setting where the decision on what constitutes a paraphrase is made using a constant similarity threshold of 05 across all experiments and a supervised setting where the optimal threshold and weights associated with various similarity metrics are determined through learning on training datain this case we use a voted perceptron algorithm 3we evaluate the text similarity metric built on top of the various wordtoword metrics introduced in section 21for comparison we also compute three baselines a random baseline created by randomly choosing a true or false value for each text pair a lexical matching baseline which only counts the number of matching words between the two text segments while still applying the weighting and normalization factors from equation 7 and a vectorial similarity baseline using a cosine similarity measure as traditionally used in information retrieval with tfidf term weightingfor comparison we also evaluated the corpusbased similarity obtained through lsa however the results obtained were below the lexical matching baseline and are not reported herefor paraphrase identification we use the bidirectional similarity measure and determine the similarity with respect to each of the two text segments in turn and then combine them into a bidirectional similarity metricfor entailment identification since this is a directional relation we only measure the semantic similarity with respect to the hypothesis we evaluate the results in terms of accuracy representing the number of correctly identified true or false classifications in the test data setwe also measure precision recall and fmeasure calculated with respect to the true values in each of the test data setstables 2 and 3 show the results obtained in the unsupervised setting when a text 
semantic similarity larger than 05 was considered to be an indicator of paraphrasing we also evaluate a metric that combines all the similarity measures using a simple average with results indicated in the combined rowthe results obtained in the supervised setting are shown in tables 4 and 5the optimal combination of similarity metrics and optimal threshold are now determined in a learning process performed on the training setunder this setting we also compute an additional baseline consisting of the most frequent label as determined from the training datafor the task of paraphrase recognition incorporating semantic information into the text similarity measure increases the likelihood of recognition significantly over the random baseline and over the lexical matching baselinein the unsupervised setting the best performance is achieved using a method that combines several similarity metrics into one for an overall accuracy of 688when learning is used to find the optimal combination of metrics and optimal threshold the highest accuracy of 715 is obtained by combining the similarity metrics and the lexical matching baseline togetherfor the entailment data set although we do not explicitly check for entailment the directional similarity computed for textual entailment recognition does improve over the random and lexical matching baselinesonce again the combination of similarity metrics gives the highest accuracy measured at 583 with a slight improvement observed in the supervised setting where the highest accuracy was measured at 589both these figures are competitive with the best results achieved during the pascal entailment evaluation although our method relies on a bagofwords approach as it turns out the use of measures of semantic similarity improves significantly over the traditional lexical matching metrics4we are nonetheless aware that a bagofwords approach ignores many of important relationships in sentence structure such as dependencies between words or roles played by the various arguments in the sentencefuture work will consider the investigation of more sophisticated representations of sentence structure such as first order predicate logic or semantic parse trees which should allow for the implementation of more effective measures of text semantic similarity
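The scoring function referred to above (equation 7) is not reproduced in this excerpt, but the description — pair each word with its most similar counterpart in the other segment, weight that similarity by the word's idf, normalize, and average the two directional scores (equation 8) — suggests a computation along the following lines. This is a minimal sketch under those assumptions; word_sim and the idf table are placeholders for the WordNet-based word-to-word metric and the BNC-derived document frequencies, and the separate per-word-class sets (nouns, verbs, adjectives, adverbs, cardinals) are ignored for brevity.

```python
def directional_sim(t_i, t_j, word_sim, idf):
    """Similarity of text segment t_i with respect to t_j:
    each word of t_i is paired with its most similar word in t_j,
    and the pairings are weighted by the word's idf (specificity)."""
    num, den = 0.0, 0.0
    for w in t_i:
        best = max((word_sim(w, w2) for w2 in t_j), default=0.0)
        num += best * idf.get(w, 0.0)
        den += idf.get(w, 0.0)
    return num / den if den else 0.0

def text_sim(t_i, t_j, word_sim, idf):
    """Bidirectional similarity: the average of the two directional scores."""
    return (directional_sim(t_i, t_j, word_sim, idf) +
            directional_sim(t_j, t_i, word_sim, idf)) / 2.0
```

With word_sim defined as exact string match this reduces to the idf-weighted lexical matching baseline described above, which is why the semantic word-to-word metrics are where the gain comes from.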
W05-1203
measuring the semantic similarity of texts this paper presents a knowledgebased method for measuring the semantic similarity of texts while there is a large body of previous work focused on finding the semantic similarity of concepts and words the application of these wordoriented methods to text similarity has not yet been explored in this paper we introduce a method that combines wordtoword similarity metrics into a texttotext metric and we show that this method outperforms the traditional text similarity metrics based on lexical matching we propose a hybrid method by combining six existing knowledgebased methods
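For the word-to-word step, the Wu & Palmer metric used in the worked example can be read off WordNet; one convenient way to compute it (not necessarily how the authors did) is through NLTK's WordNet interface, taking the maximum score over the synsets of the two words.

```python
from nltk.corpus import wordnet as wn  # requires the WordNet data: nltk.download('wordnet')

def wup_word_similarity(word1, word2, pos=wn.NOUN):
    """Highest Wu-Palmer similarity over all synset pairs of the two words.
    Returns 0.0 when either word has no synset of the given part of speech."""
    best = 0.0
    for s1 in wn.synsets(word1, pos=pos):
        for s2 in wn.synsets(word2, pos=pos):
            score = s1.wup_similarity(s2)
            if score is not None and score > best:
                best = score
    return best

# e.g. wup_word_similarity("courtroom", "jurors") gives a nonzero score
# (the example table above reports 030 for this pair), the kind of
# near-match that pure lexical overlap misses.
```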
better kbest parsing we discuss the relevance of kbest parsing torecent applications in natural language pro cessing and develop efficient algorithms for kbest trees in the framework of hypergraphparsing to demonstrate the efficiency scal ability and accuracy of these algorithms we present experiments on bikels implementation of collins lexicalized pcfg model and on chiangs cfgbased decoder for hierarchicalphrasebased translation we show in particu lar how the improved output of our algorithms has the potential to improve results from parse reranking systems and other applications many problems in natural language processing in volve optimizing some objective function over a set of possible analyses of an input stringthis set is often exponentialsized but can be compactly represented by merging equivalent subanalysesif the objective function is compatible with a packed representation then it can beoptimized efficiently by dynamic programmingfor ex ample the distribution of parse trees for a given sentence under a pcfg can be represented as a packed forest from which the highestprobability tree can be easily extractedhowever when the objective function f has no com patible packed representation exact inference would beintractableto alleviate this problem one common approach from machine learning is loopy belief propaga tion another solution is to split the computation into two phases in the first phase use some compatible objective function f to produce a kbest list which serves as an approximation to the full setthen in the second phase optimize f over all the analyses in the kbest lista typical example is discrimina tive reranking on kbest lists from a generative module such as for parsing and for translation where the reranking model has nonlocal features that cannot be computed during parsing properanother example is minimumbayesrisk decoding where assuming f defines a probability distribution over all candi dates one seeks the candidate with the highest expected score according to an arbitrary metric since in general the metric will not be com patible with the parsing algorithm the kbest lists canbe used to approximate the full distribution f a simi lar situation occurs when the parser can produce multiple derivations that are regarded as equivalent if we want the maximum a posteriori parse we have to sum over equivalent derivationsagainthe equivalence relation will in general not be compati ble with the parsing algorithm so the kbest lists can be used to approximate f as in data oriented parsing and in speech recognition another instance of this kbest approach is cascadedoptimizationnlp systems are often cascades of mod ules where we want to optimize the modulesobjectivefunctions jointlyhowever often a module is incompati ble with the packed representation of the previous module due to factors like nonlocal dependenciesso we might want to postpone some disambiguation by propagating kbest lists to subsequent phases as in joint parsing and semantic role labeling information extraction and coreference resolution and formal se mantics of tag moreover much recent work on discriminative training uses kbest lists they are sometimes used to approximate the normalization constant or partition func tion or to train a model by optimizing some metric incompatible with the packed representationfor example och showshow to train a loglinear translation model not by max imizing the likelihood of training data but maximizing the bleu score of the model on 53the datasimilarly chiang uses the kbest pars 
ing algorithm described below in a cfgbased loglinear translation model in order to learn feature weights which maximize bleufor algorithms whose packed representations aregraphs such as hidden markov models and other finitestate methods ratnaparkhis mxparse parser and many stackbased machine transla tion decoders the kbest paths problem is wellstudied in both pure algorithmic context and for surveys and nlpspeech community this pa per however aims at the kbest tree algorithms whose packed representations are hypergraphs which includes most parsersand parsingbased mt decodersany algorithm express ible as a weighted deductive system falls into this classinour experiments we apply the algorithms to the lexical ized pcfg parser of bikel which is very similar to collinsmodel 2 and to a synchronous cfg based machine translation system as pointed out by charniak and johnson the ma jor difficulty in kbest parsing is dynamic programmingthe simplest method is to abandon dynamic programming and rely on aggressive pruning to maintain tractabil ity as is used in but thisapproach is prohibitively slow and produces rather lowquality kbest lists gildea and juraf sky described an ooverhead extension for the cky algorithm and reimplemented collinsmodel 1 to obtain kbest parses with an average of 149 parses per sentencetheir algorithm turns out to be a special case of our algorithm 0 and is reported to also be prohibitively slowsince the original design of the algorithm described below we have become aware of two efforts that are very closely related to ours one by jimenez and marzal and another done in parallel to ours by charniakand johnson jimenez and marzal present an al gorithm very similar to our algorithm 3 while charniak and johnson propose using an algorithm similar to our algorithm 0 but with multiple passes to improve efficiencythey apply this method to the charniak parser to get 50best lists for reranking yielding an i am provement in parsing accuracyour work differs from jimenez and marzals in thefollowing three respectsfirst we formulate the parsing problem in the more general framework of hypergraphs making it applica ble to a very wide variety of parsing algorithms whereasjimenez and marzal define their algorithm as an exten sion of cky for cfgs in chomsky normal form onlythis generalization is not only of theoretical importance but also critical in the application to stateofthe art parsers such as and in collinsparsing model for instance the rules are dynamically generated and include unary productions mak ing it very hard to convert to cnf by preprocessing whereas our algorithms can be applied directly to these parserssecond our algorithm 3 has an improvementover jimenez and marzal which leads to a slight theoret ical and empirical speedupthird we have implementedour algorithms on top of stateoftheart largescale sta tistical parserdecoders and report extensive experimentalresults while jimenez and marzals was tested on rela tively small grammarson the other hand our algorithms are more scalable and much more general than the coarsetofine approachof charniak and johnsonin our experiments we can ob tain 10000best lists nearly as fast as 1best parsing with very modest use of memoryindeed charniak hasadopted our algorithm 3 into his own parser implemen tation and confirmed our findingsin the literature of k shortestpath problems minieka generalized the floyd algorithm in a way very similar to our algorithm 0 and lawler improvedit using an idea similar to but a little slower than the bi nary 
branching case of our algorithm 1for hypergraphs gallo et al study the shortest hyperpath problemand nielsen et al extend it to k shortest hyper pathour work differes from in two aspectsfirst we solve the problem of kbest derivations not the kbest hyperpaths although in many cases they coincide second their work assumes nonnegative costs so that they can apply dijkstralike algorithmsalthough generative models being probability based do not suffer from this problem more general models may require negative edge costs our work based on the viterbi algorithm is still appli cable as long as the hypergraph is acyclic and is used by mcdonald et al to get the kbest parsesfollowing klein and manning we use weighted directed hypergraphs as an abstraction of the probabilistic parsing problemdefinition 1an ordered hypergraph h is a tuple v e tr where v is a finite set of vertices e is a finite set of hyperarcs and r is the set of weightseach hyperarc e e is a triple 54 e t h f where h v is its head andt v is a vector of tail nodesf is a weight func tion from rt to r t v is a distinguished vertex called target vertexnote that our definition is different from those in previ ous work in the sense that the tails are now vectors rather than sets so that we can allow multiple occurrences of the same vertex in a tail and there is an ordering among the components of a taildefinition 2a hypergraph h is said to be monotonic if there is a total ordering on r such that every weightfunction f in h is monotonic in each of its arguments ac cording to ie if f rm 7r then 1 i m if ai ai then f f we also define the comparison function min to out put a if a b or b if otherwisein this paper we will assume this monotonicity whichcorresponds to the optimal substructure property in dy namic programming definition 3we denote e t to be the arity of the hyperarcif e 0 then f r is a constant and wecall h a source vertexwe define the arity of a hyper graph to be the maximum arity of its hyperarcsdefinition 4the backwardstar bs of a vertex v is the set of incoming hyperarcs e e h vthe indegree of v is bs definition 5a derivation d of a vertex v in a hyper graph h its size d and its weight w are recursively defined as follows if e bs with e 0 then d e is a derivation of v its size d 1 and its weight w f if e bs where e 0 and di is a derivation of ti for 1 i e then d ed1 deis a derivation of v its size d 1 ei1 di and itsweight w f wthe ordering on weights in r induces an ordering on derivations d d iff w wdefinition 6define di to be the ithbest derivation of v we can think of d1 dk as the components of a vector we shall denote by dthe kbest derivationsproblem for hypergraphs then is to find d given a hy pergraph v e trwith the derivations thus ranked we can introduce anonrecursive representation for derivations that is analogous to the use of backpointers in parser implementa tiondefinition 7a derivation with backpointers dof v is a tuple e jsuch that e bs and j 1 2 kethere is a onetoone correspondence between dbps of v and derivations of v e ed j1 d je accordingly we extend the weight function w to dbps w w if dd this in turn induces an ordering on dbps ddiff w wlet di denote the ithbest dbp of vwhere no confusion will arise we use the terms deriva tionand dbpinterchangeablycomputationally then the kbest problem can bestated as follows given a hypergraph h with arity a com pute d1 dk1 as shown by klein and manning hypergraphs can be used to represent the search space of most parsers more generally hypergraphs can be used to 
represent the search space of most weighted deductive sys tem for example the weighted cky algorithm given a contextfree grammar g nt p s in chomsky normal form and an input string w can be represented as a hypergraph of arity 2 as followseach item x i j is represented as a vertex v corre sponding to the recognition of nonterminal x spanning w from positions i1 through j for each production rule x yz in p and three free indices i j i excursus derivations and hyperpathsthe work of klein and manning introduces a cor respondence between hyperpaths and derivationswhen extended to the kbest case however that correspondence no longer holdsdefinition 9 given a hypergraph h v e tr a hyperpath piv of destination v v is an acyclic minimal hypergraph hpi vpi epi vr such that 1epi e 2v vpi eepi h 3you vpi you is either a source vertex or connected to a source vertex in hpias illustrated by figure 1 derivations are dif ferent from hyperpaths in the sense that in a derivation the same vertex can appear more than once with possibly different subderivations while itis represented at most once in a hyperpaththus the k best derivations problem we solve in this paper is verydifferent in nature from the kshortest hyperpaths prob lem in however the two problems do coincide when k 1 and for this reason the 1best hyperpath algorithm in is very similar to the 1best tree algo rithm in for kbest case they alsocoincide when the hypergraph is isomorphic to a case factor diagram the derivation forest of cfg parsing under the cky algorithm for instance can be represented as a cfd while the forest of earley algorithm can notan figure 2 an earley derivationnote that item appears twice 1 procedure v 2 for v v in topological order do 3 for e bs do for all incoming hyperarcs 4 d1min e 1update figure 3 the generic 1best viterbi algorithm item can appear twice in an earley derivation because of the prediction rule the kbest derivations problem has potentially more applications in tree generation which can not be modeled by hyperpathsbut de tailed discussions along this line are out of the scope of this paperthe traditional 1best viterbi algorithm traverses the hypergraph in topological order and for each vertex v calculates its 1best derivation d1 using all incoming hy perarcs e bs if we take the arity ofthe hypergraph to be constant then the overall time com plexity of this algorithm is o41 algorithm 0 navefollowing we isolate two basic operations in line 4 of the 1best algorithm that 56 can be generalized in order to extend the algorithm firstthe formation of the derivation e 1out of e best sub derivations second min which chooses the betterof two derivations we now generalize these two operations to operate on kbest listslet are ethe new multiplication operation multk is performed in three steps 1enumerate the kr derivations e j1 jr i 1 ji ktime o2sort these kr derivations time o o3select the first k elements from the sorted list of kr elementstime oso the overall time complexity of multk is owe also have to extend minto mergek which takes two vectors of length k as input and outputs the top k of the 2k elementsthis is similar to mergesort and can be done in linear time othen we only need to rewrite line 4 of the viterbi algorithm to extend it to the kbest case 4 d mergekmultk and the time complexity for this line is omaking the overall complexity o if we con sider the arity a of the hypergraph to be constant2 the overall space complexity is o since for each vertex we need to store a vector of length k in the context of 
cky parsing for cfg the 1bestviterbi algorithm has complexity o while the kbest version is o which is slower by a fac tor of o42 algorithm 1 speed up multk first we seek to exploit the fact that input vectors are all sorted and the function f is monotonic moreover we areonly interested in the top k elements of the ke possibili tiesdefine 1 to be the vector whose elements are all 1 de fine bi to be the vector whose elements are all 0 exceptbii 1as we compute pe multk we maintain a candi date set c of derivations that have the potential to be the next best derivation in the listif we picture the input as an edimensional space c contains those derivations that 2actually we do not need to sort all ke elements in order to extract the top k among them there is an efficient algorithm that can select the kth best element from the ke elements in time oso we can improve the overhead to ohave not yet been included in pe but are on the bound ary with those which haveit is initialized to e 1at each step we extract the best derivation from ccall it e jand append it to pethen e jmust be replaced in c by its neighbors e j bl 1 l e we implement c as apriority queue to make the extrac tion of its best derivation efficientat each iteration there are one emand e ioperationsif we use a binaryheap implementation for priority queues we get o time complexity for each iteration3 since we are only interested in the top k elements there are k iterations and the time complexity for a single multk is o yielding an overall time complexity of o and reducing the multiplicative overhead by a factor of o in the context of cky parsing this reduces the overhead to ofigure 5 shows the additional pseudocode needed for this algorithmit is integrated into the viterbialgorithm simply by rewriting line 4 of to in voke the function m 4 d mergekm 43 algorithm 2 combine mergek into multk we can further speed up both mergek and multk by a similar ideainstead of letting each multk generate a full k derivations for each hyperarc e and only then applying mergek to the results we can combine the candidate sets for all the hyperarcs into a single candidate setthat is we initialize c to e 1 e bs the set of all the top parses from each incoming hyperarc indeed it suffices to keep only the top k out of the bs candidates in c which would lead to a significant speedup in the case where bs k 4 now the top derivation in c is the top derivation for v then whenever we remove an element e jfrom c we replace it with the e elements e j bl 1 l e the full pseudocode for this algorithm is shown in figure 644 algorithm 3 compute multk lazilyalgorithm 2 exploited the idea of lazy computation per forming multk only as many times as necessarybut thisalgorithm still calculates a full kbest list for every ver tex in the hypergraph whereas we are only interested in 3if we maintain a minheap along with the minheap wecan reduce the periteration cost to o and with fi bonacci heap we can further improve it to be obut these techniques do not change the overall complexity when a is constant as we will see4this can be implemented by a lineartime randomized selection algorithm 57 2 2 0 1 1 2 4 2 2 3 0 1 2 1 2 4 2 2 3 4 0 1 2 4 1 2 4 figure 4 an illustration of algorithm 1 in e 2 dimensionshere k 3 is the numerical and the monotonic function f is defined as f a b italic numbers on the x and y axes are ais and b js respectivelywe want to compute the top 3 results from f with 1 i j 3in each iteration the current frontier is shown in oval boxes with the boldface denoting 
the best element among themthat element will be extracted and replaced by its two neighbors in the next iteration1 function m 2 cand e 1initialize the heap 3 pempty list the result of multk 4 while p 0 do 5 an 6 return p 7 8 procedure an 9 e jem 10 append e jto p 11 for i1 e do add the e neighbors 12 jj bi 13 if ji d and e j 0 do 8 an 9 10 procedure gc 11 tempe 1 e bs 12 candvthe top k elements in temp prune away useless candidates 13 h figure 6 algorithm 2 1 procedure lkbk is the global k 2 if d k then kth derivation already computed3 return 4 if candv is not defined then first visit of vertex v5 gcinitialize the heap 6 append them to d 1best 7 while d 0 do 8 e jdd last derivation 9 lnupdate the heap adding the successors of last derivation 10 append them to d get the next best derivation and delete it from the heap 11 12 procedure ln13 for i1 e do add the e neighbors 14 jj bi 15 lkb ji krecursively solve a subproblem 16 if ji d and e j 20to further explore the impact of collinscell limit on the quality of kbest lists we plotted average number of parses for a given sentence length generally speaking as input sentences get longer the number of parses grows but we see that the curve for collinskbest list goes down for large k we suspect this is due to the cell limit of 100 pruning awaypotentially good parses too early in the chartas sen tences get longer it is more likely that a lowerprobability parse might contribute eventually to the kbest parsesso we infer that collinskbest lists have limited quality for large k and this is demonstrated by the early convergence of its oraclereranking scoreby comparison our curves of both beam widths continue to grow with k 100all these experiments suggest that our kbest parses are of better quality than those from previous kbest parsers 7the reason the maximum is 101 and not 100 is that collins merged the 100best list using a beam of 103 with the 1best list using a beam of 104 60 86 88 90 92 94 96 98 1 2 5 10 20 30 50 70 100 o ra cle f s co re k this work with beam width 104 oracle reranking 0 2 4 6 8 10 1 2 5 10 20 30 50 70 100 pe rc en ta ge o f i m pr ov them en t o ve r 1 b es t k this work with beam width 104 relative improvement figure 9 absolutive and relative fscores of oracle reranking for the top k parses for section 23 compared to and 0 20 40 60 80 100 0 10 20 30 40 50 60 70 av er ag e nu m be r o f p ar se s sentence length this work with beam width 104 this work with beam width 103 with beam width 103 figure 10 average number of parses for each sentence length in section 23 using k100 with beam width 104 and 103 compared to 61 0001 001 01 1 10 10 100 1000 10000 100000 1e06 s e c o n ds k algorithm 2 algorithm 3figure 11 algorithm 2 compared with algorithm 3 on mt decoding taskaverage time vs k and similar quality to those from which has so far the highest fscore after rerank ing and this might lead to better results in real parse reranking52 experiment 2 mt decoderour second experiment was on a ckybased decoderfor a machine translation system imple mented in python 24 accelerated with psyco 13 we implemented algorithms 2 and 3 to computekbest english translations of mandarin sentencesbe because the cfg used in this system is large to begin with and then effectively intersected with a finitestate machine on the english side the grammar constant for this system is quite largethe decoder uses a relatively narrow beam search for efficiencywe ran the decoder on a 28 ghz xeon with 4 gb of memory on 331 sentences from the 2002 nist mteval test 
setwe tested algorithm 2 for k 2i 3 i 10 and algorithm 3 for k 2i 3 i 20for each sentence we measured the time to calculate the kbest list not including the initial 1best parsing phasewe then averaged the times over our test set to produce the graph of figure 11 which shows that algorithm 3 runs an average of about 300 times faster than algorithm 2furthermore we were able to test algorithm 3 up tok 106 in a reasonable amount of time8 8the curvature in the plot for algorithm 3 for k 1000 may be due to lack of resolution in the timing function for short timesthe problem of kbest parsing and the effect of kbest listsize and quality on applications are subjects of increas ing interest for nlp researchwe have presented herea generalpurpose algorithm for kbest parsing and ap plied it to two stateoftheart largescale nlp systems bikels implementation of collinslexicalized pcfgmodel and chiangs syn chronous cfg based decoder for machine translationwe hope that this work will encourage further investigation into whether larger and better kbest lists will improve performance in nlp applications questions which we ourselves intend to pursue as wellacknowledgements we would like to thank one of the anonymous reviewers of a previous version of this paper for pointing out the work by jimenez and marzal and eugene charniak and mark johnson for providing an early draft of their paperand very useful commentswe are also extremely grate ful to dan bikel for the help in experiments and michael collins for providing the data in his paperour thanksalso go to dan gildea jonathan graehl julia hock enmaier aravind joshi kevin knight daniel marcumitch marcus ryan mcdonald fernando pereira gior gio satta libin shen and hao zhang
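The heart of Algorithms 1-3 above is the lazy, frontier-based enumeration of the best k combinations drawn from already-sorted sub-derivation lists. A rough Python sketch of that multk operation is given below; it assumes weights are costs to be minimized and that the combination function is monotonic in each argument (the monotonicity condition the paper requires), and it is only an illustration of the idea, not the authors' implementation.

```python
import heapq

def mult_k(sublists, combine, k):
    """Lazily return the k best (lowest-cost) combinations of one element
    from each sorted sublist, where combine() is monotonic in each argument.
    Mirrors the frontier expansion of Algorithm 1."""
    if any(not lst for lst in sublists):
        return []
    start = (0,) * len(sublists)
    heap = [(combine([lst[0] for lst in sublists]), start)]
    seen = {start}
    result = []
    while heap and len(result) < k:
        cost, idx = heapq.heappop(heap)
        result.append((cost, idx))
        # Replace the popped index vector by its neighbours on the frontier.
        for dim in range(len(idx)):
            nxt = idx[:dim] + (idx[dim] + 1,) + idx[dim + 1:]
            if nxt[dim] < len(sublists[dim]) and nxt not in seen:
                seen.add(nxt)
                items = [sublists[d][j] for d, j in enumerate(nxt)]
                heapq.heappush(heap, (combine(items), nxt))
    return result

# e.g. mult_k([[1, 3, 7], [2, 4]], combine=sum, k=3)
# -> [(3, (0, 0)), (5, (0, 1)), (5, (1, 0))]
```

The binary heap plays the role of the candidate set C described above: each popped derivation is replaced by its |e| neighbours, so only on the order of k·|e| of the k^|e| possible combinations are ever materialized.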
W05-1506
better kbest parsing we discuss the relevance of kbest parsing to recent applications in natural language processing and develop efficient algorithms for kbest trees in the framework of hypergraph parsing to demonstrate the efficiency scalability and accuracy of these algorithms we present experiments on bikels implementation of collins lexicalized pcfg model and on chiangs cfgbased decoder for hierarchical phrasebased translation we show in particular how the improved output of our algorithms has the potential to improve results from parse reranking systems and other applications
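As background for the k-best algorithms summarized above, the generic 1-best Viterbi procedure they generalize (Figure 3 in the paper) simply visits the vertices of an acyclic hypergraph in topological order and relaxes every incoming hyperarc. A compact sketch, assuming lower weight is better and that each hyperarc stores its tail vertices and a weight function over their best weights:

```python
def viterbi_1best(vertices_topo, incoming, tails, weight_fn):
    """1-best derivation weight for each vertex of an acyclic hypergraph.
    vertices_topo : vertices in topological order (sources first)
    incoming[v]   : hyperarcs whose head is v
    tails[e]      : tail vertices of hyperarc e (empty for source arcs)
    weight_fn[e]  : function from the tail weights to the head weight
    """
    best = {}
    for v in vertices_topo:
        for e in incoming[v]:
            w = weight_fn[e](*(best[u] for u in tails[e]))
            if v not in best or w < best[v]:
                best[v] = w
    return best
```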
a classifierbased parser with linear runtime complexity we present a classifierbased parser that produces constituent trees in linear time the parser uses a basic bottomup shiftreduce algorithm but employs a classifier to determine parser actions instead of a grammar this can be seen as an extension of the deterministic dependency parser of nivre and scholz to full constituent parsing we show that with an appropriate feature set used in classification a very simple onepath greedy parser can perform at the same level of accuracy as more complex parsers we evaluate our parser on section 23 of the wsj section of the penn treebank and obtain precision and recall of 8754 and 8761 respectively two classifierbased deterministic dependency parsers for english have been proposed recently although they use different parsing algorithms and differ on whether or not dependencies are labeled they share the idea of greedily pursuing a single path following parsing decisions made by a classifierdespite their greedy nature these parsers achieve high accuracy in determining dependenciesalthough stateoftheart statistical parsers are more accurate the simplicity and efficiency of deterministic parsers make them attractive in a number of situations requiring fast lightweight parsing or parsing of large amounts of datahowever dependency analyses lack important information contained in constituent structuresfor example the treepath feature has been shown to be valuable in semantic role labeling we present a parser that shares much of the simplicity and efficiency of the deterministic dependency parsers but produces both dependency and constituent structures simultaneouslylike the parser of nivre and scholz it uses the basic shiftreduce stackbased parsing algorithm and runs in linear timewhile it may seem that the larger search space of constituent trees would make it unlikely that accurate parse trees could be built deterministically we show that the precision and recall of constituents produced by our parser are close to those produced by statistical parsers with higher runtime complexityone desirable characteristic of our parser is its simplicitycompared to other successful approaches to corpusbased constituent parsing ours is remarkably simple to understand and implementan additional feature of our approach is its modularity with regard to the algorithm and the classifier that determines the parsers actionsthis makes it very simple for different classifiers and different sets of features to be used with the same parser with very minimal workfinally its linear runtime complexity allows our parser to be considerably faster than lexicalized pcfgbased parserson the other hand a major drawback of the classifierbased parsing framework is that depending on node with four childrenin the transformed tree internal structure was added to the subtree rooted by the node with more than two childrenthe word dog is the head of the original np and it is kept as the head of the transformed np as well as the head of each np node the classifier used its training time can be much longer than that of other approacheslike other deterministic parsers our parser considers the problem of syntactic analysis separately from partofspeech taggingbecause the parser greedily builds trees bottomup in one pass considering only one path at any point in the analysis the task of assigning pos tags to words is done before other syntactic analysisin this work we focus only on the processing that occurs once pos tagging is completedin the sections 
that follow we assume that the input to the parser is a sentence with corresponding pos tags for each wordour parser employs a basic bottomup shiftreduce parsing algorithm requiring only a single pass over the input stringthe algorithm considers only trees with unary and binary branchingin order to use trees with arbitrary branching for training or generating them with the parser we employ an instance of the transformationdetransformation process described in in our case the transformation step involves simply converting each production with n children into n 1 binary productionstrees must be lexicalized1 so that the newly created internal structure of constituents with previous branching of more than two contains only subtrees with the same lexical head as the original constituentadditional nonterminal symbols introduced in this process are clearly markedthe transformed trees may then be used for trainingdetransformation is applied to trees produced by the parserthis involves the removal of nonterminals introduced in the transformation process producing trees with arbitrary branchingan example of transformationdetransformation is shown in figure 1the parsing algorithm involves two main data structures a stack s and a queue w items in s may be terminal nodes or subtrees of the final parse tree for the input stringitems in w are terminals corresponding to the input stringwhen parsing begins s is empty and w is initialized by inserting every word from the input string in order so that the first word is in front of the queueonly two general actions are allowed shift and reducea shift action consists only of removing the first item from w and placing it on top of s reduce actions are subdivided into unary and binary casesin a unary reduction the item on top of s is popped and a new item is pushed onto s the new item consists of a tree formed by a nonterminal node with the popped item as its single childthe lexical head of the new item is the same as the lexical head of the popped itemin a binary reduction two items are popped from s in sequence and a new item is pushed onto s the new item consists of a tree formed by a nonterminal node with two children the first item popped from s is the right child and the second item is the left childthe lexical head of the new item is either the lexical head of its left child or the lexical head of its right childif s is empty only a shift action is allowedif w is empty only a reduce action is allowedif both s and w are nonempty either shift or reduce actions are possibleparsing terminates when w is empty and s contains only one item and the single item in s is the parse tree for the input stringbecause the parse tree is lexicalized we also have a dependency structure for the sentencein fact the binary reduce actions are very similar to the reduce actions in the dependency parser of nivre and scholz but they are executed in a different order so constituents can be builtif w is empty and more than one item remain in s and no further reduce actions take place the input string is rejecteda parser based on the algorithm described in the previous section faces two types of decisions to be made throughout the parsing processthe first type concerns whether to shift or reduce when both actions are possible or whether to reduce or reject the input when only reduce actions are possiblethe second type concerns what syntactic structures are createdspecifically what new nonterminal is introduced in unary or binary reduce actions or which of the left or right children 
are chosen as the source of the lexical head of the new subtree produced by binary reduce actionstraditionally these decisions are made with the use of a grammar and the grammar may allow more than one valid action at any single point in the parsing processwhen multiple choices are available a grammardriven parser may make a decision based on heuristics or statistical models or pursue every possible action following a search strategyin our case both types of decisions are made by a classifier that chooses a unique action at every point based on the local context of the parsing action with no explicit grammarthis type of classifierbased parsing where only one path is pursued with no backtracking can be viewed as greedy or deterministicin order to determine what actions the parser should take given a particular parser configuration a classifier is given a set of features derived from that configurationthis includes crucially the two topmost items in the stack s and the item in front of the queue w additionally a set of context features is derived from a limited number of items below the two topmost items of s and following the item in front of w the specific features are shown in figure 2the classifiers target classes are parser actions that specify both types of decisions mentioned abovethese classes are let s denote the nth item from the top of the stack s and w denote the nth item from the front of the queue w features rectly related to the lexicalized constituent trees that are built during parsing while the features described in items 8 13 are more directly related to the dependency structures that are built simultaneously to the constituent structuresadditionally the head of the new subtree is the same as the head of the left child of the root node when the algorithm described in section 21 is combined with a trained classifier that determines its parsing actions as described in section 22 we have a complete classifierbased parsertraining the parser is accomplished by training its classifierto that end we need training instances that consist of sets of features paired with their classes corresponding to the correct parsing actionsthese instances can be obtained by running the algorithm on a corpus of sentences for which the correct parse trees are knowninstead of using the classifier to determine the parsers actions we simply determine the correct action by consulting the correct parse treeswe then record the features and corresponding actions for parsing all sentences in the corpus into their correct treesthis set of features and corresponding actions is then used to train a classifier resulting in a complete parserwhen parsing a sentence with n words the parser takes n shift actions because the maximum branching factor of trees built by the parser is two the total number of binary reduce actions is n 1 if a complete parse is foundif the input string is rejected the number of binary reduce actions is less than n 1therefore the number of shift and binary reduce actions is linear with the number of words in the input stringhowever the parser as described so far has no limit on the number of unary reduce actions it may takealthough in practice a parser properly trained on trees reflecting natural language syntax would rarely make more than 2n unary reductions pathological cases exist where an infinite number of unary reductions would be taken and the algorithm would not terminatesuch cases may include the observation in the training data of sequences of unary productions that cycle 
through nonterminals such as ababduring parsing it is possible that such a cycle may be repeated infinitelythis problem can be easily prevented by limiting the number of consecutive unary reductions that may be made to a finite numberthis may be the number of nonterminal types seen in the training data or the length of the longest chain of unary productions seen in the training datain our experiments we limited the number of consecutive unary reductions to three although the parser never took more than two unary reduction actions consecutively in any sentencewhen we limit the number of consecutive unary reductions to a finite number m the parser makes at most m unary reductions when parsing a sentence of length n placing this limit not only guarantees that the algorithm terminates but also guarantees that the number of actions taken by the parser is o where n is the length of the input stringthus the parser runs in linear time assuming that classifying a parser action is done in constant timeas mentioned before our parser shares similarities with the dependency parsers of yamada and matsumoto and nivre and scholz in that it uses a classifier to guide the parsing process in deterministic fashionwhile yamada and matsumoto use a quadratic runtime algorithm with multiple passes over the input string nivre and scholz use a simplified version of the algorithm described here which handles only dependency structuresadditionally our parser is in some ways similar to the maximumentropy parser of ratnaparkhi ratnaparkhis parser uses maximumentropy models to determine the actions of a shiftreducelike parser but it is capable of pursuing several paths and returning the topk highest scoring parses for a sentenceits observed time is linear but parsing is somewhat slow with sentences of length 20 or more taking more than one second to parse and sentences of length 40 or more taking more than three secondsour parser only pursues one path per sentence but it is very fast and of comparable accuracy in addition ratnaparkhis parser uses a more involved algorithm that allows it to work with arbitrary branching trees without the need of the binarization transform employed hereit breaks the usual reduce actions into smaller pieces and uses two separate passes for determining chunks and higher syntactic structures separatelyfinally there have been other deterministic shiftreduce parsers introduced recently but their levels of accuracy have been well below the stateoftheartthe parser in kalt uses a similar algorithm to the one described here but the classification task is framed differentlyusing decision trees and fewer features kalts parser has significantly faster training and parsing times but its accuracy is much lower than that of our parserkalts parser achieves precision and recall of about 77 and 76 respectively compared to our parsers 86 the parser of wong and wu uses a separate npchunking step and like ratnaparkhis parser does not require a binary transracy and time required to parse the test setthe parsers of yamada and matsumoto and nivre and scholz do not produce constituent structures only dependenciesunk indicates unknown valuesresults for mblpar and svmpar using correct pos tags formit achieves about 81 precision and 82 recall with goldstandard tags wong and wus parser is further differentiated from the other parsers mentioned here in that it does not use lexical items working only from partofspeech tagswe conducted experiments with the parser described in section 2 using two different classifiers 
tinysvm 2 and the memorybased learner timbl we trained and tested the parser on the wall street journal corpus of the penn treebank using the standard split sections 221 were used for training section 22 was used for development and tuning of parameters and features and section 23 was used for testingevery experiment reported here was performed on a pentium iv 18ghz with 1gb of rameach tree in the training set had emptynode and function tag information removed and the trees were lexicalized using similar headtable rules as those mentioned in the trees were then converted into trees containing only unary and binary branching using the binarization transform described in section 2classifier training instances of features paired with classes were extracted from the trees in the training set as described in section 23the total number of training instances was about 15 millionthe classifier in the svmbased parser uses the polynomial kernel with degree 2 following the work of yamada and matsumoto on svmbased deterministic dependency parsing and a oneagainstall scheme for multiclass classificationbecause of the large number of training instances we used yamada and matsumotos idea of splitting the training instances into several parts according to pos tags and training classifiers on each partthis greatly reduced the time required to train the svms but even with the splitting of the training set total training time was about 62 hourstraining set splitting comes with the cost of reduction in accuracy of the parser but training a single svm would likely take more than one weekyamada and matsumoto experienced a reduction of slightly more than 1 in dependency accuracy due to training set splitting and we expect that a similar loss is incurred herewhen given perfectly tagged text svmpar has labeled constituent precision and recall of 8754 and 8761 respectively and dependency accuracy of 903 over all sentences in the test setthe total time required to parse the entire test set was 11 minutesout of more than 2400 sentences only 26 were rejected by the parser for these sentences partial analyses were created by combining the items in the stack in flat structures and these were included in the evaluationpredictably the labeled constituent precision and recall obtained with automatically postagged sentences were lower at 8601 and 8615the partofspeech tagger used in our experiments was svmtool and its accuracy on the test set is 97the mblbased parser uses the ib1 algorithm with five nearest neighbors and the modified value difference metric following the work of nivre and scholz on mblbased deterministic dependency parsingmblpar was trained with all training instances in under 15 minutes but its accuracy on the test set was much lower than that of svmpar with constituent precision and recall of 800 and 802 and dependency accuracy of 863 it was also much slower than svmpar in parsing the test set taking 127 minutesin addition the total memory required for running mblpar was close to 1 gigabyte while svmpar required only about 200 megabytes table 1 shows a summary of the results of our experiments with svmpar and mblpar and also results obtained with the charniak parser the bikel implementation of the collins parser and the ratnaparkhi parserwe also include the dependency accuracy from yamada and matsumotos svmbased dependency parser and nivre and scholzs mblbased dependency parserthese results show that the choice of classifier is extremely important in this tasksvmpar and mblpar use the same algorithm and 
features and differ only on the classifiers used to make parsing decisionswhile in many natural language processing tasks different classifiers perform at similar levels of accuracy we have observed a dramatic difference between using support vector machines and a memorybased learneralthough the reasons for such a large disparity in results is currently the subject of further investigation we speculate that a relatively small difference in initial classifier accuracy results in larger differences in parser performance due to the deterministic nature of the parser we also believe classifier choice to be one major source of the difference in accuracy between nivre and scholzs parser and yamada and matsumotos parserwhile the accuracy of svmpar is below that of lexicalized pcfgbased statistical parsers it is surprisingly good for a greedy parser that runs in linear timeadditionally it is considerably faster than lexicalized pcfgbased parsers and offers a good alternative for when fast parsing is neededmblpar on the other hand performed poorly in terms of accuracy and speedwe have presented a simple shiftreduce parser that uses a classifier to determine its parsing actions and runs in linear timeusing svms for classification the parser has labeled constituent precision and recall higher than 87 when using the correct partofspeech tags and slightly higher than 86 when using automatically assigned partofspeech tagsalthough its accuracy is not as high as those of stateoftheart statistical parsers our classifierbased parser is considerably faster than several wellknown parsers that employ search or dynamic programming approachesat the same time it is significantly more accurate than previously proposed deterministic parsers for constituent structureswe have also shown that much of the success of a classifierbased parser depends on what classifier is usedwhile this may seem obvious the differences observed here are much greater than what would be expected from looking for example at results from chunkingshallow parsing future work includes the investigation of the effects of individual features the use of additional classification features and the use of different classifiersin particular the use of tree features seems appealingthis may be accomplished with svms using a tree kernel or the tree boosting classifier bact described in additionally we plan to investigate the use of the beam strategy of ratnaparkhi to pursue multiple parses while keeping the runtime linear
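The parsing loop described above is short enough to sketch directly. The version below is schematic rather than the authors' code: classify() stands for the trained SVM or memory-based classifier, extract_features() for the feature extraction of Figure 2, and the action encoding (kind, label, head side) is an illustrative choice; the cap of three consecutive unary reductions matches the limit mentioned in the text.

```python
MAX_UNARY = 3  # cap on consecutive unary reductions, as in the experiments above

def parse(words, classify, extract_features):
    """Greedy shift-reduce constituent parsing driven by a classifier.
    Stack items are (label, head_word, children); classify() returns an
    action tuple (kind, label, head_side) -- all names here are illustrative."""
    stack = []
    queue = [(None, w, []) for w in words]
    unary_run = 0
    while queue or len(stack) > 1:
        kind, label, head_side = classify(extract_features(stack, queue))
        if kind == "SHIFT" and queue:
            stack.append(queue.pop(0))
            unary_run = 0
        elif kind == "REDUCE_UNARY" and stack and unary_run < MAX_UNARY:
            child = stack.pop()
            stack.append((label, child[1], [child]))
            unary_run += 1
        elif kind == "REDUCE_BINARY" and len(stack) >= 2:
            right, left = stack.pop(), stack.pop()
            head = left[1] if head_side == "left" else right[1]
            stack.append((label, head, [left, right]))
            unary_run = 0
        else:
            break  # no legal action: reject, or return a partial analysis
    return stack
```

Because each word is shifted once, each binary reduction shrinks the stack, and unary runs are capped, the number of actions is linear in the sentence length, which is where the linear runtime comes from.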
W05-1513
a classifierbased parser with linear runtime complexity we present a classifierbased parser that produces constituent trees in linear time the parser uses a basic bottomup shiftreduce algorithm but employs a classifier to determine parser actions instead of a grammar this can be seen as an extension of the deterministic dependency parser of nivre and scholz to full constituent parsing we show that with an appropriate feature set used in classification a very simple onepath greedy parser can perform at the same level of accuracy as more complex parsers we evaluate our parser on section 23 of the wsj section of the penn treebank and obtain precision and recall of 8754 and 8761 respectively we propose a constituency based parsing method to determine sentence dependency structures
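A hypothetical extract_features helper compatible with the loop sketched earlier might look like the following; the real feature set (Figure 2 of the paper) also includes part-of-speech tags, numbers of children, and dependency-oriented features, so this is only a reduced illustration.

```python
def extract_features(stack, queue, n_stack_context=2, n_queue_context=2):
    """Flat feature dictionary from the current parser configuration.
    Only a subset of the kinds of features described above (constituent
    labels, head words, queue look-ahead) is shown."""
    feats = {}
    for i in range(2 + n_stack_context):      # S(0), S(1), plus context below them
        if i < len(stack):
            label, head, _children = stack[-(i + 1)]
            feats[f"s{i}_label"] = label or "TERMINAL"
            feats[f"s{i}_head"] = head
        else:
            feats[f"s{i}_label"] = feats[f"s{i}_head"] = "NONE"
    for j in range(1 + n_queue_context):       # W(0) plus look-ahead words
        feats[f"w{j}"] = queue[j][1] if j < len(queue) else "NONE"
    feats["stack_size"] = len(stack)
    feats["queue_size"] = len(queue)
    return feats
```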
extracting opinions opinion holders and topics expressed in online news media text this paper presents a method for identifying an opinion with its holder and topic given a sentence from online news media texts we introduce an approach of exploiting the semantic structure of a sentence anchored to an opinion bearing verb or adjective this method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from framenet we decompose our task into three phases identifying an opinionbearing word labeling semantic roles related to the word in the sentence and then finding the holder and the topic of the opinion word among the labeled semantic roles for a broader coverage we also employ a clustering technique to predict the most probable frame for a word which is not defined in framenet our experimental results show that our system performs significantly better than the baseline the challenge of automatically identifying opinions in text automatically has been the focus of attention in recent years in many different domains such as news articles and product reviewsvarious approaches have been adopted in subjectivity detection semantic orientation detection review classification and review miningdespite the successes in identifying opinion expressions and subjective wordsphrases there has been less achievement on the factors closely related to subjectivity and polarity such as opinion holder topic of opinion and intertopicinteropinion relationshipsthis paper addresses the problem of identifying not only opinions in text but also holders and topics of opinions from online news articlesidentifying opinion holders is important especially in news articlesunlike product reviews in which most opinions expressed in a review are likely to be opinions of the author of the review news articles contain different opinions of different opinion holders by grouping opinion holders of different stance on diverse social and political issues we can have a better understanding of the relationships among countries or among organizationsan opinion topic can be considered as an object an opinion is aboutin product reviews for example opinion topics are often the product itself or its specific features such as design and quality in news articles opinion topics can be social issues governments acts new events or someone is opinions as for opinion topic identification little research has been conducted and only in a very limited domain product reviewsin most approaches in product review mining given a product its frequently mentioned features are first collected and then used as anchor pointsin this study we extract opinion topics from news articlesalso we do not prelimit topics in advancewe first identify an opinion and then find its holder and topicwe define holder as an entity who holds an opinion and topic as what the opinion is aboutin this paper we propose a novel method that employs semantic role labeling a task of identifying semantic roles given a sentencewe deproceedings of the workshop on sentiment and subjectivity in text pages 18 sydney july 2006 c2006 association for computational linguistics compose the overall task into the following steps in this paper we focus on the first three subtasksthe main contribution of this paper is to present a method that identifies not only opinion holders but also opinion topicsto achieve this goal we utilize framenet data by mapping target words to opinionbearing words and mapping semantic roles to holders and topics and then use 
them for system trainingwe demonstrate that investigating semantic relations between an opinion and its holder and topic is crucial in opinion holder and topic identificationthis paper is organized as follows section 2 briefly introduces related work both in sentiment analysis and semantic role labelingsection 3 describes our approach for identifying opinions and labeling holders and topics by utilizing framenet1 data for our tasksection 4 reports our experiments and results with discussions and finally section 5 concludesthis section reviews previous works in both sentiment detection and semantic role labelingsubjectivity detection is the task of identifying subjective words expressions and sentences identifying subjectivity helps separate opinions from fact which may be useful in question answering summarization etcsentiment detection is the task of determining positive or negative sentiment of words phrases and sentences or documents building on this work more sophisticated problems such as opinion holder identification have also been studied identify opinion propositions and holderstheir work is similar to ours but different because their opinion is restricted to propositional opinion and mostly to verbsanother related works are both of them use the mpqa corpus2 but they only identify opinion holders not topicsas for opinion topic identification little research has been conducted and only in a very limited domain product reviews present product mining algorithms with extracting certain product features given specific product typesour paper aims at extracting topics of opinion in general news media textsemantic role labeling is the task of identifying semantic roles such as agent patient speaker or topic in a sentencea statistical approach for semantic role labeling was introduced by their system learned semantic relationship among constituents in a sentence from framenet a large corpus of semantically handannotated datathe framenet annotation scheme is based on frame semantics frames are defined as schematic representations of situations involving various frame elements such as participants props and other conceptual roles for example given a sentence jack built a new house out of bricks a semantic role labeling system should identify the roles for the verb built such as agent jack built created_entity a new house component out of bricks3in our study we build a semantic role labeling system as an intermediate step to label opinion holders and topics by training it on opinionbearing frames and their frame elements in framenetfor the goal of this study extracting opinions from news media texts with their holders and topics we utilize framenet datathe basic idea of our approach is to explore how an opinion holder and a topic are semantically related to an opinion bearing word in a sentencegiven a sentence and an opinion bearing word our method identifies frame elements in the sentence and searches which frame element corresponds to the opinion holder and which to the topicthe example in figure 1 shows the intuition of our algorithmwe decompose our task in 3 subtasks collect opinion words and opinionrelated frames we describe the subtask of collecting opinion words and related frames in 3 phasesin this study we consider an opinionbearing word is a key indicator of an opiniontherefore we first identify opinionbearing word from a given sentence and extract its holder and topicsince previous studies indicate that opinionbearing verbs and adjectives are especially efficient for opinion 
identification we focus on creating a set of opinionbearing verbs and adjectiveswe annotated 1860 adjectives and 2011 verbs4 by classifying them into positive negative and neutral classeswords in the positive class carry positive valence whereas 4 these were randomly selected from 8011 english verbs and 19748 english adjectives those in negative class carry negative valencewords that are not opinionbearing are classified as neutralnote that in our study we treat word sentiment classification as a threeway classification problem instead of a twoway classification problem by adding the third class neutral we can prevent the classifier assigning either positive or negative sentiment to weak opinionbearing wordfor example the word central that hatzivassiloglou and mckeown marked as a positive adjective is not classified as positive by our systeminstead we mark it as neutral since it is a weak clue for an opinionfor the same reason we did not consider able classified as a positive word by general inquirer5 a sentiment word lexicon as a positive opinion indicatorfinally we collected 69 positive and 151 negative verbs and 199 positive and 304 negative adjectiveswe collected frames related to opinion words from the framenet corpuswe used framenet ii which contains 450 semantic frames and more than 3000 frame elements a frame consists of lexical items called lexical unit and related frame elementsfor instance lus in attack frame are verbs such as assail assault and attack and nouns such as invasion raid and strikeframenet ii contains approximately 7500 lexical units and over 100000 annotated sentencesfor each word in our opinion word set described in phase 1 we find a frame to which the word belongs49 frames for verbs and 43 frames for adjectives are collectedtable 1 shows examples of selected frames with some of the lexical units those frames coverfor example our system found the frame desiring from opinionbearing words want wish hope etcfinally we collected 8256 and 11877 sentences related to selected opinion bearing frames for verbs and adjectives respectivelyphase 3 framenet expansion even though phase 2 searches for a correlated frame for each verb and adjective in our opinionbearing word list not all of them are defined in framenet datasome words such as criticize and harass in our list have associated frames whereas others such as vilify and maltreat do not have those for a word in case 2 we use a clustering algorithms cbc to predict the closest frame of undefined word from existing framescbc was developed based on the distributional hypothesis that words which occur in the same contexts tend to be similarusing cbc for example our clustering module computes lexical similarity between the word vilify in case 2 and all words in case 1then it picks criticize as a similar word so that we can use for vilify the frame judgment_communication to which criticize belongs and all frame elements defined under judgment_ communicationto find a potential holder and topic of an opinion word in a sentence we first label semantic roles in a sentencemodeling we follow the statistical approaches for semantic role labeling which separate the task into two steps identify candidates of frame elements and assign semantic roles for those candidates like their intuition we treated both steps as classification problemswe first collected all constituents of the given sentence by parsing it using the charniak parserthen in step 1 we classified candidate constituents of frame elements from noncandidatesin step 2 each 
selected candidate was thus classified into one of frame element types as a learning algorithm for our classification model we used maximum entropy for system development we used mega model optimization package6 an implementation of me modelsdata we collected 8256 and 11877 sentences which were associated to opinion bearing frames for verbs and adjectives from framenet annotation dataeach sentence in our dataset contained a frame name a target predicate and frame elements labeled with element typeswe divided the data into 90 for training and 10 for testfeatures used table 2 describes features that we used for our classification modelthe target word is an opinionbearing verb or adjective which is associated to a framewe used the charniak parser to get a phrase type feature of a frame element and the parse tree path featurewe determined a head word of a phrase by an algorithm using a tree head table7 position feature by the order of surface words of a frame element and the target word and the voice feature by a simple patternframe name for a target word was selected by methods described in phase 2 and phase 3 in subsection 31after identifying frame elements in a sentence our system finally selects holder and topic from those frame elementsin the example in table 1 the frame desiring has frame elements such as event experiencer location_of_event focal_participant among these fes we can consider that experiencer can be a holder and focal_participant can be a topic we manually built a mapping table to map fes to holder or topic using as support the fe definitions in each opinion related frame and the annotated sample sentencesthe goal of our experiment is first to see how our holder and topic labeling system works on the framenet data and second to examine how it performs on online news media textthe first data set consists of 10 of data described in subsection 32 and the second is manually annotated by 2 humanswe report experimental results for both test setsgold standard in total testset 1 contains 2028 annotated sentences collected from framenet data set we measure the system performance using precision recall and fscorebaseline for the baseline system we applied two different algorithms for sentences which have opinionbearing verbs as target words and for those that have opinionbearing adjectives as target wordsfor verbs baseline system labeled a subject of a verb as a holder and an object as a topic for adjectives the baseline marked the subject of a predicate adjective as a holder for the topics of adjectives the baseline picks a modified word if the target adjective is a modifier and a subject word if the adjective is a predicateresult table 3 and 4 show evaluation results of our system and the baseline system respectivelyour system performed much better than the baseline system in identifying topic and holder for both sets of sentences with verb target words and those with adjectivesespecially in recognizing topics of target opinionbearing words our system improved fscore from 304 to 665 for verb target words and from 382 to 703 for adjectivesit was interesting to see that the intuition that a subject of opinionbearing verb is a holder and an object is a topic which we applied for the baseline achieved relatively good fscore however our system obtained much higher fscore holder identification task achieved higher fscore than topic identification which implies that identifying topics of opinion is a harder taskwe believe that there are many complicated semantic relations between 
opinionbearing words and their holders and topics that simple relations such as subject and object relations are not able to capturefor example in a sentence her letter upset me simply looking for the subjective and objective of the verb upset is not enough to recognize the holder and topicit is necessary to see a deeper level of semantic relations her letter is a stimulus and me is an experiencer of the verb upsetgold standard two humans 8 annotated 100 sentences randomly selected from news media textsthose news data is collected from online news sources such as the new york times un office for the coordination of humanitarian affairs and bbc news9 which contain articles about various international affairesannotators identified opinionbearing sentences with marking opinion word with its holder and topic if they existedthe interannotator agreement in identifying opinion sentences was 82baseline in order to identify opinionbearing sentences for our baseline system we used the opinionbearing word set introduced in phase 1 in subsection 31if a sentence contains an opinionbearing verb or adjective the baseline system started looking for its holder and topicfor holder and topic identification we applied the same baseline algorithm as described in subsection 41 to testset 2result note that testset 1 was collected from sentences of opinionrelated frames in framenet and therefore all sentences in the set contained either opinionbearing verb or adjective however sentences in testset 2 were randomly collected from online news media pages and therefore not all of them are opinionbearingwe first evaluated the task of opinionbearing sentence identificationtable 5 shows the system resultswhen we mark all sentences as opinionbearing it achieved 43 and 38 of accuracy for the annotation result of human1 and human2 respectivelyour system performance is comparable with the unique assignmentwe measured the holder and topic identification system with precision recall and fscoreas we can see from table 6 our system achieved much higher precision than the baseline system for both topic and holder identification taskshowever we admit that there is still a lot of room for improvementthe system achieved higher precision for topic identification whereas it achieved higher recall for holder identificationin overall our system attained higher fscore in holder identification task including the baseline systembased on fscore we believe that identifying topics of opinion is much more difficult than identifying holdersit was interesting to see the same phenomenon that the baseline system mainly assuming that subject and object of a sentence are likely to be opinion holder and topic achieved lower scores for both holder and topic identification tasks in testset 2 as in testset 1this implies that more sophisticated analysis of the relationship between opinion words and their topics and holders is crucialwe observed several difficulties in evaluating holder and topic identificationfirst the boundary of an entity of holder or topic can be flexiblefor example in sentence senator titus olupitan who sponsored the bill wants the permission not only senator titus olupitan but also senator titus olupitan who sponsored the bill is an eligible answersecond some correct holders and topics which our system found were evaluated wrong even if they referred the same entities in the gold standard because human annotators marked only one of them as an answerin the future we need more annotated data for improved evaluationthis paper 
presented a methodology to identify an opinion with its holder and topic given a sentence in online news media textswe introduced an approach of exploiting semantic structure of a sentence anchored to an opinion bearing verb or adjectivethis method uses semantic role labeling as an intermediate step to label an opinion holder and topic using framenet dataour method first identifies an opinionbearing word labels semantic roles related to the word in the sentence and then finds a holder and a topic of the opinion word among labeled semantic rolesthere has been little previous study in identifying opinion holders and topics partly because it requires a great amount of annotated datato overcome this barrier we utilized framenet data by mapping target words to opinionbearing words and mapping semantic roles to holders and topicshowever framenet has a limited number of words in its annotated corpusfor a broader coverage we used a clustering technique to predict a most probable frame for an unseen wordour experimental results showed that our system performs significantly better than the baselinethe baseline system results imply that opinion holder and topic identification is a hard taskwe believe that there are many complicated semantic relations between opinionbearing words and their holders and topics which simple relations such as subject and object relations are not able to capturein the future we plan to extend our list of opinionbearing verbs and adjectives so that we can discover and apply more opinionrelated framesalso it would be interesting to see how other types of part of speech such as adverbs and nouns affect the performance of the system
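a minimal python sketch (not from the paper) of the final step described above, in which labeled frame elements are mapped to an opinion holder and topic through a hand-built table. only the desiring mapping (experiencer → holder, focal_participant → topic) is stated in the text; the judgment_communication entries below are illustrative assumptions.

    # sketch of the frame-element-to-holder/topic mapping step; the
    # judgment_communication entries are assumed, not taken from the paper
    FE_ROLE_MAP = {
        "Desiring": {"Experiencer": "holder", "Focal_participant": "topic"},
        "Judgment_communication": {"Communicator": "holder", "Evaluee": "topic"},
    }

    def extract_holder_topic(frame, labeled_elements):
        """labeled_elements: (fe_type, text) pairs produced by the semantic
        role labeler for one opinion-bearing target word."""
        mapping = FE_ROLE_MAP.get(frame, {})
        holder, topic = None, None
        for fe_type, text in labeled_elements:
            role = mapping.get(fe_type)
            if role == "holder" and holder is None:
                holder = text
            elif role == "topic" and topic is None:
                topic = text
        return holder, topic

    print(extract_holder_topic("Desiring",
        [("Experiencer", "the senator"), ("Focal_participant", "the permission")]))
    # -> ('the senator', 'the permission')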
W06-0301
extracting opinions opinion holders and topics expressed in online news media text. this paper presents a method for identifying an opinion with its holder and topic given a sentence from online news media texts. we introduce an approach of exploiting the semantic structure of a sentence anchored to an opinion bearing verb or adjective. this method uses semantic role labeling as an intermediate step to label an opinion holder and topic using data from framenet. we decompose our task into three phases: identifying an opinionbearing word, labeling semantic roles related to the word in the sentence, and then finding the holder and the topic of the opinion word among the labeled semantic roles. for a broader coverage we also employ a clustering technique to predict the most probable frame for a word which is not defined in framenet. our experimental results show that our system performs significantly better than the baseline. we identify opinion holders and targets with semantic role labeling
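as a rough illustration of the lexicon lookup used to flag opinion-bearing sentences (phase 1 and the news-text baseline above), the following sketch assumes small placeholder word sets standing in for the 69/151 opinion verbs and 199/304 opinion adjectives the authors collected.

    # placeholder lexicon; the verbs echo examples mentioned in the paper,
    # the adjectives are purely illustrative
    OPINION_VERBS = {"want", "wish", "hope", "criticize", "vilify", "harass"}
    OPINION_ADJS = {"terrible", "wonderful", "unfair"}

    def is_opinion_bearing(tokens):
        """tokens: lower-cased (ideally lemmatized) words of one sentence."""
        return any(t in OPINION_VERBS or t in OPINION_ADJS for t in tokens)

    print(is_opinion_bearing("we hope the bill passes".split()))     # True
    print(is_opinion_bearing("the meeting starts at noon".split()))  # False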
automatic identification of noncompositional multiword expressions using latent semantic analysis making use of latent semantic analysis we explore the hypothesis that local linguistic context can serve to identify multiword expressions that have noncompositional meanings we propose that vectorsimilarity between distribution vectors associated with an mwe as a whole and those associated with its constitutent parts can serve as a good measure of the degree to which the mwe is compositional we present experiments that show that low similarity does in fact correlate with noncompositionality identifying noncompositional multiword expressions is an important subtask for any computational system and significant attention has been paid to practical methods for solving this problem in recent years while corpusbased techniques for identifying collocational multiword expressions by exploiting statistical properties of the cooccurrence of the component words have become increasingly sophisticated it is well known that mere cooccurrence does not well distinguish compositional from noncompositional expressions while expressions which may potentially have idiomatic meanings can be identified using various lexical association measures other techniques must be used to determining whether or not a particular mwe does in fact have an idiomatic usein this paper we explore the hypothesis that the local linguistic context can provide adequate cues for making this determination and propose one method for doing thiswe characterize our task on analogy with wordsense disambiguation as noted by schutze wsd involves two related tasks the general task of sense discriminationdetermining what senses a given word hasand the more specific task of sense selectiondetermining for a particular use of the word in context which sense was intendedfor us the discrimination task involves determining for a given expression whether it has a noncompositional interpretation in addition to its compositional interpretation and the selection task involves determining in a given context whether a given expression is being used compositionally or noncompostionallythe german expression ins wasser fallen for example has a noncompositional interpretation on which it means to fail to happen and a compositional interpretation on which it means to fall into water 1 the discrimination task then is to identify ins wasser fallen as an mwe that has an idiomatic meaning and the selection task is to determine that in it is the compositional meaning that is intended while in it is the noncompositional meaningfollowing schutze and landauer dumais our general assumption is that the meaning of an expression can be modelled in terms of the words that it cooccurs with its cooccurrence signatureto determine whether a phrase has a noncompositional meaning we compute whether the cooccurrence signature of the phrase is systematically related to the cooccurrence signatures of its partsour hypothesis is that a systematic relationship is indicative of compositional interpretation and lack of a systematic relationship is symptomatic of noncompositionalityin other words we expect compositional mwes to appear in contexts more similar to those in which their component words appear than do noncompositional mwesin this paper we describe two experiments that test this hypothesisin the first experiment we seek to confirm that the local context of a known idiom can reliably distinguish idiomatic uses from nonidiomatic usesin the second experiment we attempt to determine 
whether the difference between the contexts in which an mwe appears and the contexts in which its component words appear can indeed serve to tell us whether the mwe has an idiomatic usein our experiments we make use of lexical semantic analysis as a model of contextsimilarity since this technique is often used to model meaning we will speak in terms of meaning similiarityit should be clear however that we are only using the lsa vectorsderived from context of occurrence in a corpusto model meaning and meaning composition in a very rough wayour hope is simply that this rough model is sufficient to the task of identifying noncompositional mwesrecent work which attempts to discriminate between compositional and noncompositional mwes include lin who used mutualinformation measures identify such phrases baldwin et al who compare the distribution of the head of the mwe with the distribution of the entire mwe and vallada moiron tiedemann who use a wordalignment strategy to identify noncompositional mwes making use of parallel textsschone jurafsky applied lsa to mwe identification althought they did not focus on distinguishing compositional from noncompositional mweslins goal like ours was to discriminate noncompositional mwes from compositional mweshis method was to compare the mutual information measure of the constituents parts of an mwe with the mutual information of similar expressions obtained by substituting one of the constituents with a related word obtained by thesaurus lookupthe hope was that a significant difference between these measures as in the case of red tape compared to yellow tape or orange tape would be characteristic of noncompositional mwesalthough intuitively appealing lins algorithm only achieves precision and recall of 157 and 137 respectively schone jurafsky evaluated a number of cooccurrencebased metrics for identifying mwes showing that as suggested by lins results there was need for improvement in this areasince lsa has been used in a number of meaningrelated language tasks to good effect they had hoped to improve their results by identify noncompositional expressions using a method similar to that which we are exploring herealthough they do not demonstrate that this method actually identifies noncompositional expressions they do show that the lsa similarity technique only improves mwe identification minimallybaldwin et al focus more narrowly on distinguishing english nounnoun compounds and verbparticle constructions which are compositional from those which are not compositionaltheir approach is methodologically similar to ours in that they compute similarity on the basis of contexts of occurrance making use of lsatheir hypothesis is that high lsabased similarity between the mwe and each of its constituent parts is indicative of compositionalitythey evaluate their technique by assessing the correlation between high semantic similarity of the constituents of an mwe to the mwe as a whole with the likelihood that the mwe appears in wordnet as a hyponym of one of the constituentswhile the expected correlation was not attested we suspect this to be more an indication of the inappropriateness of the evaluation used than of the faultiness of the general approachlin baldwin et al and schone jurafsky all use as their gold standard either idiom dictionaries or wordnet while schone jurafsky show that wordnet is as good a standard as any of a number of machine readable dictionaries none of these authors shows that the mwes that appear in wordnet are generally noncompositional in 
the relevant senseas noted by sag et al many mwes are simply institutionalized phrases whose meanings are perfectly compositional but whose frequency of use make them highly salientit is certainly clear that many mwes that appear in wordnetexamples being law student medical student college manare perfectly compositional semanticallyzhai in an early attempt to apply statistical methods to the extraction of noncompositional mwes made use of what we take to be a more appropriate evaluation metricin his comparison among a number of different heuristics for identifying noncompositional nounnoun compounds zhai did his evaluation by applying each heuristic to a corpus of items handclassified as to their compositionalityalthough zhais classification appears to be problematic we take this to be the appropirate paradigm for evaluation in this domain and we adopt it herein our work we made use of the word space model of similiarty and extended it slightly to mwesin this framework meaning is modeled as an ndimensional vector derived via singular value decomposition from word cooccurrence counts for the expression in question a technique frequently referred to as latent semantic analysis this kind of dimensionality reduction has been shown to improve performance in a number of textbased domains for our experiments we used a local german newspaper corpus2 we built our lsa model with the infomap software package3 using the 1000 most frequent words not on the 102word handgenerated stop list as the contentbearing dimension words the 20000 most frequent content words were assigned row values by counting occurrences within a 30word windowsvd was used to reduce the dimensionality from 1000 to 100 resulting in 100 dimensional meaningvectors for each wordin our experiments mwes were assigned meaningvectors as a whole using the same proceedurefor meaning similarity we adopt the standard measure of cosine of the angle between two vectors as a metric on this metric two expressions are taken to be unrelated if their meaning vectors are orthogonal and synonymous if their vectors are parallel figure 1 illustrates such a vector space in two dimensionsnote that the meaning vector for loffel spoon is quite similar to that for essen to eat but distant from sterben to die while the meaning vector for the mwe den loffel abgeben is close to that for sterbenindeed den loffel abgeben like to kick the bucket is a noncompositional idiom meaning to diewhile den loffel abgeben is used almost exclusively in its idiomatic sense many mwes are used regularly in both their idiomatic and in their literal sensesabout two thirds of the uses of the mwe ins wasser fallen in our corpus are idiomatic uses and the remaing one third are literal usesin our first experiment we tested the hypothesis that these uses could reliably be distinguished using distributionbased models of their meaningfor this experiment we manually annotated the 67 occurrences of ins wasser fallen in our corpus as to whether the expression was used compositionally or noncompositionally 4 marking this distinction we generate an lsa meaning vectors for the compositional uses and an lsa meaning vector for the noncompositional uses of ins wasser fallenthe vectors turned out as expected to be almost orthogonal with a cosine of the angle between them of 002this result confirms that the linguistic contexts in which the literal and the idiomatic use of ins wasser fallen appear are very different indicatingnot surprisinglythat the semantic difference between the literal meaning 
and the idiomatic meaning is reflected in the way these these phrases are usedour next task was to investigate whether this difference could be used in particular cases to determine what the intended use of an mwe in a particular context wasto evaluate this we did a 10fold crossvalidation study calculating the literal and idiomatic vectors for ins wasser fallen on the basis of the training data and doing a simple nearest neighbor classification of each memember of the test set on the basis of the meaning vectors computed from its local context our result of an average accurace of 72 for our lsabased classifier far exceeds the simple maximumlikelihood baseline of 58in the final part of this experiment we compared the meaning vector that was computed by summing over all uses of ins wasser fallen with the literal and idiomatic vectors from abovesince idiomatic uses of ins wasserfallen prevail in the corpus it is not surprisingly that the similarity to the literal vector is much than similarity to the idiomatic vector to summarize experiment i which is a variant of a supervised phrase sense disambiguation task demonstrates that we can use lsa to distinguish between literal and the idiomatic usage of an mwe by using local linguistic context4this was a straightforward task two annotators annotated independently with very high agreementkappa score of over 095 occurrences on which the annotators disagreed were thrown outof the 64 occurrences we used 37 were idiomatic and 27 were literalin our second experiment we sought to make use of the fact that there are typically clear distributional difference between compositional and noncompositional uses of mwes to determine whether a given mwe indeed has noncompositional uses at allin this experiment we made use of a test set of german prepositionnounverb collocation candidate database whose extraction is described by krenn and which has been made available electronically5 from this database only word combinations with frequency of occurrence more than 30 in our test corpus were consideredour task was to classify these 81 potential mwes according whether or not thay have an idiomatic meaningto accomplish this task we took the following approachwe computed on the basis of the distribution of the components of the mwe an estimate for the compositional meaning vector for the mwewe then compared this to the actual vector for the mwe as a whole with the expectation mwes which indeed have noncompositinoal uses will be distinguished by a relatively low vector similarity between the estimated compositional meaning vector and the actual meaning vectorin other words small similarity values should be diagnostic for the presense of noncompositinoal uses of the mwewe calculated the estimated compositional meaning vector by taking it to be the sum of the meaning vector of the parts ie the compositional meaning of an expression w1w2 consisting of two words is taken to be sum of the meaning vectors for the constituent words6 in order to maximize the independent contribution of the constituent words the meaning vectors for these words were always computed from contexts in which they appear alone we call the estimated compositional meaning vector the composed vector7 the comparisons we made are illustrated in figure 2 where vectors for the mwe auf die strecke bleiben to fall by the wayside and the words strecke route and bleiben to stay are mapped into two dimensions8here we see that the linear combination of the component words of the mwe is clearly distinct from that of 
the mwe as a wholeas a further illustration of the difference between the composed vector and the mwe vector in table 2 we list the words whose meaning vector is most similar to that of the mwe auf dis strecke bleiben along with their similarity values and in table 3 we list those words whose meaning vector is most similar to the composed vectorthe semantic differences among these two classes are readily apparent0769663 0732372 0731411 0717294 0704939 strecken to lengthen 0743309 fahren to drive 0741059 laufen to run 0726631 fahrt drives 0712352 schließen to close 0704364 we recognize that the composed vector is clearly nowhere near a perfect model of compositional meaning in the general casethis can be illustrated by considering for example the mwe fire breathingthis expression is clearly compositional as it denotes the process of producing combusting exhalation exactly what the semantic combination rules of the english would predictnevertheless the distribution of fire breathing is quite unrelated to that of its constituents fire and breathing despite these principled objections the composed vector provides a useful baseline for our investigationwe should note that a number of researchers in the lsa tradition have attempted to provide more compelling combinatory functions to capture the nonlinearity of linguistic compositional interpretation as a check we chose at random a number of simple clearlycompositional word combinations we expected that on the whole these would evidence a very high similarity measure when compared with their associated composed vector and this is indeed the case as shown in table 1we also compared the literal and nonliteral vectors for ins wasser fallen from the first experiment with the composed vector computed out of the meaning vectors for wasser and for fallen9 the difference is not large but nevertheless the composed vector is more similar to the literal vector than to the nonliteral vector extending to the general case our task was to compare the composed vector to the actual vector for all the mwes in our test setthe resulting cosine similarity values range from 001 to 080our hope was that there would be a similarity threshold for distinguishing mwes that have noncompositional interpretations from those that do notindeed of the mwes with a similarity values of under 01 just over half are mwes which were handannotated to have nonliteral uses10 it used in their idiomatic sense particularly frequently in contexts in which elements of the literal meaning were also present11 is clear then that the technique described is prima facie capable of detecting idiomatic mwesto evaluate the method we used the careful manual annotation of the pnv database described by krenn as our gold standardby adopting different threshholds for the classification decision we obtained a range of results table 4 illustrates this rangethe fscore measure is maximized in our experiments by adopting a similarity threshold of 02this means that mwes which have a meaning vector whose cosine is under this value when compared with with the combined vector should be classified as having a nonliteral meaningto compare our method with that proposed by baldwin et al we applied their method to our materials generating lsa vectors for the component content words in our candidate mwes and comparing their semantic similarity to the mwes lsa vector as a whole with the expectation being that low similarity between the mwe as a whole and its component words is indication of the noncompositionality of the 
mwethe results are given in table 5it is clear that while baldwin et als expectation is borne out in the case of the constituent noun it is not in the case of the constituent verb even in the case of the nouns however the results are for the most part markedly inferior to the results we achieved using the composed vectorsthere are a number of issues that complicate the workability of the unsupervised technique described herewe rely on there being enough noncompositional uses of an idiomatic mwe in the corpus that the overall meaning vector for the mwe reflects this usageif the literal meaning is overwhelmingly frequent this will reduce the effectivity of the method significantlya second problem concerns the relationship between the literal and the nonliteral meaningour technique relies on these meaning being highly distinctif the meanings are similar it is likely that local context will be inadequate to distinguish a compositional from a noncompositional use of the expressionin our investigation it became apparent in fact that in the newspaper genre highly idiomatic expressions such as ins wasser fallen were oftento summarize in order to classify an mwe as noncompositional we compute an approximation of its compositional meaning and compare this with the meaning of the expression as it is used on the wholeone of the obvious improvements to the algorithm could come from better models for simulating compositional meaninga further issue that can be explored is whether linguistic preprocessing would influence the resultswe worked only on raw text datathere is some evidence that part of speech tagging might improve results in this kind of taskwe also only considered local word sequencescertainly some recognition of the syntactic structure would improve resultsthese are however more general issues associated with mwe processingrather promising results were attained using only local context howeverour study shows that the fscore measure is maximized by taking as threshold for distinguishing noncompositional phrases from compositional ones a cosine similarity value somewhere between 0102an important point to be explored is that compositionality appears to come in degreesas bannard and lascarides have noted mwes do not fall cleanly into the binary classes of compositional and noncompositional expressions but populate a continuum between the two extremes while our experiment was designed to classify mwes the technique described here of course provides a means if rather a blunt one for quantifying the degreee of compositonality of an expression
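a minimal sketch of the composed-vector test described above, assuming the 100-dimensional lsa vectors have already been built; the 0.2 cosine threshold is the value reported to maximize f-score, and the toy 3-dimensional vectors are only for illustration.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def is_noncompositional(mwe_vec, component_vecs, threshold=0.2):
        """component_vecs: lsa vectors of the constituent words, estimated
        from contexts in which they occur outside the mwe."""
        composed = np.sum(component_vecs, axis=0)  # estimated compositional meaning
        return cosine(mwe_vec, composed) < threshold

    # toy 3-dimensional illustration (the real vectors had 100 dimensions)
    mwe = np.array([0.0, 1.0, 0.0])
    parts = [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.0, 0.1])]
    print(is_noncompositional(mwe, parts))  # True: contexts unlike those of its parts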
W06-1203
automatic identification of noncompositional multiword expressions using latent semantic analysis. making use of latent semantic analysis we explore the hypothesis that local linguistic context can serve to identify multiword expressions that have noncompositional meanings. we propose that vectorsimilarity between distribution vectors associated with an mwe as a whole and those associated with its constituent parts can serve as a good measure of the degree to which the mwe is compositional. we present experiments that show that low similarity does in fact correlate with noncompositionality. we devise a supervised method in which we compute the meaning vectors for the literal and nonliteral usages of a given expression in the training data. we use a supervised learning method to distinguish between compositional and noncompositional uses of an expression by using contextual information in the form of latent semantic analysis vectors
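the supervised literal-vs-idiomatic classification from the first experiment can be sketched as a nearest-centroid classifier over lsa context vectors; vector construction (infomap, svd to 100 dimensions) is assumed to have been done already, and the toy vectors below are illustrative.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def train(literal_contexts, idiomatic_contexts):
        """average the context vectors of the annotated training occurrences
        to obtain one literal and one idiomatic class vector."""
        return (np.mean(literal_contexts, axis=0),
                np.mean(idiomatic_contexts, axis=0))

    def classify(context_vec, literal_vec, idiomatic_vec):
        if cosine(context_vec, literal_vec) >= cosine(context_vec, idiomatic_vec):
            return "literal"
        return "idiomatic"

    lit, idi = train([np.array([1.0, 0.1]), np.array([0.9, 0.2])],
                     [np.array([0.1, 1.0]), np.array([0.2, 0.8])])
    print(classify(np.array([0.15, 0.9]), lit, idi))  # idiomatic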
spmt statistical machine translation with syntactified target language phrases we introduce spmt a new class of statistical translation models that use syntactified target language phrases the spmt models outperform a state of the art phrasebased baseline model by 264 bleu points on the nist 2003 chineseenglish test corpus and 028 points on a humanbased quality metric that ranks translations on a scale from 1 to 5 during the last four years various implementations and extentions to phrasebased statistical models have led to significant increases in machine translation accuracyalthough phrasebased models yield highquality translations for language pairs that exhibit similar word order they fail to produce grammatical outputs for language pairs that are syntactically divergentrecent models that exploit syntactic information of the source language have been shown to produce better outputs than phrasebased systems when evaluated on relatively small scale domain specific corporaand syntaxinspired formal models in spite of being trained on significantly less data have shown promising results when compared on the same test sets with mature phrasebased systemsto our knowledge though no previous research has demonstrated that a syntaxbased statistical translation system could produce better results than a phrasebased system on a largescale wellestablished open domain translation taskin this paper we present such a systemour translation models rely upon and naturally exploit submodels that have been initially developed in phrasebased systems for choosing target translations of source language phrases and use new syntaxbased translation and target language submodels for assembling target phrases into wellformed grammatical outputsafter we introduce our models intuitively we discuss their formal underpinning and parameter training in section 2in section 3 we present our decoder and in section 4 we evaluate our models empiricallyin section 5 we conclude with a brief discussionafter being exposed to 100m words of parallel chineseenglish texts current phrasebased statistical machine translation learners induce reasonably reliable phrasebased probabilistic dictionariesfor example our baseline statistical phrasebased system learns that with high probabilities the chinese phrases astro nauts france and russia and comingfrom can be translated into english as astronautscosmonauts france and russiafrance and russian and coming fromfrom respectively1 unfortunately when given as input chinese sentence 1 our phrasebased system produces the output shown in 2 and not the translation in 3 which correctly orders the phrasal translations into a grammatical sequencewe believe this happens because the distortionreordering models that are used by stateoftheart phrasebased systems which exploit phrase movement and ngram target to increase readability in this paper we represent chinese words using fully capitalized english glosses and english words using lowercased letters language models are too weak to help a phrasebased decoder reorder the target phrases into grammatical outputsone method for increasing the ability of a decoder to reorder target language phrases is that of decorating them with syntactic constituent informationfor example we may make explicit that the chinese phrase astro nauts may be translated into english as a noun phrase np that the phrase france and russia may be translated into a complex nounphrase np cc np that the phrase comingfrom may be translated into a partially realized verb phrase that is 
looking for a noun phrase to its right in order to be fully realized vp pp npx0 and that the chinese particle pde when occurring between a chinese string that was translated into a verb phrase to its left and another chinese string that was translated into a noun phrase to its right vpx1 pde npx0 should be translated to nothing while forcing the reordering of the two constituents npif all these translation rules were available to a decoder that derives english parse trees starting from chinese input strings this decoder could produce derivations such as that shown in figure 2because our approach uses translation rules with syntactified target language phrases we call it spmtwe are interested to model a generative process that explains how english parse trees 7r and their associated english string yields e foreign sentences f and wordlevel alignments a are producedwe assume that observed triplets are generated by a stochastic process similar to that used in data oriented parsing models for example if we assume that the generative process has already produced the top np node in figure 2 then the corresponding partial english parse tree foreignsource string and wordlevel alignment could be generated by the rule derivation r4 where each rule is assumed to have some probabilitythe extended tree to string transducers introduced by knight and graehl provide a natural framework for expressing the tree to string transformations specific to our spmt modelsthe transformation rules we plan to exploit are equivalent to onestate xrs topdown transducers with look ahead which map subtree patterns to stringsfor example rule r3 in figure 1 can be applied only when one is in a state that has a vp as its syntactic constituent and the tree pattern vp pp np immediately underneaththe rule application outputs the string comingfrom as the transducer moves to the state coindexed by x0 the outputs produced from the new state will be concatenated to the right of the string comingfromsince there are multiple derivations that could lead to the same outcome the probability of a tuple is obtained by summing over all derivations oi e o that are consistent with the tuple c _ the probability of each derivation bz is given by the product of the probabilities of all the rules p in the derivation in order to acquire the rules specific to our model and to induce their probabilities we parse the english side of our corpus with an inhouse implementation of collins parsing models and we wordalign the parallel corpus with the giza2 implementation of the ibm models we use the automatically derived tuples in order to induce xrs rules for several modelsin our simplest model we assume that each tuple in our automatically annotated corpus could be produced by applying a combination of minimally syntactified lexicalized phrasebased compatible xrs rules and minimalnecessary nonlexicalized xrs ruleswe call a rule nonlexicalized whenever it does not have any directly aligned sourcetotarget wordsrules r9r12 in figure 1 are examples of nonlexicalized rulesminimally syntactified lexicalized phrasebasedcompatible xrs rules are extracted via a simple algorithm that finds for each foreign phrase fzj the smallest xrs rule that is consistent with the foreign phrase fzj the english syntactic tree 7r and the alignment athe algorithm finds for each foreignsource phrase span its projected span on the english side and then traverses the english parse tree bottom up until it finds a node that subsumes the projected spanif this node has children that fall 
outside the projected span then those children give rise to rules that have variablesfor example if the tuple shown in figure 2 is in our training corpus for the foreignsource phrases france france and france and russia and astro nauts we extract the minimally syntactified lexicalized phrasebasedcompatible xrs rules r5 r6 r2 and r7 in figure 1 respectivelybecause as in phrasebased mt all our rules have continuous phrases on both the source and target language sides we call these phrasebased compatible xrs rulessince these lexicalized rules are not sufficient to explain an entire tuple we also extract the required minimalnecessary nonlexicalized xrs rulesthe minimal nonlexicalized rules that are licensed by the tuple in figure 2 are labeled r4 r9 r10 r11 and r12 in figure 1to obtain the nonlexicalized xrs rules we compute the set of all minimal rules by applying the algorithm proposed by galley et al and then remove the lexicalized ruleswe remove the galley et als lexicalized rules because they are either already accounted for by the minimally syntactified lexicalized phrasebasedcompatible xrs rules or they subsume noncontinuous sourcetarget phrase pairsit is worth mentioning that in our framework a rule is defined to be minimal with respect to a foreignsource language phrase ie it is the minimal xrs rule that yields that source phrasein contrast in the work of galley et al a rule is defined to be minimal when it is necessary in order to explain a tupleunder spmt model 1 the tree in figure 2 can be produced for example by the following derivation r4r3we hypothesize that composed rules ie rules that can be decomposed via the application of a sequence of model 1 rules may improve the performance of an spmt systemfor example although the minimal model 1 rules r11 and r13 are sufficient for building an english np on top of two nps separated by the chinese conjunction and the composed rule r14 in figure 1 accomplishes the same result in only one stepwe hope that the composed rules could play in spmt the same role that phrases play in stringbased translation modelsto test our hypothesis we modify our rule extraction algorithm so that for every foreign phrase fz we extract not only a minimally syntactified lexicalized xrs rule but also one composed rulethe composed rule is obtained by extracting the rule licensed by the foreignsource phrase alignment english parse tree and the first multichild ancestor node of the root of the minimal ruleour intuition is that composed rules that involve the application of more than two minimal rules are not reliablefor example for the tuple in figure 2 the composed rule that we extract given the foreign phrases and and comingfrom are respectively labeled as rules r14 and r15 in figure 1under the spmt composed model 1 the tree in figure 2 can be produced for example by the following derivation r15 r14 r12in many instances the tuples in our training corpus exhibit alignment patterns that can be easily handled within a phrasebased smt framework but that become problematic in the spmt models discussed until nowconsider for example the tuple fragment in figure 3when using a phrasebased translation model one can easily extract the phrase pair and use it during the phrasebased model estimation phrase and in decodinghowever within the xrs transducer framework that we use it is impossible to extract an equivalent syntactified phrase translation rule that subsumes the same phrase pair because valid xrs translation rules cannot be multiheadedwhen faced with this constraint 
one has several options our spmt model 2 adopts the third option by rewriting on the fly the english parse tree for each foreignsource phrase and alignment that lead to nonsyntactifiable phrase pairsthe rewriting process adds new rules to those that can be created under the spmt model 1 constraintsthe process creates one xrs rule that is headed by a pseudo nonsyntactic nonterminal symbol that subsumes the target phrase and corresponding multiheaded syntactic structure and one sibling xrs rule that explains how the nonsyntactic nonterminal symbol can be combined with other genuine nonterminals in order to obtain genuine parse treesin this view the foreignsource phrase the mutual and corresponding alignment in figure 3 licenses the rules npb nn jj the mutual and npb x0 x1 even though the foreign word understanding is aligned to an english word outside the npb consituentthe name of the nonsyntactic nonterminal reflects the intuition that the english phrase the mutual corresponds to a partially realized npb that needs an nn to its right in order to be fully realizedour hope is that the rules headed by pseudo nonterminals could make available to an spmt system all the rules that are typically available to a phrasebased system and that the sibling rules could provide a sufficiently robust generalization layer for integrating pseudo partially realized constituents into the overall decoding processthe spmt composed model 2 uses all rule types described in the previous modelsfor each model we extract all rule instances that are licensed by a symmetrized gizaaligned parallel corpus and the constraints we put on the modelwe condition on the root node of each rule and use the rule counts f and a basic maximum likelihood estimator to assign to each rule type a conditional probability it is unlikely that this joint probability model can be discriminative enough to distinguish between good and bad translationswe are not too concerned though because in practice we decode using a larger set of submodels given the way all our lexicalized xrs rules have been created one can safely strip out the syntactic information and end up with phrasetophrase translation rulesfor example in stringtostring world rule r5 in figure 1 can be rewritten as fiance france and rule r6 can be rewritten as fiance and france andwhen one analyzes the lexicalized xrs rules in this manner it is easy to associate with them any of the submodel probability distributions that have been proven useful in statistical phrasebased mtthe nonlexicalized rules are assigned probability distributions under these submodels as well by simply assuming a null phrase for any missing lexicalized source or target phrasein the experiments described in this paper we use the following submodels syntaxbasedlike submodels all these models are combined loglinearly during decodingthe weights of the models are computed automatically using a variant of the maximum bleu training procedure proposed by och the phrasebasedlike submodels have been proved useful in phrasebased approaches to smt the first two syntaxbased submodels implement a fused translation and lexical grounded distortion model and a syntaxbased distortion model the indicator submodels are used to determine the extent to which our system prefers lexicalized vs nonlexicalized rules simple vs composed rules and high vs low count ruleswe decode with each of our spmt models using a straightforward bottomup ckystyle decoder that builds english syntactic constituents on the top of chinese sentencesthe 
decoder uses a binarized representation of the rules which is obtained via a syncronous binarization procedure the ckystyle decoder computes the probability of english syntactic constituents in a bottom up fashion by loglinearly interpolating all the submodel scores described in section 23the decoder is capable of producing nbest derivations and nbest lists which are used for maximum bleu training when decoding the test corpus the decoder returns the translation that has the most probable derivation in other words the sum operator in equation 4 is replaced with an argmaxcombining multiple mt outputs to increase performance is in general a difficult task when significantly different engines compete for producing the best outputsin our case combining multiple mt outputs is much simpler because the submodel probabilities across the four models described here are mostly identifical with the exception of the root normalized and cfglike submodels which are scaled differently since model 2 composed has for example more rules than model 1 the root normalized and cfglike submodels have smaller probabilities for identical rules in model 2 composed than in model 1we compare these two probabilities across the submodels and we scale all model probabilities to be compatible with those of model 2 composedwith this scaling procedure into place we produce 6000 nonunique nbest lists for all sentences in our development corpus using all spmt submodelswe concatenate the lists and we learn a new combination of weights that maximizes the bleu score of the combined nbest list using the same development corpus we used for tuning the individual systems we use the new weights in order to rerank the nbest outputs on the test corpuswe evaluate our models on a chinese to english machine translation taskwe use the same training corpus 1387m words of parallel chineseenglish data released by ldc in order to train several statisticalbased mt systems in all systems we use a rule extraction algorithm that limits the size of the foreignsource phrases to four wordsfor all systems we use a kneserney smoothed trigram language model trained on 23 billion words of englishas development data for the spmt systems we used the sentences in the 2002 nist development corpus that are shorter than 20 words we made this choice in order to finish all experiments in time for this submissionthe pbmt system used all sentences in the 2002 nist corpus for developmentas test data we used the 2003 nist test settable 1 shows the number of stringtostring or treetostring rules extracted by each system and the performance on both the subset of sentences in the test corpus that were shorter than 20 words and the entire test corpusthe performance is measured using the bleu metric on lowercased tokenized outputsreferencesthe results show that the spmt models clearly outperform the phrasebased systems the 95 confidence intervals computed via bootstrap resampling in all cases are around 1 bleu pointthe results also show that the simple system combination procedure that we have employed is effective in our settingthe improvement on the development corpus transfers to the test setting as wella visual inspection of the outputs shows significant differences between the outputs of the four modelsthe models that use composed rules prefer to produce outputs by using mostly lexicalized rules in contrast the simple m1 and m2 models produce outputs in which content is translated primarily using lexicalized rules and reorderings and word insertions are explained 
primarily by the nonlexical rulesit appears that the two strategies are complementary succeeding and failing in different instanceswe believe that this complementarity and the overcoming of some of the search errors in our decoder during the model rescoring phase explain the success of the system combination experimentswe suspect that our decoder still makes many search errorsin spite of this the sptm outputs are still significantly better than the pbmt outputswe also tested whether the bleu score improvements translate into improvements that can be perceived by humansto this end we randomly selected 138 sentences of less than 20 words from our development corpus we expected the translation quality of sentences of this size to be easier to assess than that of sentences that are very longwe prepared a webbased evaluation interface that showed for each input sentence the evaluated mt systems were the six systems shown in table 1 and one of the reference translationsthe reference translation presented as automatically produced output was selected from the set of four reference translations provided by nist so as to be representative of human translation qualitymore precisely we chose the second best reference translation in the nist corpus according to its bleu score against the other three reference translationsthe seven outputs were randomly shufied and presented to three english speakers for assessmentthe judges who participated in our experiment were instructed to carefully read the three reference translations and seven machine translation outputs and assign a score between 1 and 5 to each translation output on the basis of its qualityhuman judges were told that the translation quality assessment should take into consideration both the grammatical iuency of the outputs and their translation adequacytable 2 shows the average scores obtained by each system according to each judgefor convenience the table also shows the bleu scores of all systems on three reference translationsthe results in table 2 show that the human judges are remarkably consistent in preferring the syntaxbased outputs over the phrasebased outputson a 1 to 5 quality scale the difference between the phrasebased and syntaxbased systems was on average between 02 and 03 pointsall differences between the phrasebased baseline and the syntaxbased outputs were statistically significantfor example when comparing the phrasebased baseline against the combined system the improvement in human scores was significant at p 404e6the results also show that the ldc reference translations are far from being perfectalthough we selected from the four references the second best according to the bleu metric this human reference was judged to be at a quality level of only 467 on a scale from 1 to 5most of the translation errors were iuency errorsalthough the human outputs had most of the time the right meaning the syntax was sometimes incorrectin order to give readers a iavor of the types of reorderings enabled by the spmt models we present in table 3 several translation outputs produced by the phrasebased baseline and the combined spmt systemthe outputs were selected to reflect both positive and negative effects of largescale reorderingsthe spmt models are similar to the models proposed by chiang and galley et al if we analyze these three models in terms of expressive power the galley et al model is more expressive than the spmt models which in turn are more expressive than chiangs modelthe xrs formalism utilized by galley et al allows for the 
use of translation rules that have multilevel target tree annotations and discontinuous source language phrasesthe spmt models are less general they use translation rules that have multilevel target tree annotations but require that the source language phrases are continuousthe syncronous grammar formalism utilized by chiang is stricter than spmt since it allows only for singlelevel target tree annotationsthe parameters of the spmt models presented in this paper are easier to estimate than those of galley et als and can easily exploit and expand on previous research in phrasebased machine translationalso the spmt models yield significantly fewer rules that the model of galley et al in contrast with the model proposed by chiang the spmt models introduced in this paper are fully grounded in syntax this makes them good candidates for exploring the impact that syntaxbased language models could have on translation performancefrom a machine translation perspective the spmt translation model family we have proposed in this paper is promisingto our knowledge we are the first to report results that show that a syntaxbased system can produce results that are better than those produced by a strong phrasebased system in experimental conditions similar to those used in largescale wellestablished independent evaluations such as those carried out annually by nistalthough the number of syntaxbased rules used by our models is smaller than the number of phrasebased rules used in our stateoftheart baseline system the spmt models produce outputs of higher qualitythis feature is encouraging because it shows that the syntactified translation rules learned in the spmt models can generalize better than the phrasebased ruleswe were also pleased to see that the bleu score improvements going from the phrase to the syntaxbased models as well as the bleu improvements going from the simple syntaxbased models to the combined models system are fully consistent with the human qualitative judgments in our subjective evaluationsthis correlation suggests that we can continue to use the bleu metric to further improve our models and systemsacknowledgementsthis research was partially supported by the national institute of standards and technologys advanced technology program award 70nanb4h3050 to language weaver inc
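the rule probabilities described above are estimated by conditioning on the root node of each rule with a simple maximum-likelihood estimator; a minimal sketch follows, with rules represented as plain strings and the example rules schematic rather than taken from the system.

    from collections import Counter, defaultdict

    def estimate_rule_probs(rule_instances):
        """rule_instances: iterable of (root_nonterminal, rule_string) pairs,
        one per rule occurrence extracted from the aligned, parsed corpus."""
        rules = list(rule_instances)
        rule_counts = Counter(rules)
        root_counts = Counter(root for root, _ in rules)
        probs = defaultdict(dict)
        for (root, rule), c in rule_counts.items():
            probs[root][rule] = c / root_counts[root]  # p(rule | root)
        return probs

    probs = estimate_rule_probs([
        ("NP", "NP(x0:NPB CC(and) x1:NPB) -> x0 AND x1"),
        ("NP", "NP(x0:NPB CC(and) x1:NPB) -> x0 AND x1"),
        ("NP", "NP(NNS(astronauts)) -> ASTRO-NAUTS"),
    ])
    print(probs["NP"])  # the first rule gets 2/3, the second 1/3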
W06-1606
spmt statistical machine translation with syntactified target language phrases. we introduce spmt a new class of statistical translation models that use syntactified target language phrases. the spmt models outperform a state of the art phrasebased baseline model by 2.64 bleu points on the nist 2003 chineseenglish test corpus and 0.28 points on a humanbased quality metric that ranks translations on a scale from 1 to 5
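the submodel scores of the spmt systems above are combined log-linearly during decoding and rescoring; the sketch below shows that interpolation, with placeholder feature names and weights rather than the tuned values from the paper.

    import math

    def derivation_score(features, weights):
        """weighted sum of submodel feature values (log-probabilities and
        counts); in the paper the weights are tuned by maximum-bleu training."""
        return sum(weights[name] * value for name, value in features.items())

    features = {"log_p_rule_given_root": math.log(0.30),
                "log_p_lexical": math.log(0.05),
                "log_lm": math.log(1e-4),
                "target_length": 7.0}
    weights = {"log_p_rule_given_root": 0.2, "log_p_lexical": 0.3,
               "log_lm": 0.4, "target_length": -0.1}
    print(derivation_score(features, weights))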
phrasetable smoothing for statistical machine translation we discuss different strategies for smoothing the phrasetable in statistical mt and give results over a range of translation settings we show that any type of smoothing is a better idea than the relativefrequency estimates that are often used the best smoothing techniques yield consistent gains of approximately 1 according to the bleu metric smoothing is an important technique in statistical nlp used to deal with perennial data sparseness and empirical distributions that overfit the training corpussurprisingly however it is rarely mentioned in statistical machine translationin particular stateoftheart phrasebased smt relies on a phrasetablea large set of ngram pairs over the source and target languages along with their translation probabilitiesthis table which may contain tens of millions of entries and phrases of up to ten words or more is an excellent candidate for smoothingyet very few publications describe phrasetable smoothing techniques in detailin this paper we provide the first systematic study of smoothing methods for phrasebased smtalthough we introduce a few new ideas most methods described here were devised by others the main purpose of this paper is not to invent new methods but to compare methodsin experiments over many language pairs we show that smoothing yields small but consistent gains in translation performancewe feel that this paper only scratches the surface many other combinations of phrasetable smoothing techniques remain to be testedwe define a phrasetable as a set of source phrases s and their translations t along with associated translation probabilities p and tsthese conditional distributions are derived from the joint frequencies c of sourcetarget phrase pairs observed in a wordaligned parallel corpustraditionally maximumlikelihood estimation from relative frequencies is used to obtain conditional probabilities eg p c es c and p are symmetrical we will usually refer only to p for brevitythe most obvious example of the overfitting this causes can be seen in phrase pairs whose constituent phrases occur only once in the corpusthese are assigned conditional probabilities of 1 higher than the estimated probabilities of pairs for which much more evidence exists in the typical case where the latter have constituents that cooccur occasionally with other phrasesduring decoding overlapping phrase pairs are in direct competition so estimation biases such as this one in favour of infrequent pairs have the potential to significantly degrade translation qualityan excellent discussion of smoothing techniques developed for ngram language models may be found in phrasetable smoothing differs from ngram lm smoothing in the following ways proceedings of the 2006 conference on empirical methods in natural language processing pages 5361 sydney july 2006 c2006 association for computational linguistics c 01 however probability mass is to model p we use a standard loglinear reserved for the set of unseen translations implying that probability mass is subtracted from the seen translationswe propose various ways of dealing with these special features of the phrasetable smoothing problem and give evaluations of their performance within a phrasebased smt systemthe paper is structured as follows section 2 gives a brief description of our phrasebased smt system section 3 presents the smoothing techniques used section 4 reviews previous work section 5 gives experimental results and section 6 concludes and discusses future workgiven a 
source sentence s our phrasebased smt system tries to find the target sentence tˆ that is the most likely translation of s to make search more efficient we use the viterbi approximation and seek the most likely combination of t and its alignment a with s rather than just the most likely t where a tk are target phrases such that t t1 tk sk are source phrases such that s sj1 sjk and sk is the translation of the kth target phrase tk where each fi is a feature function and weights ai are set using ochs algorithm to maximize the systems bleu score on a development corpusthe features used in this study are the length of t a singleparameter distortion penalty on phrase reordering in a as described in phrase translation model probabilities and trigram language model probabilities log p using kneserney smoothing as implemented in the srilm toolkit phrase translation model probabilities are features of the form ie we assume that the phrases sk specified by a are conditionally independent and depend only on their aligned phrases tkthe forward phrase probabilities p are not used as features but only as a filter on the set of possible translations for each source phrase s that matches some ngram in s only the 30 topranked translations t according to p are retainedto derive the joint counts c from which p and p are estimated we use the phrase induction algorithm described in with symmetrized word alignments generated using ibm model 2 smoothing involves some recipe for modifying conditional distributions away from pure relativefrequency estimates made from joint counts in order to compensate for data sparsityin the spirit of figure 211 pg38 smoothing can be seen as a way of combining the relativefrequency estimate which is a model with high complexity high variance and low bias with another model with lower complexity lower variance and high bias in the hope of obtaining better performance on new datathere are two main ingredients in all such recipes some probability distribution that is smoother than relative frequencies and some technique for combining that distribution with relative frequency estimateswe will now discuss both these choices the distribution for carrying out smoothing and the combination techniquein this discussion we use p to denote relative frequency distributionsone can distinguish between two approaches to smoothing phrase tablesblackbox techniques do not look inside phrases but instead treat them as atomic objects that is both the s and the t in the expression p are treated as units about which nothing is known except their countsin contrast glassbox methods break phrases down into their component wordsthe blackbox approach which is the simpler of the two has received little attention in the smt literaturean interesting aspect of this approach is that it allows one to implement phrasetable smoothing techniques that are analogous to lm smoothing techniques by treating the problem of estimating p as if it were the problem of estimating a bigram conditional probabilityin this paper we give experimental results for phrasetable smoothing techniques analogous to goodturing fixeddiscount kneserney and modified kneserney lm smoothingglassbox methods for phrasetable smoothing have been described by other authors see section 33these authors decompose p into a set of lexical distributions p by making independence assumptions about the words s in sthe other possibility which is similar in spirit to ngram lm lowerorder estimates is to combine estimates made by replacing words in t with 
wildcards as proposed in section 34although we explored a variety of blackbox and glassbox smoothing distributions we only tried two combination techniques linear interpolation which we used for blackbox smoothing and loglinear interpolation which we used for glassbox smoothingfor blackbox smoothing we could have used a backoff scheme or an interpolation schemebackoff schemes have the form pb is a smooth backoff distribution and τ is a threshold above which counts are considered reliabletypically τ 1 and ph is version of p modified to reserve some probability mass for unseen eventsinterpolation schemes have the general form where α and β are combining coefficientsas noted in a key difference between interpolation and backoff is that the former approach uses information from the smoothing distribution to modify p for higherfrequency events whereas the latter uses it only for lowfrequency events since for phrasetable smoothing better prediction of unseen events has no direct impactonly seen events are represented in the phrasetable and thus hypothesized during decodinginterpolation seemed a more suitable approachfor combining relativefrequency estimates with glassbox smoothing distributions we employed loglinear interpolationthis is the traditional approach for glassbox smoothing to illustrate the difference between linear and loglinear interpolation consider combining two bernoulli distributions p1 and p2 using each method where qi 1 pisetting p2 05 to simulate uniform smoothing gives ploglin p1αα q1αthis is actually less smooth than the original distribution p1 it preserves extreme values 0 and 1 and makes intermediate values more extremeon the other hand plinear αp1 2 which has the opposite properties it moderates extreme values and tends to preserve intermediate valuesan advantage of loglinear interpolation is that we can tune loglinear weights so as to maximize the true objective function for instance bleu recall that our translation model is itself loglinear with weights set to minimize errorsin fact a limitation of the experiments described in this paper is that the loglinear weights for the glassbox techniques were optimized for bleu using ochs algorithm while the linear weights for blackbox techniques were set heuristicallyobviously this gives the glassbox techniques an advantage when the different smoothing techniques are compared using bleuimplementing an algorithm for optimizing linear weights according to bleu is high on our list of prioritiesthe preceding discussion implicitly assumes a single set of counts c from which conditional distributions are derivedbut as phrases of different lengths are likely to have different statistical properties it might be worthwhile to break down the global phrasetable into separate phrasetables for each value of t for the purposes of smoothingany similar strategy that does not split up 1sc 01 for any fixed t can be applied to any smoothing schemethis is another idea we are eager to try soonwe now describe the individual smoothing schemes we have implementedfour of them are blackbox techniques goodturing and three fixeddiscount techniques two of them are glassbox techniques zensney noisyor and koehnochmarcu ibm smoothingour experiments tested not only these individual schemes but also some loglinear combinations of a blackbox technique with a glassbox techniquegoodturing smoothing is a wellknown technique in which observed counts c are modified according to the formula and gale 1991we first take the log of the observed values and then use a 
linear least squares fit to log nc as a function of log c to ensure that the result stays close to the reliable values of nc for large c error terms are weighted by c ie c2 where n are the fitted valuesour implementation pools all counts c together to obtain n it follows directly from that the total count mass assigned to unseen phrase pairs is cgn0 n1 which we approximate by nithis mass is distributed among contexts t in proportion to c giving final estimates where p c et cfixeddiscount methods subtract a fixed discount d from all nonzero counts and distribute the resulting probability mass according to a smoothing distribution we use an interpolated version of fixeddiscount proposed by rather than the original backoff versionfor phrase pairs with nonzero counts this distribution has the general form where pb is the smoothing distributionnormalization constraints fix the value of α where cg is a modified count value used to replace c in subsequent relativefrequency estimates and nc is the number of events having count c an intuitive motivation for this formula is that it approximates relativefrequency estimates made by successively leaving out each event in the corpus and then averaging the results a practical difficulty in implementing goodturing smoothing is that the nc are noisy for large c for instance there may be only one phrase pair that occurs exactly c 347623 times in a large corpus and no pair that occurs c 347624 times leading to cg 0 clearly not what is intendedour solution to this problem is based on the technique described in is the number of phrases s for which c 0we experimented with two choices for the smoothing distribution pbthe first is a plain unigram p and the second is the kneserney lowerorder distribution ie the proportion of unique target phrases that s is associated with where n1 is defined analogously to n1intuitively the idea is that source phrases that cooccur with many different target phrases are more likely to appear in new contextsfor both unigram and kneserney smoothing distributions we used a discounting coefficient derived by on the basis of a leaveoneout analysis d n1for the kneserney smoothing distribution we also tested the modified kneserney extension suggested in in which specific coefficients dc are used for small count values c up to a maximum of three for c 2 and c 3 we used formulas given in that paperthe two glassbox techniques that we considered involve decomposing source phrases with independence assumptionsthe simplest approach assumes that all source words are conditionally independent so that we implemented two variants for p that are described in previous work describe a noisyor combination where sj is the probability that sj is not in the translation of t and p is a lexical probability obtain p from smoothed relativefrequency estimates in a wordaligned corpusour implementation simply uses ibm1 probabilities which obviate further smoothingthe noisyor combination stipulates that sj should not appear in s if it is not the translation of any of the words in tthe complement of this proposed in to say that sj should appear in s if it is the translation of at least one of the words in t where aj is a set of likely alignment connections for sjin our implementation of this method we assumed that aj 1 i ie the set of all connections and used ibm1 probabilities for pwe mentioned earlier that lm ngrams have a naturallyordered sequence of smoothing distributions obtained by successively dropping the last word in the contextfor phrasetable smoothing 
because no word in t is a priori less informative than any others there is no exact parallel to this techniquehowever it is clear that estimates made by replacing particular target words with wildcards will be smoother than the original relative frequenciesa simple scheme for combining them is just to average one might also consider progressively replacing the least informative remaining word in the target phrase the same idea could be applied in reverse by replacing particular source words with wildcardswe have not yet implemented this new glassbox smoothing technique but it has considerable appealthe idea is similar in spirit to collins backoff method for prepositional phrase attachment as mentioned previously give a comprehensive survey and evaluation of smoothing techniques for language modelingas also mentioned previously there is relatively little published work on smoothing for statistical mtfor the ibm models alignment probabilities need to be smoothed for combinations of sentence lengths and positions not encountered in training data moore has found that smoothing to correct overestimated ibm1 lexical probabilities for rare words can improve wordalignment performancelanglais reports negative results for synonymbased smoothing of ibm2 lexical probabilities prior to extracting phrases for phrasebased smtfor phrasebased smt the use of smoothing to avoid zero probabilities during phrase induction is reported in but no details are givenas described above and use two different variants of glassbox smoothing over the phrasetable and combine the resulting estimates with pure relativefrequency ones in a loglinear modelfinally describes the use of wittenbell smoothing for phrasetable counts but does not give a comparison to other methodsas wittenbell is reported by to be significantly worse than kneserney smoothing we have not yet tested this methodwe carried out experiments in two different settings broadcoverage ones across six european language pairs using selected smoothing techniques and relatively small training corpora and chinese to english experiments using all implemented smoothing techniques and large training corporafor the blackbox techniques the smoothed phrase table replaced the original relativefrequency phrase tablefor the glassbox techniques a phrase table was interpolated in loglinear fashion with the smoothing glassbox distribution with weights set to maximize bleu on a development corpusto estimate the significance of the results across different methods we used 1000fold pairwise bootstrap resampling at the 95 confidence levelin order to measure the benefit of phrasetable smoothing for relatively small corpora we used the data made available for the wmt06 shared task this exercise is conducted openly with access to all needed resources and is thus ideal for benchmarking statistical phrasebased translation systems on a number of language pairsthe wmt06 corpus is based on sentences extracted from the proceedings of the european parliamentseparate sentencealigned parallel corpora of about 700000 sentences are provided for the three language pairs having one of french spanish and german with englishsrilm language models based on the same source are also provided for each of the four languageswe used the provided 2000sentence devsets for tuning loglinear parameters and tested on the 3064sentence test setsresults are shown in table 1 for relativefrequency goodturing kneserney with 1 and 3 discount coefficients and loglinear combinations of both rf and kn3 phrasetables with 
zensneyibm1 smoothed phrasetables it is apparent from table 1 that any kind of phrase table smoothing is better than using none the minimum improvement is 045 bleu and the difference between rf and all other methods is statistically significantalso kneserney smoothing gives a statistically significant improvement over gt smoothing with a minimum gain of 030 bleuusing more discounting coefficients does not appear to helpsmoothing relative frequencies with an additional zensney phrasetable gives about the same gain as kneserney smoothing on its ownhowever combining kneserney with zensney gives a clear gain over any other method demonstrating that these approaches are complementaryto test the effects of smoothing with larger corpora we ran a set of experiments for chineseenglish translation using the corpora distributed for the nist mt05 evaluation these are summarized in table 2due to the large size of the outofdomain un corpus we trained one phrasetable on it and another on all other parallel corpora we also used a subset of the english gigaword corpus to augment the lm training materialtable 3 contains results for the chineseenglish experiments including fixeddiscount with unigram smoothing and koehnochmarcu smoothing with the ibm1 model as described in section 33as with the broadcoverage experiments all of the blackbox smoothing techniques do significantly better than the rf baselinehowever gt appears to work better in the largecorpus setting it is statistically indistinguishable from kn3 and both these methods are significantly better than all other fixeddiscount variants among which there is little differencenot surprisingly the two glassbox methods znibm1 and komibm1 do poorly when used on their ownhowever in combination with another phrasetable they yield the best results obtained by rfznibm1 and gtkomibm1 which are statistically indistinguishablein constrast to the situation in the broadcoverage setting these are not significantly better than the best blackbox method on its own although rfznibm1 is better than all other glassbox combinationsa striking difference between the broadcoverage setting and the chineseenglish setting is that in the former it appears to be beneficial to apply kn3 smoothing to the phrasetable that gets combined with the best glassbox phrasetable whereas in the latter setting it does notto test whether this was due to corpus size we calculated chineseenglish learning curves for the rfznibm1 and kn3znibm1 methods shown in figure 1the results are somewhat inconclusive although the kn3znibm1 curve is perhaps slightly flatter the most obvious characteristic is that this method appears to be highly sensitive to the particular corpus sample usedwe tested different phrasetable smoothing techniques in two different translation settings european language pairs with relatively small corpora and chinese to english translation with large corporathe smoothing techniques fall into two categories blackbox methods that work only on phrasepair counts and glassbox methods that decompose phrase probabilities into lexical probabilitiesin our implementation blackbox techniques use linear interpolation to combine relative frequency estimates with smoothing distributions while glassbox techniques are combined in loglinear fashion with either relativefrequencies or blackbox estimatesall smoothing techniques tested gave statistically significant gains over pure relativefrequency estimatesin the smallcorpus setting the best technique is a loglinear combination of kneserney count 
smoothing with zens-ney glass-box smoothing; this yields an average gain of 1.6 bleu points over relative frequencies. in the large-corpus setting the best technique is a log-linear combination of relative-frequency estimates with zens-ney smoothing, with a gain of 1.1 bleu points. of the two glass-box smoothing methods tested, zens-ney appears to have a slight advantage over koehn-och-marcu. of the black-box methods tested, kneser-ney is clearly better for small corpora but is equivalent to good-turing for larger corpora. the paper describes several smoothing alternatives which we intend to test in future work. the authors would like to thank their colleague michel simard for stimulating discussions. the first author would like to thank all his colleagues for encouraging him to taste a delicacy that was new to him. this material is based upon work supported by the defense advanced research projects agency under contract no. hr0011-06-c-0023. any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the defense advanced research projects agency.
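to make the black-box recipes above concrete, the following is a minimal sketch of interpolated fixed-discount smoothing of p(t|s) over a toy table of joint phrase-pair counts. it is an illustration only, not the paper's implementation: the function name and data layout are invented here, the discount uses the simple leave-one-out formula d = n1 / (n1 + 2 n2), and the modified kneser-ney variant with count-specific discounts is left out.

    from collections import defaultdict

    def fixed_discount_phrasetable(joint_counts):
        # joint_counts: dict mapping (source_phrase, target_phrase) -> observed count c(s,t).
        # returns a dict mapping (s,t) -> smoothed p(t|s); the mass freed by discounting
        # that falls on unseen translations is simply left unassigned, as in the paper.
        c_s = defaultdict(float)          # c(s) = sum over t of c(s,t)
        n_tgt_types = defaultdict(int)    # number of distinct t seen with each s
        n_src_types = defaultdict(int)    # number of distinct s seen with each t (for the backoff)
        n1 = n2 = 0                       # counts of singleton and doubleton phrase pairs
        for (s, t), c in joint_counts.items():
            c_s[s] += c
            n_tgt_types[s] += 1
            n_src_types[t] += 1
            if c == 1:
                n1 += 1
            elif c == 2:
                n2 += 1

        # discount derived from a leave-one-out analysis
        d = n1 / float(n1 + 2 * n2) if (n1 + 2 * n2) > 0 else 0.5
        total_pair_types = len(joint_counts)

        smoothed = {}
        for (s, t), c in joint_counts.items():
            alpha = d * n_tgt_types[s] / c_s[s]                         # mass freed for s by discounting
            p_backoff = n_src_types[t] / float(total_pair_types)        # kneser-ney style continuation estimate
            smoothed[(s, t)] = max(c - d, 0.0) / c_s[s] + alpha * p_backoff
        return smoothed

the same joint counts feed the good-turing variant, which instead replaces each count c with an adjusted count proportional to (c+1) n_{c+1} / n_c (with the noisy counts-of-counts smoothed by a log-log linear fit) before renormalising.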
W06-1607
phrasetable smoothing for statistical machine translation. we discuss different strategies for smoothing the phrasetable in statistical mt and give results over a range of translation settings. we show that any type of smoothing is a better idea than the relative-frequency estimates that are often used. the best smoothing techniques yield consistent gains of approximately 1 according to the bleu metric.
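the glass-box estimates discussed in the entry above decompose phrase probabilities into word-level lexical probabilities. below is a minimal sketch of the zens-ney style noisy-or combination, assuming a hypothetical ibm1 lexical table passed in as a plain dict; in the paper this estimate is not used on its own but is combined log-linearly with the relative-frequency phrasetable, with the weight tuned for bleu on a development set.

    def noisy_or_phrase_prob(src_phrase, tgt_phrase, ibm1_prob):
        # src_phrase, tgt_phrase: lists of tokens.
        # ibm1_prob: dict mapping (src_word, tgt_word) -> lexical probability p(src_word | tgt_word);
        #            a hypothetical table, any smoothed lexical model could be plugged in.
        prob = 1.0
        for s_j in src_phrase:
            # probability that s_j is produced by none of the target words
            p_none = 1.0
            for t_i in tgt_phrase:
                p_none *= 1.0 - ibm1_prob.get((s_j, t_i), 0.0)
            prob *= 1.0 - p_none   # s_j should appear if it is the translation of at least one target word
        return prob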
domain adaptation with structural correspondence learning discriminative learning methods are widely used in natural language processing these methods work best when their training and test data are drawn from the same distribution for many nlp tasks however we are confronted with new domains in which labeled data is scarce or nonexistent in such cases we seek to adapt existing models from a resourcerich source domain to a resourcepoor domain we introduce learning automatically induce correspondences among features from different domains we test our technique on part of speech tagging and show performance gains for varying amounts of source and target training data as well as improvements in target domain parsing accuracy using our improved tagger discriminative learning methods are ubiquitous in natural language processingdiscriminative taggers and chunkers have been the stateoftheart for more than a decade furthermore endtoend systems like speech recognizers and automatic translators use increasingly sophisticated discriminative models which generalize well to new data that is drawn from the same distribution as the training datahowever in many situations we may have a source domain with plentiful labeled training data but we need to process material from a target domain with a different distribution from the source domain and no labeled datain such cases we must take steps to adapt a model trained on the source domain for use in the target domain this work focuses on using unlabeled data from both the source and target domains to learn a common feature representation that is meaningful across both domainswe hypothesize that a discriminative model trained in the source domain using this common feature representation will generalize better to the target domainthis representation is learned using a method we call structural correspondence learning the key idea of scl is to identify correspondences among features from different domains by modeling their correlations with pivot featurespivot features are features which behave in the same way for discriminative learning in both domainsnonpivot features from different domains which are correlated with many of the same pivot features are assumed to correspond and we treat them similarly in a discriminative learnereven on the unlabeled data the cooccurrence statistics of pivot and nonpivot features are likely to be sparse and we must model them in a compact waythere are many choices for modeling cooccurrence data in this work we choose to use the technique of structural learning structural learning models the correlations which are most useful for semisupervised learningwe demonstrate how to adapt it for transfer learning and consequently the structural part of structural correspondence learning is borrowed from it1 scl is a general technique which one can apply to feature based classifiers for any taskhere tructural learning is different from learning with structured outputs a common paradigm for discriminative natural language processing modelsto avoid terminological confusion we refer throughout the paper to a specific structural learning method alternating structural optimization we investigate its use in part of speech tagging while pos tagging has been heavily studied many domains lack appropriate training corpora for pos taggingnevertheless pos tagging is an important stage in pipelined language processing systems from information extractors to speech synthesizerswe show how to use scl to transfer a pos tagger from the wall street journal 
to medline which use very different vocabularies and we demonstrate not only improved pos accuracy but also improved endtoend parsing accuracy while using the improved taggeran important but rarelyexplored setting in domain adaptation is when we have no labeled training data for the target domainwe first demonstrate that in this situation scl significantly improves performance over both supervised and semisupervised taggersin the case when some indomain labeled training data is available we show how to use scl together with the classifier combination techniques of florian et al to achieve even greater performancein the next section we describe a motivating example involving financial news and biomedical datasection 3 describes the structural correspondence learning algorithmsections 6 and 7 report results on adapting from the wall street journal to medlinewe discuss related work on domain adaptation in section 8 and conclude in section 9figure 1 shows two postagged sentences one each from the wall street journal and medlinewe chose these sentences for two reasonsfirst we wish to visually emphasize the difference between the two domainsthe vocabularies differ significantly and pos taggers suffer accordinglysecond we want to focus on the phrase with normal signal transduction from the medline sentence depicted in figure 2the word signal in this sentence is a noun but a tagger trained on the wsj incorrectly classifies it as an adjectivewe introduce the notion ofpivot featurespivot features are features which occur frequently in the two domains and behave similarly in bothfigure 2 shows some pivot features that occur together with the word signal in our biomedical unlabeled datain this case our pivot features are all of type note that signal is unambiguously a noun in these contextsadjectives rarely precede past tense verbs such as required or prepositions such as from and forwe now search for occurrences of the pivot features in the wsjfigure 2 shows some words that occur together with the pivot features in the wsj unlabeled datanote that investment buyouts and jail are all common nouns in the financial domainfurthermore since we have labeled wsj data we expect to be able to label at least some of these nouns correctlythis example captures the intuition behind structural correspondence learningwe want to use pivot features from our unlabeled data to put domainspecific words in correspondencethat is we want the pivot features to model the fact that in the biomedical domain the word signal behaves similarly to the words investments buyouts and jail in the financial news domainin practice we use this technique to find correspondences among all features not just word featuresstructural correspondence learning involves a source domain and a target domainboth domains have ample unlabeled data but only the source domain has labeled training datawe refer to the task for which we have labeled training data as the supervised taskin our experiments the supervised task is part of speech taggingwe require that the input x in both domains be a vector of binary features from a finite feature spacethe first step of scl is to define a set of pivot features on the unlabeled data from both domainswe then use these pivot features to learn a mapping 0 from the original feature spaces of both domains to a shared lowdimensional realvalued feature spacea high inner product in this new space indicates a high degree of correspondenceduring supervised task training we use both the transformed and original features from 
the source domainduring supervised task testing we use the both the transformed and original features from the target domainif we learned a good mapping 0 then the classifier we learn on the source domain will also be effective on the target domainthe scl algorithm is given in figure 3 and the remainder of this section describes it in detailpivot features should occur frequently in the unlabeled data of both domains since we must estimate their covariance with nonpivot features accurately but they must also be diverse enough to adequately characterize the nuances of the supervised taska good example of this tradeoff are determiners in pos taggingdeterminers are good pivot features since they occur frequently in any domain of written english but choosing only determiners will not help us to discriminate between nouns and adjectivespivot features correspond to the auxiliary problems of ando and zhang in section 2 we showed example pivot features of type we also use pivot features of type and in practice there are many thousands of pivot features corresponding to instantiations of these three types for frequent words in both domainswe choose m pivot features which we index with e from each pivot feature we create a binary classification problem of the form does pivot feature e occur in this instanceone such example is is required these binary classification problems can be trained from the unlabeled data since they merely represent properties of the inputif we represent our features as a binary vector x we can solve these problems using m linear predictorsnote that these predictors operate on the original feature spacethis step is shown in line 2 of figure 3here l is a realvalued loss function for binary classificationwe follow ando and zhang and use the modified huber losssince each instance contains features which are totally predictive of the pivot feature we never use these features when making the binary predictionthat is we do not use any feature derived from the right word when solving a right token pivot predictorthe pivot predictors are the key element in sclthe weight vectors t encode the covariance of the nonpivot features with the pivot featuresif the weight given to the zth feature by the eth pivot predictor is positive then feature z is positively correlated with pivot feature e since pivot features occur frequently in both domains we expect nonpivot features from both domains to be correlated with themif two nonpivot features are correlated in the same way with many of the same pivot features then they have a high degree of correspondencefinally observe that t is a linear projection of the original feature space onto r since each pivot predictor is a projection onto r we could create m new realvalued features one for each pivotfor both computational and statistical reasons though we follow ando and zhang and compute a lowdimensional linear approximation to the pivot predictor spacelet w be the matrix whose columns are the pivot predictor weight vectorsnow let w udvt be the singular value decomposition of w so that 0 ut1h is the matrix whose rows are the top left singular vectors of w the rows of 0 are the principal pivot predictors which capture the variance of the pivot predictor space as best as possible in h dimensionsfurthermore 0 is a projection from the original feature space onto rhthat is 0x is the desired mapping to the shared feature representationthis is step 3 of figure 3to perform inference and learning for the supervised task we simply augment the original feature 
vector with features obtained by applying the mapping 0we then use a standard discriminative learner on the augmented feature vectorfor training instance t the augmented feature vector will contain all the original features xt plus the new shared features 0xtif we have designed the pivots well then 0 should encode correspondences among features from different domains which are important for the supervised task and the classifier we train using these new features on the source domain will perform well on the target domainstructural correspondence learning uses the techniques of alternating structural optimization to learn the correlations among pivot and nonpivot featuresando and zhang describe several free paramters and extensions to aso and we briefly address our choices for these herewe set h the dimensionality of our lowrank representation to be 25as in ando and zhang we observed that setting h between 20 and 100 did not change results significantly and a lower dimensionality translated to faster runtimewe also implemented both of the extensions described in ando and zhang the first is to only use positive entries in the pivot predictor weight vectors to compute the svdthis yields a sparse representation which saves both time and space and it also performs betterthe second is to compute block svds of the matrix w where one block corresponds to one feature typewe used the same 58 feature types as ratnaparkhi this gave us a total of 1450 projection features for both semisupervised aso and sclwe found it necessary to make a change to the aso algorithm as described in ando and zhang we rescale the projection features to allow them to receive more weight from a regularized discriminative learnerwithout any rescaling we were not able to reproduce the original aso resultsthe rescaling parameter is a single number and we choose it using heldout data from our source domainin all our experiments we rescale our projection features to have average l1 norm on the training set five times that of the binaryvalued featuresfinally we also make one more change to make optimization fasterwe select only half of the aso features for use in the final modelthis is done by running a few iterations of stochastic gradient descent on the pos tagging problem then choosing the features with the largest weightvariance across the different labelsthis cut in half training time and marginally improved performance in all our experimentswe used sections 0221 of the penn treebank for trainingthis resulted in 39832 training sentencesfor the unlabeled data we used 100000 sentences from a 1988 subset of the wsjfor unlabeled data we used 200000 sentences that were chosen by searching medline for abstracts pertaining to cancer in particular genomic variations and mutationsfor labeled training and testing purposes we use 1061 sentences that have been annotated by humans as part of the penn bioie project we use the same 561sentence test set in all our experimentsthe partofspeech tag set for this data is a superset of the penn treebanks including the two new tags hyph and afx these tags were introduced due to the importance of hyphenated entities in biomedical text and are used for 18 of the words in the test setany tagger trained only on wsj text will automatically predict wrong tags for those wordssince scl is really a method for inducing a set of crossdomain features we are free to choose any featurebased classifier to use themfor our experiments we use a version of the discriminative online largemargin learning algorithm mira 
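a compressed sketch of the pivot-predictor and projection steps described above, assuming scikit-learn's sgdclassifier for the modified huber loss and a single plain svd; the block svd per feature type, the positive-entry restriction and the rescaling of projected features are all simplified away, only the pivot column itself is hidden from its own predictor (the paper masks whole feature types), and the helper name is invented here.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    def learn_scl_projection(X_unlabeled, pivot_columns, h=25):
        # X_unlabeled: scipy.sparse binary matrix (instances x features) pooling unlabeled
        #              source- and target-domain data; pivot_columns: indices of pivot features.
        # returns theta with shape (h, n_features); the shared features of an instance x are theta @ x.
        weight_vectors = []
        for p in pivot_columns:
            # binary auxiliary problem: does pivot feature p occur in this instance?
            # (pivots occur frequently by construction, so both classes are normally present)
            y = (np.asarray(X_unlabeled[:, p].todense()).ravel() > 0).astype(int)
            X_masked = X_unlabeled.tolil(copy=True)
            X_masked[:, p] = 0                      # never let the pivot predict itself
            clf = SGDClassifier(loss="modified_huber", alpha=1e-4, max_iter=5, tol=None)
            clf.fit(X_masked.tocsr(), y)
            weight_vectors.append(clf.coef_.ravel())

        W = np.column_stack(weight_vectors)         # n_features x n_pivots
        U, _, _ = np.linalg.svd(W, full_matrices=False)
        return U[:, :h].T                           # rows are the principal pivot predictors

    # at training and test time the original feature vector x is augmented with theta @ x,
    # and a standard feature-based discriminative learner is trained on the augmented vectors.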
mira learns and outputs a linear classification score s w f where the feature representation f can contain arbitrary features of the input including the correspondence features described earlierin particular mira aims to learn weights so that the score of correct output yt for input xt is separated from the highest scoring incorrect outputs2 with a margin proportional to their hamming lossesmira has been used successfully for both sequence analysis and dependency parsing as with any structured predictor we need to factor the output space to make inference tractablewe use a firstorder markov factorization allowing for an efficient viterbi inference procedurein section 2 we claimed that good representations should encode correspondences between words like signal from medline and investment from the wsjrecall that the rows of 0 are projections from the original feature space onto the real linehere we examine word features under these projectionsfigure 4 shows a row from the matrix 0applying this projection to a word gives a real value on the horizontal dashed line axisthe words below the horizontal axis occur only in the wsjthe words above the axis occur only in medlinethe verticle line in the middle represents the value zeroticks to the left or right indicate relative positive or negative values for a word under this projectionthis projection discriminates between nouns and adjectives a tagger which gives high positive weight to the features induced by applying this projection will be able to discriminate among the associated classes of biomedical words even when it has never observed the words explicitly in the wsj source training setall the results we present in this section use the mira tagger from section 53the aso and structural correspondence results also use projection features learned using aso and sclsection 71 presents results comparing structural correspondence learning with the supervised baseline and aso in the case where we have no labeled data in the target domainsection 72 gives results for the case where we have some limited data in the target domainin this case we use classifiers as features as described in florian et al finally we show in section 73 that our scl pos tagger improves the performance of a dependency parser on the target domainfor the results in this section we trained a structural correspondence learner with 100000 sentences of unlabeled data from the wsj and 100000 sentences of unlabeled biomedical datawe use as pivot features words that occur more than 50 times in both domainsthe supervised baseline does not use unlabeled datathe aso baseline is an implementation of ando and zhang it uses 200000 sentences of unlabeled medline data but no unlabeled wsj datafor aso we used as auxiliary problems words that occur more than 500 times in the medline unlabeled datafigure 5 plots the accuracies of the three models with varying amounts of wsj training datawith one hundred sentences of training data structural correspondence learning gives a 191 relative reduction in error over the supervised baseline and it consistently outperforms both baseline modelsfigure 5 gives results for 40000 sentences and figure 5 shows corresponding significance tests with p 005 being significantwe use a mcnemar paired test for labeling disagreements even when we use all the wsj training data available the scl model significantly improves accuracy over both the supervised and aso baselinesthe second column of figure 5 gives unknown word accuracies on the biomedical dataof thirteen thousand 
test instances approximately three thousand were unknownfor unknown words scl gives a relative reduction in error of 195 over ratnaparkhi even with 40000 sentences of source domain training datain this section we give results for small amounts of target domain training datain this case we make use of the outofdomain data by using features of the source domain taggers predictions in training and testing the target domain tagger though other methods for incorporating small amounts of training data in the target domain were available such as those proposed by chelba and acero and by daume iii and marcu we chose this method for its simplicity and consistently good performancewe use as features the current predicted tag and all tag bigrams in a 5token window around the current tokenfigure 6 plots tagging accuracy for varying amounts of medline training datathe two horizontal lines are the fixed accuracies of the scl wsjtrained taggers using one thousand and forty thousand sentences of training datathe five learning curves are for taggers trained with varying amounts of target domain training datathey use features on the outputs of taggers from section 71the legend indicates the kinds of features used in the target domain for example 40kscl means that the tagger uses features on the outputs of an scl source tagger trained on forty thousand sentences of wsj datanosource indicates a target tagger that did not use any tagger trained on the source domainwith 1000 source domain sentences and 50 target domain sentences using scl tagger features gives a 204 relative reduction in error over using supervised tagger features and a 399 relative reduction in error over using no source featuresfigure 6 is a table of accuracies for 500 target domain training sentences and figure 6 gives corresponding significance scoreswith 1000 source domain sentences and 500 target domain sentences using supervised tagger features gives no improvement over using no source featuresusing scl features still does howeverwe emphasize the importance of pos tagging in a pipelined nlp system by incorporating our scl tagger into a wsjtrained dependency parser and and evaluate it on medline datawe use the parser described by mcdonald et al that parser assumes that a sentence has been postagged before parsingwe train the parser and pos tagger on the same size of wsj datafigure 7 shows dependency parsing accuracy on our 561sentence medline test setwe parsed the sentences using the pos tags output by our source domain supervised tagger the scl tagger from subsection 71 and the gold pos tagsall of the differences in this figure are significant according to mcnemars testthe scl tags consistently improve parsing performance over the tags output by the supervised taggerthis is a rather indirect method of improving parsing performance with sclin the future we plan on directly incorporating scl features into a discriminative parser to improve its adaptation propertiesdomain adaptation is an important and wellstudied area in natural language processinghere we outline a few recent advancesroark and bacchiani use a dirichlet prior on the multinomial parameters of a generative parsing model to combine a large amount of training data from a source corpus and small amount of training data from a target corpus aside from florian et al several authors have also given techniques for adapting classification to new domainschelba and acero first train a classifier on the source datathen they use maximum a posteriori estimation of the weights of a maximum 
entropy target domain classifierthe prior is gaussian with mean equal to the weights of the source domain classifierdaume iii and marcu use an empirical bayes model to estimate a latent variable model grouping instances into domainspecific or common across both domainsthey also jointly estimate the parameters of the common classification model and the domain specific classification modelsour work focuses on finding a common representation for features from different domains not instanceswe believe this is an important distinction since the same instance can contain some features which are common across domains and some which are domain specificthe key difference between the previous four pieces of work and our own is the use of unlabeled datawe do not require labeled training data in the new domain to demonstrate an improvement over our baseline modelswe believe this is essential since many domains of application in natural language processing have no labeled training datalease and charniak adapt a wsj parser to biomedical text without any biomedical treebanked datahowever they assume other labeled resources in the target domainin section 73 we give similar parsing results but we adapt a source domain tagger to obtain the pos resourcesto the best of our knowledge scl is the first method to use unlabeled data from both domains for domain adaptationby using just the unlabeled data from the target domain however we can view domain adaptation as a standard semisupervised learning problemthere are many possible approaches for semisupservised learning in natural language processing and it is beyond the scope of this paper to address them allwe chose to compare with aso because it consistently outperforms cotraining and clustering methods we did run experiments with the topk version of aso which is inspired by cotraining but consistently outperforms itthis did not outperform the supervised method for domain adaptationwe speculate that this is because biomedical and financial data are quite differentin such a situation bootstrapping techniques are likely to introduce too much noise from the source domain to be usefulstructural correspondence learning is most similar to that of ando who analyzed a situation with no target domain labeled dataher model estimated cooccurrence counts from source unlabeled data and then used the svd of this matrix to generate features for a named entity recognizerour aso baseline uses unlabeled data from the target domainsince this consistently outperforms unlabeled data from only the source domain we report only these baseline resultsto the best of our knowledge this is the first work to use unlabeled data from both domains to find feature correspondencesone important advantage that this work shares with ando is that an scl model can be easily combined with all other domain adaptation techniques we are simply inducing a feature representation that generalizes well across domainsthis feature representation can then be used in all the techniques described abovestructural correspondence learning is a marriage of ideas from single domain semisupervised learning and domain adaptationit uses unlabeled data and frequentlyoccurring pivot features from both source and target domains to find correspondences among features from these domainsfinding correspondences involves estimating the correlations between pivot and nonpivot feautres and we adapt structural learning for this taskscl is a general technique that can be applied to any featurebased discriminative learnerwe showed 
results using scl to transfer a pos tagger from the wall street journal to a corpus of medline abstractsscl consistently outperformed both supervised and semisupervised learning with no labeled target domain training datawe also showed how to combine an scl tagger with target domain labeled data using the classifier combination techniques from florian et al finally we improved parsing performance in the target domain when using the scl pos taggerone of our next goals is to apply scl directly to parsingwe are also focusing on other potential applications including chunking named entity recognition and speaker adaptation finally we are investigating more direct ways of applying structural correspondence learning when we have labeled data from both source and target domainsin particular the labeled data of both domains not just the unlabeled data should influence the learned representationswe thank rie kubota ando and tong zhang for their helpful advice on aso steve carroll and pete white of the childrens hospital of philadelphia for providing the medline data and the pennbioie annotation team for the annotated medline data used in our test setsthis material is based upon work partially supported by the defense advanced research projects agency under contract nonbchd030010any opinions findings and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the darpa or the department of interiornational business center additional support was provided by nsf under itr grant eia0205448
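the tagger used throughout the experiments above is trained with single-best mira; the following is a minimal sketch of one such update in the standard passive-aggressive form, with hypothetical argument names. the real tagger factors the output space with a first-order markov assumption, decodes with viterbi, and uses a hamming loss between tag sequences.

    import numpy as np

    def mira_single_best_update(w, feats_gold, feats_pred, loss, C=1.0):
        # w: current weight vector; feats_gold / feats_pred: feature vectors of the gold output
        # and of the highest-scoring predicted output; loss: e.g. hamming loss between the two.
        delta = feats_gold - feats_pred
        margin = w.dot(delta)                  # score(gold) - score(predicted)
        norm_sq = delta.dot(delta)
        if norm_sq == 0.0:
            return w                           # prediction indistinguishable in features
        tau = min(C, max(0.0, (loss - margin) / norm_sq))
        return w + tau * delta                 # smallest change enforcing the loss-scaled margin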
W06-1615
domain adaptation with structural correspondence learning. discriminative learning methods are widely used in natural language processing. these methods work best when their training and test data are drawn from the same distribution. for many nlp tasks, however, we are confronted with new domains in which labeled data is scarce or nonexistent. in such cases we seek to adapt existing models from a resource-rich source domain to a resource-poor target domain. we introduce structural correspondence learning to automatically induce correspondences among features from different domains. we test our technique on part of speech tagging and show performance gains for varying amounts of source and target training data, as well as improvements in target domain parsing accuracy using our improved tagger. following, we present an application of structural correspondence learning to non-projective dependency parsing; our approach is to train a separate out-of-domain parser and use this to generate additional features on the supervised and unsupervised in-domain data. we introduce scl, one feature representation approach that has been effective on certain high-dimensional nlp problems including part-of-speech tagging and sentiment classification. we apply the multi-task algorithm of to domain adaptation problems in nlp. we append the source domain labeled data with predicted pivots to adapt a pos tagger to a target domain.
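as a small companion to the summary above, this sketch selects pivot candidates as words that occur frequently in the unlabeled data of both domains, using the threshold of 50 reported in the tagging experiments; in the full system these words instantiate left/middle/right word feature templates rather than acting as features directly, and the function name is invented here.

    from collections import Counter

    def select_pivot_words(source_sents, target_sents, min_count=50):
        # source_sents / target_sents: iterables of tokenized sentences (lists of words)
        # drawn from the unlabeled source- and target-domain corpora.
        src = Counter(w for sent in source_sents for w in sent)
        tgt = Counter(w for sent in target_sents for w in sent)
        return sorted(w for w in src if src[w] > min_count and tgt[w] > min_count)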
incremental integer linear programming for nonprojective dependency parsing integer linear programming has recently been used for decoding in a number of probabilistic models in order to enforce global constraints however in certain applications such as nonprojective dependency parsing and machine translation the complete formulation of the decoding problem as an integer linear program renders solving intractable we present an approach which solves the problem incrementally thus we avoid creating intractable integer linear programs this approach is applied to dutch dependency parsing and we show how the addition of linguistically motivated constraints can yield a significant improvement over stateoftheart many inference algorithms require models to make strong assumptions of conditional independence between variablesfor example the viterbi algorithm used for decoding in conditional random fields requires the model to be markovianstrong assumptions are also made in the case of mcdonald et als nonprojective dependency parsing modelhere attachment decisions are made independently of one anotherhowever often such assumptions can not be justifiedfor example in dependency parsing if a subject has already been identified for a given verb then the probability of attaching a second subject to the verb is zerosimilarly if we find that one coordination argument is a noun then the other arguif we ignore the constraint that dependency trees must be cyclefree ment cannot be a verbthus decisions are often codependentinteger linear programming has recently been applied to inference in sequential conditional random fields this has allowed the use of truly global constraints during inferencehowever it is not possible to use this approach directly for a complex task like nonprojective dependency parsing due to the exponential number of constraints required to prevent cycles occurring in the dependency graphto model all these constraints explicitly would result in an ilp formulation too large to solve efficiently a similar problem also occurs in an ilp formulation for machine translation which treats decoding as the travelling salesman problem in this paper we present a method which extends the applicability of ilp to a more complex set of problemsinstead of adding all the constraints we wish to capture to the formulation we first solve the program with a fraction of the constraintsthe solution is then examined and if required additional constraints are addedthis procedure is repeated until all constraints are satisfiedwe apply this dependency parsing approach to dutch due to the languages nonprojective nature and take the parser of mcdonald et al as a starting point for our modelin the following section we introduce dependency parsing and review previous workin section 3 we present our model and formulate it as an ilp problem with a set of linguistically motivated constraintswe include details of an incremental algorithm used to solve this formulationour experimental setup is provided in section 4 and is followed by results in section 5 along with runtime experimentswe finally discuss future research and potential improvements to our approachdependency parsing is the task of attaching words to their argumentsfigure 1 shows a dependency graph for the dutch sentence i will come at twelve and then you will get what you deserve in this dependency graph the verb kom is attached to its subject ikkom is referred to as the head of the dependency and ik as its childin labelled dependency parsing edges between words 
are labelled with the relation capturedin the case of the dependency between kom and ik the label would be subjectin a dependency tree every token must be the child of exactly one other node either another token or the dummy root nodeby definition a dependency tree is free of cyclesfor example it must not contain dependency chains such as en __ kom __ ik enfor a more formal definition see previous work an important distinction between dependency trees is whether they are projective or nonprojectivefigure 1 is an example of a projective dependency tree in such trees dependencies do not crossin dutch and other flexible word order languages such as german and czech we also encounter nonprojective trees in these cases the trees contain crossing dependenciesdependency parsing is useful for applications such as relation extraction and machine translation although less informative than lexicalised phrase structures dependency structures still capture most of the predicateargument information needed for applicationsit has the advantage of being more efficient to learn and parsemcdonald et al introduce a dependency parsing framework which treats the task as searching for the projective tree that maximises the sum of local dependency scoresthis framefigure 2 an incorrect partial dependency treethe verb krijg is incorrectly coordinated with the preposition om work is efficient and has also been extended to nonprojective trees it provides a discriminative online learning algorithm which when combined with a rich feature set reaches stateoftheart performance across multiple languageshowever within this framework one can only define features over single attachment decisionsthis leads to cases where basic linguistic constraints are not satisfied an example of this for dutch is illustrated in figure 2 which was produced by the parser of mcdonald et al here the parse contains a coordination of incompatible word classes our approach is able to include additional constraints which forbid configurations such as those in figure 2while mcdonald and pereira address the issue of local attachment decisions by defining scores over attachment pairs our solution is more generalfurthermore it is complementary in the sense that we could formulate their model using ilp and then add constraintsthe method we present is not the only one that can take global constraints into accountdeterministic dependency parsing can apply global constraints by conditioning attachment decisions on the intermediate parse builthowever for efficiency a greedy search is used which may produce suboptimal solutionsthis is not the case when using ilpour underlying model is a modified labelled version2 of mcdonald et al where x is a sentence y is a set of labelled dependencies f is a multidimensional feature vector representation of the edge from token i to token j with label l and w the corresponding weight vectorfor example a feature f101 in f could be where t is the word at token i and p the partofspeech tag at token j decoding in this model amounts to finding the y for a given x that maximises s while fulfilling the following constraints t1 for every nonroot token in x there exists exactly one head the root token has no headt2 there are no cyclesthus far the formulation follows mcdonald et al and corresponds to the maximum spanning tree problemin addition to t1 and t2 we include a set of linguistically motivated constraints a1 heads are not allowed to have more than one outgoing edge labelled l for all l in a set of labels you c1 in a 
symmetric coordination there is exactly one argument to the right of the conjunction and at least one argument to the leftc2 in an asymmetric coordination there are no arguments to the left of the conjunction and at least two arguments to the rightc3 there must be at least one comma between subsequent arguments to the left of a symmetric coordinationc4 arguments of a coordination must have compatible word classesp1 two dependencies must not cross if one of their labels is in a set of labels p a1 covers constraints such as there can only be one subject if you contains subject c1 applies to configurations which contain conjunctions such as enof or maar c2 will ruleout settings where a conjunction such as zowel having arguments to its leftc3 forces coordination arguments to the left of a conjunction to have commas in betweenc4 avoids parses in which incompatible word classes are coordinated such as nouns and verbsfinally p1 allows selective projective parsing we can for instance forbid the crossing of noundeterminer dependencies if we add the corresponding label type to p if we extend p to contain all labels we forbid any type of crossing dependenciesthis corresponds to projective parsingmcdonald et al use the chuliuedmonds algorithm to solve the maximum spanning tree problemhowever global constraints cannot be incorporated into the cle algorithm we alleviate this problem by presenting an equivalent integer linear programming formulation which allows us to incorporate global constraints naturallybefore giving full details of our formulation we first introduce some of the concepts of linear and integer linear programming linear programming is a tool for solving optimisation problems in which the aim is to maximise a given linear function with respect to a set of linear constraintsthe function to be maximised is referred to as the objective functiona number of decision variables are under our control which exert influence on the objective functionspecifically they have to be optimised in order to maximise the objective functionfinally a set of constraints restrict the values that the decision variables can takeinteger linear programming is an extension of linear programming where all decision variables must take integer valuesthere are several explicit formulations of the mst problem as an integer linear program in the literature they are based on the concept of eliminating subtours cuts or requiring intervertex flows however in practice these formulations because long solve times as the first two methalgorithm 1 incremental integer linear programming ods yield an exponential number of constraintsalthough the latter scales cubically it produces nonfractional solutions in its relaxed version this causes long runtimes for the branch and bound algorithm commonly used in integer linear programmingwe found out experimentally that dependency parsing models of this form do not converge on a solution after multiple hours of solving even for small sentencesas a workaround for this problem we follow an incremental approach akin to the work of warme instead of adding constraints which forbid all possible cycles in advance we first solve the problem without any cycle constraintsthe solution is then examined for cycles and if cycles are found we add constraints to forbid these cycles the solver is then run againthis process is repeated until no more violated constraints are foundthe same procedure is used for other types of constraints which are too expensive to add in advance algorithm 1 outlines our 
approachfor a sentence x bx is the set of constraints that we add in advance and ix are the constraints we add incrementallyox is the objective function and vx is a set of variables including integer declarations solve maximises the objective function o with respect to the set of constraints c and variables v violated inspects the proposed solution and returns all constraints in i which are violatedthe number of iterations required in this approach is at most polynomial with respect to the number of variables in practice this technique converges quickly yielding average solve times of less than 05 secondsour approach converges quickly due to the quality of the scoring functionits weights have been trained on cycle free data thus it is more likely to guide the search to a cycle free solutionin the following section we present the objective function ox variables vx and linear constraints bx and ix needed for parsing x using algorithm 1vx contains a set of binary variables to represent labelled edges where n is the number of tokens and the index 0 represents the root token bestk is the set of k labels with highest s eijl equals 1 if there is a edge with the label l between token i and j 0 otherwise k depends on the type of constraints we want to addfor the plain mst problem it is sufficient to set k 1 and only take the best scoring label for each token pairhowever if we want a constraint which forbids duplicate subjects we need to provide additional labels to fall back onvx also contains a set of binary auxiliary variables which represent the existence of a dependency between tokens i and jwe connect these to the eijl variables by the constraint given the above variables our objective function ox can be represented as we first introduce a set of base constraints bx which we add in advanceonly one head every token has exactly one head for nonroot tokens j 0 in xan exception is made for the artificial root node label uniqueness to enforce uniqueness of children with labels in you we augment our model with the constraint for every token i in x and label l in you symmetric coordination for each conjunction token i which forms a symmetric coordination we add and asymmetric coordination for each conjunction token i which forms an asymmetric coordination we add andnow we present the set of constraints ix we add incrementallythe constraints are chosen based on the two criteria adding them to the base constraints would result in an extremely large program and it must be efficient to detect whether the constraint is violated in yno cycles for every possible cycle c for the sentence x we have a constraint which forbids the case where all edges in c are active simultaneously comma coordination for each symmetric conjunction token i which forms a symmetric coordination and each set of tokens a in x to the left of i with no comma between each pair of successive tokens we add which forbids configurations where i has the argument tokens acompatible coordination arguments for each conjunction token i and each set of tokens a in x with incompatible pos tags we add a constraint to forbid configurations where i has the argument tokens aselective projective parsing for each pair of triplets and we add the constraint for training we use singlebest mira this is an online algorithm that learns by parsing each sentence and comparing the result with a gold standardtraining is complete after multiple passes through the whole corpusthus we decode using the chuliuedmonds algorithm due to its speed advantage over ilp the 
fact that we decode differently during training and testing may degrade performancein the presence of additional constraints weights may be able to capture other aspects of the dataour experiments were designed to answer the following questions before we try to answer these questions we briefly describe our data features used settings for you and p in our parametric constraints our working environment and our implementationwe use the alpino treebank taken from the conll shared task of multilingual dependency parsing3the conll data differs slightly from the original alpino treebank as the corpus has been partofspeech tagged using a memorybasedtagger it consists of 13300 sentences with an average length of 146 tokensthe data is nonprojective more specifically 54 of all dependencies are crossed by at least one other dependencyit contains approximately 6000 sentences more than the alpino corpus used by malouf and van noord the training set was divided into a 10 development set while the remaining 90 is used as a training and crossvalidation set feature sets constraints and training parameters were selected through training on cross and optimising against devour final accuracy scores and runtime evaluations were acquired using a ninefold crossvalidation on cross all our experiments were conducted on a intel xeon with 38 ghz and 4gb of ramwe used the open source mixed integer programming library lp solve4 to solve the integer linear programsour code ran in java and called the jniwrapper around the lp solve libraryour feature set was determined through experimentation with the development setthe features are based upon the data provided within the alpino treebankalong with pos tags the corpus contains several additional attributes such as gender number and caseour best results on the development set were achieved using the feature set of mcdonald et al and a set of features based on the additional attributesthese features combine the attributes of the head with those of the childfor example if token i has the attributes a1 and a2 and token j has the attribute a3 then we created the features and all the constraints presented in section 3 were used in our modelthe set you of unique labels constraints contained su obj1 obj2 sup ld vc predc predm pc pobj1 obcomp and bodyhere su stands for subject and obj1 for direct object the set of projective labels p contained cnj for coordination dependencies and det for determiner dependenciesone exception was added for the coordination constraint dependencies can cross when coordinated arguments are verbsone drawback of hard deterministic constraints is the undesirable effect noisy data can causewe see this most prominently with coordination argument compatibilitywords ending in en are typically wrongly tagged and because our coordination argument constraint to discard correct coordinationsas a workaround we assigned words ending in en a wildcard pos tag which is compatible with all pos tagsin this section we report our resultswe not only present our accuracy but also provide an empirical evaluation of the runtime behaviour of this approach and show how parsing can be accelerated using a simple approximationan important question to answer when using global constraints is how much of a performance boost is gained when using global constraintswe ran the system without any linguistic constraints as a baseline and compared it to a system with the additional constraints to evaluate our systems we use the accuracy over labelled attachment decisions where nl is the 
number of tokens with correct head and label and nt is the total number of tokensfor completeness we also report the unlabelled accuracy where nu is the number of tokens with correct head curacy using ninefold crossvalidation on cross for baseline and constraintbased systemlc and uc are the percentages of sentences with 100 labelled and unlabelled accuracy respectivelytable 1 shows our results using ninefold crossvalidation on the cross setthe baseline system gives an unlabelled accuracy of 846 and labelled accuracy of 889when we add our linguistic constraints the performance increases by 05 for both labelled and unlabelled accuracythis increase is significant according to dan bikels parse comparison script and using the sign test now we give a little insight into how our results compare with the rest of the communitythe reported stateoftheart parser of malouf and van noord achieves 844 labelled accuracy which is very close numerically to our baselinehowever they use a subset of the conll alpino treebank with a higher average number of tokens per sentences and also evaluate control relations thus results are not directly comparablewe have also run our parser on the relatively small connl test datathe best performing system achieves 792 labelled accuracy while our baseline system achieves 786 and our constrained version 798however a significant difference is only observed between our baseline and our constraintbased systemexamining the errors produced using the dev set highlight two key reasons why we do not see a greater improvement using our constraintbased systemfirstly we cannot improve on coordinations that include words ending with en based on the workaround present in section 44this problem can only be solved by improving pos taggers for dutch or by performing pos tagging within the dependency parsing frameworksecondly our system suffers from poor next best solutionsthat is if the best solution violates some constraints then we find the next best solution is typically worse than the best solution with violated constraintsthis appears to be a consequence of inaccurate local score distributions for example suppose we attach two subjects t1 and t2 to a verb where t1 is the actual subject while t2 is meant to be labelled as objectif we forbid this configuration and if the score of labelling t1 object is higher than that for t2 being labelled subject then the next best solution will label t1 incorrectly as object and t2 incorrectly as subjectthis is often the case and thus results in a drop of accuracywe now concentrate on the runtime of our methodwhile we expect a longer runtime than using the chuliuedmonds as in previous work we are interested in how large the increase istable 2 shows the average solve time for sentences with respect to the number of tokens in each sentence for our system with constraints and the chuliuedmonds algorithmall solve times do not include feature extraction as this is identical for all systemsfor cnstr we also show the number of sentences that could not be parsed after two minutes the average number of iterations and the average duration of the first iterationthe results show that parsing using our generic approach is still reasonably fast although significantly slower than using the chuliuedmonds algorithmalso only a small number of sentences take longer than two minutes to parsethus in practice we would not see a significant degradation in performance if we were to fall back on the cle algorithm after two minutes of solvingwhen we examine the average 
duration of the first iteration it appears that the majority of the solve time is spent within this iterationthis could be used to justify using the cle algorithm to find a initial solution as starting point for the ilp solver despite the fact that our parser can parse all sentences in a reasonable amount of time it is still significantly slower than the cle algorithmwhile this is not crucial during decoding it does make discriminative online training difficult as training requires several iterations of parsing the whole corpus time for the cross dataset using varying q values and the chuliuedmonds algorithm thus we investigate if it is possible to speed up our inference using a simple approximationfor each token we now only consider the q variables in vx with the highest scoring edgesfor example if we set q 2 the set of variables for a token j will contain two variables either both for the same head i but with different labels or two distinct heads i1 and i2 where labels l1 and l2 may be identicaltable 3 shows the effect of different q values on solve time for the full corpus cross and overall accuracywe see that solve time can be reduced by 80 while only losing a marginal amount of accuracy when we set q to 10however we are unable to reach the 20 seconds solve time of the cle algorithmdespite this when we add the time for preprocessing and feature extraction the cle system parses a corpus in around 15 minutes whereas our system with q 10 takes approximately 25 minuteswhen we view the total runtime of each system we see our system is more competitivewhile we have presented significant improvements using additional constraints one may won5even when caching feature extraction during training mcdonald et al still takes approximately 10 minutes to train der whether the improvements are large enough to justify further research in this direction especially since mcdonald and pereira present an approximate algorithm which also makes more global decisionshowever we believe that our approach is complementary to their modelwe can model higher order features by using an extended set of variables and a modified objective functionalthough this is likely to increase runtime it may still be fast enough for real world applicationsin addition it will allow exact inference even in the case of nonprojective parsingalso we argue that this approach has potential for interesting extensions and applicationsfor example during our runtime evaluations we find that a large fraction of solve time is spent in the first iteration of our incremental algorithmafter the first iteration the solver uses its last state to efficiently search for solutions in the presence of new constraintssome solvers allow the specification of an initial solution as a starting point thus it is expected that significant improvements in terms of speed can be made by using the cle algorithm to provide an initial solutionour approach uses a generic algorithm to solve a complex taskthus other applications may benefit from itfor instance germann et al present an ilp formulation of the machine translation decoding task in order to conduct exact inferencehowever their model suffers from the same type of exponential blowup we observe when we add all our cycle constraints in advancein fact the constraints which because the exponential explosion in their graphically formulation are of the same nature as our cycle constraintswe hope that the incremental approach will allow exact mt decoding for longer sentencesin this paper we have presented a novel 
approach for inference using ilpwhile previous approaches which use ilp for decoding have solved each integer linear program in one run we incrementally add constraints and solve the resulting program until no more constraints are violatedthis allows us to efficiently use ilp for dependency parsing and add constraints which provide a significant improvement over the current stateoftheart parser on the dutch alpino corpus although slower than the baseline approach our method can still parse large sentences in a reasonable amount of time we have shown that parsing time can be significantly reduced using a simple approximation which only marginally degrades performancefurthermore we believe that the method has potential for further extensions and applicationsthanks to ivan mezaruiz ruken c akıcı beata kouchnir and abhishek arun for their contribution during the conll shared task and to mirella lapata for helpful comments and suggestions
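To make the incremental decoding loop described above concrete, here is a minimal sketch of cutting-plane style ILP dependency parsing: only the one-head constraints are added in advance, the program is solved, the proposed head assignment is inspected for cycles, and one constraint per violated cycle is added before re-solving. This is a deliberately simplified, unlabelled reconstruction under stated assumptions — the solver (CBC via PuLP), the score table, and all function names are illustrative choices, not the system above, which used lp_solve through a Java wrapper, labelled edges with MIRA-trained weights, and the additional coordination and projectivity constraints.

```python
# Minimal sketch of incremental (cutting-plane) ILP decoding for
# arc-factored dependency parsing.  Assumptions: unlabelled edges, a
# precomputed score table, and the open-source CBC solver via PuLP.
import pulp

def parse(scores):
    """scores[(h, m)] -> float; token index 0 is the artificial root."""
    n = max(m for _, m in scores)                      # number of real tokens
    prob = pulp.LpProblem("dependency_parse", pulp.LpMaximize)
    e = {a: pulp.LpVariable(f"e_{a[0]}_{a[1]}", cat="Binary") for a in scores}

    # Objective: sum of scores of the active edges.
    prob += pulp.lpSum(scores[a] * e[a] for a in e)

    # Base constraints added in advance: every non-root token has exactly
    # one head (constraint T1 above); the root gets none.
    for m in range(1, n + 1):
        prob += pulp.lpSum(e[(h, m)] for h in range(0, n + 1)
                           if (h, m) in e) == 1

    while True:                                        # incremental loop
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        heads = {m: h for (h, m), var in e.items() if var.value() > 0.5}
        cycles = _find_cycles(heads)
        if not cycles:                                 # no violated constraints
            return heads
        for cyc in cycles:                             # forbid each found cycle (T2)
            prob += pulp.lpSum(e[(heads[m], m)] for m in cyc) <= len(cyc) - 1

def _find_cycles(heads):
    """Follow head pointers and report each cycle once, as a list of tokens."""
    cycles, seen = [], set()
    for start in heads:
        path, m = [], start
        while m in heads and m not in seen and m not in path:
            path.append(m)
            m = heads[m]
        if m in path:                                  # closed a loop
            cycles.append(path[path.index(m):])
        seen.update(path)
    return cycles
```

Because every token has exactly one head in any intermediate solution, cycle detection reduces to following head pointers, which keeps each iteration of the loop cheap relative to the solve call itself.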
W06-1616
incremental integer linear programming for nonprojective dependency parsinginteger linear programming has recently been used for decoding in a number of probabilistic models in order to enforce global constraintshowever in certain applications such as nonprojective dependency parsing and machine translation the complete formulation of the decoding problem as an integer linear program renders solving intractablewe present an approach which solves the problem incrementally thus we avoid creating intractable integer linear programsthis approach is applied to dutch dependency parsing and we show how the addition of linguistically motivated constraints can yield a significant improvement over stateoftheartfor dependency parsing we study a method using integer linear programming which can incorporate global linguistic constraintsour work in dependency parsing demonstrate that it is possible to use ilp to perform efficient inference for very large programs when used in an incremental mannerwe show that even exponentially large decoding problems may be solved efficiently using ilp solvers if a cuttingplane algorithm is usedwe tackle the map problem for dependency parsing by an incremental approach that starts with a relaxation of the problem solves it and adds additional constraints only if they are violated
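The labelled edge-factored score s(i, j, l) = w · f(i, j, l) and the top-q candidate pruning that cut solve times by roughly 80% (with q = 10) can be sketched as below. The feature templates, data structures, and default q are illustrative assumptions only — they are not the feature set or implementation reported in the paper.

```python
# Sketch of labelled edge scoring and top-q candidate pruning.
def edge_features(sent, i, j, label):
    """sent: list of (word, pos) pairs, index 0 = artificial root.
    Templates here are illustrative, not the paper's feature set."""
    wi, pi = sent[i]
    wj, pj = sent[j]
    return [f"head_word={wi}|label={label}",
            f"head_pos={pi}|child_pos={pj}|label={label}",
            f"child_word={wj}|label={label}",
            f"dist={abs(i - j)}|label={label}"]

def score(weights, sent, i, j, label):
    """weights: mapping from feature string to learned weight."""
    return sum(weights.get(f, 0.0) for f in edge_features(sent, i, j, label))

def top_q_candidates(weights, sent, labels, q=10):
    """For each modifier j keep only the q best (head, label) edges, so
    that the ILP only instantiates variables for these candidates."""
    cands = {}
    for j in range(1, len(sent)):
        scored = [((i, j, l), score(weights, sent, i, j, l))
                  for i in range(len(sent)) if i != j for l in labels]
        scored.sort(key=lambda x: x[1], reverse=True)
        cands[j] = scored[:q]
    return cands
```

Keeping more than one candidate per token pair matters when constraints such as label uniqueness are active: if the best label is forbidden, the solver needs a lower-scoring label to fall back on.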
get out the vote determining support or opposition from congressional floordebate transcripts we investigate whether one can determine from the transcripts of yous congressional floor debates whether the speeches represent support of or opposition to proposed legislation to address this problem we exploit the fact that these speeches occur as part of a discussion this allows us to use sources of information regarding relationships between discourse segments such as whether a given utterance indicates agreement with the opinion expressed by another we find that the incorporation of such information yields substantial improvements over classifying speeches in isolation one ought to recognize that the present political chaos is connected with the decay of language and that one can probably bring about some improvement by starting at the verbal end orwell politics and the english language we have entered an era where very large amounts of politically oriented text are now available onlinethis includes both official documents such as the full text of laws and the proceedings of legislative bodies and unofficial documents such as postings on weblogs devoted to politicsin some sense the availability of such data is simply a manifestation of a general trend of everybody putting their records on the internet1 the online accessibility of politically oriented texts in particular however is a phenomenon that some have gone so far as to say will have a potentially societychanging effectin the united states for example governmental bodies are providing and soliciting political documents via the internet with lofty goals in mind electronic rulemaking initiatives involving the electronic collection distribution synthesis and analysis of public commentary in the regulatory rulemaking process may alter the citizengovernment relationship additionally much media attention has been focused recently on the potential impact that internet sites may have on politics2 or at least on political journalism3regardless of whether one views such claims as clearsighted prophecy or mere hype it is obviously important to help people understand and analyze politically oriented text given the importance of enabling informed participation in the political processevaluative and persuasive documents such as a politicians speech regarding a bill or a bloggers commentary on a legislative proposal form a particularly interesting type of politically oriented textpeople are much more likely to consult such evaluative statements than the actual text of a bill or law under discussion given the dense nature of legislative language and the fact that bills often reach several hundred pages in length moreover political opinions are exsional bills and related data was launched in january 1995 when mosaic was not quite two years old and altavista did not yet exist plicitly solicited in the erulemaking scenarioin the analysis of evaluative language it is fundamentally necessary to determine whether the authorspeaker supports or disapproves of the topic of discussionin this paper we investigate the following specific instantiation of this problem we seek to determine from the transcripts of yous congressional floor debates whether each speech represents support for or opposition to a proposed piece of legislationnote that from an experimental point of view this is a very convenient problem to work with because we can automatically determine ground truth simply by consulting publicly available voting recordstask properties determining whether 
or not a speaker supports a proposal falls within the realm of sentiment analysis an extremely active research area devoted to the computational treatment of subjective or opinionoriented language hearst sack and wiebe see esuli for an active bibliographyin particular since we treat each individual speech within a debate as a single document we are considering a version of documentlevel sentimentpolarity classification namely automatically distinguishing between positive and negative documents most sentimentpolarity classifiers proposed in the recent literature categorize each document independentlya few others incorporate various measures of interdocument similarity between the texts to be labeled many interesting opinionoriented documents however can be linked through certain relationships that occur in the context of evaluative discussionsfor example we may find textual4 evidence of a high likelihood of agreement be4because we are most interested in techniques applicable across domains we restrict consideration to nlp aspects of the problem ignoring external problemspecific informationfor example although most votes in our corpus were almost completely along party lines we did not use partyaffiliation dataindeed in other settings one may not be able to determine the participants political leanings and such information may not lead to significantly improved results even if it were available tween two speakers such as explicit assertions or quotation of messages in emails or postings but cfagrawal et al agreement evidence can be a powerful aid in our classification task for example we can easily categorize a complicated document if we find within it indications of agreement with a clearly positive textobviously incorporating agreement information provides additional benefit only when the input documents are relatively difficult to classify individuallyintuition suggests that this is true of the data with which we experiment for several reasonsfirst yous congressional debates contain very rich language and cover an extremely wide variety of topics ranging from flag burning to international policy to the federal budgetdebates are also subject to digressions some fairly natural and others less so second an important characteristic of persuasive language is that speakers may spend more time presenting evidence in support of their positions than directly stating their attitudesan extreme example will illustrate the problems involvedconsider a speech that describes the yous flag as deeply inspirational and thus contains only positive languageif the bill under discussion is a proposed flagburning ban then the speech is supportive but if the bill under discussion is aimed at rescinding an existing flagburning ban the speech may represent opposition to the legislationgiven the current state of the art in sentiment analysis it is doubtful that one could determine the relationship between presented evidence and speaker opinionqualitative summary of results the above difficulties underscore the importance of enhancing standard classification techniques with new information sources that promise to improve accuracy such as interdocument relationships between the documents to be labeledin this paper we demonstrate that the incorporation of agreement modeling can provide substantial improvements over the application of support vector machines in isolation which represents the state of the art in the individual classification of documentsthe enhanced accuracies are obtained via a fairly primitive 
automaticallyacquired agreement detector and a conceptually simple method for integrating isolateddocument and agreementbased informationwe thus view our results as demonstrating the potentially large benefits of exploiting sentimentrelated discoursesegment relationships in sentimentanalysis tasksthis section outlines the main steps of the process by which we created our corpus govtrack is an independent website run by joshua tauberer that collects publicly available data on the legislative and fundraising activities of yous congresspeopledue to its extensive crossreferencing and collating of information it was nominated for a 2006 webby awarda crucial characteristic of govtrack from our point of view is that the information is provided in a very convenient format for instance the floordebate transcripts are broken into separate html files according to the subject of the debate so we can trivially derive long sequences of speeches guaranteed to cover the same topicwe extracted from govtrack all available transcripts of yous floor debates in the house of representatives for the year 2005 together with voting records for all rollcall votes during that yearwe concentrated on debates regarding controversial bills because these debates should presumably exhibit more interesting discourse structureeach debate consists of a series of speech segments where each segment is a sequence of uninterrupted utterances by a single speakersince speech segments represent natural discourse units we treat them as the basic unit to be classifiedeach speech segment was labeled by the vote cast for the proposed bill by the person who uttered the speech segmentwe automatically discarded those speech segments belonging to a class of formulaic generally onesentence utterances focused on the yielding of time on the house floor as such speech segments are clearly offtopicwe also removed speech segments containing the term amendment since we found during initial inspection that these speeches generally reflect a speakers opinion on an amendment and this opinion may differ from the speakers opinion on the underlying bill under discussionwe randomly split the data into training test and development sets representing roughly 70 20 and 10 of our data respectively the speech segments remained grouped by debate with 38 debates assigned to the training set 10 to the test set and 5 to the development set we require that the speech segments from an individual debate all appear in the same set because our goal is to examine classification of speech segments in the context of the surrounding discussionthe supportoppose classification problem can be approached through the use of standard classifiers such as support vector machines which consider each text unit in isolationas discussed in section 1 however the conversational nature of our data implies the existence of various relationships that can be exploited to improve cumulative classification accuracy for speech segments belonging to the same debateour classification framework directly inspired by blum and chawla integrates both perspectives optimizing its labeling of speech segments based on both individual speechsegment classification scores and preferences for groups of speech segments to receive the same labelin this section we discuss the specific classification framework that we adopt and the set of mechanisms that we propose for modeling specific types of relationshipslet s1 s2 sn be the sequence of speech segments within a given debate and let y and n stand for the yea 
and nay class respectivelyassume we have a nonnegative function ind indicating the degree of preference that an individualdocument classifier such as an svm has for placing speechsegment s in class c also assume that some pairs of speech segments have weighted links between them where the nonnegative strength str for a link indicates the degree to which it is preferable that the linked speech segments receive the same labelthen any class assignment c c c c can be assigned a cost where c is the opposite class from ca minimumcost assignment thus represents an optimum way to classify the speech segments so that each one tends not to be put into the class that the individualdocument classifier disprefers but at the same time highly associated speech segments tend not to be put in different classesas has been previously observed and exploited in the nlp literature the above optimization function unlike many others that have been proposed for graph or set partitioning can be solved exactly in an provably efficient manner via methods for finding minimum cuts in graphsin our view the contribution of our work is the examination of new types of relationships not the method by which such relationships are incorporated into the classification decisionin our experiments we employed the wellknown classifier svmquotght to obtain individualdocument classification scores treating y as the positive class and using plain unigrams as features5 following standard practice in sentiment analysis the input to svmquotght consisted of normalized presenceoffeature vectorsthe ind value 5svmlight is available at svmlightjoachimsorgdefault parameters were used although experimentation with different parameter settings is an important direction for future work for each speech segment s was based on the signed distance d from the vector representing s to the trained svm decision plane where σs is the standard deviation of d over all speech segments s in the debate in question and ind def 1 indwe now turn to the more interesting problem of representing the preferences that speech segments may have for being assigned to the same classa wide range of relationships between text segments can be modeled as positivestrength linkshere we discuss two types of constraints that are considered in this worksamespeaker constraints in congressional debates and in general socialdiscourse contexts a single speaker may make a number of comments regarding a topicit is reasonable to expect that in many settings the participants in a discussion may be convinced to change their opinions midway through a debatehence in the general case we wish to be able to express soft preferences for all of an authors statements to receive the same label where the strengths of such constraints could for instance vary according to the time elapsed between the statementsweighted links are an appropriate means to express such variationhowever if we assume that most speakers do not change their positions in the course of a discussion we can conclude that all comments made by the same speaker must receive the same labelthis assumption holds by fiat for the groundtruth labels in our dataset because these labels were derived from the single vote cast by the speaker on the bill being discussed6 we can implement this assumption via links whose weights are essentially infinitealthough one can also implement this assumption via concatenation of samespeaker speech segments we view the fact that our graphbased framework incorporates both hard and soft constraints in a 
principled fashion as an advantage of our approachdifferentspeaker agreements in house discourse it is common for one speaker to make reference to another in the context of an agreement or disagreement over the topic of discussionthe systematic identification of instances of agreement can as we have discussed be a powerful tool for the development of intelligently selected weights for links between speech segmentsthe problem of agreement identification can be decomposed into two subproblems identifying references and their targets and deciding whether each reference represents an instance of agreementin our case the first task is straightforward because we focused solely on byname references7 hence we will now concentrate on the second more interesting taskwe approach the problem of classifying references by representing each reference with a wordpresence vector derived from a window of text surrounding the reference8 in the training set we classify each reference connecting two speakers with a positive or negative label depending on whether the two voted the same way on the bill under discussion9these labels are then used to train an svm classifier the output of which is subsequently used to create weights on agreement links in the test set as followslet d denote the distance from the vector representing reference r to the agreementdetector svms decision plane and let you be the standard deviation of d over all references in the debate in questionwe then define the strength agr of the agreement link corresponding to the reference as agr def 0 d 4u the free parameter α specifies the relative importance of the agr scoresthe threshold bagr controls the precision of the agreement links in that values of bagr greater than zero mean that greater confidence is required before an agreement link can be added10this section presents experiments testing the utility of using speechsegment relationships evaluating against a number of baselinesall reported results use values for the free parameter α derived via tuning on the development setin the tables boldface indicates the development and testset results for the developmentsetoptimal parameter settings as one would make algorithmic choices based on developmentset performancerecall that to gather interspeaker agreement information the strategy employed in this paper is to classify byname references to other speakers as to whether they indicate agreement or notto train our agreement classifier we experimented with undoing the deletion of amendmentrelated speech segments in the training setnote that such speech segments were never included in the development or test set since as discussed in section 2 their labels are probably noisy however including them in the training set allows the classifier to examine more instances even though some of them are labeled incorrectlyas table 2 shows using more if noisy data yields better agreementclassification results on the development set and so we use that policy in all subsequent experiments11 10our implementation puts a link between just one arbitrary pair of speech segments among all those uttered by a given pair of apparently agreeing speakersthe infiniteweight samespeaker links propagate the agreement information to all other such pairs11unfortunately this policy leads to inferior testset agreean important observation is that precision may be more important than accuracy in deciding which agreement links to add false positives with respect to agreement can cause speech segments to be incorrectly assigned the 
same label whereas false negatives mean only that agreementbased information about other speech segments is not employedas described above we can raise agreement precision by increasing the threshold bagr which specifies the required confidence for the addition of an agreement linkindeed table 3 shows that we can improve agreement precision by setting bagr to the mean agreement score µ assigned by the svm agreementclassifier over all references in the given debate12however this comes at the cost of greatly reducing agreement accuracy due to lowered recall levelswhether or not better speechsegment classification is ultimately achieved is discussed in the next sectionsbaselines the first two data rows of table 4 depict baseline performance resultsthe baseline is meant to explore whether the speechsegment classification task can be reduced to simple lexical checksspecifically this method uses the signed difference between the number of words containing the stem support and the number of words containing the stem oppos no better than 6267 testset accuracy is obtained by either baselineusing relationship information applying an svm to classify each speech segment in isolation leads to clear improvements over the two baseline methods as demonstrated in table 4when we impose the constraint that all speech segments uttered by the same speaker receive the same label via samespeaker links both testset and tion accuracy in percenthere the initial svm is run on the concatenation of all of a given speakers speech segments but the results are computed over speech segments so that they can be compared to those in table 4 developmentset accuracy increase even more in the latter case quite substantially sothe last two lines of table 4 show that the best results are obtained by incorporating agreement information as wellthe highest testset result 7116 is obtained by using a highprecision threshold to determine which agreement links to addwhile the developmentset results would induce us to utilize the standard threshold value of 0 which is suboptimal on the test set the bagr 0 agreementlink policy still achieves noticeable improvement over not using agreement links we use speech segments as the unit of classification because they represent natural discourse unitsas a consequence we are able to exploit relationships at the speechsegment levelhowever it is interesting to consider whether we really need to consider relationships specifically between speech segments themselves or whether it suffices to simply consider relationships between the speakers of the speech segmentsin particular as an alternative to using samespeaker links we tried a speakerbased approach wherein the way we determine the initial individualdocument classification score for each speech segment uttered by a person p in a given debate is to run an svm on the concatenation of all of ps speech segments within that debatehow does the use of samespeaker links compare to the concatenation of each speakers speech segmentstables 4 and 5 show that not surprisingly the svm individualdocument classifier works better on the concatenated speech segments than on the speech segments in isolationhowever the effect on overall classification accuracy is less clear the development set favors samespeaker links over concatenation while the test set does notbut we stress that the most important observation we can make from table 5 is that once again the addition of agreement information leads to substantial improvements in accuracyrecall that in in our 
experiments we created finiteweight agreement links so that speech segments appearing in pairs flagged by our agreement detector can potentially receive different labelswe also experimented with forcing such speech segments to receive the same label either through infiniteweight agreement links or through a speechsegment concatenation strategy similar to that described in the previous subsectionboth strategies resulted in clear degradation in performance on both the development and test sets a finding that validates our encoding of agreement information as soft preferenceswe have seen several cases in which the method that performs best on the development set does not yield the best testset performancehowever we felt that it would be illegitimate to change the traindevelopmenttest sets in a post hoc fashion that is after seeing the experimental resultsmoreover and crucially it is very clear that using agreement information encoded as preferences within our graphbased approach rather than as hard constraints yields substantial improvements on both the development and test set this we believe is our most important findingpoliticallyoriented text sentiment analysis has specifically been proposed as a key enabling technology in erulemaking allowing the automatic analysis of the opinions that people submit there has also been work focused upon determining the political leaning of a document or author where most previouslyproposed methods make no direct use of relationships between the documents to be classified an exception is grefenstette et al who experimented with determining the political orientation of websites essentially by classifying the concatenation of all the documents found on that siteothers have applied the nlp technologies of nearduplicate detection and topicbased text categorization to politically oriented text detecting agreement we used a simple method to learn to identify crossspeaker references indicating agreementmore sophisticated approaches have been proposed including an extension that in an interesting reversal of our problem makes use of sentimentpolarity indicators within speech segments also relevant is work on the general problems of dialogact tagging citation analysis and computational rhetorical analysis we currently do not have an efficient means to encode disagreement information as hard constraints we plan to investigate incorporating such information in future workrelationships between the unlabeled items carvalho and cohen consider sequential relations between different types of emails to classify messages and thus also explicitly exploit the structure of conversationsprevious sentimentanalysis work in different domains has considered interdocument similarity or explicit interdocument references in the form of hyperlinks notable early papers on graphbased semisupervised learning include blum and chawla bansal et al kondor and lafferty and joachims zhu maintains a survey of this arearecently several alternative often quite sophisticated approaches to collective classification have been proposed it would be interesting to investigate the application of such methods to our problemhowever we also believe that our approach has important advantages including conceptual simplicity and the fact that it is based on an underlying optimization problem that is provably and in practice easy to solvein this study we focused on very general types of crossdocument classification preferences utilizing constraints based only on speaker identity and on direct textual 
references between statementswe showed that the integration of even very limited information regarding interdocument relationships can significantly increase the accuracy of supportopposition classificationthe simple constraints modeled in our study however represent just a small portion of the rich network of relationships that connect statements and speakers across the political universe and in the wider realm of opinionated social discourseone intriguing possibility is to take advantage of information regarding interpersonal relationships making use of speakerauthor affiliations positions within a social hierarchy and so onor we could even attempt to model relationships between topics or concepts in a kind of extension of collaborative filteringfor example perhaps we could infer that two speakers sharing a common opinion on evolutionary biologist richard dawkins will be likely to agree in a debate centered on intelligent designwhile such functionality is well beyond the scope of our current study we are optimistic that we can develop methods to exploit additional types of relationships in future workacknowledgments we thank claire cardie jon kleinberg michael macy andrew myers and the six anonymous emnlp referees for valuable discussions and commentswe also thank reviewer 1 for generously providing additional post hoc feedback and the emnlp chairs eric gaussier and dan jurafsky for facilitating the process this paper is based upon work supported in part by the national science foundation under grant noiis0329064any opinions findings and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views or official policies either expressed or implied of any sponsoring institutions the yous government or any other entity
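The joint assignment described above — individual SVM preferences plus same-speaker and agreement links, solved exactly via a minimum cut — can be illustrated with a small sketch. The use of networkx and the toy capacities below are assumptions for illustration; the method only requires some min-cut/max-flow implementation.

```python
# Minimal sketch of the graph-cut classification step: individual
# preferences ind(s, Y) / ind(s, N) become source/sink capacities, and
# same-speaker / agreement links become association edges, so a minimum
# s-t cut yields the minimum-cost joint labelling.
import networkx as nx

def classify_segments(ind, links):
    """ind: {segment: (pref_yea, pref_nay)}; links: {(s1, s2): strength}."""
    g = nx.DiGraph()
    for s, (pref_y, pref_n) in ind.items():
        # Cutting YEA->s (s labelled nay) costs the classifier's
        # preference for yea, and vice versa.
        g.add_edge("YEA", s, capacity=pref_y)
        g.add_edge(s, "NAY", capacity=pref_n)
    for (s1, s2), strength in links.items():
        # Association edges are only cut when the two segments end up in
        # different classes; same-speaker links get a very large value.
        g.add_edge(s1, s2, capacity=strength)
        g.add_edge(s2, s1, capacity=strength)
    _, (yea_side, _) = nx.minimum_cut(g, "YEA", "NAY")
    return {s: ("Y" if s in yea_side else "N") for s in ind}

# Toy usage: two segments by the same speaker, one ambiguous on its own;
# the hard same-speaker link pulls the ambiguous one to the yea class.
prefs = {"seg1": (0.9, 0.1), "seg2": (0.45, 0.55)}
same_speaker = {("seg1", "seg2"): 1000.0}
print(classify_segments(prefs, same_speaker))   # {'seg1': 'Y', 'seg2': 'Y'}
```

The example shows why soft association links are attractive: with a finite strength instead of 1000.0, a strongly negative seg2 could still overrule the link, which is exactly the behaviour the paper wants for the (noisier) agreement links.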
W06-1639
get out the vote determining support or opposition from congressional floordebate transcriptswe investigate whether one can determine from the transcripts of yous congressional floor debates whether the speeches represent support of or opposition to proposed legislationto address this problem we exploit the fact that these speeches occur as part of a discussion this allows us to use sources of information regarding relationships between discourse segments such as whether a given utterance indicates agreement with the opinion expressed by anotherwe find that the incorporation of such information yields substantial improvements over classifying speeches in isolationwe present a method based on support vector machines to determine whether the speeches made by participants represent support or opposition to proposed legislation using transcripts of yous congressional floor debates
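The paper derives ind(·) from the signed SVM distance normalised by the per-debate standard deviation, and the agreement-link strength agr(·) from the agreement classifier's decision value subject to a precision threshold, but the exact squashing formulas are elided in the text above. The sketch below is therefore a hedged reconstruction under stated assumptions (clipping at ±2σ for segments, and at 4σ with threshold θ for references), not a quotation of the published formulas.

```python
# Hedged reconstruction of how raw SVM decision values might be turned
# into the soft preferences and link strengths used by the min-cut step.
import statistics

def segment_preferences(decision_values):
    """decision_values: {segment: signed distance to the SVM hyperplane},
    computed per debate.  Returns {segment: (ind_yea, ind_nay)}."""
    sigma = statistics.pstdev(decision_values.values()) or 1.0
    prefs = {}
    for seg, d in decision_values.items():
        x = max(-1.0, min(1.0, d / (2 * sigma)))   # clip at +/- 2 sigma
        prefs[seg] = ((1 + x) / 2, (1 - x) / 2)    # ind(s,N) = 1 - ind(s,Y)
    return prefs

def agreement_strengths(ref_scores, alpha=1.0, theta=0.0):
    """ref_scores: {(speaker_a, speaker_b): agreement-SVM decision value}.
    alpha weights agreement links against individual scores; raising
    theta (e.g. to the per-debate mean) trades recall for precision."""
    sigma = statistics.pstdev(ref_scores.values()) or 1.0
    strengths = {}
    for pair, d in ref_scores.items():
        if d < theta:
            continue                               # below threshold: no link
        strengths[pair] = alpha * min(d, 4 * sigma) / (4 * sigma)
    return strengths
```

The design point this illustrates is that agreement evidence enters only as a bounded, tunable weight, so a single misclassified reference cannot force two segments into the same class the way an infinite same-speaker link does.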
fully automatic lexicon expansion for domainoriented sentiment analysis this paper proposes an unsupervised lexicon building method for the detecof which convey positive or negative aspects in a specific domain the lexical entries to be acare called the minimum humanunderstandable syntactic structures that specify the polarity of clauses as a clue to obtain candidate atoms we use the tendency for same polarities to appear successively in contexts using the overall density and precision of coherency in the corpus the statistical estimation picks up appropriate polar atoms among candidates without any manual tuning of the threshold values the experimental results show that the precision of polarity assignment with the automatically acquired lexicon was 94 on average and our method is robust for corpora in diverse domains and for the size of the initial lexicon sentiment analysis is a task to recognize writers feelings as expressed in positive or negative comments by analyzing unreadably large numbers of documentsextensive syntactic patterns enable us to detect sentiment expressions and to convert them into semantic structures with high precision as reported by kanayama et al from the example japanese sentence in the digital camera domain the sa system extracts a sentiment representation as which consists of a predicate and an argument with positive polaritysa in general tends to focus on subjective sentiment expressions which explicitly describe an authors preference as in the above example objective expressions such as in the following examples and may be out of scope even though they describe desirable aspects in a specific domainhowever when customers or corporate users use sa system for their commercial activities such domainspecific expressions have a more important role since they convey strong or weak points of the product more directly and may influence their choice to purchase a specific product as an examplethis paper addresses the japanese version of domainoriented sentiment analysis which identifies polar clauses conveying goodness and badness in a specific domain including rather objective expressionsbuilding domaindependent lexicons for many domains is much harder work than preparing domainindependent lexicons and syntactic patterns because the possible lexical entries are too numerous and they may differ in each domainto solve this problem we have devised an unsupervised method to acquire domaindependent lexical knowledge where a user has only to collect unannotated domain corporathe knowledge to be acquired is a domaindependent set of polar atomsa polar atom is a minimum syntactic structure specifying polarity in a predicative expressionfor example to detect polar clauses in the sentences and i the following polar atoms and should appear in the lexicon the polar atom specified the positive polarity of the verb kukkirisuruthis atom can be generally used for this verb regardless of its argumentsin the polar atom on the other hand the nominative case of the verb tsuku is limited to a specific noun zuumu since the verb tsuku does not hold the polarity in itselfthe automatic decision for the scopes of the atoms is one of the major issuesfor lexical learning from unannotated corpora our method uses context coherency in terms of polarity an assumption that polar clauses with the same polarity appear successively unless the context is changed with adversative expressionsexploiting this tendency we can collect candidate polar atoms with their tentative polarities as those adjacent to 
the polar clauses which have been identified by their domainindependent polar atoms in the initial lexiconwe use both intrasentential and intersentential contexts to obtain more candidate polar atomsour assumption is intuitively reasonable but there are many nonpolar clauses adjacent to polar clauseserrors in sentence delimitation or syntactic parsing also result in false candidate atomsthus to adopt a candidate polar atom for the new lexicon some threshold values for the frequencies or ratios are required but they depend on the type of the corpus the size of the initial lexicon etcour algorithm is fully automatic in the sense that the criteria for the adoption of polar atoms are set automatically by statistical estimation based on the distributions of coherency coherent precision and coherent densityno manual tuning process is required so the algorithm only needs unannotated domain corpora and the initial lexiconthus our learning method can be used not only by the developers of the system but also by endusersthis feature is very helpful for users to the english translations are included only for convenience analyze documents in new domainsin the next section we review related work and section 3 describes our runtime sa systemin section 4 our assumption for unsupervised learning context coherency and its key metrics coherent precision and coherent density are discussedsection 5 describes our unsupervised learning methodexperimental results are shown in section 6 and we conclude in section 7sentiment analysis has been extensively studied in recent yearsthe target of sa in this paper is wider than in previous workfor example yu and hatzivassiloglou separated facts from opinions and assigned polarities only to opinionsin contrast our system detects factual polar clauses as well as sentimentsunsupervised learning for sentiment analysis is also being studiedfor example hatzivassiloglou and mckeown labeled adjectives as positive or negative relying on semantic orientationturney used collocation with excellent or poor to obtain positive and negative clues for document classificationin this paper we use contextual information which is wider than for the contexts they used and address the problem of acquiring lexical entries from the noisy cluesintersentential contexts as in our approach were used as a clue also for subjectivity analysis which is twofold classification into subjective and objective sentencescompared to it this paper solves a more difficult problem threefold classification into positive negative and nonpolar expressions using imperfect coherency in terms of sentiment polaritylearning methods for phraselevel sentiment analysis closely share an objective of our approachpopescu and etzioni achieved highprecision opinion phrases extraction by using relaxation labelingtheir method iteratively assigns a polarity to a phrase relying on semantic orientation of cooccurring words in specific relations in a sentence but the scope of semantic orientation is limited to within a sentencewilson et al proposed supervised learning dividing the resources into prior polarity and context polarity which are similar to polar atoms and syntactic patterns in this paper respectivelywilson et al prepared prior polarities from existing resources and learned the context polarities by using prior polarities and annotated corporatherefore the prerequisite data and learned data are opposite from those in our approachwe took the approach used in this paper because we want to acquire more domaindependent knowledge 
and context polarity is easier to access in japaneseour approach and their work can complement each otheras figure 1 illustrates the flow of our sentiment analysis system involves three stepsthe first step is sentence delimitation the input document is divided into sentencesthe second step is proposition detection propositions which can form polar clauses are identified in each sentencethe third step is polarity assignment the polarity of each proposition is examined by considering the polar atomsthis section describes the last two processes which are based on a deep sentiment analysis method analogous to machine translation our basic tactic for clauselevel sa is the highprecision detection of polar clauses based on deep syntactic analysisclauselevel means that only predicative verbs and adjectives such as in are detected and adnominal usages of verbs and adjectives as in are ignored because utsukushii in does not convey a positive polarityhere we use the notion of a proposition as a clause without modality led by a predicative verb or a predicative adjectivethe propositions detected from a sentence are subject to the assignment of polaritiesbasically we detect a proposition only at the head of a syntactic tree3however this limitation reduces the recall of sentiment analysis to a very low levelin the example above utsukushii is the head of the tree while those initial clauses in to below are notin order to achieve higher recall while maintaining high precision we apply two types of syntactic patterns modality patterns and conjunctive patterns4 to the tree structures from the fullparsingmodality patterns match some auxiliary verbs or corresponding sentencefinal expressions to allow for specific kinds of modality and negationone of the typical patterns is v to omou 5 which allows utsukushii in to be a propositionalso negation is handled with a modality pattern such as v nai in this case a neg feature is attached to the proposition to identify utsukushii in as a negated propositionon the other hand no proposition is identified in due to the deliberate absence of a pattern v to yoi we used a total of 103 domainindependent modality patterns most of which are derived from the coordinative te shi ueni dakedenaku nominarazu causal tame kara node adversative ga kedo keredo monono nodaga another type of pattern is conjunctive patterns which allow multiple propositions in a sentencewe used a total of 22 conjunctive patterns also derived from the mt method as exemplified in table 1in such cases of coordinative clauses and causal clauses both clauses can be polar clauseson the other hand no proposition is identified in a conditional clause due to the absence of corresponding conjunctive patternsto assign a polarity to each proposition polar atoms in the lexicon are compared to the propositiona polar atom consists of polarity verb or adjective and optionally its argumentsexample is a simple polar atom where no argument is specifiedthis atom matches any proposition whose head is utsukushiiexample is a complex polar atom which assigns a negative polarity to any proposition whose head is the verb kaku and where the accusative case is miryokuto lack t attractionacc a polarity is assigned if there exists a polar atom for which verbadjective and the arguments coincide with the proposition and otherwise no polarity is assignedthe opposite polarity of the polar atom is assigned to a proposition which has the neg featurewe used a total of 3275 polar atoms most of which are derived from an english sentiment 
lexicon according to the evaluation of the mt method highprecision sentiment analysis had been achieved using the polar atoms and patterns where the system never took positive sentiment for negative and vice versa and judged positive or negative to neutral expressions in only about 10 caseshowever the recall is too low and most of the lexicon is for domainindependent expressions and thus we need more lexical entries to grasp the positive and negative aspects in a specific domainthis section introduces the intra and intersentential contexts in which we assume context coherency for polarity and describes some preliminary analysis of the assumptionthe identification of propositions described in section 31 clarifies our viewpoint of the contextshere we consider two types of contexts intrasentential context and intersentential contextfigure 2 illustrates the context coherency in a sample discourse where the polarities are perfectly coherentit is light and has a zoom lens though the lcd is small i am satisfiedtada nedanga chotto takaibut the price is a little high the intrasentential context is the link between propositions in a sentence which are detected as coordinative or causal clausesif there is an adversative conjunction such as kedo in the third sentence in a flag is attached to the relation as denoted with 0 in figure 2though there are differences in syntactic phenomena this is simshikashi demo sorenanoni tadashi dakedo gyakuni tohaie keredomo ippou used in this paperthe post and sent columns denote the numbers of postings and sentences respectivelylenquot is the average length of sentences ilar to the semantic orientation proposed by hatzivassiloglou and mckeown the intersentential context is the link between propositions in the main clauses of pairs of adjacent sentences in a discoursethe polarities are assumed to be the same in the intersentential context unless there is an adversative expression as those listed in table 2if no proposition is detected as in a nominal sentence the context is splitthat is there is no link between the proposition of the previous sentence and that of the next sentencewe claim these two types of context can be used for unsupervised learning as clues to assign a tentative polarity to unknown expressionsto validate our assumption we conducted preliminary observations using various corporathroughout this paper we used japanese corpora from discussion boards in four different domains whose features are shown in table 3all of the corpora have clues to the boundaries of postings so they were suitable to identify the discourseshow strong is the coherency in the context proposed in section 41using the polar clauses detected by the sa system with the initial lexicon we observed the coherent precision of domain d with lexicon l defined as where and are occurrence counts of the same and opposite polarities observed between two polar clauses as observed in the discourseas the two polar clauses we consider the following types windowa polar clause and the nearest polar clause which is found in the preceding n sentences in the discoursecontexttwo polar clauses in the intrasentential andor intersentential context described in section 41this is the viewpoint of context in our methodtable 4 shows the frequencies of coherent pairs conflicting pairs and the coherent precision for half of the digital camera domain corpusbaseline is the percentage of positive clauses among the polar clauses6for the window method we tested for n0 1 2 and oc0quot means two propositions within a 
sentenceapparently the larger the window size the smaller the cp valuewhen the window size is ooquot implying anywhere within a discourse the ratio is larger than the baseline by only 27 and thus these types of coherency are not reliable even though the number of clues is relatively largecontext shows the coherency of the two types of context that we consideredthe cp values are much higher than those in the window methods because the relationships between adjacent pairs of clauses are handled more appropriately by considering syntactic trees adversative conjunctions etcthe cp values for intersentential and intrasentential contexts are almost the same and thus both contexts can be used to obtain 25 times more clues for the intrasentential contextin the rest of this paper we will use both contextswe also observed the coherent precision for each domain corpusthe results in the center column of table 5 indicate the number is slightly different among corpora but all of them are far from perfect coherencybesides the conflicting cases there are many more cases where a polar clause does not appear in the polar contextwe also observed the coherent density of the domain d with the lexicon l defined as this indicates the ratio of polar clauses that appear in the coherent context among all of the polar clauses detected by the systemthe right column of table 5 shows the coherent density in each domainthe movie domain has notably higher coherent density than the othersthis indicates the sentiment expressions are more frequently used in the movie domainthe next section describes the method of our unsupervised learning using this imperfect context coherencyfigure 3 shows the flow of our unsupervised learning methodfirst the runtime sa system identifies the polar clauses and the candidate polar atoms are collectedthen each candidate atom is validated using the two metrics in the previous section cp and cd which are calculated from all of the polar clauses found in the domain corpustable 6 examples of candidate polar atoms and their frequencies denotes that it should not be added to the lexicon f p and n denote the frequency of the atom and in positive and negative contexts respectivelyfrom each proposition which does not have a polarity candidate polar atoms in the form of simple atoms or complex atoms are extractedfor each candidate polar atom a the total appearances f and the occurrences in positive contexts p and negative contexts n are counted based on the context of the adjacent clauses if the proposition has the neg feature the polarity is invertedtable 6 shows examples of candidate polar atoms with their frequenciesamong the located candidate polar atoms how can we distinguish true polar atoms which should be added to the lexicon from fake polar atoms which should be discardedas shown in section 4 both the coherent precision and the coherent density are so small that we cannot rely on each single appearance of the atom in the polar contextone possible approach is to set the threshold values for frequency in a polar context max n and for the ratio of appearances in polar contexts among the total appearances maxn f however the optimum threshold values should depend on the corpus and the initial lexiconin order to set general criteria here we assume that a true positive polar atom a should have higher p f than its average ie coherent density cd and also have higher p than its average ie coherent precision cp and these criteria should be met with 90 confidence where la is the initial lexicon with a 
addedassuming the binomial distribution a candidate polar atom is adopted as a positive polar atom7 if both and are satisfied where we can assume cd _ cd and cp _ cp when l is largewe compute the confidence interval using approximation with the fdistribution these criteria solve the problems in minimum frequency and scope of the polar atoms simultaneouslyin the example of table 6 the simple atom chiisai is discarded because it does not meet while the complex atom chiisai bodiiga is adopted as a positive atom shikkarisuru is adopted as a positive simple atom even though 10 cases out of 64 were observed in the negative contexton the other hand todoku mokuyouni is discarded because it does not meet even though n ie always observed in negative contextsfirst we propose a method of evaluation of the lexical learning ments of 200 polar atomsk083it is costly to make consistent and large gold standards in multiple domains especially in identification tasks such as clauselevel sa therefore we evaluated the learning results by asking human annotators to classify the acquired polar atoms as positive negative and neutral instead of the instances of polar clauses detected with the new lexiconthis can be done because the polar atoms themselves are informative enough to imply to humans whether the expressions hold positive or negative meanings in the domainto justify the reliability of this evaluation method two annotators9 evaluated 200 randomly selected candidate polar atoms in the digital camera domainthe agreement results are shown in table 7the manual classification was agreed upon in 89 of the cases and the kappa value was 083 which is high enough to be considered consistentusing manual judgment of the polar atoms we evaluated the performance with the following three metricstype precisionthe coincidence rate of the polarity between the acquired polar atom and the human evaluators judgmentsit is always false if the evaluators judged it as neutral token precisionthe coincidence rate of the polarity weighted by its frequency in the corpusthis metric emulates the precision of the detection of polar clauses with newly acquired poler atoms in the runtime sa systemrelative recallthe estimated ratio of the number of detected polar clauses with the expanded lexicon to the number of detected polar clauses with the initial lexiconrelative recall will be 1 when no new polar atom is acquiredsince the precision was high enough this metric can be used for approximation of the recall which is hard to evaluate in extraction tasks such as clausephraselevel safor each of the four domain corpora the annotators evaluated 100 randomly selected polar atoms which were newly acquired by our method to measure the precisionsrelative recall is estimated by comparing the numbers of detected polar clauses from randomly selected 2000 sentences with and without the acquired polar atomstable 8 shows the resultsthe token precision is higher than 90 in all of the corpora including the movie domain which is considered to be difficult for sa this is extremely high precision for this task because the correctness of both the extraction and polarity assignment was evaluated simultaneouslythe relative recall 128 in the digital camera domain means the recall is increased from 4310 to 55the difference was smaller in other domains but the domaindependent polar clauses are much informative than general ones thus the highprecision detection significantly enhances the systemto see the effects of our method we conducted a control experiment which 
used preset criteriato adopt the candidate atom a the frequency of polarity max n was required to be 3 or more and the ratio of polarity maxn was required to be higher than the threshold 0varying 0 from 005 to with various preset threshold values 0 for the digital camera and movie domainsthe rightmost star and circle denote the performance of our method08 we evaluated the token precision and the relative recall in the domains of digital cameras and moviesfigure 4 shows the resultsthe results showed both relative recall and token precision were lower than in our method for every 0 in both corporathe optimum 0 was 03 in the movie domain and 01 in the digital camera domaintherefore in this preset approach a tuning process is necessary for each domainour method does not require this tuning and thus fully automatic learning was possibleunlike the normal precisionrecall tradeoff the token precision in the movie domain got lower when the 0 is strictthis is due to the frequent polar atoms which can be acquired at the low ratios of the polarityour method does not discard these important polar atomswe also tested the performance while varying the size of the initial lexicon l we prepared three subsets of the initial lexicon l08 l05 and l02 removing polar atoms randomlythese lexicons had 08 05 02 times the polar atoms respectively compared to l table 9 shows the precisions and recalls using these lexicons for the learning processthough the cd values vary the precision was stable which means that our method was robust even for different sizes of the lexiconthe smaller the initial lexicon the higher the relative recall because the polar atoms which were removed from l were recovered in the learning processthis result suggests the possibility of the initial lexicon the bootstrapping method from a small initial lexiconas seen in the agreement study the polar atoms used in our study were intrinsically meaningful to humansthis is because the atoms are predicateargument structures derived from predicative clauses and thus humans could imagine the meaning of a polar atom by generating the corresponding sentence in its predicative formin the evaluation process some interesting results were observedfor example a negative atom nai kerarega was acquired in the digital camera domaineven the evaluator who was familiar with digital cameras did not know the term kerare but after looking up the dictionary she labeled it as negativeour learning method could pick up such technical terms and labeled them appropriatelyalso there were discoveries in the error analysisan evaluator assigned positive to aru kameraga in the mobile phone domain but the acquired polar atom had the negative polaritythis was actually an insight from the recent opinions that many users want phones without camera functions11we proposed an unsupervised method to acquire polar atoms for domainoriented sa and demonstrated its high performancethe lexicon can be expanded automatically by using unannotated corpora and tuning of the threshold values is not requiredtherefore even endusers can use this approach to improve the sentiment analysisthese features allow them to do ondemand analysis of more narrow domains such as the domain of digital quotperhaps because cameras tend to consume battery power and some users do not need them cameras of a specific manufacturer or the domain of mobile phones from the female users point of view
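The two corpus statistics that drive the learning procedure above, coherent precision cp(d, L) and coherent density cd(d, L), reduce to simple counts over the polar clauses found in a domain corpus: cp is the fraction of coherent-context pairs whose polarities agree, and cd is the fraction of detected polar clauses that have any polar clause in their coherent context at all. The sketch below is a minimal, hypothetical illustration of those two counts; it is not the authors' code, and the function and variable names are assumptions.

```python
from typing import Iterable, Optional, Tuple

def coherency_stats(clauses: Iterable[Tuple[str, Optional[str]]]) -> Tuple[float, float]:
    """Return (coherent precision cp, coherent density cd).

    Each element is (polarity, context_polarity) for one polar clause detected
    by the runtime SA system: polarity is "pos" or "neg", and context_polarity
    is the polarity of the linked clause in the intra-/inter-sentential
    context, or None if no polar clause appears in the coherent context.
    """
    same = opposite = uncovered = 0
    for polarity, context_polarity in clauses:
        if context_polarity is None:
            uncovered += 1
        elif context_polarity == polarity:
            same += 1
        else:
            opposite += 1
    total = same + opposite + uncovered
    cp = same / (same + opposite) if (same + opposite) else 0.0  # agreement among linked pairs
    cd = (same + opposite) / total if total else 0.0             # share of clauses with a polar context
    return cp, cd

# Toy run: three clauses have a polar clause in their context, one does not.
print(coherency_stats([("pos", "pos"), ("pos", "neg"), ("neg", "neg"), ("pos", None)]))
```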
W06-1642
Fully Automatic Lexicon Expansion for Domain-Oriented Sentiment Analysis. This paper proposes an unsupervised lexicon-building method for the detection of polar clauses, which convey positive or negative aspects in a specific domain. The lexical entries to be acquired are called polar atoms: the minimum human-understandable syntactic structures that specify the polarity of clauses. As a clue to obtain candidate polar atoms, we use context coherency, the tendency for same polarities to appear successively in contexts. Using the overall density and precision of coherency in the corpus, the statistical estimation picks up appropriate polar atoms among the candidates without any manual tuning of the threshold values. The experimental results show that the precision of polarity assignment with the automatically acquired lexicon was 94% on average, and that our method is robust for corpora in diverse domains and for the size of the initial lexicon. We validate that polar text units with the same polarity tend to appear together to make contexts coherent. We propose an algorithm to automatically expand an initial opinion lexicon based on context coherency, the tendency for same polarities to appear successively in contexts. We use conjunction rules to solve this problem from large domain corpora, and we adopt domain knowledge by extracting sentiment words from the domain-specific corpus.
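As an illustration of the statistical estimation step mentioned in this summary: a candidate polar atom is adopted only if, with 90% confidence, its rate of occurring in a polar context exceeds the coherent density and its rate of agreeing with that context exceeds the coherent precision. The paper computes the confidence interval with an F-distribution approximation; the sketch below substitutes an exact one-sided binomial tail test, so boundary decisions may differ slightly, and all names and example counts are assumptions.

```python
from math import comb

def exceeds_with_confidence(k: int, n: int, p0: float, conf: float = 0.90) -> bool:
    """One-sided exact binomial test: with the given confidence, is the
    underlying success rate greater than p0, having seen k successes in n?"""
    if n == 0:
        return False
    upper_tail = sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
    return upper_tail < 1 - conf

def adopt_positive_atom(f: int, p: int, n: int, cd: float, cp: float) -> bool:
    """Adopt candidate atom a as positive iff, with 90% confidence, p/f exceeds
    the coherent density cd and p/(p+n) exceeds the coherent precision cp."""
    return (exceeds_with_confidence(p, f, cd)
            and exceeds_with_confidence(p, p + n, cp))

# Assumed counts: the atom occurred 130 times, 54 in positive and 10 in
# negative contexts, against corpus-level cd = 0.10 and cp = 0.70.
print(adopt_positive_atom(f=130, p=54, n=10, cd=0.10, cp=0.70))
```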
joint extraction of entities and relations for opinion recognition we present an approach for the joint extraction of entities and relations in the con text of opinion recognition and analysis we identify two types of opinionrelated entities expressions of opinions andsources of opinions along with the linking relation that exists between them in spired by roth and yih we employ an integer linear programming approach to solve the joint opinion recognition taskand show that global constraintbased inference can significantly boost the perfor mance of both relation extraction and theextraction of opinionrelated entities performance further improves when a seman tic role labeling system is incorporated the resulting system achieves fmeasuresof 79 and 69 for entity and relation extrac tion respectively improving substantially over prior results in the area information extraction tasks such as recognizing entities and relations have long been considered critical to many domainspecific nlp tasks prager et al white et al researchers have furthershown that opinionoriented information extrac tion can provide analogous benefits to a variety of practical applications including product reputation tracking opinionoriented question answering and opinionoriented summarization liu et al moreover muchprogress has been made in the area of opinion extraction it is possible to identify sources of opin ions and kim and hovy to determine the polarity and strength of opinion expressions and to recognize propo sitional opinions and their sources with reasonable accuracyto datehowever there has been no effort to simultane ously identify arbitrary opinion expressions their sources and the relations between themwithoutprogress on the joint extraction of opinion entities and their relations the capabilities of opinion based applications will remain limitedfortunately research in machine learning has produced methods for global inference and jointclassification that can help to address this defi ciency roth and yih moreover it has been shown thatexploiting dependencies among entities andor re lations via global inference not only solves the joint extraction task but often boosts performanceon the individual tasks when compared to clas sifiers that handle the tasks independently for semantic role labeling information extraction and sequence tagging in this paper we present a global inference ap proach to the extractionof opinionrelated entities and relationsin par ticular we aim to identify two types of entities entities that express opin ions and entities that denote sources of opinionsmore specifically we use the term opinion expression to denote all direct expressions of subjectivity including opinions emotions beliefs senti ment etc as well as all speech expressions that introduce subjective propositions and use the termsource to denote the person or entity that holds the opinion1 in addition weaim to identify the relations between opinion ex pression entities and source entitiesthat is for a given opinion expression oi and source entity sj we determine whether the relation lij def obtains ie whether sj is the source of opinion expression oiwe refer to this particular relation as the link relation in the rest of the paperconsider for example the following sentences s1bush intends to curb the increase in harmful gas emissions and is counting on the good will of us industrialists s2by questioning the imams edict the islamic republic of iran made the people of the world understandthe underlined phrases above are 
opinion expres sions and phrases marked with square brackets aresource entitiesthe numeric superscripts on en tities indicate link relations a source entity andan opinion expression with the same number satisfy the link relationfor instance the source en tity bushand the opinion expression intendssatisfy the link relation and so do bushandcounting onnotice that a sentence may con tain more than one link relation and link relations are not onetoone mappings between sources andopinionsalso the pair of entities in a link rela tion may not be the closest entities to each other asis the case in the second sentence between ques tioningand the islamic republic of iranwe expect the extraction of opinion relations tobe critical for many opinionoriented nlp applicationsfor instance consider the following ques tion that might be given to a questionanswering system what is the imams opinion toward the islamic republic of iranwithout indepth opinion analysis the question answering system might mistake example s2 as relevant to the query even though s2 exhibits the opinion of the islamic republic of iran toward imam not the other way aroundinspired by roth and yih we model our task as global constraintbased inference over separately trained entity and relation classifiersin particular we develop three base classifiers two sequencetagging classifiers for the extraction 1see wiebe et al for additional detailsof opinion expressions and sources and a binary classifier to identify the link relationthe global inference procedure is implemented via integer linear programming to produce an optimal and coherent extraction of entities and relationsbecause many opinionsource relations appear as predicateargument relations where thepredicate is a verb we also hypothesize that se mantic role labeling will be very useful for our taskwe present two baseline methods for the joint opinionsource recognition task that use a stateoftheart srl system and describe two additional methods for in corporating srl into our ilpbased systemour experiments show that the global inference approach not only improves relation extractionover the base classifier but does the same for in dividual entity extractionsfor source extraction in particular our system achieves an fmeasure of 781 significantly outperforming previous results in this area which obtained an fmeasure of 694 on the same corpusin addition we achieve an fmeasure of 689 for link relationidentification and 820 for opinion expression ex traction for the latter task our system achieves humanlevel performance2work our system operates in three phasesopinion and source entity extraction we begin by developing two separate tokenlevelsequencetagging classifiers for opinion expression extraction and source extraction using linearchain conditional random fields the sequencetagging classifiers are trained using only local syntactic and lexical information to extract each type of entity without knowledge of any nearby or neighboring enti ties or relationswe collect nbest sequences from each sequence tagger in order to boost the recall of the final systemlink relation classification we also develop a relation classifier that is trained and tested on all pairs of opinion and source entities extractedfrom the aforementioned nbest opinion expres sion and source sequencesthe relation classifier is modeled using markov order0 crfs reports human annotation agreement for opinion expression as 820 by f1 measure432et al 2001 which are equivalent to maximum entropy modelsit is trained using only 
local syntac tic information potentially useful for connecting a pair of entities but has no knowledge of nearby or neighboring extracted entities and link relationsinteger linear programming finally we for mulate an integer linear programming problem for each sentence using the results from the previous two phasesin particular we specify a number of soft and hard constraints among relations andentities that take into account the confidence val ues provided by the supporting entity and relation classifiers and that encode a number of heuristics to ensure coherent outputgiven these constraintsglobal inference via ilp finds the optimal coherent set of opinionsource pairs by exploiting mu tual dependencies among the entities and relationswhile good performance in entity or relation extraction can contribute to better performance ofthe final system this is not always the casepun yakanok et al notes that in general it isbetter to have high recall from the classifiers in cluded in the ilp formulationfor this reason it is not our goal to directly optimize the performance of our opinion and source entity extraction models or our relation classifierthe rest of the paper is organized as followsrelated work is outlined belowsection 3 de scribes the components of the first phase of oursystem the opinion and source extraction classi fierssection 4 describes the construction of the link relation classifier for phase twosection 5 describes the ilp formulation to perform global inference over the results from the previous two phasesexperimental results that compare our ilp approach to a number of baselines are presented insection 6section 7 describes how srl can be incorporated into our global inference system to fur ther improve the performancefinal experimental results and discussion comprise section 8related work the definition of our source expressesopinion task is similar to that of bethardet al however our definition of opin ion and source entities are much more extensive going beyond single sentences and propositional opinion expressionsin particular we evaluate our approach with respect to a wide variety of opinion expressions explicit and implicit3 sources multiple opinionsource link relations3implicit sources are those that are not explicitly men tionedsee section 8 for more detailsper sentence and link relations that span morethan one sentencein addition the link relation model explicitly exploits mutual dependen cies among entities and relations while bethard et al does not directly capture the potential influence among entitieskim and hovy and choi et al focus only on the extraction of sources of opinions without extracting opinion expressionsspecifically kim and hovy assume a priori existence of the opinion expressions and ex tract a single source for each while choi et al do not explicitly extract opinion expres sions nor link an opinion expression to a sourceeven though their model implicitly learns approxi mations of opinion expressions in order to identify opinion sourcesother previous research focuses only on the extraction of opinion expressions munson et al andwilson et al omitting source identifica tion altogetherthere have also been previous efforts to simultaneously extract entities and relations by ex ploiting their mutual dependenciesroth and yih formulated global inference using abayesian network where they captured the influ ence between a relation and a pair of entities via the conditional probability of a relation given a pair of entitiesthis approach however could not 
exploit dependencies between relationsroth and yih later formulated global inference using integer linear programming which is the approach that we apply herein contrast to our work roth and yih operated in the domain of factualinformation extraction rather than opinion extraction and assumed that the exact boundaries of en tities from the gold standard are known a priori which may not be available in practiceentitieswe develop two separate sequence tagging classi fiers for opinion extraction and source extraction using linearchain conditional random fields the sequence tag ging is encoded as the typical bioscheme4each training or test instance represents a sen tence encoded as a linear chain of tokens and their 4b is for the token that begins an entity i is for to kens that are inside an entity and o is for tokens outside an entity433 associated featuresour feature set is based on that of choi et al for source extraction5but we include additional lexical and wordnet based featuresfor simplicity we use the same features for opinion entity extraction and sourceextraction and let the crfs learn appropriate fea ture weights for each task31 entity extraction featuresfor each token xi we include the following fea turesfor details see choi et al word words in a 4 4 window centered on xipartofspeech pos tags in a 2 2 window6grammatical role grammatical role of xi derived from a dependency parse7dictionary whether xi is in the opinion expres sion dictionary culled from the training data and augmented by approximately 500 opinion words from the mpqa final report8also computed for tokens in a 1 1 window and for xis parent chunkin the dependency parsesemantic class xis semantic class9 wordnet the wordnet hypernym of xi10we also develop a maximum entropy binary classifier for opinionsource link relation classificationgiven an opinionsource pair oisj the re lation classifier decides whether the pair exhibits a valid link relation lij the relation classifier focuses only on the syntactic structure and lexical properties between the two entities of a given pair without knowing whether the proposed entities are correctopinion and source entities are taken fromthe nbest sequences of the entity extraction mod els therefore some are invariably incorrectfrom each sentence we create training and test instances for all possible opinionsource pairings that do not overlap we create an instance for lij only if the span of oi and sj do not overlapfor training we also filter out instances forwhich neither the proposed opinion nor source en 5we omit only the extraction pattern features6using gate httpgateacuk 7provided by rebecca hwa based on the collins parser ftpftpcisupennedupubmcollinsparsertargz 8httpsrrcmitreorgpubsmpqafinalreportpdf 9using sundance 10httpwordnetprincetonedutity overlaps with a correct opinion or source en tity per the gold standardthis training instancefiltering helps to avoid confusion between exam ples like the following the president s1 walked away from themeeting o1 revealing o2 his disap pointment o3 with the deal the monster s2 walked away revealing o4 a little box hidden underneathfor these sentences we construct training in stances for l11 l12 and l13 but not l24 which in fact has very similar sentential structureas l12 and hence could confuse the learning al gorithm41 relation extraction featuresthe training and test instances for each link lij include the following featuresopinion entity word the words contained in oiphrase type the syntactic category of the con stituent in which the 
entity is embedded eg np or vpwe encode separate features for oi and sj grammatical role the grammatical role of the constituent in which the entity is embeddedgrammatical roles are derived from dependencyparse trees as done for the entity extraction classi fierswe encode separate features for oi and sj position a boolean value indicating whether sj precedes oidistance the distance between oi and sj in numbers of tokenswe use four coarse categories ad jacent very near near fardependency path the path through the depen dency tree from the head of sj to the head of oifor instance subjverbor subjverbobjvoice whether the voice of oi is passive or activesyntactic frame key intrasentential relations be tween oi and sj the syntactic frames that we use are e1role distance e2role where distance adjacent very near near far and eirole is the grammatical role of eieither e1 is an opinion entity and e2 is a source or vice versae1phrase distance e2phrase where eiphrase is the phrasal type of entity ei434 e1phrase e2headword where e2 must bethe opinion entity and e1 must be the source en tity e1 and e2 can be contiguouse1role e2headword where e2 must be the opinion entity and e1 must be the source entitye1phrase np e2phrase indicates the presence of specific syntactic patterns eg vp np vpdepending on the possible phrase types of opinion and source entitiesthe three phrases do not need to be contiguouse1phrase vp e2phrase e1phrase whword e2phrase src distance x distance op where x by of from for between among and have be will not when a syntactic frame is matched to a sen tence the bracketed items should be instantiatedwith particular values corresponding to the sen tencepattern elements without square brackets are constantsfor instance the syntactic frame e1phrase np e2phrasemay be instantiated as vp np vpsome frames are lexicalized with respect to the head of an opinion entity to reflectthe fact that different verbs expect source enti ties in different argument positions approach as noted in the introduction we model our taskas global constraintbased inference over the sep arately trained entity and relation classifiers andimplement the inference procedure as binary in teger linear programming ilp consists of an objective function which is a dot product between a vector of variables and a vector ofweights and a set of equality and inequality constraints among variablesgiven an objective function and a set of constraints lp finds the opti mal assignment of values to variables ie one that minimizes the objective functionin binary ilp the assignments to variables must be either 0 or 1the variables and constraints defined for the opin ion recognition task are summarized in table 1 and explained belowentity variables and weights for each opinion entity we add two variables oi and oi where oi 1 means to extract the opinion entity and objective function f i i j j ij i i oi oi 1 j sj sj 1 i j lij lij 1 i oi j lij j sj aj i lij j aj sj 0 i j i 1 proposed entities can overlapbecause this should not be the case in the final result we add an equality constraint xi xj 1 x so for all pairs of entities with overlapping spansadjustments to weights to balance the precision and recall and to take into account the performance of different base classifiers we apply ad justments to weights as follows1 we define six coefficients cx and cx where x osl to modify a group of weights as followsi x wxi wxi cx i x wxi wxi cx in general increasing cx will promote recall while increasing cx will promote precisionalso setting co cs 
will put higher confi dence on the opinion extraction classifier than the source extraction classifier2 we also define one constant ca to set the weights for auxiliary variable aithat is i wai ca3 finally we adjust the confidence of the linkvariable based on nthbest sequences of the en tity extraction classifiers as followsi wlij wlij d where d def 4 when oi is from an mth sequence and sj is from a nth sequence11 11this will smoothly degrade the confidence of a link based on the entities from higher nth sequencesvalues of d decrease as 44 45 46 47we evaluate our system using the nrrc multi perspective question answering corpusthat contains 535 newswire articles that are man ually annotated for opinionrelated informationin particular our gold standard opinion entitiescorrespond to direct subjective expression anno tations and subjective speech event annotations in thempqa corpus gold standard source entities and link relations can be ex tracted from the agent attribute associated with each opinion entitywe use 135 documents as adevelopment set and report 10fold cross valida tion results on the remaining 400 documents in all experiments belowwe evaluate entity and link extraction usingboth an overlap and exact matching scheme12 because the exact start and endpoints of the manual annotations are somewhat arbitrary the over lap scheme is more reasonable for our task we report results according to bothmatching schemes but focus our discussion on re sults obtained using overlap matching13 we use the mallet14 implementation of crfsfor brevity we will refer to the opinion extractionclassifier as crfop the source extraction classi fier as crfsrc and the link relation classifier ascrflinkfor ilp we use matlab which pro duced the optimal assignment in a matter of few seconds for each sentencethe weight adjustmentconstants defined for ilp are based on the devel opment data15 the linknearest baselines for baselines we first consider a linknearest heuristic for eachopinion entity extracted by crfop the link nearest heuristic creates a link relation with theclosest source entity extracted by crfsrcre call that crfsrc and crfop extract entities from nbest sequenceswe test the linknearest heuristic with n 1 2 10 where larger n will boost recall at the cost of precisionresults for the 12given two links l11 and l22 exact matching requires the spans of o1 and o2 and the spans of s1 and s2 to match exactly while overlap matching requires the spans to overlap13wiebe et al also reports the human annotation agreement study via the overlap scheme14available at httpmalletcsumassedu 15co 25 co 10 cs 15 cs 10 cl 25 cl 25 ca 02values are picked so as to boost recall while reasonably suppressing incorrect links436 overlap match exact match r p f r p f nearest1 516 714 599 262 369 307 nearest2 607 458 522 297 190 231 nearest10 663 209 317 282 000 000 srl 597 363 452 326 193 242 srlcrfop 456 832 589 276 497 355 ilp1 516 808 630 264 420 324 ilp10 640 724 680 310 348 328 table 2 relation extraction performance nearestn linknearest heuristic w nbest srl all va0 frames from srl srlcrfop all va0 filtered by crfop ilpn ilp applied to nbest sequenceslinknearest heuristic on the full sourceexpresses opinion relation extraction task are shown in the first three rows of table 2nearest1 performs the best in overlapmatch fmeasure reaching 599nearest10 has higher recall but the precision is really low performance of the opinion and source entity classifiers will be discussed in section 8srl baselines next we consider two baselines 
that use a stateoftheart srl system in many link relations the opinion expression entity is a verb phrase andthe source entity is in an agent argument posi tionhence our second baseline srl extracts all verbagent frames from the output of the srl system and provides an upper bound onrecall for systems that use srl in isola tion for our taska more sophisticated baseline srlcrfop extracts only those va0 frames whose verb overlaps with entities extracted by the opinion expression extractor crfopas shownin table 2 filtering out va0 frames that are incompatible with the opinion extractor boosts pre cision to 832 but the fmeasure is lower than that of nearest1ilp results the ilpn system in table 2 de notes the results of the ilp approach applied to the nbest sequencesilp10 reaches an fmeasureof 680 a significant improvement over the high est performing baseline16 and also a substantialimprovement over ilp1note that the perfor mance of nearest10 was much worse than that 16statistically significant by pairedt test where p 0001overlap match exact match r p f r p f ilp1 516 808 630 264 420 324 ilp10 640 724 680 310 348 328 ilpsrlf 1 517 815 633 266 425 327 ilpsrlf 10 657 724 689 315 343 329 ilpsrlfc10 640 735 684 284 313 298 table 3 relation extraction with ilp and srl ilpn ilp applied to nbest sequences ilpsrlf n ilp w srl features nbest ilpsrlfcn ilp w srl features and srl constraints nbestof nearest1 because the 10best sequences include many incorrect entities whereas the corresponding ilp formulation can discard the bad en tities by considering dependencies among entities and relations17we next explore two approaches for more directly incorporating srl into our systemextra srl features for the link classifier we incorporate srl into the link classifier by addingextra features based on srlwe add boolean features to check whether the span of an srl argu ment and an entity matches exactlyin addition we include syntactic frame features as followse1srlarg e2srlarg where eisrlarg indi cates the srl argument type of entity eie1srlarg e1headword e2srlarg where e1 must be an opinion entity and e2 must be a source entityextra srl constraints for the ilp phase we also incorporate srl into the ilp phase of our system by adding extra constraints based on srlin particular we assign very high weights for linksthat match va0 frames generated by srl in or der to force the extraction of va0 frames17a potential issue with overlap precision and recall is thatthe measures may drastically overestimate the systems performance as follows a system predicting a single link rela tion whose source and opinion expression both overlap withevery token of a document would achieve 100 overlap precision and recallwe can ensure this does not happen by mea suring the average number of pairs to which each correct or predicted pair is aligned in our data this does not exceed 108 so we can conclude these evaluation measures are behaving reasonably437 opinion source link r p f r p f r p f before ilp crfopsrclink with 1 best 764 884 819 673 819 739 605 505 550 merged 10 best 957 312 470 953 245 389 na after ilp ilpsrlf 10 751 829 788 806 757 781 657 724 689 ilpsrlf 10 crfopsrc with 1 best 823 817 820 815 734 773 na table 4 entity extraction performance results using srl are shown in table 3 in the table ilpsrlf denotes the ilp approach using the link classifier withthe extra srl f eatures and ilpsrlfc de notes the ilp approach using both the extra srlf eatures and the srl constraintsfor compar ison the ilp1 and ilp10 results from table 
2 are shown in rows 1 and 2the fmeasure score of ilpsrlf 10 is 689 about a 1 point increase from that of ilp10 which shows that extra srl features for the link classifier further improve the performance over our previous best results18 ilpsrlfc10 alsoperforms better than ilp10 in fmeasure al though it is slightly worse than ilpsrlf 10this indicates that the link classifier with extra srl features already makes good use of the va0 frames from the srl system so that forcing the extraction of such frames via extra ilp constraintsonly hurts performance by not allowing the extrac tion of nonva0 pairs in the neighborhood that could have been better choicescontribution of the ilp phase in order to highlight the contribution of the ilp phase for our task we present beforeand afterperformance in table 4the first row shows the performanceof the individual crfop crfsrc and crf link classifiers before the ilp phasewithout the ilp phase the 1best sequence generates the best scoreshowever we also present the performance with merged 10best entity sequences19 in orderto demonstrate that using 10best sequences with out ilp will only hurt performancethe precision of the merged 10best sequences system is very low however the recall level is above 95 for both 18statistically significant by pairedt test where p 000119if an entity ei extracted by the ithbest sequence over laps with an entity ej extracted by the jthbest sequencewhere i j then we discard ej if ei and ej do not over lap then we extract both entitiescrfop and crfsrc giving an upper bound for recall for our approachthe third row presentsresults after the ilp phase is applied for the 10 best sequences and we see that in addition to the improved link extraction described in section 7the performance on source extraction is substan tially improved from fmeasure of 739 to 781performance on opinion expression extraction decreases from fmeasure of 819 to 788this de crease is largely due to implicit links which we will explain belowthe fourth row takes the union of the entities from ilpsrlf 10 and the entitiesfrom the best sequences from crfop and crfsrcthis process brings the fmeasure of crf op up to 820 with a different precisionrecallbreak down from those of 1best sequences with out ilp phasein particular the recall on opinionexpressions now reaches 823 while maintain ing a high precision of 817overlap match exact match r p f r p f devconf 657 724 689 315 343 329 noconf 637 762 694 309 367 335table 5 relation extraction with ilp weight ad justment effects of ilp weight adjustment finally we show the effect of weight adjustment in the ilp formulation in table 5the devconf row shows relation extraction performance using a weight configuration based from the development datain order to see the effect of weight adjustmentwe ran an experiment noconf using fixed default weights20 not surprisingly our weight ad justment tuned from the development set is not theoptimal choice for crossvalidation setneverthe less the weight adjustment helps to balance the precision and recall ie it improves recall at the 20to be precise cx 10 cx 10 for x o s l but ca 02 is the same as before438 cost of precisionthe weight adjustment is moreeffective when the gap between precision and re call is large as was the case with the development dataimplicit links a good portion of errors stem from the implicit link relation which our system did not model directlyan implicit link relation holds for an opinion entity without an associated source entityin this case the opinion entity 
islinked to an implicit sourceconsider the follow ing exampleantisoviet hysteria was firmly oppressednotice that opinion expressions such as anti soviet hysteriaand firmly oppresseddo not have associated source entities because sources ofthese opinion expressions are not explicitly men tioned in the textbecause our system forces each opinion to be linked with an explicit sourceentity opinion expressions that do not have ex plicit source entities will be dropped during the global inference phase of our systemimplicit links amount to 7 of the link relations in our corpus so the upper bound for recall for our ilp system is 93in the future we will extend our system to handle implicit links as wellnote thatwe report results against a gold standard that in cludes implicit linksexcluding them from thegold standard the performance of our final sys tem ilpsrlf 10 is 726 in recall 724 in precision and 725 in fmeasurethis paper presented a global inference approachto jointly extract entities and relations in the con text of opinion oriented information extractionthe final system achieves performance levels that are potentially good enough for many practical nlp applicationsacknowledgments we thank the reviewers for their many helpful comments and vasin punyakanok for runningour data through his srl systemthis work was sup ported by the advanced research and development activity by nsf grants iis0535099 and iis0208028 and by gifts from google and the xerox foundation
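For intuition about the global inference phase described in this paper, the toy sketch below enumerates binary assignments for a handful of invented candidate entities and links, scores them with made-up classifier weights, and enforces two of the paper's constraints: a link may be kept only if both of its entities are extracted, and overlapping entities are mutually exclusive. It is a brute-force stand-in for the actual binary ILP, omits the complement and auxiliary variables and the weight adjustments, and is only feasible for very small candidate sets; all spans and weights are assumptions.

```python
from itertools import product

# Invented toy candidates: (id, start, end, classifier confidence weight).
opinions = [("o1", 2, 3, 1.2), ("o2", 8, 9, 0.4)]
sources  = [("s1", 0, 2, 0.9), ("s2", 1, 2, 0.3)]   # s1 and s2 overlap
links    = {("o1", "s1"): 1.5, ("o1", "s2"): 0.2, ("o2", "s1"): -0.3}

def overlaps(a, b):
    """True if the token spans of candidates a and b overlap (end exclusive)."""
    return a[1] < b[2] and b[1] < a[2]

def best_assignment():
    ents = opinions + sources
    best, best_score = None, float("-inf")
    for keep in product([0, 1], repeat=len(ents)):
        chosen = [e for e, k in zip(ents, keep) if k]
        # hard constraint: no two selected entities may overlap
        if any(overlaps(a, b) for i, a in enumerate(chosen) for b in chosen[i + 1:]):
            continue
        ids = {e[0] for e in chosen}
        # a link is eligible only if both of its entities were extracted; given
        # that, keeping every positively weighted eligible link is optimal here
        kept_links = [l for l, w in links.items()
                      if l[0] in ids and l[1] in ids and w > 0]
        score = sum(e[3] for e in chosen) + sum(links[l] for l in kept_links)
        if score > best_score:
            best_score, best = score, (sorted(ids), kept_links)
    return best, best_score

print(best_assignment())
```

A real ILP solver handles the full variable set and the much larger candidate pools produced by the 10-best entity sequences; the exhaustive search above is only meant to make the objective-plus-constraints formulation concrete.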
W06-1651
Joint Extraction of Entities and Relations for Opinion Recognition. We present an approach for the joint extraction of entities and relations in the context of opinion recognition and analysis. We identify two types of opinion-related entities, expressions of opinions and sources of opinions, along with the linking relation that exists between them. Inspired by Roth and Yih, we employ an integer linear programming approach to solve the joint opinion recognition task, and show that global constraint-based inference can significantly boost the performance of both relation extraction and the extraction of opinion-related entities. Performance further improves when a semantic role labeling system is incorporated. The resulting system achieves F-measures of 79 and 69 for entity and relation extraction, respectively, improving substantially over prior results in the area. We propose an ILP approach to jointly identify opinion holders, opinion expressions, and their IS-FROM linking relations, and demonstrate the effectiveness of joint inference. Others extend the token-level approach to jointly identify opinion holders and to determine the polarity and intensity of the opinion expressions.
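Some of the link-classifier features this paper describes (the position of the source relative to the opinion, a coarse distance bucket, the lexical form of the opinion entity) are straightforward to reproduce. The sketch below extracts a hypothetical subset of them for one candidate opinion-source pair; the bucket boundaries are invented, since the paper only names the categories adjacent, very near, near, and far.

```python
def pair_features(tokens, opinion_span, source_span):
    """Feature sketch for one candidate opinion-source pair.

    opinion_span / source_span are (start, end) token offsets, end exclusive.
    The distance thresholds below are assumptions, not the paper's values.
    """
    o_start, o_end = opinion_span
    s_start, s_end = source_span
    feats = {}
    feats["opinion_words"] = " ".join(tokens[o_start:o_end]).lower()
    feats["source_precedes_opinion"] = s_start < o_start
    gap = max(o_start - s_end, s_start - o_end, 0)  # tokens between the two spans
    if gap == 0:
        feats["distance"] = "adjacent"
    elif gap <= 3:
        feats["distance"] = "very_near"
    elif gap <= 7:
        feats["distance"] = "near"
    else:
        feats["distance"] = "far"
    return feats

tokens = "The president walked away from the meeting , revealing his disappointment".split()
print(pair_features(tokens, opinion_span=(8, 9), source_span=(1, 2)))
```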
broadcoverage sense disambiguation and information extraction with a supersense sequence tagger in this paper we approach word sense disambiguation and information extraction as a unified tagging problem the task consists of annotating text with the tagset defined by the 41 wordnet supersense classes for nouns and verbs since the tagset is directly related to wordnet synsets the tagger returns partial word sense disambiguation furthermore since the noun tags include the standard named entity detection classes person location organization time etc the tagger as a byproduct returns extended named entity information we cast the problem of supersense tagging as a sequential labeling task and investigate it empirically with a discriminativelytrained hidden markov model experimental evaluation on the main senseannotated datasets available ie semcor and senseval shows considerable improvements over the best known firstsense baseline named entity recognition is the most studied information extraction taskner typically focuses on detecting instances of person location organization names and optionally instances of miscellaneous or time categoriesthe scalability of statistical ner allowed researchers to apply it successfully on large collections of newswire text in several languages and biomedical literaturenewswire ner performance in terms of fscore is in the upper the first author is now at yahooresearchthe tagger described in this paper is free software and can be downloaded from httpwwwloacnritciaramitahtml80s while bioner accuracy ranges between the low 70s and 80s depending on the dataset used for trainingevaluation one shortcoming of ner is its oversimplified ontological model leaving instances of other potentially informative categories unidentifiedhence the utility of named entity information is limitedin addition instances to be detected are mainly restricted to proper nounsword sense disambiguation is the task of deciding the intended sense for ambiguous words in contextwith respect to ner wsd lies at the other end of the semantic tagging spectrum since the dictionary defines tens of thousand of very specific word senses including ner categorieswordnet 1 possibly the most used resource for wsd defines word senses for verbs common and proper nounsword sense disambiguation at this level of granularity is a complex task which resisted all attempts of robust broadcoverage solutionsmany distinctions are too subtle to be captured automatically and the magnitude of the class space several orders larger than ners makes it hard to approach the problem with sophisticated but scalable machine learning methodslastly even if the methods would scale up there are not enough manually tagged data at the word sense level for training a modelthe performance of state of the art wsd systems on realistic evaluations is only comparable to the first sense baseline notwithstanding much research the benefits of disambiguated lexical information for language processing are still mostly speculativethis paper presents a novel approach to broadcoverage information extraction and word sense disambiguationour goal is to simplify the disambiguation task for both nouns and verbs to a level at which it can be approached as any other tagging problem and can be solved with state of the art methodsas a byproduct this task includes and extends nerwe define a tagset based on wordnets lexicographers classes or supersenses cftable 1the size of the supersense tagset allows us to adopt a structured learning approach which takes 
local dependencies between labels into accountto this extent we cast the supersense tagging problem as a sequence labeling task and train a discriminative hidden markov model based on that of collins on the manually annotated semcor corpus in two experiments we evaluate the accuracy of the tagger on the semcor corpus itself and on the english all words senseval 3 shared task data the model outperforms remarkably the best known baseline the first sense heuristic to the best of our knowledge for the first time on the most realistic all words evaluation settingthe paper is organized as followssection 2 introduces the tagset section 3 discusses related work and section 4 the learning modelsection 5 reports on experimental settings and resultsin section 6 we summarize our contribution and consider directions for further researchwordnet is a broadcoverage machinereadable dictionary which includes 11306 verbs mapped to 13508 word senses called synsets and 114648 common and proper nouns mapped to 79689 synsetseach noun or verb synset is associated with one of 41 broad semantic categories in order to organize the lexicographers work of updating and managing the lexicon since each lexicographer category groups together many synsets they have been also called supersenses there are 26 supersenses for nouns 15 for verbsthis coarsegrained ontology has a number of attractive features for the purpose of natural language processingfirst the small size of the set makes it possible to build a single tagger which has positive consequences on robustnesssecond classes although fairly general are easily recognizable and not too abstract or vaguemore importantly similar word senses tend to be merged togetheras an example table 2 summarizes all senses of the noun boxthe 10 synsets are mapped to 6 supersenses artifact quantity shape state plant and actthree similar senses and and the probably related are merged in the artifact supersensethis process can help disambiguation because it removes subtle distinctions which are hard to discriminate and increase the size of the class spaceone possible drawback is that senses which one might want to keep separate eg the most common sense boxcontainer can be collapsed with othersone might argue that all artifact senses share semantic properties which differentiate them from the other senses and can support useful semantic inferencesunfortunately there are no general solutions to the problem of sense granularityhowever major senses identified by wordnet are maintained at the supersense levelhence supersensedisambiguated words are also at least partially synsetdisambiguatedsince wordnet includes both proper and common nouns the new tagset suggests an extended notion of named entityas well as the usual ner categories person group location and time2 supersenses include categories such as artifacts which can be fairly frequent but usually neglectedto a greater extent than in standard ner research in bioner has focused on the adoption of richer ontologies for information extractiongenia for example is an ontology of 46 classes with annotated 2the supersense category group is rather a superordinate of organization and has wider scope corpus designed for supporting information extraction in the molecular biology domainin addition there is growing interest for extracting relations between entities as a more useful type of ie supersense tagging is inspired by similar considerations but in a domainindependent setting eg verb supersenses can label semantic interactions between nominal 
conceptsthe following sentence extracted from the data further described in section 51 shows the information captured by the supersense tagset guestsnperson in the boxnartifact stood upvmotion and demandedvcommunication waternsubstanceas example 1 shows there is more information that can be extracted from a sentence than just the names eg the fact that clara harris and the following guests are both tagged as person might suggest some sort of coreferentiality while the coordination of verbs of motion and communication as in stood up and demanded might be useful for language modeling purposesin such a setting structured learning methods eg sequential can help tagging by taking the senses of the neighboring words into accountsequential models are common in ner pos tagging shallow parsing etcmost of the work in wsd instead has focused on labeling each word individually possibly revising the assignments of senses at the document level eg following the one sense per discourse hypothesis although it seems reasonable to assume that occurrences of word senses in a sentence can be correlated hence that structured learning methods could be successful there has not been much work on sequential wsdsegond et al are possibly the first to have applied an hmm tagger to semantic disambiguationinterestingly to make the method more tractable they also used the supersense tagset and estimated the model on semcorby crossvalidation they show a marked improvement over the first sense baselinehowever in the tagset is used differently by defining equivalence classes of words with the same set of sensesfrom a similar perspective de loupy et al also investigated the potential advantages of using hmms for disambiguationmore recently variants of the generative hmm have been applied to wsd and evaluated also on senseval data showing performance comparable to the first sense baselineprevious work on prediction at the supersense level has focused on lexical acquisition thus aiming at word type classification rather than taggingas far as applications are concerned it has been shown that supersense information can support supervised wsd by providing a partial disambiguation step in syntactic parse reranking supersenses have been used to build useful latent semantic features we believe that supersense tagging has the potential to be useful in combination with other sources of information such as part of speech domainspecific ner models chunking or shallow parsing in tasks such as question answering and information extraction and retrieval where large amounts of text need to be processedit is also possible that this kind of shallow semantic information can help building more sophisticated linguistic analysis as in full syntactic parsing and semantic role labelingwe take a sequence labeling approach to learning a model for supersense taggingour goal is to learn a function from input vectors the observations from labeled data to response variables the supersense labelspos tagging shallow parsing npchunking and ner are all examples of sequence labeling tasks in which performance can be significantly improved by optimizing the choice of labeling over whole sequences of words rather than individual wordsthe limitations of the generative approach to sequence tagging i e hidden markov models have been overcome by discriminative approaches proposed in recent years in this paper we apply perceptron trained hmms originally proposed in hmms define a probabilistic model for observationlabel sequencesthe joint model of an observationlabel 
sequence is defined as where yi is the ith label in the sequence and xi is the ith wordin the nlp literature a common approach is to model the conditional distribution of label sequences given the label sequencesthese models have several advantages over generative models such as not requiring questionable independence assumptions optimizing the conditional likelihood directly and employing richer feature representationsthis task can be represented as learning a discriminant function f x x y ir on a training data of observationlabel sequences where f is linear in a feature representation b defined over the joint inputoutput space b is a global feature representation mapping each pair to a vector of feature counts b e ird where d is the total number of featuresthis vector is given by each individual feature φi typically represents a morphological contextual or syntactic property or also the interdependence of consecutive labelsthese features are described in detail in section 42given an observation sequence x we make a prediction by maximizing f over the response variables this involves computing the viterbi decoding with respect to the parameter vector w e irdthe complexity of the viterbi algorithm scales linearly with the length of the sequencethere are different ways of estimating w for the described modelwe use the perceptron algorithm for sequence tagging the perceptron algorithm focuses on minimizing the error rate without involving any normalization factorsthis property makes it very efficient which is a desirable feature in a task dealing with a large tagset such as oursadditionally the performance of perceptrontrained hmms is very competitive on a number of tasks eg in shallow parsing where algorithm 1 hidden markov average perceptron algorithm the perceptron performance is comparable to that of conditional random field models the tendency to overfit of the perceptron can be mitigated in a number of ways including regularization and votinghere we apply averaging and straightforwardly extended collins algorithm summarized in algorithm 1we used the following combination of spellingmorphological and contextual featuresfor each observed word xi in the data extracts the following features described belowin addition sh low if the first character of xi is lowercase sh cap brk if the first character of xi is uppercase and xi1 is a full stop question or exclamation mark or xi is the first word of the sentence sh cap nobrk otherwise word features are morphologically simplified using the morphological functions of the wordnet librarythe first sense feature is the label predicted for xi by the baseline model cfsection 53pos labels were generated using brants tnt tagger pos features of the form pos0 extract the first character from the pos label thus providing a simplified representation of the pos tagfinally word shape features are regular expressionlike transformation in which each character c of a string s is substituted with x if c is uppercase if lowercase c is substituted with x if c is a digit it is substituted with d and left as it is otherwisein addition each sequence of two or more identical characters c is substituted with cfor example for s merrill lynch co sh xx xx xx exploratory experiments with richer feature sets including syntactic information affixes and topic labels associated with words did not result in improvements in terms of performancewhile more experiments are needed to investigate the usefulness of other sources of information the feature set described above while 
basic offers good generalization propertieswe experimented with the following datasets3the semcor corpus a fraction of the brown corpus which has been manually annotated with wordnet synset labelsnamed entities of the categories person location and group are also annotatedthe original annotation with wordnet 16 synset ids has been converted to the most recent version 20 of wordnetsemcor is divided in three parts brown1 and brown2 here referred to as sem in which nouns verbs adjectives and adverbs are annotatedin addition the section brownv semv here contains annotations only for verbswe also experimented with the senseval3 english allwords tasks data here called se3the senseval allwords task evaluates the performance of wsd systems on all open class words in complete documentsthe senseval3 data consists of two wall street journal articles wsj 1778 and senses lists the number of instances of supersense labels partitioned in the following two rows between verb and noun supersense labelsthe lowest four rows summarize average polysemy figures at the synset and supersense level for both nouns and verbswsj 1695 and a fiction excerpt cl 23 from the unannotated portion of the brown corpustable 3 summarizes a few statistics about the composition of the datasetsthe four lower rows report the average polysemy of nouns and verbs in each dataset both at the synset level and supersense levelthe average number of senses decreases significantly when the more general sense inventory is consideredwe substituted the corresponding supersense to each noun and verb synset in all three datasets sem semv and se3all other tokens were labeled 0the supersense label nountops refers to 45 synsets which lie at the very top of the wordnet noun hierarchysome of these synsets are expressed by very general nouns such as biont benthos whole and nothinghowever others undoubtedly refer to other supersenses for which they provide the label such as food person plant or animalsince these nouns tend to be fairly frequent it is confusing and inconsistent to label them nountops eg nouns such as chowder and swedish meatball would be tagged as nounfood but the noun food would be tagged as nountopsfor this reason in all obvious cases we substituted the nountops label with the more specific supersense label for the noun4the semv dataset only includes supersense labels for verbsin order to avoid unwanted false negatives that is thousands of nouns labeled 0 we applied the following procedurerather than using the full sentences from the semv dataset from each sentence we generated the fragments including a verb but no common or proper nouns eg from a sentence such as karns ruling pertainedverbstative to eight of the 10 cases only the fragment pertainedverbstative to eight of the 10 is extracted and used for trainingsometimes more than one label is assigned to a word in all datasetsin these cases we adopted the heuristic of only using the first label in the data as the correct synsetsupersensewe leave the extension of the tagger to the multilabel case for future researchas for now we can expect that this solution will simply lower somewhat both the baseline and the tagger performancefinally we adopted a beginning and continuation of entity plus no label encoding ie the actual class space defines 83 labelsthe supersense tagger was trained on the semcor datasets sem and semvthe only free parameter to set in evaluation is the number of iterations to perform t we evaluated the models accuracy on semcor by splitting the sem data randomly in 
training development and evaluationin a 5fold crossvalidation setup the tagger was trained on 45 of the sem data the remaining data was split in two halves one used to fix t the other for evaluating performance on testthe full semv data was always added to the training portion of semwe also evaluated the model on the senseval3 data using the same value for t set by crossvalidation on the sem data5the ordering of the training instances is randomized across different runs therefore the algorithm outputs different results after each run even if the evaluation set is fixed as is the case for the senseval evaluationthe variance in the results on the se3 data was measured in this waythe first sense baseline is the supersense of the most frequent synset for a word according to wordnets sense rankingthis baseline is very competitive in wsd tasks and it is extremely hard to improve upon even slightlyin fact the baseline has been proposed as a good alternative to wsd altogether for this reason we include the first sense prediction as one of the features of our tagging modelwe apply the heuristic as followsfirst in each sentence we identify the longest sequence which has an entry in wordnet as either noun or verbwe carry out this step using the wordnets library functions which perform also morphological simplificationhence in example 1 the entry stand up is detected although also stand has an entry in wordnetthen each word identified in this way is assigned its most frequent sense the only one available if the word is unambiguousto reduce the number of candidate supersenses we distinguish between common and proper nouns egsavannah is distinguished from savannah this method improves slightly the accuracy of the baseline which does not distinguish between different types of nounstable 4 summarizes overall performance6the first line shows the accuracy of a baseline which assigns possible supersenses of identified words at randomthe second line shows the performance of the first sense baseline the marked difference between the two is a measure of the robustness of the first sense heuristicon the semcor data the tagger improves over the baseline by 1071 3119 error reduction while on senseval3 the tagger improves over the baseline by 645 1796 error reductionwe can put these results in context although indirectly by comparison with the results of the senseval3 all words task systemsthere with a baseline of 6240 only 4 out of 26 systems performed above the baseline with the two best systems achieving an fscore of 652 the system based on the hmm tagger 6scoring was performed with a reimplementation of the conlleval script achieved an fscore of 609the supersense tagger improves mostly on precision while also improving on recalloverall the tagger achieves fscores between 705 and 772if we compare these figures with the accuracy of ner taggers the results are very encouraginggiven the considerably larger one order of magnitude class space some loss has to be expectedexperiments with augmented tagsets in the biomedical domain also show performance loss with respect to smaller tagsets eg kazama et al report an fscore of 562 on a tagset of 25 genia classes compared to the 759 achieved on the simplest binary casethe sequence fragments from semv contribute about 1 fscore improvementtable 5 focuses on subsets of the evaluationthe upper part summarizes the results on semcor for the classes comparable to standard ners person group location and timehowever these categories here are composed of common nouns as well as 
proper namesnamed entitieson this four tags the tagger achieves an average 8246 fscore not too far from ner resultsthe lower portion of table 5 summarizes the results on the five most frequent noun and verb supersense labels on the senseval3 data providing more specific evidence for the supersense taggers disambiguation accuracythe tagger outperforms the first sense baseline on all categories with the exception of verbcognition and nounpersonthe latter case has a straightforward explanation named entities are not annotated in the senseval data while they are in semcorhence the tagger learns a different model for nouns than the one used to annotate the senseval databecause of this discrepancy the tagger tends to return false positives for some categoriesin fact the other noun categories on which the tagger performs poorly in se3 are group and location naturally the lower performance on senseval is also explained by the fact that the evaluation comes from different sources than trainingin this paper we presented a novel approach to broadcoverage word sense disambiguation and information extractionwe defined a tagset based on wordnet supersenses a much simpler and general semantic model than wordnet which however preserves significant polysemy information and includes standard named entity recognition categorieswe showed that in this framework it is possible to perform accurate broadcoverage tagging with state of the art sequence learning methodsthe tagger considerably outperformed the most competitive baseline on both semcor and senseval datato the best of our knowledge the results on senseval data provide the first convincing evidence of the possibility of improving by considerable amounts over the first sense baselinewe believe both the tagset and the structured learning approach contribute to these resultsthe simplified representation obviously helps by reducing the number of possible senses for each word interestingly the relative improvement in performance is not as large as the relative reduction in polysemythis indicates that sense granularity is only one of the problems in wsdmore needs to be understood concerning sources of information and processes that affect word sense selection in contextas far as the tagger is concerned we applied the simplest feature representation more sophisticated features can be used eg based on kernels which might contribute significantly by allowing complex feature combinationsthese results also suggest new directions of research within this modelin particular the labels occurring in each sequence tend to coincide with predicates and arguments a sequential dependency model might not be the most accurate at capturing the grammatical dependencies between these elementsother conditional models eg designed on head to head or similar dependencies could prove more appropriateanother interesting issue is the granularity of the tagsetsupersenses seem more practical then synsets for investigating the impact of broadcoverage semantic tagging but they define a very simplistic ontological modela natural evolution of this kind of approach might be one which starts by defining a semantic model at an intermediate level of abstraction
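The tagger described in the preceding sections is trained with a Collins-style averaged perceptron over label sequences, decoding with Viterbi under a linear model of local features. The following Python sketch illustrates that general scheme only: the label set, feature templates, and training data are toy placeholders rather than the paper's actual 83-label tagset and feature configuration, and efficiency refinements such as lazy averaging are ignored.

    from collections import defaultdict

    # Toy label subset; the paper uses 83 B/I/0 labels over 41 supersenses.
    LABELS = ["0", "B-noun.person", "I-noun.person", "B-verb.motion", "I-verb.motion"]

    def features(words, i, prev_label, label):
        # Local features phi(x, i, y_{i-1}, y_i), returned as string keys.
        w = words[i]
        return [
            "word=%s|y=%s" % (w.lower(), label),
            "shape=%s|y=%s" % ("X" if w[0].isupper() else "x", label),
            "prev=%s|y=%s" % (prev_label, label),   # label-bigram (HMM-like) feature
        ]

    def viterbi(words, weights):
        # Highest-scoring label sequence under the current linear model.
        n = len(words)
        delta = [dict() for _ in range(n)]
        back = [dict() for _ in range(n)]
        for y in LABELS:
            delta[0][y] = sum(weights[f] for f in features(words, 0, "<s>", y))
        for i in range(1, n):
            for y in LABELS:
                scores = {yp: delta[i - 1][yp] +
                              sum(weights[f] for f in features(words, i, yp, y))
                          for yp in LABELS}
                best = max(scores, key=scores.get)
                delta[i][y], back[i][y] = scores[best], best
        y = max(delta[-1], key=delta[-1].get)
        path = [y]
        for i in range(n - 1, 0, -1):
            y = back[i][y]
            path.append(y)
        return path[::-1]

    def train(data, epochs=5):
        # data: list of (words, gold_labels) pairs; returns averaged weights.
        weights, totals, steps = defaultdict(float), defaultdict(float), 0
        for _ in range(epochs):
            for words, gold in data:
                pred = viterbi(words, weights)
                if pred != gold:
                    # Promote features of the gold sequence, demote the prediction.
                    for i in range(len(words)):
                        gp = gold[i - 1] if i else "<s>"
                        pp = pred[i - 1] if i else "<s>"
                        for f in features(words, i, gp, gold[i]):
                            weights[f] += 1.0
                        for f in features(words, i, pp, pred[i]):
                            weights[f] -= 1.0
                for f in list(weights):
                    totals[f] += weights[f]
                steps += 1
        return {f: totals[f] / steps for f in totals}

    data = [("Guests stood up".split(),
             ["B-noun.person", "B-verb.motion", "I-verb.motion"])]
    w = train(data)
    print(viterbi("Guests stood up".split(), defaultdict(float, w)))

Because the global feature vector decomposes over positions and label bigrams, the per-position update above is equivalent to the usual global perceptron update, and averaging the weights plays the regularizing role mentioned in the paper.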
W06-1670
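As a complement, the first-sense baseline that the paper both evaluates against and feeds in as a feature can be approximated with off-the-shelf tools. The sketch below assumes NLTK's WordNet interface, which is not the tool used in the original work, and it ignores the multiword lookup and proper-noun handling described in Section 5.3; lexicographer file names such as noun.person stand in for supersenses.

    from nltk.corpus import wordnet as wn

    def first_sense_supersense(word, pos=wn.NOUN):
        # Supersense of the most frequent synset, after crude morphological
        # simplification; returns None for words with no WordNet entry.
        lemma = wn.morphy(word, pos) or word
        synsets = wn.synsets(lemma, pos=pos)
        return synsets[0].lexname() if synsets else None

    print(first_sense_supersense("guests"))             # e.g. 'noun.person'
    print(first_sense_supersense("demanded", wn.VERB))  # e.g. 'verb.communication'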
Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In this paper we approach word sense disambiguation and information extraction as a unified tagging problem. The task consists of annotating text with the tagset defined by the 41 WordNet supersense classes for nouns and verbs. Since the tagset is directly related to WordNet synsets, the tagger returns partial word sense disambiguation. Furthermore, since the noun tags include the standard named entity detection classes (person, location, organization, time, etc.), the tagger, as a byproduct, returns extended named entity information. We cast the problem of supersense tagging as a sequential labeling task and investigate it empirically with a discriminatively trained hidden Markov model. Experimental evaluation on the main sense-annotated datasets available, i.e., SemCor and Senseval, shows considerable improvements over the best known first-sense baseline. Our supersense tagger annotates text with a 46-label tag set of WNSS categories.
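One concrete piece of the tagger's feature set is the word-shape transformation described in Section 4.2: uppercase letters map to 'X', lowercase letters to 'x', digits to 'd', other characters are kept, and runs of identical characters are collapsed with a '*'. The following is a small, hedged Python reading of that description; the exact character classes and collapsing rule of the original implementation may differ.

    import re

    def word_shape(s):
        # Map characters to shape classes: uppercase -> 'X', lowercase -> 'x',
        # digit -> 'd', everything else unchanged.
        chars = []
        for c in s:
            if c.isupper():
                chars.append("X")
            elif c.islower():
                chars.append("x")
            elif c.isdigit():
                chars.append("d")
            else:
                chars.append(c)
        shape = "".join(chars)
        # Collapse each run of two or more identical characters c into "c*".
        return re.sub(r"(.)\1+", r"\1*", shape)

    print(word_shape("Merrill Lynch & Co."))  # -> 'Xx* Xx* & Xx.'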
using wordnetbased context vectors to estimate the semantic relatedness of concepts in this paper we introduce a wordnetbased measure of semantic relatedness by combining the structure and content of wordnet with cooccurrence information derived from raw text we use the cooccurrence information along with the definitions to build vectors corresponding to each concept in word net numeric scores of relatedness are assigned to a pair of concepts by measuring the cosine of the angle between their respective gloss vectors we show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness this measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech in addition it can be adapted to different domains since any plain text corpus can be used to derive the cooccurrence information humans are able to quickly judge the relative semantic relatedness of pairs of conceptsfor example most would agree that feather is more related to bird than it is to treethis ability to assess the semantic relatedness among concepts is important for natural language understandingconsider the following sentence he swung the bat hitting the ball into the standsa reader likely uses domain knowledge of sports along with the realization that the baseball senses of hitting bat ball and stands are all semantically related in order to determine that the event being described is a baseball gameconsequently a number of techniques have been proposed over the years that attempt to automatically compute the semantic relatedness of concepts to correspond closely with human judgments it has also been shown that these techniques prove useful for tasks such as word sense disambiguation realword spelling correction and information extraction among othersin this paper we introduce a wordnetbased measure of semantic relatedness inspired by harris distributional hypothesis the distributional hypothesis suggests that words that are similar in meaning tend to occur in similar linguistic contextsadditionally numerous studies have shown that context plays a vital role in defining the meanings of words describe a context vectorbased method that simulates learning of word meanings from raw text has also shown that vectors built from the contexts of words are useful representations of word meaningsour gloss vector measure of semantic relatedness is based on second order cooccurrence vectors in combination with the structure and content of wordnet a semantic network of conceptsthis measure captures semantic information for concepts from contextual information drawn from corpora of textwe show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatednessthis measure is flexible in that it can make comparisons between any two concepts without regard to their part of speechin addition it is adaptable since any corpora can be used to derive the word vectorsthis paper is organized as followswe start with a description of second order context vectors in general and then define the gloss vector measure in particularwe present an extensive evaluation of the measure both with respect to human relatedness judgments and also relative to its performance when used in a word sense 
disambiguation algorithm based on semantic relatednessthe paper concludes with an analysis of our results and some discussion of related and future workcontext vectors are widely used in information retrieval and natural language processingmost often they represent first order cooccurrences which are simply words that occur near each other in a corpus of textfor example police and car are likely first order cooccurrences since they commonly occur togethera first order context vector for a given word would simply indicate all the first order cooccurrences of that word as found in a corpushowever our gloss vector measure is based on second order cooccurrences for example if car and mechanic are first order co occurrences then mechanic and police would be second order cooccurrences since they are both first order cooccurrences of carschutzes method starts by creating a word space which is a cooccurrence matrix where each row can be viewed as a first order context vectoreach cell in this matrix represents the frequency with which two words occur near one another in a corpus of textthe word space is usually quite large and sparse since there are many words in the corpus and most of them do not occur near each otherin order to reduce the dimensionality and the amount of noise noncontent stop words such as the for a etc are excluded from being rows or columns in the word spacegiven a word space a context can then be represented by second order cooccurrences this is done by finding the resultant of the first order context vectors corresponding to each of the words in that contextif a word in a context does not have a first order context vector created for it or if it is a stop word then it is excluded from the resultantfor example suppose we have the following context the paintings were displayed in the art gallerythe second order context vector would be the resultant of the first order context vectors for painting display art and gallerythe words were in and the are excluded from the resultant since we consider them as stop words in this examplefigure 1 shows how the second order context vector might be visualized in a 2dimensional spaceintuitively the orientation of each second order context vector is an indicator of the domains or topics that the context is associated withtwo context vectors that lie close together indicate a considerable contextual overlap which suggests that they are pertaining to the same meaning of the target wordin this research we create a gloss vector for each concept represented in a dictionarywhile we use wordnet as our dictionary the method can apply to other lexical resourcesa gloss vector is a second order context vector formed by treating the dictionary definition of a concept as a context and finding the resultant of the first order context vectors of the words in the definitionin particular we define a word space by creating first order context vectors for every word w that is not a stop word and that occurs above a minimum frequency in our corpusthe specific steps are as follows the first order context vector w therefore encodes the cooccurrence information of word w for example consider the gloss of lamp an artificial source of visible illuminationthe gloss vector for lamp would be formed by adding the first order context vectors of artificial source visible and illuminationin these experiments we use wordnet as the corpus of text for deriving first order context vectorswe take the glosses for all of the concepts in wordnet and view that as a large corpus of 
textthis corpus consists of approximately 14 million words and results in a word space of approximately 20000 dimensions once low frequency and stop words are removedwe chose the wordnet glosses as a corpus because we felt the glosses were likely to contain content rich terms that would distinguish between the various concepts more distinctly than would text drawn from a more generic corpushowever in our future work we will experiment with other corpora as the source of first order context vectors and other dictionaries as the source of glossesthe first order context vectors as well as the gloss vectors usually have a very large number of dimensions and it is not easy to visualize this spacefigure 2 attempts to illustrate these vectors in two dimensionsthe words tennis and food are the dimensions of this 2dimensional spacewe see that the first order context vector for serve is approximately halfway between tennis and food since the word serve could mean to serve the ball in the context of tennis or could mean to serve food in another contextthe first order context vectors for eat and cutlery are very close to food since they do not have a sense that is related to tennisthe gloss for the word fork cutlery used to serve and eat food contains the words cutlery serve eat and foodthe gloss vector for fork is formed by adding the first order context vectors of cutlery serve eat and foodthus fork has a gloss vector which is heavily weighted towards foodthe concept of food therefore is in the same semantic space as and is related to the concept of forksimilarly we expect that in a high dimensional space the gloss vector of fork would be heavily weighted towards all concepts that are semantically related to the concept of forkadditionally the previous demonstration involved a small gloss for representing forkusing augmented glosses described in section 32 we achieve better representations of concepts to build gloss vectors uponthe formulation of the gloss vector measure described above is independent of the dictionary used and is independent of the corpus usedhowever dictionary glosses tend to be rather short and it is possible that even closely related concepts will be defined using different sets of wordsour belief is that two synonyms that are used in different glosses will tend to have similar word vectors however the brevity of dictionary glosses may still make it difficult to create gloss vectors that are truly representative of the concept word vector gloss vector encounter a similar issue when measuring semantic relatedness by counting the number of matching words between the glosses of two different conceptsthey expand the glosses of concepts in wordnet with the glosses of concepts that are directly linked by a wordnet relationwe adopt the same technique here and use the relations in wordnet to augment glosses for the gloss vector measurewe take the gloss of a given concept and concatenate to it the glosses of all the concepts to which it is directly related according to wordnetthe gloss vector for that concept is then created from this big concatenated glossbelow we briefly describe five alternative measures of semantic relatedness and then go on to include them as points of comparison in our experimental evaluation of the gloss vector measureall of these measures depend in some way upon wordnetfour of them limit their measurements to nouns located in the wordnet isa hierarchyeach of these measures takes two wordnet concepts c1 and c2 as input and return a numeric score that quantifies 
their degree of relatedness finds the path length between c1 and c2 in the isa hierarchy of wordnetthe path length is then scaled by the depth of the hierarchy in which they reside to obtain the relatedness of the two concepts introduced a measure that is based on information content which are numeric quantities that indicate the specificity of conceptsthese values are derived from corpora and are used to augment the concepts in wordnets isa hierarchythe measure of relatedness between two concepts is the information content of the most specific concept that both concepts have in common extends resniks measure to combine the information contents of c1 c2 and their lowest common subsumer also extends resniks measure by taking the ratio of the shared information content to that of the individual concepts introduce extended gloss overlaps which is a measure that determines the relatedness of concepts proportional to the extent of overlap of their wordnet glossesthis simple definition is extended to take advantage of the complex network of relations in wordnet and allows the glosses of concepts to include the glosses of synsets to which they are directly related in wordnetas was done by we evaluated the measures of relatedness in two waysfirst they were compared against human judgments of relatednesssecond they were used in an application that would benefit from the measuresthe effectiveness of the particular application was an indirect indicator of the accuracy of the relatedness measure usedone obvious metric for evaluating a measure of semantic relatedness is its correspondence with the human perception of relatednesssince semantic relatedness is subjective and depends on the human view of the world comparison with human judgments is a selfevident metric for evaluationthis was done by in their comparison of five measures of semantic relatednesswe follow a similar approach in evaluating the gloss vector measurewe use a set of 30 word pairs from a study carried out by these word pairs are a subset of 65 word pairs used by in a similar study almost 25 years earlierin this study human subjects assigned relatedness scores to the selected word pairsthe word pairs selected for this study ranged from highly related pairs to unrelated pairswe use these human judgments for our evaluationeach of the word pairs have been scored by humans on a scale of 0 to 5 where 5 is the most relatedthe mean of the scores of each pair from all subjects is considered as the human relatedness score for that pairthe pairs are then ranked with respect to their scoresthe most related pair is the first on the list and the least related pair is at the end of the listwe then have each of the measures of relatedness score the word pairs and a another ranking of the word pairs is created corresponding to each of the measuresspearmans correlation coefficient is used to assess the equivalence of two rankingsif the two rankings are exactly the same the spearmans correlation coefficient between these two rankings is 1a completely reversed ranking gets a value of 1the value is 0 when there is no relation between the rankingswe determine the correlation coefficient of the ranking of each measure with that of the human relatednesswe use the relatedness scores from both the human studies the miller and charles study as well as the rubenstein and goodenough researchtable 1 summarizes the results of our experimentwe observe that the gloss vector has the highest correlation with humans in both casesnote that in our experiments with the 
gloss vector measure we have used not only the gloss of the concept but augmented that with the gloss of all the concepts directly related to it according to wordnetwe observed a significant drop in performance when we used just the glosses of the concept alone showing that the expansion is necessaryin addition the frequency cutoffs used to construct the word space played a critical rolethe best setting of the frequency cutoffs removed both low and high frequency words which eliminates two different sources of noisevery low frequency words do not occur enough to draw distinctions among different glosses whereas high frequency words occur in many glosses and again do not provide useful information to distinguish among glossesan applicationoriented comparison of five measures of semantic relatedness was presented in in that study they evaluate five wordnetbased measures of semantic relatedness with respect to their performance in context sensitive spelling correctionwe present the results of an applicationoriented evaluation of the measures of semantic relatednesseach of the seven measures of semantic relatedness was used in a word sense disambiguation algorithm described by word sense disambiguation is the task of determining the meaning of a word in its given contextfor example in the sentence the excons broke into the bank on elm street the word bank has the financial institution sense as opposed to the edge of a river sensebanerjee and pedersen attempt to perform this task by measuring the relatedness of the senses of the target word to those of the words in its contextthe sense of the target word that is most related to its context is selected as the intended sense of the target wordthe experimental data used for this evaluation is the senseval2 test datait consists of 4328 instances that each includes a single ambiguous target wordeach instance consists of approximately 23 sentences and one occurrence of a target word1754 of the instances include nouns as target words while 1806 are verbs and 768 are adjectiveswe use the noun data to compare all six of the measures since four of the measures are limited to nouns as inputthe accuracy of disambiguation when performed using each of the measures for nouns is shown in table 2as discussed in earlier sections the gloss vector measure builds a word space consisting of first order context vectors corresponding to every word in a corpusgloss vectors are the resultant of a number of first order context vectorsall of these vectors encode semantic information about the concepts or the glosses that the vectors representwe note that the quality of the words used as the dimensions of these vectors plays a pivotal role in getting accurate relatedness scoreswe find that words corresponding to very specific concepts and are highly indicative of a few topics make good dimensionswords that are very general in nature and that appear all over the place add noise to the vectorsin an earlier section we discussed using stop words and frequency cutoffs to keep only the high information content wordsin addition to those we also experimented with a term frequency inverse document frequency cutoffterm frequency and inverse document frequency are commonly used metrics in information retrievalfor a given word term frequency is the number of times a word appears in the corpusthe document frequency is number of documents in which the word occursinverse document frequency is then computed as the tf idf value is an indicator of the specificity of a wordthe higher the tf 
idf value the lower the specificityfigure 3 shows a plot of tf idf cutoff on the xaxis against the correlation of the gloss vector measure with human judgments on the yaxisthe tf idf values ranged from 0 to about 4200note that we get lower correlation as the cutoff is raisedwe observe from the experimental results that the gloss vector measure corresponds the most with human judgment of relatedness we believe this is probably because the gloss vector measure most closely imitates the representation of concepts in the human mind suggest that the cognitive representation of a word is an abstraction derived from its contexts their study also suggested the semantic similarity of two words depends on the overlap between their contextual representationsthe gloss vector measure uses the contexts of the words and creates a vector representation of thesethe overlap between these vector representations is used to compute the semantic similarity of concepts additionally perform singular value decomposition on their context vector representation of words and they show that reducing the number of dimensions of the vectors using svd more accurately simulates learning in humanswe plan to try svd on the gloss vector measure in future workin the applicationoriented evaluation the gloss vector measure performed relatively well however unlike the human study it did not outperform all the other measureswe think there are two possible explanations for thisfirst the word pairs used in the human relatedness study are all nouns and it is possible that the gloss vector measure performs better on nouns than on other parts of speechin the applicationoriented evaluation the measure had to make judgments for all parts of speechsecond the application itself affects the performance of the measurethe word sense disambiguation algorithm starts by selecting a context of 5 words from around the target wordthese context words contain words from all parts of speechsince the jiangconrath measure assigns relatedness scores only to noun concepts its behavior would differ from that of the vector measure which would accept all words and would be affected by the noise introduced from unrelated conceptsthus the context selection factors into the accuracy obtainedhowever for evaluating the measure as being suitable for use in real applications the gloss vector measure proves relatively accuratethe gloss vector measure can draw conclusions about any two concepts irrespective of partofspeechthe only other measure that can make this same claim is the extended gloss overlaps measurewe would argue that gloss vectors present certain advantages over itthe extended gloss overlap measure looks for exact string overlaps to measure relatednessthis exactness works against the measure in that it misses potential matches that intuitively would contribute to the score the gloss vector measure is more robust than the extended gloss overlap measure in that exact matches are not required to identify relatednessthe gloss vector measure attempts to overcome this exactness by using vectors that capture the contextual representation of all wordsso even though silverware and spoon do not overlap their contextual representations would overlap to some extent describe a word sense disambiguation algorithm that also uses vectors to determine the intended sense of an ambiguous wordin their approach they use dictionary definitions from ldoce the words in these definitions are used to build a cooccurrence matrix which is very similar to our technique of using the 
wordnet glosses for our word spacethey augment their dictionary definitions with similar words which are determined using the co occurrence matrixeach concept in ldoce is then represented by an aggregate vector created by adding the cooccurrence counts for each of the words in the augmented definition of the conceptthe next step in their algorithm is to form a context vectorthe context of the ambiguous word is first augmented using the cooccurrence matrix just like the definitionsthe context vector is formed by taking the aggregate of the word vectors of the words in the augmented contextto disambiguate the target word the context vector is compared to the vectors corresponding to each meaning of the target word in ldoce and that meaning is selected whose vector is mathematically closest to that of the contextour approach differs from theirs in two primary respectsfirst rather than creating an aggregate vector for the context we compare the vector of each meaning of the ambiguous word with the vectors of each of the meanings of the words in the contextthis adds another level of indirection in the comparison and attempts to use only the relevant meanings of the context wordssecondly we use the structure of wordnet to augment the short glosses with other related glosses compare dictionary based vectors with cooccurrence based vectors where the vector of a word is the probability that an origin word occurs in the context of the wordthese two representations are evaluated by applying them to real world applications and quantifying the resultsboth measures are first applied to word sense disambiguation and then to the learning of positives or negatives where it is required to determine whether a word has a positive or negative connotationit was observed that the co occurrence based idea works better for the word sense disambiguation and the dictionary based approach gives better results for the learning ofpositives or negativesfrom this the conclusion is that the dictionary based vectors contain some different semantic information about the words and warrants further investigationit is also observed that for the dictionary based vectors the network of words is almost independent of the dictionary that is used ie any dictionary should give us almost the same network also use glossbased context vectors in their work on the disambiguation of nearsynonyms words whose senses are almost indistinguishablethey disambiguate nearsynonyms in text using various indicators one of which is contextvectorbasedcontext vectors are created for the context of the target word and also for the glosses of each sense of the target wordeach gloss is considered as a bag of words where each word has a corresponding word vectorthese vectors for the words in a gloss are averaged to get a context vector corresponding to the glossthe distance between the vector corresponding to the text and that corresponding to the gloss is measured the nearness of the vectors is used as an indicator to pick the correct sense of the target wordwe introduced a new measure of semantic relatedness based on the idea of creating a gloss vector that combines dictionary content with corpus based datawe find that this measure correlates extremely well with the results of these human studies and this is indeed encouragingwe believe that this is due to the fact that the context vector may be closer to the semantic representation of concepts in humansthis measure can be tailored to particular domains depending on the corpus used to derive the 
co-occurrence matrices, and makes no restrictions on the parts of speech of the concept pairs to be compared. We also demonstrated that the vector measure performs relatively well in an application-oriented setup and can be conveniently deployed in a real-world application. It can be easily tweaked and modified to work in a restricted domain such as bioinformatics or medicine by selecting a specialized corpus to build the vectors. This research was partially supported by a National Science Foundation Faculty Early Career Development award. All of the experiments in this paper were carried out with the WordNet::Similarity package, which is freely available for download from http://search.cpan.org/dist/WordNet-Similarity.
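To make the gloss-vector pipeline described above concrete, the following sketch builds first-order co-occurrence vectors from a tiny toy corpus, forms a gloss vector as the sum of the first-order vectors of the (non-stop) gloss words, and scores relatedness with the cosine. The corpus, glosses, and stop list are invented placeholders; the paper builds its word space from the entire set of WordNet glosses with frequency cutoffs, which is not reproduced here.

    from collections import Counter, defaultdict
    from math import sqrt

    STOP = {"a", "an", "and", "of", "the", "to", "used", "in", "is"}

    def first_order_vectors(sentences, window=2):
        # One co-occurrence vector per (non-stop) word, built from a raw corpus.
        vecs = defaultdict(Counter)
        for sent in sentences:
            toks = [t for t in sent.lower().split() if t not in STOP]
            for i, w in enumerate(toks):
                lo, hi = max(0, i - window), min(len(toks), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vecs[w][toks[j]] += 1
        return vecs

    def gloss_vector(gloss, vecs):
        # Second-order vector: sum of the first-order vectors of the gloss words.
        gv = Counter()
        for w in gloss.lower().split():
            if w not in STOP:
                gv.update(vecs.get(w, Counter()))
        return gv

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u if k in v)
        nu = sqrt(sum(x * x for x in u.values()))
        nv = sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    corpus = [
        "cutlery is used to serve and eat food",
        "a fork is a piece of cutlery",
        "players serve the ball in tennis",
    ]
    vecs = first_order_vectors(corpus)
    fork = gloss_vector("cutlery used to serve and eat food", vecs)
    racket = gloss_vector("a bat used to hit the ball in tennis", vecs)
    print(cosine(fork, fork), cosine(fork, racket))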
W06-2501
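The experiments above also prune the word space with a tf.idf cutoff before building vectors. A minimal sketch of that computation is given below, treating each gloss as a document; the toy glosses and the direction of the pruning threshold are illustrative assumptions, not the exact settings explored in the paper's cutoff experiment.

    from collections import Counter
    from math import log

    def tfidf_scores(docs):
        # docs: list of token lists (each WordNet gloss would be one document).
        # tf = corpus frequency; idf = log(num_docs / document_frequency).
        tf, df = Counter(), Counter()
        for doc in docs:
            tf.update(doc)
            df.update(set(doc))
        n = len(docs)
        return {w: tf[w] * log(n / df[w]) for w in tf}

    def prune(docs, cutoff):
        # Drop words whose tf.idf score exceeds the cutoff (one possible
        # reading of the cutoff experiment described above).
        scores = tfidf_scores(docs)
        return [[w for w in doc if scores[w] <= cutoff] for doc in docs]

    glosses = [
        "an artificial source of visible illumination".split(),
        "cutlery used to serve and eat food".split(),
        "a visitor to whom hospitality is extended".split(),
    ]
    print(sorted(tfidf_scores(glosses).items(), key=lambda kv: -kv[1])[:5])
    print(prune(glosses, cutoff=1.0)[1])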
Using WordNet-based context vectors to estimate the semantic relatedness of concepts. In this paper we introduce a WordNet-based measure of semantic relatedness by combining the structure and content of WordNet with co-occurrence information derived from raw text. We use the co-occurrence information along with the WordNet definitions to build gloss vectors corresponding to each concept in WordNet. Numeric scores of relatedness are assigned to a pair of concepts by measuring the cosine of the angle between their respective gloss vectors. We show that this measure compares favorably to other measures with respect to human judgments of semantic relatedness, and that it performs well when used in a word sense disambiguation algorithm that relies on semantic relatedness. This measure is flexible in that it can make comparisons between any two concepts without regard to their part of speech. In addition, it can be adapted to different domains, since any plain text corpus can be used to derive the co-occurrence information. We create aggregate co-occurrence vectors for a WordNet sense by adding the co-occurrence vectors of the words in its WordNet gloss. We introduce a vector measure to determine the relatedness between pairs of concepts.
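The human-judgment evaluation summarized above reduces to comparing two rankings of word pairs with Spearman's rank correlation coefficient. A self-contained sketch follows, using the tie-free formula; the human and cosine scores are invented placeholders, not the Miller-Charles or Rubenstein-Goodenough values.

    def ranks(values):
        # Rank positions (1 = highest value); assumes no ties.
        order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    def spearman(xs, ys):
        # Spearman's rho via 1 - 6 * sum(d^2) / (n * (n^2 - 1)), valid without ties.
        n = len(xs)
        d2 = sum((a - b) ** 2 for a, b in zip(ranks(xs), ranks(ys)))
        return 1.0 - 6.0 * d2 / (n * (n * n - 1))

    human = [3.9, 3.0, 0.1, 3.6]                 # mean human relatedness per pair
    gloss_vector_cosine = [0.90, 0.55, 0.02, 0.78]
    print(spearman(human, gloss_vector_cosine))  # 1.0 here: identical rankings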
which side are you on identifying perspectives at the document and sentence levels in this paper we investigate a new problem identifying the which a document is written by perspective we mean a point of view for example from the perspective of democrats or republicans can computers learn to identify the perspective of a document not every sentence is written strongly from a perspective can computers learn to identify which sentences strongly convey a particular perspective we develop statistical models to capture how perspectives are expressed at the document and sentence levels and evaluate the proposed models on articles about the israelipalestinian conflict the results show that the proposed models successfully learn how perspectives are reflected in word usage and can identify the perspective of a document with high accuracy in this paper we investigate a new problem of automatically identifying the perspective from which a document is writtenby perspective we mean a subjective evaluation of relative significance a pointofview1 for example documents about the palestinianisraeli conflict may appear to be about the same topic but reveal different perspectives the inadvertent killing by israeli forces of palestinian civilians usually in the course of shooting at palestinian terrorists is considered no different at the moral and ethical level than the deliberate targeting of israeli civilians by palestinian suicide bombers in the first weeks of the intifada for example palestinian public protests and civilian demonstrations were answered brutally by israel which killed tens of unarmed protestersexample 1 is written from an israeli perspective example 2 is written from a palestinian perspectiveanyone knowledgeable about the issues of the israelipalestinian conflict can easily identify the perspectives from which the above examples were writtenhowever can computers learn to identify the perspective of a document given a training corpuswhen an issue is discussed from different perspectives not every sentence strongly reflects the perspective of the authorfor example the following sentences were written by a palestinian and an israeliexamples 3 and 4 both factually introduce the background of the issue of the green line without expressing explicit perspectivescan we develop a system to automatically discriminate between sentences that strongly indicate a perspective and sentences that only reflect shared background informationa system that can automatically identify the perspective from which a document is written will be a valuable tool for people analyzing huge collections of documents from different perspectivespolitical analysts regularly monitor the positions that countries take on international and domestic issuesmedia analysts frequently survey broadcast news newspapers and weblogs for differing viewpointswithout the assistance of computers analysts have no choice but to read each document in order to identify those from a perspective of interest which is extremely timeconsumingwhat these analysts need is to find strong statements from different perspectives and to ignore statements that reflect little or no perspectivein this paper we approach the problem of learning individual perspectives in a statistical frameworkwe develop statistical models to learn how perspectives are reflected in word usage and we treat the problem of identifying perspectives as a classification taskalthough our corpus contains documentlevel perspective annotations it lacks sentencelevel annotations creating a 
challenge for learning the perspective of sentenceswe propose a novel statistical model to overcome this problemthe experimental results show that the proposed statistical models can successfully identify the perspective from which a document is written with high accuracyidentifying the perspective from which a document is written is a subtask in the growing area of automatic opinion recognition and extractionsubjective language is used to express opinions emotions and sentimentsso far research in automatic opinion recognition has primarily addressed learning subjective language identifying opinionated documents and sentences and discriminating between positive and negative language while by its very nature we expect much of the language that is used when presenting a perspective or pointofview to be subjective labeling a document or a sentence as subjective is not enough to identify the perspective from which it is writtenmoreover the ideology and beliefs authors possess are often expressed in ways other than positive or negative language toward specific targetsresearch on the automatic classification of movie or product reviews as positive or negative is perhaps the most similar to our workas with review classification we treat perspective identification as a documentlevel classification task discriminating in a sense between different types of opinionshowever there is a key differencea positive or negative opinion toward a particular movie or product is fundamentally different from an overall perspectiveones opinion will change from movie to movie whereas ones perspective can be seen as more static often underpinned by ones ideology or beliefs about the worldthere has been research in discourse analysis that examines how different perspectives are expressed in political discourse although their research may have some similar goals they do not take a computational approach to analyzing large collections of documentsto the best of our knowledge our approach to automatically identifying perspectives in discourse is uniqueour corpus consists of articles published on the bitterlemons website2the website is set up to contribute to mutual understanding between palestinians and israelis through the open exchange of ideas3 every week an issue about the israelipalestinian conflict is selected for discussion and a palestinian editor and an israeli editor each contribute one article addressing the issuein addition the israeli and palestinian editors invite one israeli and one palestinian to express their views on the issue resulting in a total of four articles in a weekly editionwe choose the bitterlemons website for two reasonsfirst each article is already labeled as either palestinian or israeli by the editors allowing us to exploit existing annotationssecond the bitterlemons corpus enables us to test the generalizability of the proposed models in a very realistic setting training on articles written by a small number of writers and testing on articles from a much larger group of writers we collected a total of 594 articles published on the website from late 2001 to early 2005the distribution of documents and sentences are listed in cluding edition numbers publication dates topics titles author names and biographic informationwe used opennlp tools4 to automatically extract sentence boundaries and reduced word variants using the porter stemming algorithmwe evaluated the subjectivity of each sentence using the automatic subjective sentence classifier from and find that 656 of palestinian sentences 
and 662 of israeli sentences are classified as subjectivethe high but almost equivalent percentages of subjective sentences in the two perspectives support our observation in section 2 that a perspective is largely expressed using subjective language but that the amount of subjectivity in a document is not necessarily indicative of its perspectivewe develop algorithms for learning perspectives using a statistical frameworkdenote a training corpus as a set of documents wn and their perspectives labels dn n 1 n where n is the total number of documents in the corpusgiven a new document w with a unknown document perspective the perspective d is calculated based on the following conditional probabilitywe are also interested in how strongly each sentence in a document conveys perspective informationdenote the intensity of the mth sentence of the nth document as a binary random variable smnto evaluate smn how strongly a sentence reflects a particular perspective we calculate the following conditional probabilitywe model the process of generating documents from a particular perspective as follows first the parameters π and θ are sampled once from prior distributions for the whole corpusbeta and dirichlet are chosen because they are conjugate priors for binomial and multinomial distributions respectivelywe set the hyperparameters απ βπ and αθ to one resulting in noninformative priorsa document perspective dn is then sampled from a binomial distribution with the parameter πthe value of dn is either d0 or d1 words in the document are then sampled from a multinomial distribution where ln is the length of the documenta graphical representation of the model is shown in figure 1the model described above is commonly known as a naive bayes modelnb models have been widely used for various classification tasks including text categorization the nb model is also a building block for the model described later that incorporates sentencelevel perspective informationto predict the perspective of an unseen document using naive bayes we calculate the posterior distribution of d in by integrating out the parameters however the above integral is difficult to computeas an alternative we use markov chain monte carlo methods to obtain samples from the posterior distributiondetails about mcmc methods can be found in appendix awe introduce a new binary random variable s to model how strongly a perspective is reflected at the sentence levelthe value of s is either s1 or s0 where s1 indicates a sentence is written strongly from a perspective while s0 indicates it is notthe whole generative process is modeled as follows the parameters π and θ have the same semantics as in the naive bayes models is naturally modeled as a binomial variable where τ is the parameter of s s represents how likely it is that a sentence strongly conveys a perspectivewe call this model the latent sentence perspective model because s is not directly observedthe graphical model representation of lspm is shown in figure 2to use lspm to identify the perspective of a new document d with unknown sentence perspectives s we calculate posterior probabilities by summing out possible combinations of sentence perspective in the document and parametersas before we resort to mcmc methods to sample from the posterior distributions given in equations and as is often encountered in mixture models there is an identifiability issue in lspmbecause the values of s can be permuted without changing the likelihood function the meanings of s0 and s1 are ambiguousin figure 3a 
four θ values are used to represent the four possible combinations of document perspective d and sentence perspective intensity s if we do not impose any constraints s1 and s0 are exchangeable and we can no longer strictly interpret s1 as indicating a strong sentencelevel perspective and s0 as indicating that a sentence carries little or no perspective informationthe other problem of this parameterization is that any improvement from lspm over the naive bayes model is not necessarily due to the explicit modeling of sentencelevel perspective5 may capture aspects of the document collection that we never intended to modelfor example s0 may capture the editors writing styles and s1 the guests writing styles in the bitterlemons corpuswe solve the identifiability problem by forcing bd1s0 and bd0s0 to be identical and reducing the number of b parameters to threeas shown in figure 3b there are separate b parameters conditioned on the document perspective but there is single b parameter when 5 s0 shared by both documentlevel perspectives we assume that the sentences with little or no perspective information ie 5 s0 are generated independently of the perspective of a documentin other words sentences that are presenting common background information or introducing an issue and that do not strongly convey any perspective should look similar whether they are in palestinian or israeli documentsby forcing this constraint we become more confident that s0 represents sentences of little perspectives and s1 represents sentences of strong perspectives from d1 and d0 documentswe evaluate three different models for the task of identifying perspective at the document level two naive bayes models with different inference methods and support vector machines nbb uses full bayesian inference and nbm uses maximum a posteriori we compare nb with svm not only because svm has been very effective for classifying topical documents but also to contrast generative models like nb with discriminative models like svmfor training svm we represent each document as a vdimensional feature vector where v is the vocabulary size and each coordinate is the normalized term frequency within the documentwe use a linear kernel for svm and search for the best parameters using grid methodsto evaluate the statistical models we train them on the documents in the bitterlemons corpus and calculate how accurately each model predicts document perspective in tenfold crossvalidation experimentstable 2 reports the average classification accuracy across the the 10 folds for each modelthe accuracy of a baseline classifier which randomly assigns the perspective of a document as palestinian or israeli is 05 because there are equivalent numbers of documents from the two perspectivesthe last column of table 2 is error reduction relative to svmthe results show that the naive bayes models and svm perform surprisingly well on both the editors and guests subsets of the bitterlemons corpusthe naive bayes models perform slightly better than svm possibly because generative models achieve optimal performance with a smaller number of training examples than discriminative models and the size of the bitterlemons corpus is indeed smallnbb which performs full bayesian inference improves on nbm which only performs point estimationthe results suggest that the choice of words made by the authors either consciously or subconsciously reflects much of their political perspectivesstatistical models can capture word usage well and can identify the perspective of documents 
with high accuracygiven the performance gap between editors and guests one may argue that there exist distinct editing artifacts or writing styles of the editors and guests and that the statistical models are capturing these things rather than perspectives to test if the statistical models truly are learning perspectives we conduct experiments in which the training and testing data are mismatched ie from different subsets of the corpusif what the svm and naive bayes models learn are writing styles or editing artifacts the classification performance under the mismatched conditions will be considerably degradedthe results on the mismatched training and testing experiments are shown in table 3both svm and the two variants of naive bayes perform well on the different combinations of training and testing dataas in table 2 the naive bayes models perform better than svm with larger error reductions and nbb slightly outperforms nbmthe high accuracy on the mismatched experiments suggests that statistical models are not learning writing styles or editing artifactsthis reaffirms that document perspective is reflected in the words that are chosen by the writerswe list the most frequent words learned by the the nbm model in table 4the frequent words overlap greatly between the palestinian and israeli perspectives including state peace process secure and govern this is in contrast to what we expect from topical text classification in which frequent words seldom overlapauthors from different perspectives often choose words from a similar vocabulary but emphasize them differentlyfor example in documents that are written from the palestinian perspective the word palestinian is mentioned more frequently than the word israel it is however the reverse for documents that are written from the israeli perspectiveperspectives are also expressed in how frequently certain people countries and actions are mentionedwhile one might solicit these contrasting word pairs from domain experts our results show that statistical models such as svm and naive bayes can automatically acquire themin addition to identifying the perspective of a document we are interested in knowing which sentences of the document strongly conveys perspective informationsentencelevel perspective annotations do not exist in the bitterlemons corpus which makes estimating parameters for the proposed latent sentence perspective model difficultthe posterior probability that a sentence strongly covey a perspective is of the most interest but we can not directly evaluate this model without gold standard annotationsas an alternative we evaluate how accurately lspm predicts the perspective of a document again using 10fold cross validationalthough lspm predicts the perspective of both documents and sentences we will doubt the quality of the sentencelevel predictions if the documentlevel predictions are incorrectthe experimental results are shown in table 5we include the results for the naive bayes models from table 3 for easy comparisonthe accuracy of lspm is comparable or even slightly better than that of the naive bayes modelsthis is very encouraging and suggests that the proposed lspm closely captures how perspectives are reflected at both the document and sentence levelsexamples 1 and 2 from the introduction were predicted by lspm as likely to contain strong perspectives ie large prexamples 3 and 4 from the introduction were predicted by lspm as likely to contain little or no perspective information ie high prthe comparable performance between the naive 
Bayes models and LSPM is in fact surprising. We can train a naive Bayes model directly on the sentences and attempt to classify a sentence as reflecting either a Palestinian or Israeli perspective. A sentence is correctly classified if the predicted perspective for the sentence is the same as the perspective of the document from which it was extracted. Using this model, we obtain a classification accuracy of only 0.7529, which is much lower than the accuracy previously achieved at the document level. Identifying perspectives at the sentence level is thus more difficult than identifying perspectives at the document level. The high accuracy at the document level shows that LSPM is very effective in pooling evidence from sentences that individually contain little perspective information. In this paper we study a new problem of learning to identify the perspective from which a text is written at the document and sentence levels. We show that much of a document's perspective is expressed in word usage, and statistical learning algorithms such as SVM and naive Bayes models can successfully uncover the word patterns that reflect author perspective with high accuracy. In addition, we develop a novel statistical model to estimate how strongly a sentence conveys perspective in the absence of sentence-level annotations. By introducing latent variables and sharing parameters, the latent sentence perspective model is shown to capture well how perspectives are reflected at the document and sentence levels. The small but positive improvement due to sentence-level modeling in LSPM is encouraging. In the future, we plan to investigate how consistent LSPM sentence-level predictions are with human annotations. This material is based on work supported by the Advanced Research and Development Activity under contract number NBCHC040037. Appendix A: Gibbs samplers. Based on the model specification described in Section 4.2, we derive the Gibbs samplers for the latent sentence perspective model as follows, where dbinom and dmultinom are the density functions of binomial and multinomial distributions, respectively. The superscript t indicates that a sample is from the t-th iteration. We run three chains and collect 5000 samples. The first half of burn-in samples are discarded.
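For reference, the document-level bag-of-words decision rule underlying the NB-M and NB-B models above can be sketched as follows. This is only an illustration with add-one smoothing; it does not reproduce the paper's Beta/Dirichlet priors, MCMC inference, or the latent sentence variables of LSPM, and the two toy documents stand in for the bitterlemons corpus.

    from collections import Counter, defaultdict
    from math import log

    def train_nb(docs):
        # docs: list of (tokens, label) pairs.
        class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
        for tokens, label in docs:
            class_counts[label] += 1
            word_counts[label].update(tokens)
            vocab.update(tokens)
        return class_counts, word_counts, vocab

    def predict_nb(tokens, class_counts, word_counts, vocab):
        # argmax_d log P(d) + sum_i log P(w_i | d), with add-one smoothing.
        n_docs = sum(class_counts.values())
        best, best_lp = None, float("-inf")
        for label, count in class_counts.items():
            lp = log(count / n_docs)
            total = sum(word_counts[label].values())
            for w in tokens:
                lp += log((word_counts[label][w] + 1) / (total + len(vocab)))
            if lp > best_lp:
                best, best_lp = label, lp
        return best

    train_docs = [
        ("israeli forces responded to attacks on civilians".split(), "israeli"),
        ("palestinian protesters demanded an end to the occupation".split(), "palestinian"),
    ]
    model = train_nb(train_docs)
    print(predict_nb("the occupation provoked palestinian protests".split(), *model))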
W06-2915
Which side are you on? Identifying perspectives at the document and sentence levels. In this paper we investigate a new problem of identifying the perspective from which a document is written. By perspective we mean a point of view, for example from the perspective of Democrats or Republicans. Can computers learn to identify the perspective of a document? Not every sentence is written strongly from a perspective. Can computers learn to identify which sentences strongly convey a particular perspective? We develop statistical models to capture how perspectives are expressed at the document and sentence levels, and evaluate the proposed models on articles about the Israeli-Palestinian conflict. The results show that the proposed models successfully learn how perspectives are reflected in word usage and can identify the perspective of a document with high accuracy. We use hierarchical Bayesian modelling for opinion modelling. Our experiments were conducted on a political debate corpus. We explore relationships between sentence-level and document-level classification for a stance-like prediction task. We introduce implicit sentiment, a topic of study in computational linguistics under the rubric of identifying perspective, though similar work had begun earlier in the realm of political science.
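The claim above that perspective is reflected in word usage can be inspected directly by listing the words a multinomial naive Bayes model weights most heavily for each class. A small sketch follows; scikit-learn is assumed, and this is only an illustration, not the procedure behind the paper's own frequent-word analysis.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB


def top_words_per_class(docs, labels, k=20):
    """Return the k highest-probability words per class under a fitted NB model."""
    vec = CountVectorizer(lowercase=True)
    X = vec.fit_transform(docs)
    nb = MultinomialNB().fit(X, labels)
    vocab = np.array(vec.get_feature_names_out())
    tops = {}
    for i, cls in enumerate(nb.classes_):
        # Largest per-class log-probabilities correspond to the most frequent
        # (smoothed) words for that class.
        order = np.argsort(nb.feature_log_prob_[i])[::-1][:k]
        tops[cls] = vocab[order].tolist()
    return tops
```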
conllx shared task on multilingual dependency parsing each year the conference on computational natural language learning features a shared task in which participants train and test their systems on exactly the same data sets in order to better compare systems the tenth conll saw a shared task on multilingual dependency parsing in this paper we describe how treebanks for 13 languages were converted into the same dependency format and how parsing performance was measured we also give an overview of the parsing approaches that participants took and the results that they achieved finally we try to draw general conclusions about multilingual parsing what makes a particular language treebank or annotation scheme easier or harder to parse and which phenomena are challenging for any dependency parser acknowledgement many thanks to amit dubey and yuval krymolowski the other two organizers of the shared task for discussions converting treebanks writing and helping with the also to alexander yeh for additional help with the paper reviews his work was made possible by the mitre cor many thanks to amit dubey and yuval krymolowski the other two organizers of the shared task for discussions converting treebanks writing software and helping with the papers2previous conll shared tasks focused on np chunking general chunking clause identification named entity recognition and semantic role labeling this shared task on full parsing is the logical next stepparsing is an important preprocessing step for many nlp applications and therefore of considerable practical interestit is a complex task and as it is not straightforwardly mappable to a classical segmentation classification or sequence prediction problem it also poses theoretical challenges to machine learning researchersduring the last decade much research has been done on datadriven parsing and performance has increased steadilyfor training these parsers syntactically annotated corpora of thousands to tens of thousands of sentences are necessary so initially research has focused on englishduring the last few years however treebanks for other languages have become available and some parsers have been applied to several different languagessee section 2 for a more detailed overview of related previous researchso far there has not been much comparison between different dependency parsers on exactly the same data sets one of the reasons is the lack of a defacto standard for an evaluation metric for splitting the data into training and testing portions and in the case of constituency treebanks converted to dependency format for this conversionanother reason are the various annotation schemes and logical data formats used by different treebanks which make it tedious to apply a parser to many treebankswe hope that this shared task will improve the situation by introducing a uniform approach to dependency parsingsee section 3 for the detailed task definition and section 4 for information about the conversion of all 13 treebanksin this shared task participants had two to three months3 to implement a parsing system that could be trained for all these languages and four days to parse unseen test data for each19 participant groups submitted parsed test dataof these all but one parsed all 12 required languages and 13 also parsed the optional bulgarian dataa wide variety of parsing approaches were used some are extensions of previously published approaches others are newsee section 5 for an overviewsystems were scored by computing the labeled attachment score ie the 
percentage of scoring tokens for which the system had predicted the correct head and dependency labelpunctuation tokens were excluded from scoringresults across languages and systems varied widely from 378 to 917 see section 6 for detailed resultshowever variations are consistent enough to allow us to draw some general conclusionssection 7 discusses the implications of the results and analyzes the remaining problemsfinally section 8 describes possible directions for future researchtesniere introduced the idea of a dependency tree in which words stand in direct headdependent relations for representing the syntactic structure of a sentencehays and gaifman studied the formal properties of projective dependency grammars ie those where dependency links are not allowed to crossmelˇcuk describes a multistratal dependency grammar ie one that distinguishes between several types of dependency relations other theories related to dependency grammar are word grammar and link grammar some relatively recent rulebased full dependency parsers are kurohashi and nagao for japanese oflazer for turkish tapanainen and jarvinen for english and elworthy for english and japanesewhile phrase structure parsers are usually evaluated with the geigparseval measures of precision and recall over constituents lin and others have argued for an alternative dependencybased evaluationthat approach is based on a conversion from constituent structure to dependency structure by recursively defining a head for each constituentthe same idea was used by magerman who developed the first head table for the penn treebank and collins whose constituent parser is internally based on probabilities of bilexical dependencies ie dependencies between two wordscollins s parser and its reimplementation and extension by bikel have by now been applied to a variety of languages english czech german spanish french chinese and according to dan bikels web page arabiceisner introduced a datadriven dependency parser and compared several probability models on penn treebank datakudo and matsumoto describe a dependency parser for japanese and yamada and matsumoto an extension for englishnivres parser has been tested for swedish english czech bulgarian and chinese cheng et al while mcdonalds parser has been applied to english czech and very recently danish the training data derived from the original treebanks and given to the shared task participants was in a simple columnbased format that is an extension of joakim nivres malttab format4 for the shared task and was chosen for its processing simplicityall the sentences are in one text file and they are separated by a blank line after each sentencea sentence consists of one or more tokenseach token is represented on one line consisting of 10 fieldsfields are separated from each other by a tab5 the 10 fields are resulting from the phead column is guaranteed to be projective whereas the structure resulting from the head column will be nonprojective for some sentences of some languages as should be obvious from the description above our format assumes that each token has exactly one headsome dependency grammars and also some treebanks allow tokens to have more than one head although often there is a distinction between primary and optional secondary relations eg in the danish dependency treebank the dutch alpino treebank and the german tiger treebank for this shared task we decided to ignore any additional relationshowever the data format could easily be extended with additional optional columns in the 
futurecycles do not occur in the shared task data but are scored as normal if predicted by parsersthe character encoding of all data files is unicode which is the only encoding to cover all languages and therefore ideally suited for multilingual parsingwhile the training data contained all 10 columns the test data given to participants contained only the first 6participants parsers then predicted the head and deprel columns the predicted values were compared to the gold standard head and deprel6 the official evaluation metric is the labeled attachment score ie the percentage of scoring tokens for which the system has predicted the correct head and deprelthe evaluation script defines a nonscoring token as a token where all characters of the form value have the unicode category property punctuation7 we tried to take a test set that was representative of the genres in a treebank and did not cut through text sampleswe also tried to document how we selected this set8 we aimed at having roughly the same size for the test sets of all languages 5000 scoring tokensthis is not an exact requirement as we do not want to cut sentences in halfthe relatively small size of the test set means that even for the smallest treebanks the majority of tokens is available for training and the equal size means that for the overall ranking of participants we can simply compute the score on the concatenation of all test setsin selecting the treebanks practical considerations were the major factortreebanks had to be actually available large enough have a license that allowed free use for research or kind treebank providers who temporarily waived the fee for the shared task and be suitable for conversion into the common format within the limited timein addition we aimed at a broad coverage of different language families9 as a general rule we did not manually correct errors in treebanks if we discovered some during the conversion see also buchholz and green although we did report them to the treebank providers and several got corrected by themwe used the following six dependency treebanks czech prague dependency treebank10 arabic prague arabic dependency treebank11 slovene slovene dependency treebank12 danish danish dependency treebank13 swedish talbanken0514 turkish metusabancı treebank15 the conversion of these treebanks was the easiest task as the linguistic representation was already what we needed so the information only had to be converted from sgml or xml to the shared task formatalso the relevant information had to be distributed appropriately over the cpostag postag and feats columnsfor the swedish data no predefined distinction into coarse and finegrained pos was available so the two columns contain identical values in our formatfor the czech data we sampled both our training and test data from the official training partition because only that one contains gold standard pos tags which is also what is used in most other data setsthe czech deprel values include the suffixes to mark coordination apposition and parenthesis while these have been ignored during the conversion of the much smaller slovene datafor the arabic data sentences with missing annotation were filtered out during the conversionthe turkish treebank posed a special problem because it analyzes each word as a sequence of one or more inflectional groups each ig consists of either a stem or a derivational suffix plus all the inflectional suffixes belonging to that stemderivational suffixthe head of a whole word is not just another word but a specific 
ig of another word16 one can easily map this representation to one in which the head of a word is a word but that treebank for us13many thanks to matthias trautner kromann and assistants for creating the ddt and releasing it under the gnu general public license and to joakim nivre johan hall and jens nilsson for the conversion of ddt to maltxml14many thanks to jens nilsson johan hall and joakim nivre for the conversion of the original talbanken to talbanken05 and for making it freely available for research purposes and to joakim nivre again for prompt and proper respons to all our questions15many thanks to bilge say and kemal oflazer for granting the license for conllx and answering questions and to gulsen eryiˇgit for making many corrections to the treebank and discussing some aspects of the conversion mapping would lose information and it is not clear whether the result is linguistically meaningful practically useful or even easier to parse because in the original representation each ig has its own pos and morphological features so it is not clear how that information should be represented if all igs of a word are conflatedwe therefore chose to represent each ig as a separate token in our formatto make the result a connected dependency structure we defined the head of each nonwordfinal ig to be the following ig and the deprel to be derivwe assigned the stem of the word to the first igs lemma column with all nonfirst igs having lemma and the actual word form to the last ig with all nonlast igs having form as already mentioned in section 3 the underscore has the punctuation character property therefore nonlast igs are not scoring tokenswe also attached or reattached punctuation we used the following five treebanks of this type german tiger treebank17 japanese japanese verbmobil treebank18 portuguese the bosque part of the floresta sintatica19 dutch alpino treebank20 chinese sinica 17many thanks to the tiger team for allowing us to use the treebank for the shared task and to amit dubey for converting the treebank18many thanks to yasuhiro kawata julia bartels and colleagues from tubingen university for the construction of the original verbmobil treebank for japanese and to sandra kubler for providing the data and granting the special license for conllx19many thanks to diana santos eckhard bick and other floresta sinttica project members for creating the treebank and making it publicly available for answering many questions about the treebank for correcting problems and making new releases and for sharing scripts and explaining the head rules implemented in them thanks also to jason baldridge for useful discussions and to ben wing for independently reporting problems which diana then fixed20many thanks to gertjan van noord and the other people at the university of groningen for creating the alpino treebank and releasing it for free to gertjan van noord for answering all our questions and for providing extra test material and to antal van den bosch for help with the memorybased tagger treebank21 their conversion to dependency format required the definition of a head tablefortunately in contrast to the penn treebank for which the head table is based on pos22 we could use the grammatical functions annotated in these treebankstherefore head rules are often of the form the head child of a vpclause is the child with the hdpredicatorhdhead functionthe deprel value for a token is the function of the biggest constituent of which this token is the lexical headif the constituent comprising the complete 
sentence did not have a function we gave its lexical head token the deprel rootfor the chinese treebank most functions are not grammatical functions but semantic roles for the portuguese treebank the conversion was complicated by the fact that a detailed specification existed which tokens should be the head of which other tokens eg the finite verb must be the head of the subject and the complementzier but the main verb must be the head of the complements and adjuncts23 given that the floresta sintatica does not use traditional vp constituents but rather verbal chunks a simple magermancollinsstyle head table was not sufficient to derive the required dependency structureinstead we used a head table that defined several types of heads and a link table that specified what linked to which type of head24 another problem existed with the dutch treebankits original pos tag set is very coarse and the pos and the word stem information is not very reliable25 we therefore decided to retag the treebank automatically using the memorybased tagger which uses a very finegrained tag sethowever this created a problem with multiwordsmbt does not have the concept of multiwords and therefore tags all of their 21many thanks to academia sinica for granting the temporary license for conllx to kehjiann chen for answering our questions and to amit dubey for converting the treebank22containing rules such as the head child of a vp is the leftmost to or else the leftmost past tense verb or else etc components individuallyas alpino does not provide an internal structure for multiwords we had to treat multiwords as one tokenhowever we then lack a proper pos for the multiwordafter much discussion we decided to assign each multiword the cpostag mwu and a postag which is the concatenation of the pos of all the components as predicted by mbt likewise the feats are a concatenation of the morphological features of all componentsthis approach resulted in many different postag values for the training set and even in unseen values in the test setit remains to be tested whether our approach resulted in data sets better suited for parsing than the originalwe used two treebanks of this type spanish cast3lb26 bulgarian bultreebank27 converting a phrase structure treebank with only a few functions to a dependency format usually requires linguistic competence in the treebanks language in order to create the head table and missing function labelswe are grateful to chanev et al for converting the bultreebank to the shared task format and to montserrat civit for providing us with a head table and a function mapping for cast3lb28 table 1 shows details of all data setsfollowing nivre and nilsson we use the following definition an arc is projective iff all nodes occurring between i and j are dominated by i 29table 2 tries to give an overview of the wide variety of parsing approaches used by participantswe refer to the individual papers for detailsthere are several dimensions along which to classify approachesphrase structure parsers are often classified in terms of the parsing order topdown bottomup or various combinationsfor dependency parsing there seem to be two different interpretations of the term bottomupnivre and scholz uses this term with reference to yamada and matsumoto whose parser has to find all children of a token before it can attach that token to its headwe will refer to this as bottomuptreesanother use of bottomup is due to eisner who introduced the notion of a spana span consists of a potential dependency arc r between two 
tokens i and j and all those dependency arcs that would be spanned by r ie all arcs between tokens k and l with i k l j parsing in this order means that the parser has to find all children and siblings on one side of a token before it can attach that token to a head on the same sidethis approach assumes projective dependency structureseisner called this approach simply bottomup while nivre whose parser implicitly also follows this order called it topdownbottomup to distinguish it from the pure bottomup order of yamada and matsumoto to avoid confusion we will refer to this order as bottomupspansgiven that the parser needs to predict the head as well as the deprel value different approaches are possible predict the heads of all tokens first or predict the deprels of all tokens first or predict the head and deprel of one token before predicting these values for the next tokenwithin the first approach each dependency can be labeled independently or a type of annotation for the training data number of tokens percentage of nonscoring tokens number of parse tree units average number of tokens per parse tree unit whether a lemma or stem is available how many different cpostag values postag values feats components and deprel values occur for scoring tokens how many different values for deprel scoring tokens with head0 can have for tokens with head0 percentage of scoring tokens with head0 a head that precedes or a head that follows the token the average number of scoring tokens with head0 per parse tree unit the percentage of nonprojective relations and of parse tree units with at least one nonprojective relationfor the test data number of scoring tokens percentage of scoring tokens with a form or a lemma that does not occur in the training data afinal punctuation was deliberately left out during the conversion bthe nonlast igs of a word are nonscoring see section 41 cin many cases the parse tree unit in padt is not a sentence but a paragraph din many cases the unit in sinica is not a sentence but a commaseparated clause or phrase ethe treebank consists of transcribed dialogues in which some sentences are very short eg just hai fonly part of the arabic data has nonunderscore values for the lemma column gno mapping from finegrained to coarsegrained tags was available same for swedish of the first author algorithm ilp integer linear programming vertical direction horizontal direction search labeling nonprojective learner preprocessing postprocessing learner parameter optimization per language anonprojectivity through approximate search used for some languages b20 averaged perceptrons combined into a bayes point machine cintroduced a single pos tag aux for all swedish auxiliary and model verbs dby having no projectivity constraint eselective projectivity constraint for japanese fseveral approaches to nonprojectivity gusing some feats components to create some finergrained postag values hreattachment rules for some types of nonprojectivity ihead automaton grammar jdetermined the maximally allowed distance for relations kthrough special parser actions lpseudoprojectivizing training data only mgreedy prepend algorithm nbut two separate learners used for unlabeled parsing versus labeling oboth foward and backward then combined into a single tree with cle pbut two separate svms used for unlabeled parsing versus labeling qforward parsing for japanese and turkish backward for the rest rattaching remaining unattached tokens through exhaustive search sequence classifier can label all children of a token 
together within the third approach head and deprel can be predicted simultaneously or in two separate steps at the highest level of abstraction there are two fundamental approaches which we will call all pairs and stepwisein an all pairs approach every possible pair of two tokens in a sentence is considered and some score is assigned to the possibility of this pair having a dependency relationusing that information as building blocks the parser then searches for the best parse for the sentencethis approach is one of those described in eisner the definition of best parse depends on the precise model usedthat model can be one that defines the score of a complete dependency tree as the sum of the scores of all dependency arcs in itthe search for the best parse can then be formalized as the search for the maximum spanning tree if the parse has to be projective eisners bottomupspan algorithm can be used for the searchfor nonprojective parses mcdonald et al propose using the chuliuedmonds algorithm and mcdonald and pereira describe an approximate extension of eisners algorithmthere are also alternatives to mst which allow imposing additional constraints on the dependency structure egthat at most one dependent of a token can have a certain label such as subject see riedel et al and bick by contrast canisius et al do not even enforce the tree constraint ie they allow cyclesin a variant of the all pairs approach only those pairs of tokens are considered that are not too distant in a stepwise approach not all pairs are consideredinstead the dependency tree is built stepwise and the decision about what step to take next can be based on information about in theory all previous steps and their results call this the historystepwise approaches can use an explicit probability model over next steps eg a generative one or train a machine learner to predict thosethe approach can be deterministic or employ various types of searchin addition parsing can be done in a bottomupconstituent or a bottomupspans fashion finally parsing can start at the first or the last token of a sentencewhen talking about languages that are written from left to right this distinction is normally referred to as lefttoright versus righttolefthowever for multilingual parsing which includes languages that are written from right to left or sometimes top to bottom this terminology is confusing because it is not always clear whether a lefttoright parser for arabic would really start with the leftmost token of a sentence or like for other languages with the first in general starting with the first token makes more sense from a psycholinguistic point of view but starting with the last might be beneficial for some languages the parsing order directly determines what information will be available from the history when the next decision needs to be madestepwise parsers tend to interleave the prediction of head and deprelall data sets except the chinese one contain some nonprojective dependency arcs although their proportion varies from 01 to 54participants took the following approaches to nonprojectivity or if the classifier chooses a special action or the parser predicts a trace training trees to projective ones but encode the information necessary to make the inverse transformation in the deprel so that this inverse transformation can also be carried out on the test trees table 3 shows which column values have been used by participantsnobody used the pheadpdeprel column in any wayit is likely that those who did not use any of the other 
columns did so mainly for practical reasons such as the limited time andor the difficulty to integrate it into an existing parserlemma or stem information has often been ignored in previous dependency parsersin the shared task data it was available in just over half the data setsboth lemma and form encode lexical informationthere is therefore a certain redundancyparticipants have used these two columns in different ways pating groups a column value was not used at all used in at least some features variant of form used only if lemma is missing or only parts of feats used used more extensively than another column containing related information eg also in combination features or features for context tokens in addition to features for the focus tokenrer used in the reranker onlyfor the last column atomic comp components crpr crossproduct aalso prefix and suffix for labeler binstead of form for arabic and spanish cinstead of postag for dutch and turkish dfor labeler unlab parsing only some for global features ealso prefix falso 1st character of postag gonly as backoff hreranker also suffix if no lemma use prefix of form ilemma postag feats only for backoff smoothing all data sets except german and swedish had different values for cpostag and postag although the granularity varied widelyagain there are different approaches to dealing with the redundancy by design a feats column value has internal structuresplitting it at the 30 results in a set of componentsthe following approaches have been used when deciding whether there should be a dependency relation between tokens i and j all parsers use at least information about these two tokensin addition the following sources of information can be used token context a limited number of tokens directly preceding or following i or j children information about the already found children of i and j siblings in a setup where the decision is not is there a relation between i and j but is i the head of j or in a separate labeling step the siblings of i are the already found children of j structural context other than childrensiblings neighboring subtreesspans or ancestors of i and j distance from i to j information derived from all the tokens in between i and j global features explicit feature combinations for classifierbased parsers the previous actions ie classifications whether information about labels is used as input for other decisionsfinally the precise set of features can be optimized per languagetable 5 shows the official results for submitted parser outputs31 the two participant groups with the highest total score are mcdonald et al and nivre et alas both groups had much prior experience in multilingual dependency parsing it is not too surprising that they both achieved good resultsit is surprising however how similar their total scores are given that their approaches are quite different the results show that experiments on just one or two languages certainly give an indication of the usefulness of a parsing approach but should not be taken as proof that one algorithm is better for parsing than another that performs slightly worsethe bulgarian scores suggest that rankings would not have been very different had it been the 13th obligatory languagestable 6 shows that the same holds had we used another evaluation metricnote that a negative number in both the third and fifth column indicates that errors on head and deprel occur together on the same token more often than for other parsersfinally we checked that had we also scored on punctuation tokens 
total scores as well as rankings would only have shown very minor differencesthe average las over all data sets varies between 560 for turkish and 859 for japanesetop scores vary between 657 for turkish and 917 for japanesein general there is a high correlation between the best scores and the average scoresthis means that data sets are inherently easy or difficult no matter what the parsing approachthe easiest one is clearly the japanese data sethowever it would be wrong to conclude from this that japanese in general is easy to parseit is more likely that the effect stems from the characteristics of the datathe japanese verbmobil treebank contains dialogue within a restricted domain as 31unfortunately urgent other obligations prevented two participants from submitting a paper about their shared task worktheir results are indicated by a smaller fontsagae used a bestfirst probabilistic version of ym official metric the third column shows how the ranking for each participant changes if the unlabeled attachment scores as shown in the fourth column are usedthe fifth column shows how the ranking changes if the label accuracies as shown in the sixth column are used ain bicks method preference is given to the assignment of dependency labels bschiehlen derived the constituent labels for his pcfg approach from the deprel values cdue to the bug can be seen in table 1 there are very few new form values in the test data which is an indication of many dialogues in the treebank being similarin addition parsing units are short on averagefinally the set of deprel values is very small and consequently the ratio between postag and deprel values is extremely favorableit would be interesting to apply the shared task parsers to the kyoto university corpus which is the standard treebank for japanese and has also been used by kudo and matsumoto tions from the average per participant are calculated over the 12 obligatory languages note that due to the equal sizes of the test sets for all languages the total scores ie the las over the concatenation of the 12 obligatory test sets are identical to the average las over the 12 test setsaverages and standard deviations per data set are calculated ignoring zero scores the highest score for each column and those not significantly worse are shown in bold facesignificance was computed using the official scoring script evalpl and dan bikels randomized parsing evaluation comparator which implements stratified shuffling aattardis submitted results contained an unfortunate bug which caused the deprel values of all tokens with head0 to be an underscore using the simple heuristic of assigning the deprel value that most frequently occured with head0 in training would have resulted in a total las of 675 or to the domainrestricted japanese dialogues of the atr corpus 32 other relatively easy data sets are portuguese bulgarian german and chinese chinese also has the second highest top score33 and chinese parsing units 32unfortunately both these treebanks need to be bought so they could not be used for the shared tasknote also that japanese dependency parsers often operate on bunsetsus instead of wordsbunsetsus are related to chunks and consist of a content word and following particles 33although this seems to be somewhat of a mystery compared to the ranking according to the average scoresriedel et are the shortest and chinese parsing units are the shortestwe note that all easier data sets offer large to middlesized training setsthe most difficult data set is clearly the turkish oneit 
is rather small and in contrast to arabic and slovene which are equally small or smaller it covers 8 genres which results in a high percentage of new form and lemma values in the test setit is also possible that parsers get confused by the high proportion of nonscoring tokens al s top score is more than 3 absolute above the second highest score and they offer no clear explanation for their success and the many tokens with as either the form or lemmathere is a clear need for further research to check whether other representations result in better performancethe secondmost difficult data set is arabicit is quite small and has by far the longest parsing unitsthe thirdmost difficult data set is sloveneit has the smallest training sethowever its average as well as top score far exceed those for arabic and turkish which are largerinterestingly although the treebank text comes from a single source there is quite a high proportion of new form and lemma values in the test setthe fourthmost difficult data set is czech in terms of the average score and dutch in terms of the top scorethe diffence in ranking for czech is probably due to the fact that it has by far the largest training set and ironically several participants could not train on all data within the limited time or else had to partition the data and train one model for each partitionlikely problems with the dutch data set are noisy postag and lemma postag for multiwords and the highest proportion of nonprojectivityfactors that have been discussed so far are the size of the training data the proportion of new form and lemma values in the test set the ratio of postag to deprel values the average length of the parsing unit the proportion of nonprojective arcsparsing unitsit would be interesting to derive a formula based on those factors that fits the shared task data and see how well it predicts results on new data setsone factor that seems to be irrelevant is the headfinal versus headinitial distinction as both the easiest and the most difficult data sets are for headfinal languagesthere is also no clear proof that some language families are easier than othersit would be interesting to test parsers on the hebrew treebank to compare performance to arabic the other semitic language in the shared task or on the hungarian szeged corpus for another agglutinative languagefor most parsers their ranking for a specific language differs at most a few places from their overall rankingthere are some outliers thoughfor example johansson and nugues and yuret are seven ranks higher for turkish than overall while riedel et al are five ranks lowercanisius et al are six and schiehlen and spranger even eight ranks higher for dutch than overall while riedel et al are six ranks lower for czech and johansson and nugues also six for chinesesome of the higher rankings could be related to native speaker competence and resulting better parameter tuning but other outliers remain a mysteryeven though mcdonald et al and nivre et al obtained very similar overall scores a more detailed look at their performance shows clear differencestaken over all 12 obligatory languages both obtain a recall of more than 89 on root tokens but nivres precision on them is much lower than mcdonalds this is likely to be an effect of the different parsing approacheswhen breaking down by partofspeech the results of all participants on all data sets one can observe some patterns of easy and difficult partsofspeech at least in so far as tag sets are comparable across treebanksthe one pos that 
everybody got 100 correct are the german infinitival markers accuracy on the swedish equivalent is not far off at 98other easy pos are articles with accuracies in the nineties for german dutch swedish portuguese and spanishas several participants have remarked in their papers prepositions are much more difficult with typical accuracies in the fifties or sixtiessimilarly conjunctions typically score low with accuracies even in the forties for arabic and dutchthere are many directions for interesting research building on the work done in this shared taskone is the question which factors make data sets easy or difficultanother is finding out how much of parsing performance depends on annotations such as the lemma and morphological features which are not yet routinely part of treebanking effortsin this respect it would be interesting to repeat experiments with the recently released new version of the tiger treebank which now contains this informationone line of research that does not require additional annotation effort is defining or improving the mapping from coarsegrained to finegrained pos tags34 another is harvesting and using largescale distributional data from the internetwe also hope that by combining parsers we can achieve even better performance which in turn would facilitate the semiautomatic enlargement of existing treebanks and possibly the detection of remaining errorsthis would create a positive feedback loopfinally one must not forget that almost all of the lemma postag and feats values and even part of the form column have been manually created or corrected and that the general parsing task has to integrate automatic tokenization morphological analysis and taggingwe hope that the resources created and lessons learned during this shared task will be valuable for many years to come but also that they will be extended and improved by others in the future and that the shared task website will grow into an informational hub on multilingual dependency parsing
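The column format and the labeled attachment metric described earlier in this paper are straightforward to operationalize. Below is a minimal sketch of reading the 10-column files and scoring LAS/UAS; the punctuation test approximates the "all characters have the Unicode punctuation property" rule of the official scoring script, and the file handling is a simplified assumption rather than a reimplementation of eval.pl.

```python
import unicodedata


def read_conllx(path):
    """Yield sentences as lists of token rows (each row = list of 10 fields)."""
    sent = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                if sent:
                    yield sent
                    sent = []
            else:
                sent.append(line.split("\t"))
    if sent:
        yield sent


def is_scoring(form):
    # Non-scoring token: every character has a Unicode punctuation category (P*).
    return not all(unicodedata.category(ch).startswith("P") for ch in form)


def attachment_scores(gold_path, pred_path):
    """Approximate LAS/UAS over scoring tokens (FORM = col 2, HEAD = col 7, DEPREL = col 8)."""
    total = las = uas = 0
    for gold, pred in zip(read_conllx(gold_path), read_conllx(pred_path)):
        for g, p in zip(gold, pred):
            if not is_scoring(g[1]):
                continue
            total += 1
            if g[6] == p[6]:           # correct head
                uas += 1
                if g[7] == p[7]:       # correct head and dependency label
                    las += 1
    return las / total, uas / total
```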
W06-2920
CoNLL-X shared task on multilingual dependency parsing. Each year the Conference on Computational Natural Language Learning features a shared task in which participants train and test their systems on exactly the same data sets, in order to better compare systems. The tenth CoNLL saw a shared task on multilingual dependency parsing. In this paper we describe how treebanks for 13 languages were converted into the same dependency format and how parsing performance was measured. We also give an overview of the parsing approaches that participants took and the results that they achieved. Finally, we try to draw general conclusions about multilingual parsing: what makes a particular language, treebank or annotation scheme easier or harder to parse, and which phenomena are challenging for any dependency parser. The CoNLL-X shared tasks focused on multilingual dependency parsing.
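The projectivity definition used in the shared task (an arc (i, j) is projective iff all nodes occurring between i and j are dominated by i) translates directly into a small check. A sketch, assuming 1-based token indices and head[k] == 0 for tokens attached to the artificial root:

```python
def is_projective_arc(head, i, j):
    """head[k] is the head of token k (1-based); head[k] == 0 means root.
    An arc (i, j) is projective iff every token strictly between i and j
    is dominated by i, i.e. reachable from i by following head links."""
    lo, hi = min(i, j), max(i, j)
    for k in range(lo + 1, hi):
        node = k
        while node != 0 and node != i:
            node = head[node]
        if node != i:
            return False
    return True


def nonprojective_arcs(head):
    """List the (head, dependent) arcs in a tree that violate projectivity."""
    return [(head[d], d) for d in range(1, len(head))
            if head[d] != 0 and not is_projective_arc(head, head[d], d)]


# Example: head = [0, 4, 4, 1, 0] (index 0 unused) describes a 4-token tree
# where arc (1, 3) is non-projective, because token 2 lies between 1 and 3
# but attaches to 4, outside the subtree of token 1.
```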
experiments with a multilanguage nonprojective dependency parser parsing natural language is an essential step in several applications that involve document analysis eg knowledge extraction question answering summarization filteringthe best performing systems at the trec question answering track employ parsing for analyzing sentences in order to identify the query focus to extract relations and to disambiguate meanings of wordsthese are often demanding applications which need to handle large collections and to provide results in a fraction of a seconddependency parsers are promising for these applications since a dependency tree provides predicateargument relations which are convenient for use in the later stagesrecently statistical dependency parsing techniques have been proposed which are deterministic andor linear these parsers are based on learning the correct sequence of shiftreduce actions used to construct the dependency treelearning is based on techniques like svm or memory based learning which provide high accuracy but are often computationally expensivekudo and matsumoto report a two week learning time on a japanese corpus of about 8000 sentences with svmusing maximum entropy classifiers i built a parser that achieves a throughput of over 200 sentences per second with a small loss in accuracy of about 23 the efficiency of maximum entropy classifiers seems to leave a large margin that can be exploited to regain accuracy by other meansi performed a series of experiments to determine whether increasing the number of features or combining several classifiers could allow regaining the best accuracyan experiment cycle in our setting requires less than 15 minutes for a treebank of moderate size like the portuguese treebank and this allows evaluating the effectiveness of addingremoving features that hopefully might apply also when using other learning techniquesi extended the yamadamatsumoto parser to handle labeled dependencies i tried two approaches using a single classifier to predict pairs of actions and labels and using two separate classifiers one for actions and one for labelsfinally i extended the repertoire of actions used by the parser in order to handle nonprojective relationstests on the pdt show that the added actions are sufficient to handle all cases of nonprojectivityhowever since the cases of nonprojectivity are quite rare in the corpus the general learner is not supplied enough of them to learn how to classify them accurately hence it may be worthwhile to exploit a second classifier trained specifically in handling nonprojective situationsthe overall parsing algorithm is an inductive statistical parser which extends the approach by yamada and matsumoto by adding six new reduce actions for handling nonprojective relations and also performs dependency labelingparsing is deterministic and proceeds bottomuplabeling is integrated within a single processing stepproceedings of the 10th conference on computational natural language learning pages 166170 new york city june 2006 c2006 association for computational linguistics the parser is modular it can use several learning algorithms maximum entropy svm winnow voted perceptron memory based learning as well as combinations thereofthe submitted runs used maximum entropy and i present accuracy and performance comparisons with other learning algorithmsno additional resources are usedno preprocessing or postprocessing is used except stemming for danish german and swedishlemma was used in features whenever available otherwise the form 
was usedfor danish german and swedish the snowball stemmer was used to generate a value for lemmathis use of stemming slightly improved both accuracy and performanceonly cpostag were usedpheadpdeprel were not usedfeats were used to extract a single token combining gender number person and case through a language specific algorithmthe selection of features to be used in the parser is controlled by a number of parametersfor example the parameter posfeatures determines for which tokens the pos tag will be included in the context posleftchi7dren determines how many left outermost children of a token to consider pastactions tells how many previous actions to include as featuresthe settings used in the submitted runs are listed below and configure the parser for not using any word formspositive numbers refer to input tokens negative ones to token on the stackthe context for pos tags consisted of 1 token left and 3 tokens to the right of the focus words except for czech and chinese were 2 tokens to the left and 4 tokens to the right were usedthese values were chosen by performing experiments on the training data using 10 of the sentences as heldout data for developmentthe parser constructs dependency trees employing a deterministic bottomup algorithm which performs shiftreduce actions while analyzing input sentences in lefttoright orderusing a notation similar to the state of the parser is represented by a quadruple where s is the stack i is the list of input tokens t is a stack of temporary tokens and a is the arc relation for the dependency graphgiven an input string w the parser is initialized to w and terminates when it reaches a configuration athe parser by yamada and matsumoto used the following actions shift in a configuration pushes n to the stack producing the configuration right1 in a configuration adds an arc from s1 to n and pops s1 from the stack producing the configuration left in a configuration adds an arc from n to s1 pops n from input pops s1 from the stack and moves it back to i producing the configuration at each step the parser uses classifiers trained on treebank data in order to predict which action to perform and which dependency label to assign given the current configurationfor handling nonprojective relations nivre and nilsson suggested applying a preprocessing step to a dependency parser which consists in lifting nonprojective arcs to their head repeatedly until the tree becomes pseudoprojectivea postprocessing step is then required to restore the arcs to the proper headsi adopted a novel approach which consists in adding six new parsing actions right2 in a configuration adds an arc from s2 to n and removes s2 from the stack producing the configuration left2 in a configuration adds an arc from n to s2 pops n from input pops s1 from the stack and moves it back to i producing the configuration right3 in a configuration adds an arc from s3 to n and removes s3 from the stack producing the configuration left3 in a configuration adds an arc from n to s3 pops n from input pops s1 from the stack and moves it back to i producing the configuration extract in a configuration move s2 from the stack to the temporary stack then shift producing the configuration insert in a configuration pops s1 from t and pushes it to the stack producing the configuration the actions right2 and left2 are sufficient to handle almost all cases of nonprojectivity for instance the training data for czech contain 28081 nonprojective relations of which 26346 can be handled by left2right2 1683 by 
left3right3 and just 52 require extractinserthere is an example of nonprojectivity that can be handled with right2 and left3 vetšinu techto prístroju lze take používat nejen jako fax ale soucasne v6tšinu t6chto pfístroju lze take používat nejen jako fax ale the remaining cases are handled with the last two actions extract is used to postpone the creation of a link by saving the token in a temporary stack insert restores the token from the temporary stack and resumes normal processing zou gemaakt moeten worden in this fragment in dutch is dealt by performing an extract in configuration followed immediately by an insert leading to the following configuration which can be handled by normal shiftreduce actions zou moeten worden gemaakt in another linguistic phenomenon is the anticipation of pronouns like in this portuguese fragment tudo a possive7 encontrar them o ix sa7o de antiguidades desde objectos de ouro e prata moedas the problem here is due to the pronoun tudo which is the object of encontrar but which is also the head of desde and its preceding commain order to be able to properly link desde to tudo it is necessary to postpone its processing hence it is saved with extract to the temporary stack and put back later in front of the comma with insertin fact the pair extractinsert behaves like a generalized rightnleftn when n is not knownas in the example except for the case where n2 it is difficult to predict the value of n since there can be an arbitrary long sequence of tokens before reaching the position where the link can be insertedi used my own c implementation of maximum entropy which is very fast both in learning and classificationon a 28 mhz pentium xeon pc the learning time is about 15 minutes for portuguese and 4 hours for czechparsing is also very fast with an average throughput of 200 sentences per second table 1 reports parse time for parsing each whole test setusing memory based learning increases considerably the parsing time while as expected learning time is quite shorteron the other hand mbl achieves an improvement up to 5 in accuracy as shown in detail in table 1shiftreduce one to decide which reduce action and a third one to choose the dependency in case of leftright action for details on the conllx shared task and the measurements see i performed several experiments to tune the parseri also tried alternative machine learning algorithms including svm winnow voted perceptronthe use of svm turned out quite impractical since the technique does not scale to the size of training data involved training an svm with such a large number of features was impossible for any of the larger corporafor smaller ones egportuguese training required over 4 days but produced a bad model which could not be used and the libsvm implementationsgiven the speed of the maximum entropy classifier i explored whether increasing the number of features could improve accuracyi experimented adding various features controlled by the parameters above none appeared to be effective except the addition of the previous actionthe classifier returns both the action and the label to be assignedsome experiments were carried out splitting the task among several specialized classifiersi experimented with 2 two classifiers one to decide which action to perform and a second one to choose the dependency in case of leftright action none of these variants produced improvements in precisiononly a small improvement in labeled attachment score was noticed using the full nonspecialized classifier to decide the action but 
discarding its suggestion for label and using a specialized classifier for labelinghowever this was combined with a slight decrease in unlabeled attachment score hence it was not considered worth the effortthe parser does not attempt to assign a dependency relation to the roota simple correction of assigning a default value for each language gave an improvement in the las as shown in table 1out of the 45 dependency relations that the parser had to assign to a sentence the largest number of errors occurred assigning npred acc piv cjt n p the highest number of head error occurred at the cpos tags prp with 193 and v with 176in particular just four prepositions accounted for 120 head errorsmost of the errors occur near punctuationsoften this is due to the fact that commas introduce relative phrases or parenthetical phrases that produce diversions in the flowsince the parser makes decisions analyzing only a window of tokens of a limited size it gets confused in creating attachmentsi tried to add some global context features to be able to distinguish these cases in particular a count of the number of punctuation marks seen so far whether punctuation is present between the focus wordsnone of them helped improving precision and were not used in the submitted runsmost current parsers for czech do not perform well on apos coord and exd but they are not very frequentthe largest number of errors occur on obj adv sb atr there is also often confusion among these 33 times obj instead of adv 32 sb instead of obj 28 atr instead of advthe high error rate of j is expected mainly due to coordination problemsthe error of are is also relatively highprepositions are problematic but their error rate is higher than expected since they are in terms of surface order rather regular and close to the nounit could be that the decision by the pdt to hang them as heads instead of children causes a problem in attaching themit seems that a postprocessing may correct a significant portion of these errorsthe labels ending with _co _ap or _pa are nodes who are members of the coordination apposition or the parenthetical relation so it may be worth while omitting these suffixes in learning and restore them by postprocessingan experiment using as training corpus a subset consisting of just sentences which include nonprojective relations achieved a las of 6528 and uas of 7620 using mblacknowledgmentskiril ribarov provided insightful comments on the results for czechthe following treebanks were used for training the parser
W06-2922
Experiments with a multilanguage non-projective dependency parser. We present a deterministic classifier-based shift-reduce parser. We develop DeSR, an incremental deterministic classifier-based parser. We propose a transition system whose individual transitions can deal with non-projective dependencies only to a limited extent, depending on the distance in the stack of the nodes involved in the newly constructed dependency.
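To ground the transition-based approach summarized above, here is a schematic shift-reduce skeleton with depth-2 actions that pair the stack top with an item further down the stack, which is the essence of the extended Left2/Right2 actions described earlier. It is a simplification for illustration: the exact configurations, the Extract/Insert actions, and the classifier that chooses actions (a maximum entropy model in the paper) are omitted, and these transition definitions should not be read as the paper's precise ones.

```python
def parse(n_tokens, choose_action):
    """Schematic deterministic shift-reduce dependency parser (illustration only).

    Tokens are 1..n_tokens; 0 is the artificial root. choose_action inspects the
    configuration and returns one of: "shift", "left", "right", "left2", "right2".
    Rendered in an arc-standard style: "left"/"right" build an arc between the
    top two stack items; the "2" variants pair the stack top with the item two
    below it, allowing a limited class of non-projective arcs.
    """
    stack = [0]                                   # artificial root
    buffer = list(range(1, n_tokens + 1))
    heads = {}                                    # dependent -> head
    while buffer or len(stack) > 1:
        action = choose_action(stack, buffer, heads)
        if action == "shift" and buffer:
            stack.append(buffer.pop(0))
        elif action == "left" and len(stack) >= 3:
            heads[stack.pop(-2)] = stack[-1]      # stack top heads the item below it
        elif action == "right" and len(stack) >= 2:
            dep = stack.pop()
            heads[dep] = stack[-1]                # item below heads the stack top
        elif action == "left2" and len(stack) >= 4:
            heads[stack.pop(-3)] = stack[-1]      # top heads the item two below it
        elif action == "right2" and len(stack) >= 3:
            dep = stack.pop()
            heads[dep] = stack[-2]                # item two below heads the top
        elif buffer:                              # invalid action: fall back to shift
            stack.append(buffer.pop(0))
        else:                                     # attach leftovers to the root
            heads[stack.pop()] = 0
    return heads
```

In a real system, choose_action would be a trained classifier over features of the current configuration (stack, buffer, partial arcs) rather than a hand-written rule.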
multilingual dependency analysis with a twostage discriminative parser present a twostage multilingual pendency parser and evaluate it on 13 diverse languages the first stage based on the unlabeled dependency parsing models described by mcdonald and pereira augmented with morphological features for a subset of the languages the second stage takes the output from the first and labels all the edges in the dependency graph with appropriate syntactic categories using a globally trained sequence classifier over components of the graph we report results on the conllx shared task data sets and present an error analysis often in language processing we require a deep syntactic representation of a sentence in order to assist further processingwith the availability of resources such as the penn wsj treebank much of the focus in the parsing community had been on producing syntactic representations based on phrasestructurehowever recently their has been a revived interest in parsing models that produce dependency graph representations of sentences which model words and their arguments through directed edges this interest has generally come about due to the computationally efficient and flexible nature of dependency graphs and their ability to easily model nonprojectivity in freerword order languagesnivre gives an introduction to dependency representations of sentences and recent developments in dependency parsing strategiesdependency graphs also encode much of the deep syntactic information needed for further processingthis has been shown through their successful use in many standard natural language processing tasks including machine translation sentence compression and textual inference in this paper we describe a twostage discriminative parsing approach consisting of an unlabeled parser and a subsequent edge labelerwe evaluate this parser on a diverse set of 13 languages using data provided by the conllx sharedtask organizers the results are promising and show the language independence of our system under the assumption of a labeled dependency corpus in the target languagefor the remainder of this paper we denote by x x1 xn a sentence with n words and by y a corresponding dependency grapha dependency graph is represented by a set of ordered pairs e y in which xj is a dependent and xi is the corresponding headeach edge can be assigned a label l from a finite set l of predefined labelswe proceedings of the 10th conference on computational natural language learning pages 216220 new york city june 2006 c2006 association for computational linguistics assume that all dependency graphs are trees but may be nonprojective both of which are true in the data sets we usethe first stage of our system creates an unlabeled parse y for an input sentence xthis system is primarily based on the parsing models described by mcdonald and pereira that work extends the maximum spanning tree dependency parsing framework to incorporate features over multiple edges in the dependency graphan exact projective and an approximate nonprojective parsing algorithm are presented since it is shown that nonprojective dependency parsing becomes nphard when features are extended beyond a single edgethat system uses mira an online largemargin learning algorithm to compute model parametersits power lies in the ability to define a rich set of features over parsing decisions as well as surface level features relative to these decisionsfor instance the system of mcdonald et al incorporates features over the part of speech of words occurring 
between and around a possible headdependent relationthese features are highly important to overall accuracy since they eliminate unlikely scenarios such as a preposition modifying a noun not directly to its left or a noun modifying a verb with another verb occurring between themwe augmented this model to incorporate morphological features derived from each tokenconsider a proposed dependency of a dependent xj on the head xi each with morphological features mj and mi respectivelywe then add to the representation of the edge mi as head features mj as dependent features and also each conjunction of a feature from both setsthese features play the obvious role of explicitly modeling consistencies and commonalities between a head and its dependents in terms of attributes like gender case or numbernot all data sets in our experiments include morphological features so we use them only when availablethe second stage takes the output parse y for sentence x and classifies each edge e y with a particular label lideally one would like to make all parsing and labeling decisions jointly so that the shared knowledge of both decisions will help resolve any ambiguitieshowever the parser is fundamentally limited by the scope of local factorizations that make inference tractablein our case this means we are forced only to consider features over single edges or pairs of edgeshowever in a two stage system we can incorporate features over the entire output of the unlabeled parser since that structure is fixed as inputthe simplest labeler would be to take as input an edge e y for sentence x and find the label with highest score doing this for each edge in the tree would produce the final outputsuch a model could easily be trained using the provided training data for each languagehowever it might be advantageous to know the labels of other nearby edgesfor instance if we consider a head xi with dependents xj1 xjm it is often the case that many of these dependencies will have correlated labelsto model this we treat the labeling of the edges as a sequence labeling problem we use a firstorder markov factorization of the score s l i y x in which each factor is the score of labeling the adjacent edges and in the tree ywe attempted higherorder markov factorizations but they did not improve performance uniformly across languages and training became significantly slowerfor score functions we use simple dot products between high dimensional feature representations and a weight vector assuming we have an appropriate feature representation we can find the highest scoring label sequence with viterbis algorithmwe use the mira online learner to set the weights since we found it trained quickly and provide good performancefurthermore it made the system homogeneous in terms of learning algorithms since that is what is used to train our unlabeled parser of course we have to define a set of suitable featureswe used the following dependent have identical valuesis this the leftrightmost dependent for the headis this the first dependent to the leftright of the headvarious conjunctions of these were included based on performance on heldout datanote that many of these features are beyond the scope of the edge based factorizations of the unlabeled parserthus a joint model of parsing and labeling could not easily include them without some form of reranking or approximate parameter estimationwe trained models for all 13 languages provided by the conll organizers based on performance from a heldout section of the training data we used 
nonprojective parsing algorithms for czech danish dutch german japanese portuguese and slovene and projective parsing algorithms for arabic bulgarian chinese spanish swedish and turkishfurthermore for arabic and spanish we used lemmas instead of inflected word forms again based on performance on heldout data1results on the test set are given in table 1performance is measured through unlabeled accuracy which is the percentage of words that modify the correct head in the dependency graph and labeled accuracy which is the percentage of words that modify the correct head and label the dependency edge correctly in the graphthese results show that the discriminative spanning tree parsing framework is easily adapted across all these languagesonly arabic turkish and slovene have parsing accuracies significantly below 80 and these languages have relatively small training sets andor are highly inflected with little to no word order constraintsfurthermore these results show that a twostage system can achieve a relatively high performancein fact for every language our models perform significantly higher than the average performance for all the systems reported in buchholz et al for the remainder of the paper we provide a general error analysis across a wide set of languages plus a detailed error analysis of spanish and arabicour system has several components including the ability to produce nonprojective edges sequential japanese portuguese slovene spanish swedish and turkishnp allow nonprojectiveforce projective sa sequential labelingatomic labeling mb include morphology featuresno morphology features assignment of edge labels instead of individual assignment and a rich feature set that incorporates morphological properties when availablethe benefit of each of these is shown in table 2these results report the average labeled and unlabeled precision for the 10 languages with the smallest training setsthis allowed us to train new models quicklytable 2 shows that each component of our system does not change performance significantly however if we only allow projective parses do not use morphological features and label edges with a simple atomic classifier the overall drop in performance becomes significant allowing nonprojective parses helped with freer word order languages like dutch including rich morphology features naturally helped with highly inflected languages in particular spanish arabic turkish slovene and to a lesser extent dutch and portuguesederived morphological features improved accuracy in all these languages by 13 absolutesequential classification of labels had very little effect on overall labeled accuracy 2the major contribution was in helping to distinguish subjects objects and other dependents of main verbs which is the most common labeling errorthis is not surprising since these edge labels typically are the most correlated for instance sequential labeling improves the labeling of 2this difference was much larger for experiments in which gold standard unlabeled dependencies are used objects from 817756 to 842813 and the labeling of subjects from 868882 to 905904 for swedishsimilar improvements are common across all languages though not as dramaticeven with this improvement the labeling of verb dependents remains the highest source of erroralthough overall unlabeled accuracy is 86 most verbs and some conjunctions attach to their head words with much lower accuracy 69 for main verbs 75 for the verb ser and 65 for coordinating conjunctionsthese words form 17 of the test corpusother 
highfrequency word classes with relatively low attachment accuracy are prepositions adverbs and subordinating conjunctions for a total of another 23 of the test corpusthese weaknesses are not surprising since these decisions encode the more global aspects of sentence structure arrangement of clauses and adverbial dependents in multiclause sentences and prepositional phrase attachmentin a preliminary test of this hypothesis we looked at all of the sentences from a development set in which a main verb is incorrectly attachedwe confirmed that the main clause is often misidentified in multiclause sentences or that one of several conjoined clauses is incorrectly taken as the main clauseto test this further we added features to count the number of commas and conjunctions between a dependent verb and its candidate headunlabeled accuracy for all verbs increases from 71 to 73 and for all conjunctions from 71 to 74unfortunately accuracy for other word types decreases somewhat resulting in no significant net accuracy changenevertheless this very preliminary experiment suggests that widerrange features may be useful in improving the recognition of overall sentence structureanother common verb attachment error is a switch between head and dependent verb in phrasal verb forms like dejan intrigar or qiero decir possibly because the nonfinite verb in these cases is often a main verb in training sentenceswe need to look more carefully at verb features that may be useful here in particular features that distinguish finite and nonfinite formsin doing this preliminary analysis we noticed some inconsistencies in the reference dependency structuresfor example in the test sentence lo que decia mae west de si misma podriamos decirlo tambien los hombres decias head is given as decirlo although the main verbs of relative clauses are normally dependent on what the relative modifies in this case the article loa quick look at unlabeled attachment accuracies indicate that errors in arabic parsing are the most common across all languages prepositions conjunctions and to a lesser extent verbs similarly for labeled accuracy the hardest edges to label are for dependents of verbs ie subjects objects and adverbialsnote the difference in error between the unlabeled parser and the edge labeler the former makes mistakes on edges into prepositions conjunctions and verbs and the latter makes mistakes on edges into nouns each stage by itself is relatively accurate but since there is very little overlap in the kinds of errors each makes overall labeled accuracy drops to 67this drop is not nearly as significant for other languagesanother source of potential error is that the average sentence length of arabic is much higher than other languages however if we only look at performance for sentences of length less than 30 the labeled accuracy is still only 71the fact that arabic has only 1500 training instances might also be problematicfor example if we train on 200 400 800 and the full training set labeled accuracies are 54 60 62 and 67clearly adding more data is improving performancehowever when compared to the performance of slovene and spanish it appears that arabic parsing is laggingwe have presented results showing that the spanning tree dependency parsing framework of mcdonald et al generalizes well to languages other than englishin the future we plan to extend these models in two waysfirst we plan on examining the performance difference between twostaged dependency parsing and joint parsing plus labelingit is our hypothesis that 
for languages with fine-grained label sets, joint parsing and labeling will improve performance. second, we plan on integrating any available morphological features in a more principled manner. the current system simply includes all morphological bigram features. it is our hope that a better morphological feature set will help with both unlabeled parsing and labeling for highly inflected languages
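the first stage described above augments each candidate head-dependent edge with the morphological attributes of both tokens and every conjunction of a head attribute with a dependent attribute. the following is a minimal python sketch of that feature template, assuming attributes arrive as simple strings; the function name and feature-string format are illustrative rather than taken from the authors' implementation.

```python
def morphological_edge_features(head_feats, dep_feats):
    """Build edge features from the morphological attributes of a proposed
    head and dependent (e.g. ["Gender=Fem", "Number=Sing"]).  Returns
    head-only, dependent-only and conjoined head+dependent features,
    mirroring the template described in the paper; names are illustrative."""
    features = ["HEAD_MORPH=" + m for m in head_feats]
    features += ["DEP_MORPH=" + m for m in dep_feats]
    # conjoin every head attribute with every dependent attribute, so the
    # learner can reward agreement in gender, case or number along the edge
    features += ["HEAD+DEP_MORPH=%s|%s" % (mh, md)
                 for mh in head_feats for md in dep_feats]
    return features

# toy usage: a feminine singular head with a feminine singular dependent
print(morphological_edge_features(["Gender=Fem", "Number=Sing"],
                                  ["Gender=Fem", "Number=Sing"]))
```

the conjoined features are what let the model explicitly capture consistencies such as gender or case agreement along an edge, as described above.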
W06-2932
multilingual dependency analysis with a two-stage discriminative parser. we present a two-stage multilingual dependency parser and evaluate it on 13 diverse languages. the first stage is based on the unlabeled dependency parsing models described by mcdonald and pereira, augmented with morphological features for a subset of the languages. the second stage takes the output from the first and labels all the edges in the dependency graph with appropriate syntactic categories, using a globally trained sequence classifier over components of the graph. we report results on the conll-x shared task data sets and present an error analysis. we use postprocessing for non-projective dependencies and for labeling. we treat the labeling of dependencies as a sequence labeling problem. the specific graph-based model studied in this work factors scores over pairs of arcs and uses near-exhaustive search for unlabeled parsing, coupled with a separate classifier to label each arc
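the second stage of this parser treats the labels of the edges attached to a common head as a first-order sequence labeling problem decoded with viterbi's algorithm. a minimal sketch of that decoding step follows, assuming a generic score(prev_label, label, position) function supplied by the trained model; the interface is a hypothetical stand-in, not the authors' api.

```python
def viterbi_label_edges(num_edges, label_set, score):
    """Label the edges attached to one head under a first-order Markov
    factorization: score(prev, lab, i) is the model score of giving edge i
    the label lab when edge i-1 received prev (prev is None for i == 0).
    Returns the highest-scoring label sequence."""
    # best[lab] = (score of the best sequence ending in lab, that sequence)
    best = {lab: (score(None, lab, 0), [lab]) for lab in label_set}
    for i in range(1, num_edges):
        new_best = {}
        for lab in label_set:
            prev, (prev_score, prev_seq) = max(
                best.items(), key=lambda kv: kv[1][0] + score(kv[0], lab, i))
            new_best[lab] = (prev_score + score(prev, lab, i), prev_seq + [lab])
        best = new_best
    return max(best.values(), key=lambda v: v[0])[1]

# toy scoring function that prefers a subject first and an object second
def toy_score(prev, lab, i):
    return 1.0 if (i, lab) in {(0, "SBJ"), (1, "OBJ")} else 0.0

print(viterbi_label_edges(2, ["SBJ", "OBJ", "ADV"], toy_score))  # ['SBJ', 'OBJ']
```

in the paper the score function is a dot product between a high-dimensional feature representation and a weight vector trained with mira; the factorization over adjacent edges is what allows correlated labels such as subject and object under the same verb to be resolved jointly.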
labeled pseudoprojective dependency parsing with support vector machines we use svm classifiers to predict the next action of a deterministic parser that builds labeled projective dependency graphs in an incremental fashion nonprojective dependencies are captured indirectly by projectivizing the training data for the classifiers and applying an inverse transformation to the output of the parser we present evaluation results and an error analysis focusing on swedish and turkish the conllx shared task consists in parsing texts in multiple languages using a single dependency parser that has the capacity to learn from treebank dataour methodology for performing this task is based on four essential components all experiments have been performed using maltparser version 04 which is made available together with the suite of programs used for pre and postprocessing1the parsing algorithm used for all languages is the deterministic algorithm first proposed for unlabeled dependency parsing by nivre and extended to labeled dependency parsing by nivre et al the algorithm builds a labeled dependency graph in one lefttoright pass over the input using a stack to store partially processed tokens and adding arcs using four elementary actions although the parser only derives projective graphs the fact that graphs are labeled allows nonprojective dependencies to be captured using the pseudoprojective approach of nivre and nilsson another limitation of the parsing algorithm is that it does not assign dependency labels to roots ie to tokens having head0to overcome this problem we have implemented a variant of the algorithm that starts by pushing an artificial root token with id0 onto the stacktokens having head0 can now be attached to the artificial root in a rightarc action which means that they can be assigned any labelsince this variant of the algorithm increases the overall nondeterminism it has only been used for the data sets that include informative root labels historybased parsing models rely on features of the derivation history to predict the next parser actionthe features used in our system are all symbolic and extracted from the following fields of the data representation form lemma cpostag postag feats and deprelfeatures of the type deprel have a special status in that they are extracted during parsing from the partially built dependency graph and may therefore contain errors whereas all the other features have gold standard values during both training and parsing2 based on previous research we defined a base model to be used as a starting point for languagespecific feature selectionthe features of this model are shown in table 1 where rows denote tokens in a parser configuration and where columns correspond to data fieldsthe base model contains twenty features but note that the fields lemma cpos and feats are not available for all languageswe use support vector machines3 to predict the next parser action from a feature vector representing the historymore specifically we use libsvm with a quadratic kernel k 2 and the builtin oneversusall strategy for multiclass classificationsymbolic features are converted to numerical features using the standard technique of binarization and we split values of the feats field into its atomic components4 for some languages we divide the training data into smaller sets based on some feature s which may reduce training times without a significant loss in accuracy to avoid too small training sets we pool together categories that have a frequency below a certain 
threshold t pseudoprojective parsing was proposed by nivre and nilsson as a way of dealing with nonprojective structures in a projective datadriven parserwe projectivize training data by a minimal transformation lifting nonprojective arcs one step at a time and extending the arc label of lifted arcs using the encoding scheme called head by nivre and nilsson which means that a lifted arc is assigned the label rth where r is the original label and h is the label of the original head in the nonprojective dependency graphnonprojective dependencies can be recovered by applying an inverse transformation to the output of the parser using a lefttoright topdown breadthfirst search guided by the extended arc labels rth assigned by the parserthis technique has been used without exception for all languagessince the projective parsing algorithm and graph transformation techniques are the same for all data sets our optimization efforts have been focused on feature selection using a combination of backward and forward selection starting from the base model described in section 22 and parameter optimization for the svm learner using grid search for an optimal combination of the kernel parameters y and r the penalty parameter c and the termination criterion c as well as the splitting feature s and the frequency threshold t feature selection and parameter optimization have to some extent been interleaved but the amount of work done varies between languagesthe main optimization criterion has been labeled attachment score on heldout data using tenfold crossvalidation for all data sets with 100k tokens or less and an 8020 split into training and devtest sets for larger datasetsthe number of features in the optimized models varies from 16 to 30 but the models use all fields available for a given language except that form is not used for turkish the svm parameters fall into the following ranges γ 012020 r 0006 c 0107 c 00110data has been split on the pos of the next input token for czech german and spanish and on the cpos of the next input token for bulgarian slovene and turkish 5 a dry run at the end of the development phase gave a labeled attachment score of 8046 over the twelve required languagestable 2 shows final test results for each language and for the twelve required languages togetherthe total score is only 027 percentage points below the score from the dry run which seems to indicate that models have not been overfitted to the training datathe labeled attachment score varies from 9165 to 6568 but is above average for all languageswe have the best reported score for japanese swedish and turkish and the score for arabic danish dutch portuguese spanish and overall does not differ significantly from the best onethe unlabeled score is less competitive with only turkish having the highest reported score which indirectly indicates that the integration of labels into the parsing process primarily benefits labeled accuracyan overall error analysis is beyond the scope of this paper but we will offer a few general observations before we turn to swedish and turkish focusing on recall and precision of root nodes as a reflection of global syntactic structure and on attachment score as a function of arc lengthif we start by considering languages with a labeled attachment score of 85 or higher they are characterized by high precision and recall for root nodes typically 9590 and by a graceful degradation of attachment score as arcs grow longer typically 959085 for arcs of length 1 2 and 36typical examples are 
bulgarian chinese danish and swedish japanese despite a very high accuracy is different in that attachment score drops from 98 to 85 as we go from length 1 to 2 which may have something to do with the data consisting of transcribed speech with very short utterancesa second observation is that a high proportion of nonprojective structures leads to fragmentation in the parser output reflected in lower precision for rootsthis is noticeable for german and portuguese which still have high overall accuracy thanks to very high attachment scores but much more conspicuous for czech dutch and slovene where root precision drops more drastically to about 69 71 and 41 respectively and root recall is also affected negativelyon the other hand all three languages behave like highaccuracy languages with respect to attachment scorea very similar pattern is found for spanish although this cannot be explained by a high proportion of nonprojective structuresone possible explanation in this case may be the fact that dependency graphs in the spanish data are sparsely labeled which may cause problem for a parser that relies on dependency labels as featuresthe results for arabic are characterized by low root accuracy as well as a rapid degradation of attachment score with arc length by contrast turkish exhibits high root accuracy but consistently low attachment scores it is noteworthy that arabic and turkish being typological outliers show patterns that are different both from each other and from most of the other languagesa more finegrained analysis of the swedish results reveals a high accuracy for function words which is compatible with previous studies thus the labeled fscore is 100 for infinitive markers and subordinating conjunctions and above 95 for determiners in addition subjects have a score above 90in all these cases the dependent has a configurationally defined position with respect to its headarguments of the verb such as objects and predicative complements have a slightly lower accuracy which is due to the fact that they compete in the same structural positions whereas adverbials have even lower scores the latter result must be related both to the relatively finegrained inventory of dependency labels for adverbials and to attachment ambiguities that involve prepositional phrasesthe importance of this kind of ambiguity is reflected also in the drastic difference in accuracy between noun premodifiers and noun postmodifiers finally it is worth noting that coordination which is often problematic in parsing has high accuracythe swedish treebank annotation treats the second conjunct as a dependent of the first conjunct and as the head of the coordinator which seems to facilitate parsing6 the attachment of the second conjunct to the first has a labeled fscore above 80 while the attachment of the coordinator to the second conjunct has a score well above 90in turkish very essential syntactic information is contained in the rich morphological structure where concatenated suffixes carry information that in other languages may be expressed by separate wordsthe turkish treebank therefore divides word forms into smaller units called inflectional groups and the task of the parser is to construct dependencies between igs not between word forms it is then important to remember that an unlabeled attachment score of 758 corresponds to a wordtoword score of 827 which puts turkish on a par with languages like czech dutch and spanishmoreover when we break down the results according to whether the head of a dependency is 
part of a multipleig word or a complete word we observe a highly significant difference in accuracy with only 532 unlabeled attachment score for multipleig heads versus 837 for singleig headsit is hard to say at this stage whether this means that our methods are illsuited for igbased parsing or whether it is mainly a case of sparse data for multipleig wordswhen we break down the results by dependency type we can distinguish three main groupsthe first consists of determiners and particles which have an unlabeled attachment score over 80 and which are found within a distance of 114 igs from their head7 the second group mainly contains subjects objects and different kinds of adjuncts with a score in the range 6080 and a distance of 1852 igs to their headin this group information about case and possessive features of nominals is important which is found in the feats field in the data representationwe believe that one important explanation for our relatively good results for turkish is that we break down the feats information into its atomic components independently of pos and cpos tags and let the classifier decide which one to use in a given situationthe third group contains distant dependencies such as sentence modifiers vocatives and appositions which have a much lower accuracythe evaluation shows that labeled pseudoprojective dependency parsing using a deterministic parsing algorithm and svm classifiers gives competitive parsing accuracy for all languages involved in the 7given that the average ig count of a word is 126 in the treebank this means that they are normally adjacent to the head word shared task although the level of accuracy varies considerably between languagesto analyze in depth the factors determining this variation and to improve our parsing methods accordingly to meet the challenges posed by the linguistic diversity will be an important research goal for years to comewe are grateful for the support from t ubitak and the swedish research councilwe also want to thank atanas chanev for assistance with slovene the organizers of the shared task for all their hard work and the creators of the treebanks for making the data available
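the parser described in this paper builds a labeled graph in one left-to-right pass over the input, using a stack and four elementary actions, with an artificial root token pushed first so that tokens with head 0 can be attached with a labeled right-arc. below is a minimal sketch of that control loop, assuming a predict function that stands in for the trained svm guide; the action names and the shift fallback are illustrative assumptions.

```python
def parse(n_tokens, predict):
    """One deterministic left-to-right pass with the four elementary actions
    (shift, reduce, left-arc, right-arc).  Tokens are numbered 1..n_tokens;
    0 is the artificial root pushed onto the stack first, so tokens with
    head 0 can be attached with a labeled right-arc.  predict stands in for
    the trained SVM guide and must return ("SHIFT", None), ("REDUCE", None),
    ("LA", label) or ("RA", label); this interface is an assumption."""
    stack, buffer = [0], list(range(1, n_tokens + 1))
    heads, labels = {}, {}
    while buffer:
        action, label = predict(stack, buffer, heads, labels)
        s, b = stack[-1], buffer[0]
        if action == "LA" and s != 0 and s not in heads:
            heads[s], labels[s] = b, label        # arc b -> s, then pop s
            stack.pop()
        elif action == "RA":
            heads[b], labels[b] = s, label        # arc s -> b, then push b
            stack.append(buffer.pop(0))
        elif action == "REDUCE" and s in heads:
            stack.pop()                           # s already has a head
        else:
            stack.append(buffer.pop(0))           # SHIFT (also the fallback)
    for t in range(1, n_tokens + 1):              # any leftover token is
        heads.setdefault(t, 0)                    # attached to the root with
        labels.setdefault(t, "ROOT")              # an illustrative label
    return heads, labels

# toy guide that always shifts, producing a flat tree rooted at 0
print(parse(3, lambda stack, buffer, heads, labels: ("SHIFT", None)))
```

in the actual system the guide is a libsvm classifier with a quadratic kernel over history-based features drawn from the form, lemma, cpostag, postag, feats and deprel fields of tokens near the top of the stack and the front of the buffer.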
W06-2933
labeled pseudo-projective dependency parsing with support vector machines. we use svm classifiers to predict the next action of a deterministic parser that builds labeled projective dependency graphs in an incremental fashion. non-projective dependencies are captured indirectly by projectivizing the training data for the classifiers and applying an inverse transformation to the output of the parser. we present evaluation results and an error analysis focusing on swedish and turkish. our pseudo-projective approach transforms non-projective training trees to projective ones but encodes the information necessary for the inverse transformation in the deprel field, so that the inverse transformation can also be carried out on the test trees
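the pseudo-projective transformation lifts non-projective arcs one step at a time until the tree is projective, extending each lifted arc's label with the label of its original head (the head encoding r↑h). a minimal sketch under simplifying assumptions: heads are given as a 1-indexed list with 0 for the root, '^' stands in for the up-arrow, and the shortest offending arc is lifted first, which only approximates the published procedure.

```python
def projectivize(heads, labels):
    """Pseudo-projective lifting with the HEAD encoding.  heads and labels
    are lists of length n + 1 with index 0 unused; heads[d] is the head of
    token d (0 = root) and labels[d] its relation.  Assumes a well-formed tree."""
    def dominates(h, k):                    # does h lie on k's path to the root?
        while k != 0:
            k = heads[k]
            if k == h:
                return True
        return False

    def nonprojective(d):                   # some token between d and its head
        h = heads[d]                        # is not dominated by that head
        lo, hi = sorted((h, d))
        return any(not dominates(h, k) for k in range(lo + 1, hi))

    while True:
        bad = [d for d in range(1, len(heads))
               if heads[d] != 0 and nonprojective(d)]
        if not bad:
            return heads, labels
        d = min(bad, key=lambda x: abs(heads[x] - x))   # shortest offending arc
        h = heads[d]
        if "^" not in labels[d]:            # record the original head's label once
            labels[d] = labels[d] + "^" + labels[h]
        heads[d] = heads[h]                 # lift the arc one step

# toy example: token 4 depends non-projectively on token 1 across tokens 2-3
# (labels are made up); after one lift it attaches to token 2 as "PMOD^NMOD"
print(projectivize([0, 2, 0, 2, 1], ["_", "NMOD", "ROOT", "OBJ", "PMOD"]))
```

at parsing time the inverse transformation searches the output tree top-down, breadth-first, left-to-right for a head matching the encoded label and reattaches the dependent there, recovering the non-projective arc.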
why generative phrase models underperform surface heuristics we investigate why weights from generative models underperform heuristic estimates in phrasebased machine translation we first propose a simple generative phrasebased model and verify that its estimates are inferior to those given by surface statistics the performance gap stems primarily from the addition of a hidden segmentation variable which increases the capacity for overfitting during maximum likelihood training with them in particular while word level models benefit greatly from reestimation phraselevel models do not the crucial difference is that distinct word alignments cannot all be correct while distinct segmentations can alternate segmentations rather than alternate alignments compete resulting in increased determinization of the phrase table decreased generalization and decreased final bleu score we also show that interpolation of the two methods can result in a modest increase in bleu score at the core of a phrasebased statistical machine translation system is a phrase table containing pairs of source and target language phrases each weighted by a conditional translation probabilitykoehn et al showed that translation quality is very sensitive to how this table is extracted from the training dataone particularly surprising result is that a simple heuristic extraction algorithm based on surface statistics of a wordaligned training set outperformed the phrasebased generative model proposed by marcu and wong this result is surprising in light of the reverse situation for wordbased statistical translationspecifically in the task of word alignment heuristic approaches such as the dice coefficient consistently underperform their reestimated counterparts such as the ibm word alignment models this wellknown result is unsurprising reestimation introduces an element of competition into the learning processthe key virtue of competition in word alignment is that to a first approximation only one source word should generate each target wordif a good alignment for a word token is found other plausible alignments are explained away and should be discounted as incorrect for that tokenas we show in this paper this effect does not prevail for phraselevel alignmentsthe central difference is that phrasebased models such as the ones presented in section 2 or marcu and wong contain an element of segmentationthat is they do not merely learn correspondences between phrases but also segmentations of the source and target sentenceshowever while it is reasonable to suppose that if one alignment is right others must be wrong the situation is more complex for segmentationsfor example if one segmentation subsumes another they are not necessarily incompatible both may be equally validwhile in some cases such as idiomatic vs literal translations two segmentations may be in true competition we show that the most common result is for different segmentations to be recruited for different examples overfitting the training data and overly determinizing the phrase translation estimatesin this work we first define a novel generative phrasebased model analogous to ibm model 3while its exact training is intractable we describe a training regime which uses wordlevel alignments to constrain the space of feasible segmentations down to a manageable numberwe demonstrate that the phrase analogue of the dice coefficient is superior to our generative model in the primary contribution of the paper we present a series of experiments designed to elucidate what 
reestimation learns in this contextwe show that estimates are overly determinized because segmentations are used in unintuitive ways for the sake of data likelihoodwe comment on both the beneficial instances of segment competition as well as the harmful ones finally we demonstrate that interpolation of the two estimates can provide a modest increase in bleu score over the heuristic baselinethe generative model defined below is evaluated based on the bleu score it produces in an endtoend machine translation system from english to frenchthe topperforming diagand extraction heuristic serves as the baseline for evaluation1 each approach the generative model and heuristic baseline produces an estimated conditional distribution of english phrases given french phraseswe will refer to the distribution derived from the baseline heuristic as φhthe distribution learned via the generative model denoted φthem is described in detail belowwhile our model for computing φthem is novel it is meant to exemplify a class of models that are not only clear extensions to generative word alignment models but also compatible with the statistical framework assumed during phrasebased decodingthe generative process we modeled produces a phrasealigned english sentence from a french sentence where the former is a translation of the latternote that this generative process is opposite to the translation direction of the larger system because of the standard noisychannel decompositionthe learned parameters from this model will be used to translate sentences from english to frenchthe generative process modeled has four steps2 the corresponding probabilistic model for this generative process is where p factors into a segmentation model σ a translation model φ and a distortion model d the parameters for each component of this model are estimated differently ing function based on absolute sentence position akin to the one used in ibm model 3while similar to the joint model in marcu and wong our model takes a conditional form compatible with the statistical assumptions used by the pharaoh decoderthus after training the parameters of the phrase translation model φthem can be used directly for decodingsignificant approximation and pruning is required to train a generative phrase model and table such as φthem with hidden segmentation and alignment variables using the expectation maximization algorithm computing the likelihood of the data for a set of parameters involves summing over exponentially many possible segmentations for each training sentenceunlike previous attempts to train a similar model we allow information from a wordalignment model to inform our approximationthis approach allowed us to directly estimate translation probabilities even for rare phrase pairs which were estimated heuristically in previous workin each iteration of them we reestimate each phrase translation probability by summing fractional phrase counts from the data given the current model parametersthis training loop necessitates approximation because summing over all possible segmentations and alignments for each sentence is intractable requiring time exponential in the length of the sentencesadditionally the set of possible phrase pairs grows too large to fit in memoryusing word alignments we can address both problems4 in particular we can determine for any aligned segmentation whether it is compatible with the wordlevel alignment for the sentence pairwe define a phrase pair to be compatible with a wordalignment if no word in either phrase is aligned 
with a word outside the other phrase then is compatible with the wordalignment if each of its aligned phrases is a compatible phrase pairthe training process is then constrained such that when evaluating the above sum only compatible aligned segmentations are consideredthat is we allow p 0 only for aligned segmentations such that a provides a onetoone mapping from fi1 to ei1 where all phrase pairs are compatible with the word alignmentthis constraint has two important effectsfirst we force p 0 for all phrase pairs not compatible with the wordlevel alignment for some sentence pairthis restriction successfully reduced the total legal phrase pair types from approximately 250 million to 17 million for 100000 training sentenceshowever some desirable phrases were eliminated because of errors in the word alignmentssecond the time to compute the estep is reducedwhile in principle it is still intractable in practice we can compute most sentence pairs contributions in under a second eachhowever some spurious word alignments can disallow all segmentations for a sentence pair rendering it unusable for trainingseveral factors including errors in the wordlevel alignments sparse word alignments and nonliteral translations because our constraint to rule out approximately 54 of the training setthus the reduced size of the usable training set accounts for some of the degraded performance of oem relative to ohhowever the results in figure 1 of the following section show that oem trained on twice as much data as oh still underperforms the heuristic indicating a larger issue than decreased training set sizeto test the relative performance of oem and oh we evaluated each using an endtoend translation system from english to frenchwe chose this nonstandard translation direction so that the examples in this paper would be more accessible to a primarily englishspeaking audienceall training and test data were drawn from the frenchenglish section of the europarl sentencealigned corpuswe tested on the first 1000 unique sentences of length 5 to 15 in the corpus and trained on sentences of length 1 to 60 starting after the first 10000the system follows the structure proposed in the documentation for the pharaoh decoder and uses many publicly available components the language model was generated from the europarl corpus using the sri language modeling toolkit pharaoh performed decoding using a set of default parameters for weighting the relative influence of the language translation and distortion models a maximum phrase length of three was used for all experimentsto properly compare oem to oh all aspects of the translation pipeline were held constant except for the parameters of the phrase translation tablein particular we did not tune the decoding hyperparameters for the different phrase tables pehaving generated oh heuristically and oem with them we now0compare their performancewhile the model and training regimen for oem differ from the model frommarcu and wong we achieved tion maximization algorithm for training oem was initialized with the heuristic parameters oh so the heuristic curve can be equivalently labeled as iteration 0thus the first iteration of them increases the observed likelihood of the training sentences while simultaneously degrading translation performance on the test setas training proceeds performance on the test set levels off after three iterations of themthe system never achieves the performance of its initialization parametersthe pruning of our training regimen accounts for part of this 
degradation but not all augmenting oem by adding back in all phrase pairs that were dropped during training does not close the performance gap between oem and ohlearning oem degrades translation quality in large part because them learns overly determinized segmentations and translation parameters overfitting the training data and failing to generalizethe primary increase in richness from generative wordlevel models to generative phraselevel models is due to the additional latent segmentation variablealthough we impose a uniform distribution over segmentations it nonetheless plays a crucial role during trainingwe will characterize this phenomenon through aggregate statistics and translation examples shortly but begin by demonstrating the models capacity to overfit the training datalet us first return to the motivation behind introducing and learning phrases in machine translationfor any language pair there are contiguous strings of words whose collocational translation is noncompositional that is they translate together differently than they would in isolationfor instance chat in french generally translates to cat in english but appeler un chat un chat is an idiom which translates to call a spade a spadeintroducing phrases allows us to translate chat un chat atomically to spade a spade and vice versawhile introducing phrases and parameterizing their translation probabilities with a surface heuristic allows for this possibility statistical reestimation would be required to learn that chat should never be translated to spade in isolationhence translating i have a spade with oh could yield an errorbut enforcing competition among segmentations introduces a new problem true translation ambiguity can also be spuriously explained by the segmentationconsider the french fragment carte sur la table which could translate to map on the table or notice on the chartusing these two sentence pairs as training one would hope to capture the ambiguity in the parameter table as assuming we only allow nondegenerate segmentations and disallow nonmonotonic alignments this parameter table yields a marginal likelihood p 025 for both sentence pairs the intuitive result given two independent lexical ambiguhence a higher likelihood can be achieved by allocating some phrases to certain translations while reserving overlapping phrases for others thereby failing to model the real ambiguity that exists across the language pairalso notice that the phrase sur la can take on an arbitrary distribution over any english phrases without affecting the likelihood of either sentence pairnot only does this counterintuitive parameterization give a high data likelihood but it is also a fixed point of the them algorithmthe phenomenon demonstrated above poses a problem for generative phrase models in generalthe ambiguous process of translation can be modeled either by the latent segmentation variable or the phrase translation probabilitiesin some cases optimizing the likelihood of the training corpus adjusts for the former when we would prefer the latterwe next investigate how this problem manifests in φthem and its effect on translation qualitythe parameters of φthem differ from the heuristically extracted parameters φh in that the conditional distributions over english translations for some french words are sharply peaked for φthem compared to flatter distributions generated by φhthis determinism predicted by the previous sections example is not atypical of them training for other tasksto quantify the notion of peaked distributions 
over phrase translations we compute the entropy of the distribution for each french phrase according to the standard definitionthe average entropy weighted by frequency for the most common 10000 phrases in the learned table was 155 comparable to 376 for the heuristic tablethe difference between the tables becomes much more striking when we consider the histogram of entropies for phrases in figure 2in particular the learned table has many more phrases with entropy near zerothe most pronounced entropy differences often appear for common phrasesten of the most common phrases in the french corpus are shown in figure 3as more probability mass is reserved for fewer translations many of the alternative translations under φh are assigned prohibitively small probabilitiesin translating 1000 test sentences for example no phrase translation with φ less than 105 was used by the decodergiven this empirical threshold nearly 60 of entries in φthem are unusable compared with 1 in φhwhile this determinism of φthem may be desirable in some circumstances we found that the ambiguity in φh is often preferable at decoding timeseveral learned distributions have very low entropy30 in particular the pattern of translationambiguous 0 phrases receiving spuriously peaked distributions introduces new traslation entropy errors relative to the baselinewe now investigate both positive and negative effects of the learning processthe issue that motivated training a generative model is sometimes resolved correctly for a word that translates differently alone than in the context of an idiom the translation probabilities can more accurately reflect thisreturning to the previous example the phrase table for chat has been corrected through the learning processthe heuristic process gives the incorrect translation spade with 61 probability while the statistical learning approach gives cat with 95 probabilitywhile such examples of improvement are encouraging the trend of spurious determinism overwhelms this benefit by introducing errors in four related ways each of which will be explored in turnthe first effect follows from our observation in section 32 that many phrase pairs are unusable due to vanishingly small probabilitiessome of the entries that are made unusable by reestimation are helpful at decoding time evidenced by the fact that pruning the set of oems lowscoring learned phrases from the original heuristic table reduces bleu score by 002 for 25k training sentences the second effect is more subtleconsider the sentence in figure 4 which to a first approximation can be translated as a series of cognates as demonstrated by the decoding that follows from the heuristic heuristic parameterization oh6 notice also that the learned translation probabilities from heuristic extraction are nondeterministicon the other hand the translation system makes a significant lexical error on this sim 2 ple sentence when parameterized by oem the use of caracterise in this context is incorrectthis error arises from a sharply peaked distribution over english phrases for caracterisethis example illustrates a recurring problem errors do not necessarily arise because a correct translation is not availablenotice that a preferable translation of degree as degre is available under both parameterizationsdegre is not used however because of the peaked distribution of a competing translation candidatein this way very high probability translations can effectively block the use of more appropriate translations at decoding timewhat is furthermore surprising 
and noteworthy in this example is that the learned neardeterministic translation for caracterise is not a common translation for the wordnot only does the statistical learning process yield lowentropy translation distributions but occasionally the translation with undesirably high conditional probability does not have a strong surface correlation with the source phrasethis example is not unique during different initializations of the them algorithm we noticed such patterns even for common french phrases such as de and nethe third source of errors is closely related common phrases that translate in many ways depending on the context can introduce errors if they have a spuriously peaked distributionfor instance consider the lone apostrophe which is treated as a single token in our data set the shape of the heuristic translation distribution for the phrase is intuitively appealing showing a relatively flat distribution among many possible translationssuch a distribution has very high entropyon the other hand the learned table translates the apostrophe to the with probability very near 1 phe the most common french phrasethe learned table contains a highly peaked distributionsuch common phrases whose translation depends highly on the context are ripe for producing translation errorsthe flatness of the distribution of oh ensures that the single apostrophe will rarely be used during decoding because no one phrase table entry has high enough probability to promote its useon the other hand using the peaked entry oem incurs virtually no cost to the score of a translationthe final kind of errors stems from interactions between the language and translation modelsthe selection among translation choices via a language model a key virtue of the noisy channel framework is hindered by the determinism of the translation modelthis effect appears to be less significant than the previous threewe should note however that adjusting the language and translation model weights during decoding does not close the performance gap between oh and oemin light of the low entropy of oem we could hope to improve translations by retaining entropythere are several strategies we have considered to achieve thisbroadly we have tried two approaches combining oem and oh via heuristic interpolation methods and modifying the training loop to limit determinismthe simplest strategy to increase entropy is to interpolate the heuristic and learned phrase tablesvarying the weight of interpolation showed an improvement over the heuristic of up to 001 for 100k sentencesa more modest improvement of 0003 for 25k training sentences appears in table 1in another experiment we interpolated the output of each iteration of them with its input thereby maintaining some entropy from the initialization parametersbleu score increased to a maximum of 0394 using this technique with 100k training sentences outperforming the heuristic by a slim margin of 0005we might address the determinization in oem without resorting to interpolation by modifying the training procedure to retain entropyby imposing a nonuniform segmentation model that favors shorter phrases over longer ones we hope to prevent the errorcausing effects of them training outlined abovein principle this change will encourage them to explain training sentences with shorter sentencesin practice however this approach has not led to an improvement in bleuanother approach to maintaining entropy during the training process is to smooth the probabilities generated by themin particular we can use the 
following smoothed update equation during the training loop which reserves a portion of probability mass for unseen translationswe would like to thank the anonymous reviewers for their valuable feedback on this paperin the equation above l is the length of the french phrase and k is a tuning parameterthis formulation not only serves to reduce very spiked probabilities in oem but also boosts the probability of short phrases to encourage their usewith k 25 this smoothing approach improves bleu by 007 using 25k training sentences nearly equaling the heuristic reestimating phrase translation probabilities using a generative model holds the promise of improving upon heuristic techniqueshowever the combinatorial properties of a phrasebased generative model have unfortunate side effectsin cases of true ambiguity in the language pair to be translated parameter estimates that explain the ambiguity using segmentation variables can in some cases yield higher data likelihoods by determinizing phrase translation estimateshowever this behavior in turn leads to errors at decoding timewe have also shown that some modest benefit can be obtained from reestimation through the blunt instrument of interpolationa remaining challenge is to design more appropriate statistical models which tie segmentations together unless sufficient evidence of true noncompositionality is present perhaps such models could properly combine the benefits of both current approaches
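the constrained training regime described above only considers aligned segmentations whose phrase pairs are compatible with the word alignment, i.e. no word inside either phrase links to a word outside the other phrase. a minimal sketch of that compatibility check, assuming the alignment is a set of (source index, target index) pairs and phrases are inclusive index spans; the names are illustrative.

```python
def compatible(alignment, f_span, e_span):
    """True if the phrase pair (f_span, e_span) is consistent with the word
    alignment: no word in either phrase links to a word outside the other.
    alignment is a set of (f_index, e_index) pairs; spans are inclusive
    (start, end) index pairs."""
    f_lo, f_hi = f_span
    e_lo, e_hi = e_span
    for f, e in alignment:
        in_f = f_lo <= f <= f_hi
        in_e = e_lo <= e <= e_hi
        if in_f != in_e:          # a link leaves the phrase pair on one side only
            return False
    return True

# the pair (0-1, 0-1) is compatible; extending only the source side is not
links = {(0, 0), (1, 1), (2, 2)}
print(compatible(links, (0, 1), (0, 1)))   # True
print(compatible(links, (0, 2), (0, 1)))   # False
```

during the e-step an aligned segmentation is given non-zero probability only if every phrase pair it contains passes this check, which is what reduces the space of segmentations to a tractable size.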
W06-3105
why generative phrase models underperform surface heuristics. we investigate why weights from generative models underperform heuristic estimates in phrase-based machine translation. we first propose a simple generative phrase-based model and verify that its estimates are inferior to those given by surface statistics. the performance gap stems primarily from the addition of a hidden segmentation variable, which increases the capacity for overfitting during maximum likelihood training with em. in particular, while word-level models benefit greatly from re-estimation, phrase-level models do not: the crucial difference is that distinct word alignments cannot all be correct, while distinct segmentations can. alternate segmentations, rather than alternate alignments, compete, resulting in increased determinization of the phrase table, decreased generalization, and decreased final bleu score. we also show that interpolation of the two methods can result in a modest increase in bleu score. we try a different generative phrase translation model analogous to ibm word-translation model 3 and again find that the standard model outperforms their generative model. we explore estimation using em of phrase-pair probabilities under a conditional translation model based on the original source-channel formulation. we conclude that segmentation variables in the generative translation model lead to overfitting, while attaining higher likelihood of the training data than the heuristic estimator
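one remedy reported above is to interpolate the heuristic table with the re-estimated table. a minimal sketch, assuming each phrase table is a nested dict mapping a source phrase to its conditional distribution over target phrases; the weight lam is a tuning parameter and all names are illustrative.

```python
def interpolate_tables(phi_h, phi_em, lam=0.5):
    """Return a phrase table whose entries are
    lam * phi_h(e|f) + (1 - lam) * phi_em(e|f).
    Each table maps a source phrase f to {target phrase e: probability}."""
    combined = {}
    for f in set(phi_h) | set(phi_em):
        h = phi_h.get(f, {})
        em = phi_em.get(f, {})
        combined[f] = {
            e: lam * h.get(e, 0.0) + (1.0 - lam) * em.get(e, 0.0)
            for e in set(h) | set(em)
        }
    return combined

# toy example: the flat heuristic entry keeps mass on an alternative that the
# re-estimated table has zeroed out
phi_h = {"carte": {"map": 0.5, "notice": 0.5}}
phi_em = {"carte": {"map": 1.0}}
print(interpolate_tables(phi_h, phi_em, lam=0.5))
```

because both input distributions are normalized, the interpolated entry is still a proper conditional distribution, and it retains some of the entropy of the heuristic estimates that the re-estimated table loses.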
discriminative reordering models for statistical machine translation we present discriminative reordering models for phrasebased statistical machine translation the models are trained using the maximum entropy principle we use several types of features based on words based on word classes based on the local context we evaluate the overall performance of the reordering models as well as the contribution of the individual feature types on a wordaligned corpus additionally we show improved translation performance using these reordering models compared to a stateoftheart baseline system in recent evaluations phrasebased statistical machine translation systems have achieved good performancestill the fluency of the machine translation output leaves much to desireone reason is that most phrasebased systems use a very simple reordering modelusually the costs for phrase movements are linear in the distance eg see recently in and in a reordering model has been described that tries to predict the orientation of a phrase ie it answers the question should the next phrase be to the left or to the right of the current phrase this phrase orientation probability is conditioned on the current source and target phrase and relative frequencies are used to estimate the probabilitieswe adopt the idea of predicting the orientation but we propose to use a maximumentropy based modelthe relativefrequency based approach may suffer from the data sparseness problem because most of the phrases occur only once in the training corpusour approach circumvents this problem by using a combination of phraselevel and wordlevel features and by using wordclasses or partofspeech informationmaximum entropy is a suitable framework for combining these different features with a welldefined training criterionin several variants of the orientation model have been triedit turned out that for different tasks different models show the best performancehere we let the maximum entropy training decide which features are important and which features can be neglectedwe will see that additional features do not hurt performance and can be safely added to the modelthe remaining part is structured as follows first we will describe the related work in section 2 and give a brief description of the baseline system in section 3then we will present the discriminative reordering model in section 4afterwards we will evaluate the performance of this new model in section 5this evaluation consists of two parts first we will evaluate the prediction capabilities of the model on a wordaligned corpus and second we will show improved translation quality compared to the baseline systemfinally we will conclude in section 6as already mentioned in section 1 many current phrasebased statistical machine translation systems use a very simple reordering model the costs for phrase movements are linear in the distancethis approach is also used in the publicly available pharaoh decoder the idea of predicting the orientation is adopted from and here we use the maximum entropy principle to combine a variety of different featuresa reordering model in the framework of weighted finite state transducers is described in there the movements are defined at the phrase level but the window for reordering is very limitedthe parameters are estimated using an themstyle methodnone of these methods try to generalize from the words or phrases by using word classes or partofspeech informationthe approach presented here has some resemblance to the bracketing transduction grammars of which 
have been applied to a phrasebased machine translation system in the difference is that here we do not constrain the phrase reorderingnevertheless the invertedmonotone concatenation of phrases in the btg framework is similar to the leftright phrase orientation used herein statistical machine translation we are given a source language sentence fj1 f1 fj fj which is to be translated into a target language sentence ei1 e1 ei eiamong all possible target language sentences we will choose the sentence with the highest probability the posterior probability pr is modeled directly using a loglinear combination of several models the denominator represents a normalization factor that depends only on the source sentence fj1 therefore we can omit it during the search processas a decision rule we obtain this approach is a generalization of the sourcechannel approach it has the advantage that additional models h can be easily integrated into the overall systemthe model scaling factors λm1 are trained with respect to the final translation quality measured by an error criterion we use a stateoftheart phrasebased translation system including the following models an ngram language model a phrase translation model and a wordbased lexicon modelthe latter two models are used for both directions p and padditionally we use a word penalty and a phrase penaltythe reordering model of the baseline system is distancebased ie it assigns costs based on the distance from the end position of a phrase to the start position of the next phrasethis very simple reordering model is widely used for instance in in this section we will describe the proposed discriminative reordering modelto make use of word level information we need the word alignment within the phrase pairsthis can be easily stored during the extraction of the phrase pairs from the bilingual training corpusif there are multiple possible alignments for a phrase pair we use the most frequent onethe notation is introduced using the illustration in figure 1there is an example of a left and a right phrase orientationwe assume that we have already produced the threeword phrase in the lower partnow the model has to predict if the start position of the next phrase j is to the left or to the right of the current phrasethe reordering model is applied only at the phrase boundarieswe assume that the reordering within the phrases is correctin the remaining part of this section we will describe the details of this reordering modelthe classes our model predicts will be defined in section 42then the feature functions will be defined in section 43the training criterion and the training events of the maximum entropy model will be described in section 44ideally this model predicts the start position of the next phrasebut as predicting the exact position is rather difficult we group the possible start positions into classesin the simplest case we use only two classesone class for the positions to the left and one class for the positions to the rightas a refinement we can use four classes instead of two 1 one position to the left 2 more than one positions to the left 3 one position to the right 4 more than one positions to the rightin general we use a parameter d to specify 2 d classes of the types let cjj denote the orientation class for a movement from source position j to source position j as illustrated in figure 1in the case of two orientation classes cjj is defined as r left if j j then the reordering model has the form p a wellfounded framework for directly modeling the 
probability p is maximum entropy in this framework we have a set of n feature functions hn n 1 n each feature function hn is weighted with a factor λnthe resulting model is the functional form is identical to equation 2 but here we will use a large number of binary features whereas in equation 2 usually only a very small number of realvalued features is usedmore precisely the resulting reordering model pλn1 is used as an additional component in the loglinear combination of equation 2the feature functions of the reordering model depend on the last alignment link of a phrasenote that the source position j is not necessarily the end position of the source phrasewe use the source position j which is aligned to the last word of the target phrase in target position ithe illustration in figure 1 contains such an exampleto introduce generalization capabilities some of the features will depend on word classes or partofspeech informationlet f1j denote the word class sequence that corresponds to the source language sentence fj1 and let ei1 denote the target word class sequence that corresponds to the target language sentence ei1then the feature functions are of the form hnwe consider the following binary features here δ denotes the kroneckerfunctionin the experiments we will use d e 1 0 11many other feature functions are imaginable eg combinations of the described feature functions ngram or multiword features joint source and target language feature functionsas training criterion we use the maximum class posterior probabilitythis corresponds to maximizing the likelihood of the maximum entropy modelsince the optimization criterion is convex there is only a single optimum and no convergence problems occurto train the model parameters λn1 we use the generalized iterative scaling algorithm in practice the training procedure tends to result in an overfitted modelto avoid overfitting have suggested a smoothing method where a gaussian prior distribution of the parameters is assumedthis method tried to avoid very large lambda values and prevents features that occur only once for a specific class from getting a value of infinitywe train ibm model 4 with giza in both translation directionsthen the alignments are symmetrized using a refined heuristic as described in this wordaligned bilingual corpus is used to train the reordering model parameters ie the feature weights λn1 each alignment link defines an event for the maximum entropy trainingan exception are the onetomany alignments ie one source word is aligned to multiple target wordsin this case only the topmost alignment link is considered because the other ones cannot occur at a phrase boundarymanytoone and manytomany alignments are handled in a similar waythe experiments were carried out on the basic travel expression corpus task this is a multilingual speech corpus which contains tourismrelated sentences similar to those that are found in phrase bookswe use the arabicenglish the chineseenglish and the japaneseenglish datathe corpus statistics are shown in table 1as the btec is a rather clean corpus the preprocessing consisted mainly of tokenization ie separating punctuation marks from wordsadditionally we replaced contractions such as it is or i am in the english corpus and we removed the case informationfor arabic we removed the diacritics and we split common prefixes al w f b l there was no special preprocessing for the chinese and the japanese training corporato train and evaluate the reordering model we use the word aligned bilingual training 
corpusfor evaluating the classification power of the reordering model we partition the corpus into a training part and a test partin our experiments we use about 10 of the corpus for testing and the remaining part for training the feature weights of the reordering model with the gis algorithm using yasmet the statistics of the training and test alignment links is shown in table 2the number of training events ranges from 119k for japaneseenglish to 144k for arabicenglishthe word classes for the classbased features are trained using the mkcls tool in the experiments we use 50 word classesalternatively one could use partofspeech information for this purposeadditional experiments were carried out on the large data track of the chineseenglish nist taskthe corpus statistics of the bilingual training corpus are shown in table 3the language model was trained on the english part of the bilingual training corpus and additional monolingual english data from the gigaword corpusthe total amount of language model training data was about 600m running wordswe use a fourgram language model with modified kneserney smoothing as implemented in the srilm toolkit for the four english reference translations of the evaluation sets the accumulated statistics are presentedin this section we present the classification results for the three language pairsin table 4 we present the classification results for two orientation classesas baseline we always choose the most frequent orientation classfor arabicenglish the baseline is with 63 already very lowthis means that the word order in arabic is very similar to the word order in englishfor chineseenglish the baseline is with 127 about twice as largethe most differences in word order occur for japaneseenglishthis seems to be reasonable as japanese has usually a different sentence structure subjectobjectverb compared to subjectverbobject in englishfor each language pair we present results for several combination of featuresthe three columns per language pair indicate if the features are based on the words on the word classes or on both we also distinguish if the features depend on the target sentence on the source sentence or on both for arabicenglish using features based only on words of the target sentence the classification error rate can be reduced to 45if the features are based only on the source sentence words a classification error rate of 29 is reachedcombining the features based on source and target sentence words a classification error rate of 28 can be achievedadding the features based on word classes the classification error rate can be further improved to 21for the other language pairs the results are similar except that the absolute values of the classification error rates are higherwe observe the following these are desirable properties of an appropriate reordering modelthe main point is that these are fulfilled not only on the training data but on unseen test datathere seems to be no overfitting problemin table 5 we present the results for four orientation classesthe final error rates are a factor 24 larger than for two orientation classesdespite that we observe the same tendencies as for two orientation classesagain using more features always helps to improve the performancefor the translation experiments on the btec task we report the two accuracy measures bleu and nist as well as the two error rates word error rate and positionindependent word error rate these criteria are computed with respect to 16 referencesin table 6 we show the translation results 
for the btec taskin these experiments the reordering model uses two orientation classes ie it predicts either a left or a right orientationthe features for the maximumentropy based reordering model are based on the source and target language words within a window of onethe wordclass based features are not used for the translation experimentsthe maximumentropy based reordering model achieves small but consistent improvement for all the evaluation criterianote that the baseline system ie using the distancebased reordering was among the best systems in the iwslt 2005 evaluation campaign some translation examples are presented in table 7we observe that the system using the maximumentropy based reordering model produces more fluent translationsadditional translation experiments were carried out on the large data track of the chineseenglish nist taskfor this task we use only the bleu and nist scoresboth scores are computed caseinsensitive with respect to four reference translations using the mtevalv11b tool1for the nist task we use the bleu score as primary criterion which is optimized on the nist 2002 evaluation set using the downhill simplex algorithm note that only the eight or nine model scaling factors of equation 2 are optimized using the downhill simplex algorithmthe feature weights of the reordering model are trained using the gis algorithm as described in section 44we use a stateoftheart baseline system which would have obtained a good rank in the last nist evaluation the translation results for the nist task are presented in table 8we observe consistent improvements of the bleu score on all evaluation setsthe overall improvement due to reordering ranges from 12 to 20 absolutethe contribution of the maximumentropy based reordering model to this improvement is in the range of 25 to 58 eg for the nist 2003 evaluation set about 58 of the improvement using reordering can be attributed to the maximumentropy based reordering modelwe also measured the classification performance for the nist taskthe general tendencies are identical to the btec taskwe have presented a novel discriminative reordering model for statistical machine translationthis model is trained on the word aligned bilingual corpus using the maximum entropy principleseveral types of features have been used we have evaluated the performance of the reordering model on a heldout wordaligned corpuswe have shown that the model is able to predict the orientation very well eg for arabicenglish the classification error rate is only 21we presented improved translation results for three language pairs on the btec task and for the large data track of the chineseenglish nist taskin none of the cases additional features have hurt the classification performance on the heldout test corpusthis is a strong evidence that the maximum entropy framework is suitable for this taskanother advantage of our approach is the generalization capability via the use of word classes or partofspeech informationfurthermore additional features can be easily integrated into the maximum entropy frameworkso far the word classes were not used for the translation experimentsas the word classes help for the classification task we might expect further improvements of the translation resultsusing partofspeech information instead to the automatically computed word classes might also be beneficialmore finetuning of the reordering model toward translation quality might also result in improvementsas already mentioned in section 43 a richer feature set could be helpfulthis 
material is partly based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under contract No. HR0011-06-C-0023 and was partly funded by the European Union under the integrated project TC-STAR.
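To make the training-event extraction described above concrete, here is a minimal Python sketch of mapping alignment links at phrase boundaries onto orientation classes (two classes for d=1, 2·d classes in general). The data layout, function names, and the single-link-per-target simplification of the "topmost link" rule are illustrative assumptions, not the authors' implementation.

```python
def orientation_class(j_prev, j_next, d=1):
    """Map a jump from source position j_prev to source position j_next onto one
    of 2*d orientation classes; d=1 gives the two-class case (left vs. right)."""
    delta = j_next - j_prev
    if delta < 0:
        return "left_%d" % min(-delta, d)
    if delta > 0:
        return "right_%d" % min(delta, d)
    return None  # no movement, no event


def extract_events(alignment_links, d=1):
    """alignment_links: list of (i, j) pairs, target position i aligned to source
    position j. Keep a single link per target word (a simplification of the
    'topmost link' handling of one-to-many alignments described above) and emit
    one (jump, class) training event per pair of consecutive links."""
    link_per_target = {}
    for i, j in sorted(alignment_links):
        link_per_target.setdefault(i, j)
    ordered = [link_per_target[i] for i in sorted(link_per_target)]
    events = []
    for j_prev, j_next in zip(ordered, ordered[1:]):
        cls = orientation_class(j_prev, j_next, d)
        if cls is not None:
            events.append(((j_prev, j_next), cls))
    return events


# toy alignment: target words 0..3 aligned to source positions 0, 2, 1, 3
print(extract_events([(0, 0), (1, 2), (2, 1), (3, 3)], d=1))
# [((0, 2), 'right_1'), ((2, 1), 'left_1'), ((1, 3), 'right_1')]
```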
W06-3108
Discriminative reordering models for statistical machine translation. We present discriminative reordering models for phrase-based statistical machine translation. The models are trained using the maximum entropy principle. We use several types of features: based on words, based on word classes, and based on the local context. We evaluate the overall performance of the reordering models as well as the contribution of the individual feature types on a word-aligned corpus. Additionally, we show improved translation performance using these reordering models compared to a state-of-the-art baseline system. Despite their high perplexities, reordered LMs yield some improvements when integrated into a PSMT baseline that already includes a discriminative phrase orientation model. To lexicalize reordering, a discriminative reordering model is used. We use clustered word classes in a discriminative reordering model and show that they reduce the classification error rate.
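As a companion to the reordering model summarized above, the following is a small illustrative sketch of the maximum-entropy orientation model's functional form: binary indicator features over source and target words in a window around the current alignment point, combined log-linearly and normalized over the orientation classes. The feature templates, names, and toy weight are assumptions for illustration only.

```python
import math
from collections import defaultdict

CLASSES = ("left", "right")

def features(src, tgt, j, i, c, window=(-1, 0, 1)):
    """Binary indicator features: (template, orientation class) pairs built from
    source words around position j and target words around position i."""
    feats = []
    for d in window:
        if 0 <= j + d < len(src):
            feats.append(("src[%d]=%s" % (d, src[j + d]), c))
        if 0 <= i + d < len(tgt):
            feats.append(("tgt[%d]=%s" % (d, tgt[i + d]), c))
    return feats

def predict(weights, src, tgt, j, i):
    """Class posteriors p(c | f, e, i, j) of the log-linear model."""
    scores = {c: sum(weights[f] for f in features(src, tgt, j, i, c))
              for c in CLASSES}
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

# toy usage with a single hand-set weight (an untrained model is uniform)
weights = defaultdict(float)
weights[("src[0]=von", "left")] = 1.5
print(predict(weights, ["bericht", "von"], ["report", "by"], j=1, i=1))
```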
manual and automatic evaluation of machine translation between european languages
figure 7: evaluation of translation to english on in-domain test data (adequacy, fluency, and bleu per participating system; french-english, spanish-english, german-english)
figure 8: evaluation of translation from english on in-domain test data (english-french, english-spanish, english-german)
figure 9: evaluation of translation to english on out-of-domain test data (french-english, spanish-english, german-english)
figure 10: evaluation of translation from english on out-of-domain test data (english-french, english-spanish, english-german)
figure 11: correlation between manual and automatic scores for french-english
figure 12: correlation between manual and automatic scores for spanish-english
figure 13: correlation between manual and automatic scores for german-english
figure 14: correlation between manual and automatic scores for english-french
figure 15: correlation between manual and automatic scores for english-spanish
figure 16: correlation between manual and automatic scores for english-german (in-domain and out-of-domain adequacy and fluency)
The manual evaluation was done by the participants. This revealed interesting clues about the properties of automatic and manual scoring. We evaluated translation from English in addition to into English; English was again paired with German, French, and Spanish. We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation. The evaluation framework for the shared task is similar to the one used in last year's shared task. Training and testing is based on the Europarl corpus; Figure 1 provides some statistics about this corpus. To lower the barrier of entrance to the competition, we provided a complete baseline MT system along with data resources. The performance of the baseline system is similar to the best submissions in last year's shared task. We are currently working on a complete open-source implementation of a training and decoding system, which should become available over the summer. The training data is drawn from the Europarl corpus, from which also the in-domain test set is taken. There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary. The in-domain test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data. Participants were also provided with two sets of 2000 sentences of parallel text to be used for system development and tuning. In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website, which are published in all four languages of the shared task. We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language; for statistics on this test set, refer to Figure 1. The out-of-domain test set differs from the Europarl data in various ways. The text type is editorials instead of speech transcripts, and the domain is general politics, economics, and science; however, it is also mostly political content and opinion. We received submissions from 14 groups from 11 institutions, as listed in Figure 2. Most of these groups follow a phrase-based statistical approach to machine translation. Microsoft's approach uses dependency trees; others use hierarchical phrase models. Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus. About half of the participants of last year's shared task participated again; the other half was replaced by other participants, so we ended up with roughly the same number. Compared to last year's shared task, the participants represent more long-term research efforts. This may be the sign of a maturing research environment, while building a machine translation system is a serious undertaking. In future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible. For more on the participating systems, please refer to the respective system
description in the proceedings of the workshopfor the automatic evaluation we used bleu since it is the most established metric in the fieldthe bleu metric as all currently proposed automatic metrics is occasionally suspected to be biased towards statistical systems especially the phrasebased systems currently in useit rewards matches of ngram sequences but measures only at most indirectly overall grammatical coherencethe bleu score has been shown to correlate well with human judgement when statistical machine translation systems are compared however a recent study pointed out that this correlation may not always be strongthey demonstrated this with the comparison of statistical systems against manually postedited mt output and a rulebased commercial systemthe development of automatic scoring methods is an open field of researchit was our hope that this competition which included the manual and automatic evaluation of statistical systems and one rulebased commercial system will give further insight into the relation between automatic and manual evaluationat the very least we are creating a data resource that may the basis of future research in evaluation metricswe computed bleu scores for each submission with a single reference translationfor each sentence we counted how many ngrams in the system output also occurred in the reference translationby taking the ratio of matching ngrams to the total number of ngrams in the system output we obtain the precision pn for each ngram order n these values for ngram precision are combined into a bleu score the formula for the bleu metric also includes a brevity penalty for too short output which is based on the total number of words in the system output c and in the reference r bleu is sensitive to tokenizationbecause of this we retokenized and lowercased submitted output with our own tokenizer which was also used to prepare the training and test dataconfidence interval since bleu scores are not computed on the sentence level traditional methods to compute statistical significance and confidence intervals do not applyhence we use the bootstrap resampling method described by koehn following this method we repeatedly say 1000 times sample sets of sentences from the output of each system measure their bleu score and use these 1000 bleu scores as basis for estimating a confidence intervalwhen dropping the top and bottom 25 the remaining bleu scores define the range of the confidence intervalpairwise comparison we can use the same method to assess the statistical significance of one system outperforming anotherif two systems scores are close this may simply be a random effect in the test datato check for this we do pairwise bootstrap resampling again we repeatedly sample sets of sentences this time from both systems and compare their bleu scores on these setsif one system is better in 95 of the sample sets we conclude that its higher bleu score is statistically significantly betterthe bootstrap method has been critized by riezler and maxwell and collins et al as being too optimistic in deciding for statistical significant difference between systemswe are therefore applying a different method which has been used at the 2005 darpanist evaluationwe divide up each test set into blocks of 20 sentences check for each block if one system has a higher bleu score than the other and then use the sign testthe sign test checks how likely a sample of better and worse bleu scores would have been generated by two systems of equal performancelet say if we find one system 
doing better on 20 of the blocks and worse on 80 of the blocks is it significantly worsewe check how likely only up to k 20 better scores out of n 100 would have been generated by two equal systems using the binomial distribution if p 095 then we have a statistically significant difference between the systemswhile automatic measures are an invaluable tool for the daytoday development of machine translation systems they are only a imperfect substitute for human assessment of translation quality or as the acronym bleu puts it a bilingual evaluation understudymany human evaluation metrics have been proposedalso the argument has been made that machine translation performance should be evaluated via taskbased evaluation metrics ie how much it assists performing a useful task such as supporting human translators or aiding the analysis of textsthe main disadvantage of manual evaluation is that it is timeconsuming and thus too expensive to do frequentlyin this shared task we were also confronted with this problem and since we had no funding for paying human judgements we asked participants in the evaluation to share the burdenparticipants and other volunteers contributed about 180 hours of labor in the manual evaluationwe asked participants to each judge 200300 sentences in terms of fluency and adequacy the most commonly used manual evaluation metricswe settled on contrastive evaluations of 5 system outputs for a single test sentencesee figure 3 for a screenshot of the evaluation toolpresenting the output of several system allows the human judge to make more informed judgements contrasting the quality of the different systemsthe judgements tend to be done more in form of a ranking of the different systemswe assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each otherwhile we had up to 11 submissions for a translation direction we did decide against presenting all 11 system outputs to the human judgeour initial experimentation with the evaluation tool showed that this is often too overwhelmingmaking the ten judgements takes on average 2 minutestypically judges initially spent about 3 minutes per sentence but then accelerate with experiencejudges where excluded from assessing the quality of mt systems that were submitted by their institutionsentences and systems were randomly selected and randomly shuffled for presentationwe collected around 300400 judgements per judgement type per system per language pairthis is less than the 694 judgements 2004 darpanist evaluation or the 532 judgements in the 2005 darpanist evaluationthis decreases the statistical significance of our results compared to those studiesthe number of judgements is additionally fragmented by our breakup of sentences into indomain and outofdomainthe human judges were presented with the following definition of adequacy and fluency but no additional instructionsjudges varied in the average score they handed outthe average fluency judgement per judge ranged from 233 to 367 the average adequacy judgement ranged from 256 to 413since different judges judged different systems we normalized the scoresthe normalized judgement per judge is the raw judgement plus in words the judgements are normalized so that the average normalized judgement per judge is 3another way to view the judgements is that they are less quality judgements of machine translation systems per se but rankings of machine translation systemsin fact it is very difficult to maintain consistent 
standards on what an adequacy judgement of 3 means even for a specific language pairthe way judgements are collected human judges tend to use the scores to rank systems against each otherif one system is perfect another has slight flaws and the third more flaws a judge is inclined to hand out judgements of 5 4 and 3on the other hand when all systems produce muddled output but one is better and one is worse but not completely wrong a judge is inclined to hand out judgements of 4 3 and 2the judgement of 4 in the first case will go to a vastly better system output than in the second casewe therefore also normalized judgements on a persentence basisthe normalized judgement per sentence is the raw judgement plus systems that generally do better than others will receive a positive average normalizedjudgement per sentencesystems that generally do worse than others will receive a negative oneone may argue with these efforts on normalization and ultimately their value should be assessed by assessing their impact on interannotator agreementgiven the limited number of judgements we received we did not try to evaluate thisconfidence interval to estimate confidence intervals for the average mean scores for the systems we use standard significance testinggiven a set of n sentences we can compute the sample mean x and sample variance s2 of the individual sentence judgements xi the extend of the confidence interval xd xdf can be computed by d 196 n pairwise comparison as for the automatic evaluation metric we want to be able to rank different systems against each other for which we need assessments of statistical significance on the differences between a pair of systemsunfortunately we have much less data to work with than with the automatic scoresthe way we cannot distinction between system performanceautomatic scores are computed on a larger tested than manual scores collected manual judgements we do not necessarily have the same sentence judged for both systems still for about good number of sentences we do have this direct comparison which allows us to apply the sign test as described in section 22the results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paperthe scores and confidence intervals are detailed first in the figures 710 in table form and then in graphical form in figures 1116in the graphs system scores are indicated by a point the confidence intervals by shaded areas around the pointin all figures we present the persentence normalized judgementsthe normalization on a perjudge basis gave very similar ranking only slightly less consistent with the ranking from the pairwise comparisonsthe confidence intervals are computed by bootstrap resampling for bleu and by standard significance testing for the manual scores as described earlier in the paperpairwise comparison is done using the sign testoften two systems can not be distinguished with a confidence of over 95 so there are ranked the samethis actually happens quite frequently so that the rankings are broad estimatesfor instance if 10 systems participate and one system does better than 3 others worse then 2 and is not significant different from the remaining 4 its rank is in the interval 37at first glance we quickly recognize that many systems are scored very similar both in terms of manual judgement and bleuthere may be occasionally a system clearly at the top or at the bottom but most systems are so close that it is hard to distinguish themin figure 4 we 
displayed the number of system comparisons for which we concluded statistical significancefor the automatic scoring method bleu we can distinguish three quarters of the systemswhile the bootstrap method is slightly more sensitive it is very much in line with the sign test on text blocksfor the manual scoring we can distinguish only half of the systems both in terms of fluency and adequacymore judgements would have enabled us to make better distinctions but it is not clear what the upper limit iswe can check what the consequences of less manual annotation of results would have been with half the number of manual judgements we can distinguish about 40 of the systems 10 lessthe test set included 2000 sentences from the europarl corpus but also 1064 sentences outofdomain test datasince the inclusion of outofdomain test data was a very late decision the participants were not informed of thisso this was a surprise element due to practical reasons not maliceall systems did considerably worse on outofdomain training datathis is demonstrated by average scores over all systems in terms of bleu fluency and adequacy as displayed in figure 5the manual scores are averages over the raw unnormalized scoresit is well know that language pairs such as englishgerman pose more challenges to machine translation systems than language pairs such as frenchenglishdifferent sentence structure and rich target language morphology are two reasons for thisagain we can compute average scores for all systems for the different language pairs the differences in difficulty are better reflected in the bleu scores than in the raw unnormalized manual judgementsthe easiest language pair according to bleu received worse manual scores than the hardest this is because different judges focused on different language pairshence the different averages of manual scores for the different language pairs reflect the behaviour of the judges not the quality of the systems on different language pairsgiven the closeness of most systems and the wide overlapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as bleuwe confirm the finding by callisonburch et al that the rulebased system of systran is not adequately appreciated by bleuindomain systran scores on this metric are lower than all statistical systems even the ones that have much worse human scoressurprisingly this effect is much less obvious for outofdomain test datafor instance for outofdomain englishfrench systran has the best bleu and manual scoresour suspicion is that bleu is very sensitive to jargon to selecting exactly the right words and not synonyms that human judges may appreciate as equally goodthis is can not be the only explanation since the discrepancy still holds for instance for outofdomain frenchenglish where systran receives among the best adequacy and fluency scores but a worse bleu score than all but one statistical systemthis data set of manual judgements should provide a fruitful resource for research on better automatic scoring methodsso who won the competitionthe best answer to this is many research labs have very competitive systems whose performance is hard to tell apartthis is not completely surprising since all systems use very similar technologyfor some language pairs system performance is more divergent than for others at least as measured by bleuthe statistical systems seem to still lag behind the commercial rulebased competition when translating into morphological 
rich languages as demonstrated by the results for englishgerman and englishfrenchthe predominate focus of building systems that translate into english has ignored so far the difficult issues of generating rich morphology which may not be determined solely by local contextthis is the first time that we organized a largescale manual evaluationwhile we used the standard metrics of the community the we way presented translations and prompted for assessment differed from other evaluation campaignsfor instance in the recent iwslt evaluation first fluency annotations were solicited and then adequacy annotationsalmost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements but nevertheless most did not explicitly move towards a rankingbased evaluationalmost all annotators expressed their preference to move to a rankingbased evaluation in the futurea few pointed out that adequacy should be broken up into two criteria are all source words covered does the translation have the same meaning including connotationsannotators suggested that long sentences are almost impossible to judgesince all long sentence translation are somewhat muddled even a contrastive evaluation between systems was difficulta few annotators suggested to break up long sentences into clauses and evaluate these separatelynot every annotator was fluent in both the source and the target languagewhile it is essential to be fluent in the target language it is not strictly necessary to know the source language if a reference translation was givenhowever ince we extracted the test corpus automatically from web sources the reference translation was not always accurate due to sentence alignment errors or because translators did not adhere to a strict sentencebysentence translation lack of correct reference translations was pointed out as a shortcoming of our evaluationone annotator suggested that this was the case for as much as 10 of our test sentencesannotators argued for the importance of having correct and even multiple referencesit was also proposed to allow annotators to skip sentences that they are unable to judgewe carried out an extensive manual and automatic evaluation of machine translation performance on european language pairswhile many systems had similar performance the results offer interesting insights especially about the relative performance of statistical and rulebased systemsdue to many similarly performing systems we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metricsthe bias of automatic methods in favor of statistical systems seems to be less pronounced on outofdomain test datathe manual evaluation of scoring translation on a graded scale from 15 seems to be very hard to performreplacing this with an ranked evaluation seems to be more suitablehuman judges also pointed out difficulties with the evaluation of long sentencesthis work was supported in part under the gale program of the defense advanced research projects agency contract nohr001106c0022
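The block-level sign test described above can be sketched in a few lines of Python. This is an illustrative re-implementation under the stated null hypothesis (two equally good systems, so each block is won with probability 0.5), not the organizers' evaluation code; the bootstrap-resampling alternative mentioned in the text would instead repeatedly resample sentence sets and recompute BLEU.

```python
from math import comb

def sign_test(wins_a, wins_b, alpha=0.05):
    """wins_a / wins_b: number of 20-sentence blocks on which system A resp.
    system B has the higher BLEU score (ties dropped). Returns the two-sided
    p-value and whether the difference is significant at level alpha."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    # probability of a split at least as lopsided as k vs. n-k under Binomial(n, 0.5)
    p = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    p = min(1.0, 2 * p)   # two-sided
    return p, p < alpha

# e.g. one system better on 20 blocks and worse on 80:
print(sign_test(20, 80))   # tiny p-value -> the systems differ significantly
```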
W06-3114
Manual and automatic evaluation of machine translation between European languages. We evaluated machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. The results of the workshop further suggested that BLEU systematically underestimated the quality of rule-based machine translation systems. We report and analyze several cases of strong disagreement between system rankings provided by human assessors and those produced by the BLEU metric.
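The judgement normalizations and the confidence interval used in the paper above can be sketched as follows. The exact normalization formulas were elided in the text, so the recentring below (each judge averaging 3; each sentence centred on its own mean) is an assumption that matches the stated intent rather than the organizers' scripts.

```python
from statistics import mean, stdev
from math import sqrt

def normalize_per_judge(judgements):
    """judgements: list of (judge, sentence_id, system, raw_score).
    Shift each judge's scores so that the judge's average becomes 3."""
    per_judge = {}
    for judge, _, _, score in judgements:
        per_judge.setdefault(judge, []).append(score)
    offset = {j: 3.0 - mean(scores) for j, scores in per_judge.items()}
    return [(j, s, sy, raw + offset[j]) for j, s, sy, raw in judgements]

def normalize_per_sentence(judgements):
    """Centre each sentence's scores on that sentence's mean, so systems that
    generally do better than others get a positive normalized judgement."""
    per_sent = {}
    for _, sent, _, score in judgements:
        per_sent.setdefault(sent, []).append(score)
    centre = {s: mean(scores) for s, scores in per_sent.items()}
    return [(j, s, sy, raw - centre[s]) for j, s, sy, raw in judgements]

def confidence_interval(scores):
    """95% interval for the mean score: x_bar +/- 1.96 * s / sqrt(n)."""
    x_bar, s, n = mean(scores), stdev(scores), len(scores)
    d = 1.96 * s / sqrt(n)
    return x_bar - d, x_bar + d

print(confidence_interval([3.2, 2.8, 3.5, 3.0, 2.9]))
```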
syntax augmented machine translation via chart parsing we present translation results on the shared task exploiting parallel texts for statistical machine translation generated by a chart parsing decoder operating on phrase tables augmented and generalized with target language syntactic categories we use a target language parser to generate parse trees for each sentence on the target side of the bilingual training corpus matching them with phrase table lattices built for the corresponding source sentence considering phrases that correspond to syntactic categories in the parse trees we develop techniques to augment and generalize the phrase table into a synchronous bilingual grammar we present results on the frenchtoenglish task for this workshop representing significant improvements over the workshops baseline system our translation system is available opensource under the gnu general recent work in machine translation has evolved from the traditional word and phrase based models to include hierarchical phrase models and bilingual synchronous grammars these advances are motivated by the desire to integrate richer knowledge sources within the translation process with the explicit goal of producing more fluent translations in the target languagethe hierarchical translation operations introduced in these methods call for extensions to the traditional beam decoder in this work we introduce techniques to generate syntactically motivated generalized phrases and discuss issues in chart parser based decoding in the statistical machine translation environment generates synchronous contextfree grammar rules from an existing phrase translation tablethese rules can be viewed as phrase pairs with mixed lexical and nonterminal entries where nonterminal entries represent placeholders for inserting additional phrases pairs at decoding timewhile uses only two nonterminal symbols in his grammar we introduce multiple syntactic categories taking advantage of a target language parser for this informationwhile represent syntactical information in the decoding process through a series of transformation operations we operate directly at the phrase levelin addition to the benefits that come from a more structured hierarchical rule set we believe that these restrictions serve as a syntax driven language model that can guide the decoding process as ngram context based language models do in traditional decodingin the following sections we describe our phrase annotation and generalization process followed by the design and pruning decisions in our chart parserwe give results on the frenchenglish europarl data and conclude with prospects for future workwe start with phrase translations on the parallel training data using the techniques and implementation described in this phrase table provides the purely lexical entries in the final hierarchical rule set that will be used in decodingwe then use charniaks parser to generate the most likely parse tree for each english target sentence in the training corpusnext we determine all phrase pairs in the phrase table whose source and target side occur in each respective source and target sentence pair defining the scope of the initial rules in our syncfgannotation if the target side of any of these initial rules correspond to a syntactic category c of the target side parse tree we label the phrase pair with that syntactic categorythis label corresponds to the lefthand side of our synchronous grammarphrase pairs that do not correspond to a span in the parse tree are given a 
default category x and can still play a role in the decoding processin work done after submission to the 2006 data track we assign such phrases an extended category of the form c1 c2 c1c2 or c2c1 indicating that the phrase pairs target side spans two adjacent syntactic categories a partial syntactic category c1 missing a c2 to the right or a partial c1 missing a c2 to the left respectivelygeneralization in order to mitigate the effects of sparse data when working with phrase and ngram models we would like to generate generalized phrases which include nonterminal symbols that can be filled with other phrasestherefore after annotating the initial rules from the current training sentence pair we adhere to to recursively generalize each existing rule however we abstract on a persentence basisthe grammar extracted from this evaluations training data contains 75 nonterminals in our standard system and 4000 nonterminals in the extendedcategory systemfigure 1 illustrates the annotation and generalization processwe employ a loglinear model to assign costs to the syncfggiven a source sentence f the preferred translation output is determined by computing the lowestcost derivation yielding f as its source side where the cost of a derivation r1 o o right now with respective feature vectors v1 vn e rm is given by here λ1 λm are the parameters of the loglinear model which we optimize on a heldout portion of the training set using minimumerrorrate training we use the following features for our rulesour syncfg rules are equivalent to a probabilistic contextfree grammar and decoding is therefore an application of chart parsinginstead of the common method of converting the cfg grammar into chomsky normal form and applying a cky algorithm to produce the most likely parse for a given source sentence we avoided the explosion of the rule set caused by the introduction of new nonterminals in the conversion process and implemented a variant of the cky algorithm as described in each cell of the parsing process in contains a set of hypergraph nodes a hypergraph node is an equivalence class of complete hypotheses with identical production results complete hypotheses point directly to nodes in their backwards star and the cost of the complete hypothesis is calculated with respect to each back pointer nodes best costthis structure affords efficient parsing with minimal pruning but sacrifices effective management of unique language model states contributing to significant search errors during parsingat initial submission time we simply rescored a kbest list extracted after first best parsing using the lazy retrieval process in postsubmission after our workshop submission we modified the kbest list extraction process to integrate an ngram language model during kbest extractioninstead of expanding each derivation in a breadthfirst fashion we expand only a single back pointer and score this new derivation with its translation model scores and a language model cost estimate consisting of an accurate component based on the words translated so far and an estimate based on each remaining back pointers top scoring hypothesisto improve the diversity of the final kbest list we keep track of partially expanded hypotheses that have generated identical target words and refer to the same hypergraph nodesany arising twin hypothesis is immediately removed from the kbest extraction beam during the expansion processwe present results that compare our system against the baseline pharaoh implementation and mer training scripts provided for 
this workshopour results represent work done before the submission due date as well as after with the following generalized phrase systems extendedcategory scheme but with rules only containing up to 2 nonterminal abstraction siteswe also explored the impact of longer initial phrases by training another phrase table with phrases up to length 12our results are presented in table 1while our submission time system shows no improvement over the baseline we clearly see the impact of integrating the language model into the kbest list extraction processour final system shows at statistically significant improvement over the baseline we also see a trend towards improving translation quality as we employ richer extraction techniquesthe relatively poor performance of lex with lm in kbest compared to the baseline shows that we are still making search errors during parsing despite tighter integration of the language modelwe also ran an experiment with cmus phrasebased decoder using the length7 phrase tablewhile its developmentset score was only 3101 the decoder achieved 3142 on the test set placing it at the same level as our extendedcategory system for that phrase tablein this work we applied syntax based resources to annotate and generalize phrase translation tables extracted via existing phrase extraction techniquesour work reaffirms the feasibility of parsing approaches to machine translation in a large data setting and illustrates the impact of adding syntactic categories to drive and constrain the structured search spacewhile no improvements were available at submission time our subsequent performance highlights the importance of tight integration of ngram language modeling within the syntax driven parsing environmentour translation system is available opensource under the gnu general public license at wwwcscmueduzollmannsamt
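The constituent-labelling step described above (assigning a phrase pair's target-side span a syntactic category, an extended category of the form C1+C2, C1/C2, or C2\C1, or the default category X) might be sketched as follows. The data structures and the order in which the extended categories are tried are assumptions for illustration, not the released SAMT code.

```python
def label_span(span, constituents):
    """span: (start, end) target-word indices, end exclusive.
    constituents: dict mapping (start, end) -> category from the target parse tree."""
    start, end = span
    if (start, end) in constituents:
        return constituents[(start, end)]            # exact constituent match
    # C1+C2: the span is exactly two adjacent constituents
    for mid in range(start + 1, end):
        if (start, mid) in constituents and (mid, end) in constituents:
            return constituents[(start, mid)] + "+" + constituents[(mid, end)]
    # C1/C2: a partial C1 missing a C2 to the right
    for (s, e), c1 in constituents.items():
        if s == start and e > end and (end, e) in constituents:
            return c1 + "/" + constituents[(end, e)]
    # C2\C1: a partial C1 missing a C2 to the left
    for (s, e), c1 in constituents.items():
        if e == end and s < start and (s, start) in constituents:
            return constituents[(s, start)] + "\\" + c1
    return "X"                                       # fallback category

# toy parse of "the gunman was killed": NP=(0,2), VP=(2,4), S=(0,4)
consts = {(0, 2): "NP", (2, 4): "VP", (0, 4): "S", (0, 1): "DT", (1, 2): "NN"}
print(label_span((0, 2), consts))   # NP
print(label_span((1, 4), consts))   # NN+VP
print(label_span((0, 3), consts))   # X (no constituent matches or completes the span)
```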
W06-3119
Syntax augmented machine translation via chart parsing. We present translation results on the shared task "Exploiting Parallel Texts for Statistical Machine Translation", generated by a chart parsing decoder operating on phrase tables augmented and generalized with target language syntactic categories. We use a target language parser to generate parse trees for each sentence on the target side of the bilingual training corpus, matching them with phrase table lattices built for the corresponding source sentence. Considering phrases that correspond to syntactic categories in the parse trees, we develop techniques to augment and generalize the phrase table into a synchronous bilingual grammar. We present results on the French-to-English task for this workshop, representing significant improvements over the workshop's baseline system. Our translation system is available open-source under the GNU General Public License. In our work, syntax is successfully integrated into hierarchical SMT. We start with a complete set of phrases as extracted by traditional PBMT heuristics, and then annotate the target side of each phrasal entry with the label of the constituent node in the target-side parse tree that subsumes the span. We use broken syntactic fragments to augment their grammars to increase the rule coverage; while we learn optimal tree fragments transformed from the original ones via a generative framework, they enumerate the fragments available from the original trees without a learning process.
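For illustration, a minimal sketch of the log-linear derivation cost used by such a chart decoder, as described in the paper above: each applied rule carries a feature vector, and the derivation cost is the weighted sum over all rules. The feature names and weights below are arbitrary toy values, not the system's tuned parameters.

```python
WEIGHTS = {
    "neglog p(src|tgt)": 1.0,
    "neglog p(tgt|src)": 1.0,
    "neglog lex(src|tgt)": 0.5,
    "neglog lex(tgt|src)": 0.5,
    "rule_count": 0.3,         # rule penalty
    "target_word_count": -0.2  # word penalty (negative weight favours longer output)
}

def rule_cost(features, weights=WEIGHTS):
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

def derivation_cost(rules, weights=WEIGHTS):
    """rules: iterable of per-rule feature dicts; the lowest-cost derivation wins."""
    return sum(rule_cost(r, weights) for r in rules)

# two toy derivations covering the same source span
d1 = [{"neglog p(src|tgt)": 1.2, "rule_count": 1, "target_word_count": 2},
      {"neglog p(src|tgt)": 0.4, "rule_count": 1, "target_word_count": 1}]
d2 = [{"neglog p(src|tgt)": 2.5, "rule_count": 1, "target_word_count": 3}]
best = min((d1, d2), key=derivation_cost)
```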
a syntaxdirected translator with extended domain of locality sd translation schema a syntaxdirected translator first parses the sourcelanguage input into a parsetree and then recursively converts the tree into a string in the targetlanguage we model this conversion by an extended treetostring transducer that have multilevel trees on the sourceside which gives our system more expressive power and flexibility we also define a direct probability model and use a lineartime dynamic programming algorithm to search for the best derivation the model is then extended to the general loglinear framework in order to rescore with other fealike language models we devise a simpleyeteffective algorithm to nonduplicate translations rescoring initial experimental results on englishtochinese translation are presented a syntaxdirected translator first parses the sourcelanguage input into a parsetree and then recursively converts the tree into a string in the targetlanguagewe model this conversion by an extended treetostring transducer that have multilevel trees on the sourceside which gives our system more expressive power and flexibilitywe also define a direct probability model and use a lineartime dynamic programming algorithm to search for the best derivationthe model is then extended to the general loglinear framework in order to rescore with other features like ngram language modelswe devise a simpleyeteffective algorithm to generate nonduplicate kbest translations for ngram rescoringinitial experimental results on englishtochinese translation are presentedthe concept of syntaxdirected translation was originally proposed in compiling where the source program is parsed into a tree representation that guides the generation of the object codefollowing aho and ullman a translation as a set of string pairs can be specified by a syntaxdirected translation schema which is essentially a synchronous contextfree grammar that generates two languages simultaneouslyan sdts also induces a translator a device that performs the transformation from input string to output stringin this context an sd translator consists of two components a sourcelanguage parser and a recursive converter which is usually modeled as a topdown treetostring transducer the relationship among these concepts is illustrated in fig1this paper adapts the idea of syntaxdirected translator to statistical machine translation we apply stochastic operations at each node of the sourcelanguage parsetree and search for the best derivation that converts the whole tree into some targetlanguage string with the highest probabilityhowever the structural divergence across languages often results in nonisomorphic parsetrees that is beyond the power of scfgsfor example the s structure in english is translated into a vso wordorder in arabic an instance of complex reordering not captured by any scfg to alleviate the nonisomorphism problem grammars with richer expressive power have been proposed whose rules apply to larger fragments of the treefor example shieber and schabes introduce synchronous treeadjoining grammar and eisner uses a synchronous treesubstitution grammar which is a restricted version of stag with no adjunctionsstsgs and stags generate more tree relations than scfgs eg the nonisomorphic tree pair in fig2this extra expressive power lies in the extended domain of locality ie elementary structures beyond the scope of onelevel contextfree productionsbesides being linguistically motivated the need for edl is also supported by empirical findings in mt that 
onelevel rules are often inadequate similarly in the treetransducer terminology graehl and knight define extended tree transducers that have multilevel trees on the sourcesidesince an sd translator separates the sourcelanguage analysis from the recursive transformation the domains of locality in these two modules are orthogonal to each other in this work we use a cfgbased treebank parser but focuses on the extended domain in the recursive converterfollowing galley et al we use a special class of extended treetostring transducer with multilevel lefthandside trees1 since the righthandside string can be viewed as a flat onelevel tree with the same nonterminal root from lhs this framework is closely related to stsgs they both have extended domain of locality on the sourceside while our framework remains as a cfg on the targetsidefor instance an equivalent zrs rule for the complex reordering in fig2 would be while section 3 will define the model formally we first proceed with an example translation from english to chinese 1throughout this paper we will use lhs and sourceside interchangeably in accordance with our experiments we also use english and chinese as the source and target languages opposite to the foreigntoenglish convention of brown et al figure 3 shows how the translator worksthe english sentence is first parsed into the tree in which is then recursively converted into the chinese string in through five stepsfirst at the root node we apply the rule r1 which preserves the toplevel wordorder and translates the english period into its chinese counterpart then the rule r2 grabs the whole subtree for the gunman and translates it as a phrase npc nn qiangshou now we get a partial chinese partial english sentence qiangshou vp o as shown in fig3 our recursion goes on to translate the vp subtreehere we use the rule r3 for the passive construction which captures the fact that the agent and the verb are always inverted between english and chinese in a passive voicefinally we apply rules r and r5 which perform phrasal translations for the two remaining subtrees in respectively and get the completed chinese string in it is helpful to compare this approach with recent efforts in statistical mtphrasebased models are good at learning local translations that are pairs of substrings but often insufficient in modeling the reorderings of phrases themselves especially between language pairs with very different wordorderthis is because the generative capacity of these models lies within the realm of finitestate machinery which is unable to process nested structures and longdistance dependencies in natural languagessyntaxbased models aim to alleviate this problem by exploiting the power of synchronous rewriting systemsboth yamada and knight and chiang use scfgs as the underlying model so their translation schemata are syntaxdirected as in fig1 but their translators are not both systems do parsing and transformation in a joint search essentially over a packed forest of parsetreesto this end their translators are not directed by a syntactic treealthough their method potentially considers more than one single parsetree as in our case the packed representation of the forest restricts the scope of each transfer step to a onelevel contextfree rule while our approach decouples the sourcelanguage analyzer and the recursive converter so that the latter can have an extended domain of localityin addition our translator also enjoys a speedup by this decoupling with each of the two stages having a smaller search spacein 
fact the recursive transfer step can be done by a a lineartime algorithm and the parsing step is also fast with the modern treebank parsers for instance in contrast their decodings are reported to be computationally expensive and chiang uses aggressive pruning to make it tractablethere also exists a compromise between these two approaches which uses a kbest list of parse trees to approximate the full forest besides our model as being linguistically motivated is also more expressive than the formally syntaxbased models of chiang and wu consider again the passive example in rule r3in chiangs scfg there is only one nonterminal x so a corresponding rule would be by x bei x x which can also patternmatch the english sentence i was asleep1 by sunset2 and translate it into chinese as a passive voicethis produces very odd chinese translation because here was a by b in the english sentence is not a passive constructionby contrast our model applies rule r3 only if a is a past participle and b is a noun phrase this example also shows that onelevel scfg rule even if informed by the treebank as in is not enough to capture a common construction like this which is five levels deep there are also some variations of syntaxdirected translators where dependency structures are used in place of constituent trees although they share with this work the basic motivations and similar speedup it is difficult to specify reordering information within dependency elementary structures so they either resort to heuristics or a separate ordering model for linearization 2 our approach in contrast explicitly models the reordering of subtrees within individual transfer rulesin this section we define the formal machinery of our recursive transformation model as a special case of xrs transducers that has only one state and each rule is linear and nondeleting with regarding to variables in the source and target sides we require each variable xi e x occurs exactly once in t and exactly once in s we denote ρ to be the root symbol of tree t when writing these rules we avoid notational overhead by introducing a shorthand form from galley et al that integrates the mapping into the tree which is used throughout section 1following tsg terminology we call these variable nodes such as x2npc substitution nodes since when applying a rule to a tree these nodes will be matched with a subtree with the same root symbolwe also define x to be the rank of the rule ie the number of variables in itfor example rules r1 and r3 in section 1 are both of rank 2if a rule has no variable ie it is of rank zero then it is called a purely lexical rule which performs a phrasal translation as in phrasebased modelsrule r2 for instance can be thought of as a phrase pair informally speaking a derivation in a transducer is a sequence of steps converting a sourcelanguage tree into a targetlanguage string with each step applying one tranduction rulehowever it can also be formalized as a tree following the notion of derivationtree in tag definition 2a derivation d its source and target projections noted and c respectively are recursively defined as follows derivation with the root symbol of its source projection matches the corresponding substitution node in r ie ρ φ then d r is also a derivation where xi h t and c xi h cs note that we use a shorthand notation xi h yit to denote the result of substituting each xi with yi in t where xi ranges over all variables in t for example figure 4 shows two derivations for the sentence pair in example in both cases the source 
projection is the english tree in figure 3 and the target projection is the chinese translationgalley et al presents a lineartime algorithm for automatic extraction of these xrs rules from a parallel corpora with wordalignment and parsetrees on the sourceside which will be used in our experiments in section 6departing from the conventional noisychannel approach of brown et al our basic model is a direct one where e is the english input string and c is the best chinese translation according to the translation model prwe now marginalize over all english parse trees t that yield the sentence e rather than taking the sum we pick the best tree t and factors the search into two separate steps parsing and treetostring translation in this sense our approach can be considered as a viterbi approximation of the computationally expensive joint search using directlysimilarly we now marginalize over all derivations that translates english tree t into some chinese string and apply the viterbi approximation again to search for the best derivation d assuming different rules in a derivation are applied independently we approximate pr as where the probability pr of the rule r is estimated by conditioning on the root symbol pfollowing och and ney we extend the direct model into a general loglinear framework in order to incorporate other features where pr is the language model and eλc is the length penalty term based on c the length of the translationparameters a q and a are the weights of relevant featuresnote that positive a prefers longer translationswe use a standard trigram model for prwe first present a lineartime algorithm for searching the best derivation under the direct model and then extend it to the loglinear case by a new variant of kbest parsingsince our probability model is not based on the noisy channel we do not call our search module a decoder as in most statistical mt workinstead readers who speak english but not chinese can view it as an encoder which corresponds exactly to our direct modelgiven a fixed parsetree t we are to search for the best derivation with the highest probabilitythis can be done by a simple topdown traversal from the root of t at each node q in t try each possible rule r whose englishside pattern t matches the subtree tη rooted at q and recursively visit each descendant node qi in tη that corresponds to a variable in twe then collect the resulting targetlanguage strings and plug them into the chineseside s of rule r getting a translation for the subtree tηwe finally take the best of all translationswith the extended lhs of our transducer there may be many different rules applicable at one tree nodefor example consider the vp subtree in fig3 where both r3 and r6 can applyas a result the number of derivations is exponential in the size of the tree since there are exponentially many decompositions of the tree for a given set of rulesthis problem can be solved by memoization we cache each subtree that has been visited before so that every tree node is visited at most oncethis results in a dynamic programming algorithm that is guaranteed to run in o time where n is the size of the parse tree p is the maximum number of rules applicable to one tree node and q is the maximum size of an applicable rulefor a given ruleset this algorithm runs in time linear to the length of the input sentence since p and q are considered grammar constants and n is proportional to the input lengththe full pseudocode is worked out in algorithm 1a restricted version of this algorithm first appears in 
compiling for optimal code generation from expressiontrees in computational linguistics the bottomup version of this algorithm resembles the tree parsing algorithm for tsg by eisner similar algorithms have also been proposed for dependencybased translation under the loglinear model one still prefers to search for the globally best derivation d however integrating the ngram model with the translation model in the search is computationally very expensiveas a standard alternative rather than aiming at the exact best derivation we search for topk derivations under the direct model using algorithm 1 and then rerank the kbest list with the language model and length penaltylike other instances of dynamic programming algorithm 1 can be viewed as a hypergraph search problemto this end we use an efficient algorithm by huang and chiang that solves the general kbest derivations problem in monotonic hypergraphsit consists of a normal forward phase for the 1best derivation and a recursive backward phase for the 2nd 3rd kth derivationsunfortunately different derivations may have the same yield due to multilevel lhs of our rulesin practice this results in a very small ratio of unique strings among topk derivationsto alleviate this problem determinization techniques have been proposed by mohri and riley for finitestate automata and extended to tree automata by may and knight these methods eliminate spurious ambiguity by effectively transforming the grammar into an equivalent deterministic formhowever this transformation often leads to a blowup in forest size which is exponential to the original size in the worstcaseso instead of determinization here we present a simpleyeteffective extension to the algorithm 3 of huang and chiang that guarantees to output unique translated strings this method should work in general for any equivalence relation that can be defined on derivationsour experiments are on englishtochinese translation the opposite direction to most of the recent work in smtwe are not doing the reverse direction at this time partly due to the lack of a sufficiently good parser for chineseour training set is a chineseenglish parallel corpus with 195m aligned sentences we first wordalign them by giza then parse the english side by a variant of collins parser and finally apply the ruleextraction algorithm of galley et al the resulting rule set has 247m xrs ruleswe also use the sri language modeling toolkit to train a chinese trigram model with knesserney smoothing on the chinese side of the parallel corpusour evaluation data consists of 140 short sentences of the xinhua portion of the nist 2003 chinesetoenglish evaluation setsince we are translating in the other direction we use the first english reference as the source input and the chinese as the single referencewe implemented our system as follows for each input sentence we first run algorithm 1 which returns the 1best translation and also builds the derivation forest of all translations for this sentencethen we extract the top 5000 nonduplicate translated strings from this forest and rescore them with the trigram model and the length penaltywe compared our system with a stateoftheart phrasebased system pharaoh on the evaluation datasince the target language is chinese we report characterbased bleu score instead of wordbased to ensure our results are independent of chinese tokenizations the bleu scores are based on single reference and up to 4gram precisions feature weights of both systems are tuned on the same data set3 for pharaoh we use the 
standard minimum errorrate training and for our system since there are only two independent features we use a simple gridbased lineoptimization along the languagemodel weight axisfor a given languagemodel weight q we use binary search to find the best length penalty a that leads to a lengthratio closest to 1 against the referencethe results are summarized in table 1the rescored translations are better than the 1best results from the direct model but still slightly worse than pharaohthis paper presents an adaptation of the classic syntaxdirected translation with linguisticallymotivated formalisms for statistical mtcurrently we are doing largerscale experimentswe are also investigating more principled algorithms for integrating ngram language models during the search rather than kbest rescoringbesides we will extend this work to translating the top k parse trees instead of committing to the 1best tree as parsing errors certainly affect translation quality
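To make the top-down traversal concrete, here is a minimal Python sketch of the memoized tree-to-string search described above (Algorithm 1 in the paper). It is not the authors' implementation: the Node and Rule classes, the x0/x1 naming convention for variable leaves in a pattern, and the flat list of rules are assumptions made purely for illustration (a real system with millions of xRs rules would index them by root label rather than scan a list). Each tree node is translated at most once via the memo table, which is what gives the linear-time behaviour for a fixed rule set.

```python
import math

# Hypothetical data structures: an English parse-tree node and an xRs rule
# pairing a multi-level English tree pattern with a Chinese right-hand side.
class Node:
    def __init__(self, label, children=None, word=None):
        self.label = label            # syntactic category, e.g. "VP"
        self.children = children or []
        self.word = word              # terminal word if this is a leaf

class Rule:
    def __init__(self, pattern, target, logprob):
        self.pattern = pattern        # English-side pattern; variable leaves named "x0", "x1", ...
        self.target = target          # Chinese side: list of Chinese tokens and variable names
        self.logprob = logprob        # log P(rule | root symbol)

def match(pattern, node):
    """Return {variable: matched subtree} if pattern matches at node, else None."""
    if pattern.label.startswith("x"):             # variable leaf matches any subtree
        return {pattern.label: node}
    if pattern.label != node.label or len(pattern.children) != len(node.children):
        return None
    if not pattern.children:                      # lexical leaf must match the word
        return {} if pattern.word == node.word else None
    binding = {}
    for p_child, n_child in zip(pattern.children, node.children):
        sub = match(p_child, n_child)
        if sub is None:
            return None
        binding.update(sub)
    return binding

def translate(node, rules, memo=None):
    """Best (logprob, Chinese tokens) for the subtree rooted at node."""
    memo = {} if memo is None else memo
    if id(node) in memo:                          # every node is analyzed at most once
        return memo[id(node)]
    best = (-math.inf, None)
    for rule in rules:
        binding = match(rule.pattern, node)
        if binding is None:
            continue
        score, output, ok = rule.logprob, [], True
        for sym in rule.target:
            if sym in binding:                    # variable: recurse into the matched subtree
                sub_score, sub_out = translate(binding[sym], rules, memo)
                if sub_out is None:
                    ok = False
                    break
                score += sub_score
                output.extend(sub_out)
            else:                                 # Chinese token emitted by the rule itself
                output.append(sym)
        if ok and score > best[0]:
            best = (score, output)
    memo[id(node)] = best
    return best
```

The recursion mirrors the description above: at each node, every applicable rule is tried, the variables of its pattern are translated recursively, and the results are plugged into the rule's Chinese side; memoization turns the exponential number of decompositions into a single pass over the tree.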
W06-3601
A syntax-directed translator with extended domain of locality. A syntax-directed translator first parses the source-language input into a parse tree and then recursively converts the tree into a string in the target language. We model this conversion by an extended tree-to-string transducer that has multi-level trees on the source side, which gives our system more expressive power and flexibility. We also define a direct probability model and use a linear-time dynamic programming algorithm to search for the best derivation. The model is then extended to the general log-linear framework in order to rescore with other features such as n-gram language models. We devise a simple-yet-effective algorithm to generate non-duplicate k-best translations for n-gram rescoring. Initial experimental results on English-to-Chinese translation are presented. We study a TSG-based tree-to-string alignment model. We define the extended tree-to-string transducer.
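As a complement to the summary above, the log-linear rescoring step of this system (unique k-best strings, trigram language model, length penalty) can be sketched in a few lines. This is only an illustrative stand-in, not the authors' code: the input is assumed to be a list of (translation-model log-probability, Chinese tokens) pairs, lm_logprob is assumed to be any callable returning a language-model log-probability (for example a wrapper around an SRILM trigram), and the weights shown are placeholders rather than the tuned values from the paper. Note that a positive length weight prefers longer translations, consistent with the model description.

```python
def loglinear_rescore(kbest, lm_logprob, lam_lm=1.0, lam_len=0.5):
    """Rerank k-best derivations with an n-gram LM score and a length bonus.

    kbest: list of (tm_logprob, tokens) pairs from k-best extraction.
    Duplicate yields are collapsed first, keeping the best translation-model
    score, so that the reranked list contains only unique strings.
    """
    best_by_string = {}
    for tm_score, tokens in kbest:
        key = tuple(tokens)
        if key not in best_by_string or tm_score > best_by_string[key]:
            best_by_string[key] = tm_score

    rescored = []
    for key, tm_score in best_by_string.items():
        tokens = list(key)
        total = tm_score + lam_lm * lm_logprob(tokens) + lam_len * len(tokens)
        rescored.append((total, tokens))
    rescored.sort(key=lambda x: x[0], reverse=True)
    return rescored
```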
seeing stars when there are not many stars graphbased semisupervised learning for sentiment categorization we present a graphbased semisupervised learning algorithm to address the sentiment analysis task of rating inference given a set of documents and accompanying ratings the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text in particular we are interested in the situation where labeled data is scarce we place this task in the semisupervised setting and demonstrate that considering unlabeled reviews in the learning process can improve ratinginference performance we do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task we then solve an optimization problem to obtain a smooth rating function over the whole graph when only limited labeled data is available this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training sentiment analysis of text documents has received considerable attention recently unlike traditional text categorization based on topics sentiment analysis attempts to identify the subjective sentiment expressed in documents such as consumer product or movie reviewsin particular pang and lee proposed the ratinginference problem rating inference is harder than binary positive negative opinion classificationthe goal is to infer a numerical rating from reviews for example the number of stars that a critic gave to a moviepang and lee showed that supervised machine learning techniques work well for rating inference with large amounts of training datahowever review documents often do not come with numerical ratingswe call such documents unlabeled datastandard supervised machine learning algorithms cannot learn from unlabeled dataassigning labels can be a slow and expensive process because manual inspection and domain expertise are neededoften only a small portion of the documents can be labeled within resource constraints so most documents remain unlabeledsupervised learning algorithms trained on small labeled sets suffer in performancecan one use the unlabeled reviews to improve ratinginferencepang and lee suggested that doing so should be usefulwe demonstrate that the answer is yes our approach is graphbased semisupervised learningsemisupervised learning is an active research area in machine learningit builds better classifiers or regressors using both labeled and unlabeled data under appropriate assumptions this paper contains three contributions workshop on textgraphs at hltnaacl 2006 pages 4552 new york city june 2006 c2006 association for computational linguistics to the sentiment analysis domain extending past supervised learning work by pang and lee the semisupervised ratinginference problem is formalized as followsthere are n review documents x1 xn each represented by some standard feature representation without loss of generality let the first l 0 between documents xi and xj wij should be computable from features so that we can measure similarities between any documents including unlabeled onesa large wij implies that the two documents tend to express the same sentiment we experiment with positivesentence percentage based similarity which is proposed in and mutualinformation modulated wordvector cosine similaritydetails can be found in section 42optionally we are given numerical rating predictions yl1 yn on the unlabeled documents from a separate learner for instance einsensitive 
support vector regression used by this acts as an extra knowledge source for our semisupervised learning framework to improve uponwe note our framework is general and works without the separate learner toowe now describe our graph for the semisupervised ratinginference problemwe do this piece by piece with reference to figure 1our undirected graph g has 2n nodes v and weighted edges e among some of the nodessumming over all edges in the graph we obtain the smoothness l over the whole graphwe call l the energy or loss which should be minimizedlet l 1 l and you l 1 n be labeled and unlabeled review indices respectivelywith the graph in figure 1 the loss l can be written as are set by cross validation in experimentsthe last two kinds of edges are the key to semisupervised learning they connect unobserved nodes and force ratings to be smooth throughout the graph as we discuss in the next sectionwith the graph defined there are several algorithms one can use to carry out semisupervised learning the basic idea is the same and is what we use in this paperthat is our rating function f should be smooth with respect to the graph f is not smooth if there is an edge with large weight w between nodes xi and xj and the difference between f and f is largethe smoothness over the particular edge can be defined as w f2a small loss implies that the rating of an unlabeled review is close to its labeled peers as well as its unlabeled peersthis is how unlabeled data can participate in learningthe optimization problem is minf lto understand the role of the parameters we define α ak bk and q a so that l can be written as thus q controls the relative weight between labeled neighbors and unlabeled neighbors α is roughly the relative weight given to semisupervised edgeswe can find the closedform solution to the optimization problemdefining an n x n matrix w let w max be a symmetrized version of this matrixlet d be a diagonal degree matrix with note that we define a nodes degree to be the sum of its edge weightslet a d w be the combinatorial laplacian matrixlet c be a diagonal dongle this is a quadratic function in f setting the gradient to zero alaf 0 we find the minimum loss function because c has strictly positive eigenvalues the inverse is well definedall our semisupervised learning experiments use in what followsbefore moving on to experiments we note an interesting connection to the supervised learning method in which formulates rating inference as a metric labeling problem consider a special case of our loss function when b 0 and m ocit is easy to show for labeled nodes j e l the optimal value is the given label f yjthen the optimization problem decouples into a set of onedimensional problems one for each unlabeled node the above problem is easy to solveit corresponds exactly to the supervised nontransductive version of metric labeling except we use squared difference while used absolute differenceindeed in experiments comparing the two their differences are not statistically significantfrom this perspective our semisupervised learning method is an extension with interacting terms among unlabeled datawe performed experiments using the movie review documents and accompanying 4class labels found in the scale dataset v10 available at httpwwwcscornelledupeoplepabo moviereviewdata and first used in we chose 4class instead of 3class labeling because it is harderthe dataset is divided into four authorspecific corpora containing 1770 902 1307 and 1027 documentswe ran experiments individually for each authoreach 
document is represented as a 10 11 wordpresence vector normalized to sum to 1we systematically vary labeled set size l e 109n 800 400 200100 50 2512 6 to observe the effect of semisupervised learningl 09n is included to match 10fold cross validation used by for each l we run 20 trials where we randomly split the corpus into labeled and test setswe ensure that all four classes are represented in each labeled setthe same random splits are used for all methods allowing paired ttests for statistical significanceall reported results are average test set accuracywe compare our graphbased semisupervised method with two previously studied methods regression and metric labeling as in we ran linear einsensitive support vector regression using joachims svmquotght package with all default parametersthe continuous prediction on a test document is discretized for classificationregression results are reported under the heading reg note this method does not use unlabeled data for trainingwe ran pang and lees method based on metric labeling using svm regression as the initial label preference functionthe method requires an itemsimilarity function which is equivalent to our similarity measure wijamong others we experimented with pspbased similarityfor consistency with supervised metric labeling results with this measure are reported under regpsp note this method does not use unlabeled data for training eitherpspi is defined in as the percentage of positive sentences in review xithe similarity between reviews xi xj is the cosine angle between the vectors and positive sentences are identified using a binary classifier trained on a separate snippet data set located at the same url as abovethe snippet data set contains 10662 short quotations taken from movie reviews appearing on the rottentomatoescom web siteeach snippet is labeled positive or negative based on the rating of the originating reviewpang and lee trained a naive bayes classifierthey showed that psp is a measure for comparing reviewsreviews with low ratings tend to receive low psp scores and those with higher ratings tend to get high psp scoresthus two reviews with a high pspbased similarity are expected to have similar ratingsfor our experiments we derived psp measurements in a similar manner but using a linear svm classifierwe observed the same relationship between psp and ratings the metric labeling method has parameters pang and lee tuned them on a perauthor basis using cross validation but did not report the optimal parameterswe were interested in learning a single set of parameters for use with all authorsin addition since we varied labeled set size it is convenient to tune c kl the fraction of labeled reviews used as neighbors instead of k we then used the same c α for all authors at all labeled set sizes in experiments involving pspbecause c is fixed k varies directly with l in an attempt to reproduce the findings in we tuned c α with cross validationtuning ranges are c 005 01 015 02 025 03 and α 001 01 051015 20 25 30 35 40 50the optimal parameters we found are c 02 and α 15note that we learned a single set of shared parameters for all authors whereas tuned k and α on a perauthor basisto demonstrate that our implementation of metric labeling produces comparable results we also determined the optimal authorspecific parameterstable 1 shows the accuracy obtained over 20 trials with l 09n for each author using svm regression regpsp using shared c α parameters and regpsp using authorspecific c α parameters the best result in each row of the 
table is highlighted in boldwe also show in bold any results that cannot be distinguished from the best result using a paired ttest at the 005 level found that their metric labeling method when applied to the 4class data we are using was not statistically better than regression though they observed some improvement for authors and using authorspecific parameters we obtained the same qualitative result but the improvement for and appears even less significant in our resultspossible explanations for this difference are the fact that we derived our psp measurements using an svm classifier instead of an nb classifier and that we did not use the same range of parameters for tuningthe optimal shared parameters produced almost the same results as the optimal authorspecific parameters and were used in subsequent experimentswe used the same pspbased similarity measure and the same shared parameters c 02 α 15 from our metric labeling experiments to perform graphbased semisupervised learningthe results are reported as sslpsp ssl has three additional parameters k q and m again we tuned k q with cross validationtuning ranges are k e 12 3 510 20 and q e 10001 001 01101001the optimal parameters are k 5 and q 10these were used for all authors and for all labeled set sizesnote that unlike k cl which decreases as the labeled set size decreases we let k remain fixed for all lwe set m arbitrarily to a large number 108 to ensure that the ratings of labeled reviews are respectedin addition to using psp as a similarity measure between reviews we investigated several alternative similarity measures based on the cosine of word vectorsamong these options were the cosine between the word vectors used to train the svm regressor and the cosine between word vectors containing only words with high mutual information valuesthe mutual information is computed with respect to the positive and negative classes in the 10662document snippet data set finally we experimented with using as a similarity measure the cosine between word vectors containing all words each weighted by its mutual informationwe found this measure to be the best among the options tested in pilot trial runs using the metric labeling algorithmspecifically we scaled the mutual information values such that the maximum value was onethen we used these values as weights for the corresponding words in the word vectorsfor words in the movie review data set that did not appear in the snippet data set we used a default weight of zero we repeated the experiments described in sections 42 and 43 with the only difference being that we used the mutualinformation weighted word vector similarity instead of psp whenever a similarity measure was requiredwe repeated the tuning procedures described in the previous sectionsusing this new similarity measure led to the optimal parameters c 01 α 15 k 5 and q 100the results are reported under regwv and sslwv respectivelywe tested the five algorithms for all four authors using each of the nine labeled set sizesthe results are presented in table 2each entry in the table represents the average accuracy across 20 trials for an author a labeled set size and an algorithmthe best result in each row is highlighted in boldany results on the same row that cannot be distinguished from the best result using a paired ttest at the 005 level are also boldthe results indicate that the graphbased semisupervised learning algorithm based on psp similarity achieved better performance than all other methods in all four author corpora when only 200 100 
50 25 or 12 labeled documents were availablein 19 out of these 20 learning scenarios the unlabeled set accuracy by the sslpsp algorithm was significantly higher than all other methodswhile accuracy generally degraded as we trained on less labeled data the decrease for the ssl approach was less severe through the midrange labeled set sizessslpsp remains among the best methods with only 6 labeled examplesnote that the ssl algorithm appears to be quite sensitive to the similarity measure used to form the graph on which it is basedin the experiments where we used mutualinformation weighted word vector similarity we notice that regwv remained on par with regpsp at high labeled set sizes whereas sslwv appears significantly worse in most of these casesit is clear that psp is the more reliable similarity measuressl uses the similarity measure in more ways than the metric labeling approaches so it is not surprising that ssls accuracy would suffer more with an inferior similarity measureunfortunately our ssl approach did not do as well with large labeled set sizeswe believe this is due to two factors a the baseline svm regressor trained on a large labeled set can achieve fairly high accuracy for this difficult task without considering pairwise relationships between examples b psp similarity is not accurate enoughgain in variance reduction achieved by the ssl graph is offset by its bias when labeled data is abundantwe have demonstrated the benefit of using unlabeled data for rating inferencethere are several directions to improve the work 1we will investigate better document representations and similarity measures based on parsing and other linguistic knowledge as well as reviews sentiment patternsfor example several positive sentences followed by a few concluding negative sentences could indicate an overall negative review as observed in prior work 2our method is transductive new reviews must be added to the graph before they can be classifiedwe will extend it to the inductive learning setting based on 3we plan to experiment with crossreviewer and crossdomain analysis such as using a model learned on movie reviews to help classify product reviewswe thank bo pang lillian lee and anonymous reviewers for helpful comments
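Because the closed-form solution in the extracted text above is partly garbled, the following sketch spells out the standard regularized-Laplacian recipe it describes: minimize f'Δf + (f − y)'C(f − y) over the rating vector f, which gives f = (Δ + C)^(-1) C y. The function below is a deliberate simplification, not the authors' code: the paper's separate edge weights (a, b), the k/k' neighborhood structure, and the option of seeding unlabeled nodes with a separate regressor's predictions are folded into a single similarity matrix W, a single label vector y, and one large dongle weight m for labeled nodes.

```python
import numpy as np

def rate_documents(W, y, labeled_mask, m=1e8):
    """Graph-based rating inference sketch.

    W            : (n x n) nonnegative similarity matrix over all reviews.
    y            : known ratings for labeled nodes; unlabeled entries may hold
                   a separate learner's predictions (or zeros).
    labeled_mask : boolean array marking which entries of y are true labels.
    """
    W = np.maximum(W, W.T)               # symmetrize, as in the paper
    D = np.diag(W.sum(axis=1))           # degree matrix (sum of edge weights)
    laplacian = D - W                    # combinatorial Laplacian
    c = np.where(labeled_mask, m, 1.0)   # clamp labeled ratings with a huge weight
    C = np.diag(c)
    # Minimizer of f'Lf + (f - y)'C(f - y); (L + C) is positive definite.
    f = np.linalg.solve(laplacian + C, C @ y)
    return f                             # smooth rating function over all nodes
```

Rounding each unlabeled entry of f to the nearest rating class then yields the transductive predictions evaluated in the experiments.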
W06-3808
Seeing stars when there are not many stars: graph-based semi-supervised learning for sentiment categorization. We present a graph-based semi-supervised learning algorithm to address the sentiment-analysis task of rating inference. Given a set of documents and accompanying ratings, the task calls for inferring numerical ratings for unlabeled documents based on the perceived sentiment expressed by their text. In particular, we are interested in the situation where labeled data is scarce. We place this task in the semi-supervised setting and demonstrate that considering unlabeled reviews in the learning process can improve rating-inference performance. We do so by creating a graph on both labeled and unlabeled data to encode certain assumptions for this task. We then solve an optimization problem to obtain a smooth rating function over the whole graph. When only limited labeled data is available, this method achieves significantly better predictive accuracy over other methods that ignore the unlabeled examples during training. We adapt semi-supervised graph-based methods for sentiment analysis but do not incorporate lexical prior knowledge in the form of labeled features. We propose a semi-supervised learning approach to the rating-inference problem in scenarios where labeled training data is scarce.
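For concreteness, here is one way the PSP-based graph used in this paper could be assembled. Both helpers are hypothetical (the paper publishes no code): psp_similarity follows the stated definition, the cosine between the 2-vectors (PSP_i, 1 − PSP_i), while knn_graph only approximates the neighborhood scheme (k most similar labeled neighbors and k' most similar unlabeled neighbors per unlabeled review); the per-edge-type weights a and b from the paper's loss are omitted here.

```python
import numpy as np

def psp_similarity(psp):
    """psp[i] = fraction of sentences in review i classified as positive.
    Each review is represented as (PSP_i, 1 - PSP_i); similarity is the cosine
    between such 2-vectors, as described in the paper."""
    vecs = np.stack([psp, 1.0 - psp], axis=1)
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    return unit @ unit.T

def knn_graph(sim, labeled_idx, unlabeled_idx, k_labeled, k_unlabeled):
    """Keep, for each unlabeled review, edges to its k_labeled most similar
    labeled reviews and k_unlabeled most similar unlabeled reviews; all other
    weights are zeroed.  Illustrative only -- the exact scheme differs in detail."""
    n = sim.shape[0]
    W = np.zeros((n, n))
    for i in unlabeled_idx:
        lab = sorted(labeled_idx, key=lambda j: -sim[i, j])[:k_labeled]
        unl = sorted([j for j in unlabeled_idx if j != i],
                     key=lambda j: -sim[i, j])[:k_unlabeled]
        for j in lab + unl:
            W[i, j] = W[j, i] = sim[i, j]
    return W
```

The resulting W can be fed directly to a Laplacian-based solver of the kind sketched after the previous paper's text.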
chinese whispers an efficient graph clustering algorithm and its application to natural language processing problems we introduce chinese whispers a randomized graphclustering algorithm which is timelinear in the number of edges after a detailed definition of the algorithm and a discussion of its strengths and weaknesses the performance of chinese whispers is measured on natural language processing problems as diverse as language separation acquisition of syntactic word classes and word sense disambiguation at this the fact is employed that the smallworld property holds for many graphs in nlp clustering is the process of grouping together objects based on their similarity to each otherin the field of natural language processing there are a variety of applications for clusteringthe most popular ones are document clustering in applications related to retrieval and word clustering for finding sets of similar words or concept hierarchiestraditionally language objects are characterized by a feature vectorthese feature vectors can be interpreted as points in a multidimensional spacethe clustering uses a distance metric eg the cosine of the angle between two such vectorsas in nlp there are often several thousand features of which only a few correlate with each other at a time think about the number of different words as opposed to the number of words occurring in a sentence dimensionality reduction techniques can greatly reduce complexity without considerably losing accuracyan alternative representation that does not deal with dimensions in space is the graph representationa graph represents objects and their relations in nlp there are a variety of structures that can be naturally represented as graphs eg lexicalsemantic word nets dependency trees cooccurrence graphs and hyperlinked documents just to name a fewclustering graphs is a somewhat different task than clustering objects in a multidimensional space there is no distance metric the similarity between objects is encoded in the edgesobjects that do not share an edge cannot be compared which gives rise to optimization techniquesthere is no centroid or average cluster member in a graph permitting centroidbased techniquesas data sets in nlp are usually large there is a strong need for efficient methods ie of low computational complexitiesin this paper a very efficient graphclustering algorithm is introduced that is capable of partitioning very large graphs in comparatively short timeespecially for smallworld graphs high performance is reached in quality and speedafter explaining the algorithm in the next section experiments with synthetic graphs are reported in section 3these give an insight about the algorithms performancein section 4 experiments on three nlp tasks are reported section 5 concludes by discussing extensions and further application areasin this section the chinese whispers algorithm is outlinedafter recalling important concepts from graph theory we describe two views on the algorithmthe workshop on textgraphs at hltnaacl 2006 pages 7380 new york city june 2006 c2006 association for computational linguistics second view is used to relate cw to another graph clustering algorithm namely mcl we use the following notation throughout this paper let g be a weighted graph with nodes ev and weighted edges ee with weight wijif ee implies ee then the graph is undirectedif all weights are 1 g is called unweightedthe degree of a node is the number of edges a node takes part inthe neighborhood of a node v is defined by the set of all nodes v 
such that ee or ee it consists of all nodes that are connected to v the adjacency matrix ag of a graph g with n nodes is an nxn matrix where the entry aij denotes the weight of the edge between vi and vj 0 otherwisethe class matrix dg of a graph g with n nodes is an nxn matrix where rows represent nodes and columns represent classes ecthe value dij at row i and column j represents the amount of vi as belonging to a class cjfor convention class matrices are rownormalized the ith row denotes a distribution of vi over c if all rows have exactly one nonzero entry with value 1 dg denotes a hard partitioning of v soft partitioning otherwisecw is a very basic yet effective algorithm to partition the nodes of weighted undirected graphsit is motivated by the eponymous childrens game where children whisper words to each otherwhile the games goal is to arrive at some funny derivative of the original message by passing it through several noisy channels the cw algorithm aims at finding groups of nodes that broadcast the same message to their neighborsit can be viewed as a simulation of an agentbased social network for an overview of this field see the algorithm is outlined in figure 1 initialize forall vi in v classi while changes forall v in v randomized order classhighest ranked class in neighborhood of v intuitively the algorithm works as follows in a bottomup fashion first all nodes get different classesthen the nodes are processed for a small number of iterations and inherit the strongest class in the local neighborhoodthis is the class whose sum of edge weights to the current node is maximalin case of multiple strongest classes one is chosen randomlyregions of the same class stabilize during the iteration and grow until they reach the border of a stable region of another classnote that classes are updated immediately a node can obtain classes from the neighborhood that were introduced there in the same iterationfigure 2 illustrates how a small unweighted graph is clustered into two regions in three iterationsdifferent classes are symbolized by different shades of greyit is possible to introduce a random mutation rate that assigns new classes with a probability decreasing in the number of iterations as described in this showed having positive effects for small graphs because of slower convergence in early iterationsthe cw algorithm cannot cross component boundaries because there are no edges between nodes belonging to different componentsfurther nodes that are not connected by any edge are discarded from the clustering process which possibly leaves a portion of nodes unclusteredformally cw does not converge as figure 3 exemplifies here the middle nodes neighborhood consists of a tie which can be decided in assigning the class of the left or the class of the right nodes in any iteration all over againties however do not play a major role in weighted graphsapart from ties the classes usually do not change any more after a handful of iterationsthe number of iterations depends on the diameter of the graph the larger the distance between two nodes is the more iterations it takes to percolate information from one to anotherthe result of cw is a hard partitioning of the given graph into a number of partitions that emerges in the process cw is parameterfreeit is possible to obtain a soft partitioning by assigning a class distribution to each node based on the weighted distribution of classes in its neighborhood in a final stepthe outcomes of cw resemble those of mincut dense regions in the graph are 
grouped into one cluster while sparsely connected regions are separatedin contrast to mincut cw does not find an optimal hierarchical clustering but yields a nonhierarchical partitionfurthermore it does not require any threshold as input parameter and is more efficientanother algorithm that uses only local contexts for timelinear clustering is dbscan as described in needing two input parameters dbscan is especially suited for graphs with a geometrical interpretation ie the objects have coordinates in a multidimensional spacea quite similar algorithm to cw is majorclust which is based on a comparable idea but converges sloweras cw is a special case of markovchainclustering we spend a few words on explaining itmcl is the parallel simulation of all possible random walks up to a finite length on a graph g the idea is that random walkers are more likely to end up in the same cluster where they started than walking across clustersmcl simulates flow on a graph by repeatedly updating transition probabilities between all nodes eventually converging to a transition matrix after k steps that can be interpreted as a clustering of g this is achieved by alternating an expansion step and an inflation stepthe expansion step is a matrix multiplication of mg with the current transition matrixthe inflation step is a columnwise nonlinear operator that increases the contrast between small and large transition probabilities and normalizes the columnwise sums to 1the k matrix multiplications of the expansion step of mcl lead to its timecomplexity of oit has been observed in that only the first couple of iterations operate on dense matrices when using a strong inflation operator matrices in the later steps tend to be sparsethe author further discusses pruning schemes that keep only some of the largest entries per column leading to drastic optimization possibilitiesbut the most aggressive sort of pruning is not considered only keeping one single largest entryexactly this is conducted in the basic cw processlet maxrow be an operator that operates rowwise on a matrix and sets all entries of a row to zero except the largest entry which is set to 1then the algorithm is denoted as simple as this by applying maxrow dt1 has exactly n nonzero entriesthis causes the timecomplexity to be dependent on the number of edges namely oin the worst case of a fully connected graph this equals the timecomplexity of mcla problem with the matrix cw process is that it does not necessarily converge to an iterationinvariant class matrix d but rather to a pair of oscillating class matricesfigure 5 shows an examplethis is caused by the stepwise update of the class matrixas opposed to this the cw algorithm as outlined in figure 1 continuously updates d after the processing of each nodeto avoid these oscillations one of the following measures can be taken while converging to the same limits the continuous update strategy converges the fastest because prominent classes are spread much faster in early iterationsthe analysis of the cw process is difficult due to its nonlinear natureits runtime complexity indicates that it cannot directly optimize most global graph cluster measures because of their npcompleteness therefore we perform experiments on synthetic graphs to empirically arrive at an impression of our algorithm abilitiesall experiments were conducted with an implementation following figure 1for experiments with synthetic graphs we restrict ourselves to unweighted graphs if not stated explicitlya cluster algorithm should keep dense 
regions together while cutting apart regions that are sparsely connectedthe highest density is reached in fully connected subgraphs of n nodes aka ncliqueswe define an nbipartiteclique as a graph of two ncliques which are connected such that each node has exactly one edge going to the clique it does not belong tofigures 5 and 6 are npartite cliques for n410we clearly expect a clustering algorithm to cut the two cliques apartas we operate on unweighted graphs however cw is left with two choices producing two clusters or grouping all nodes into one clusterthis is largely dependent on the random choices in very early iterations if the same class is assigned to several nodes in both cliques it will finally cover the whole graphit is clearly a drawback that the outcome of cw is nondeterministiconly half of the experiments with 4bipartite cliques resulted in separationhowever the problem is most dramatic on small graphs and ceases to exist for larger graphs as demonstrated in figure 7a structure that has been reported to occur in an enormous number of natural systems is the small world graphspace prohibits an indepth discussion which can be found in here we restrict ourselves to swgraphs in language datain cooccurrence graphs as used in the experiment section are reported to possess the small world property ie a high clustering coefficient and short average path length between arbitrary nodessteyvers and tenenbaum show that association networks as well as semantic resources are scalefree swgraphs their degree distribution follows a power lawa generative model is provided that generates undirected scalefree swgraphs in the following way we start with a small number of fully connected nodeswhen adding a new node an existing node v is chosen with a probability according to its degreethe new node is connected to m nodes in the neighborhood of v the generative model is parameterized by the number of nodes n and the network mean connectivity which approaches 2m for large n let us assume that we deal with natural systems that can be characterized by small world graphsif two or more of those systems interfere their graphs are joined by merging some nodes retaining their edgesa graphclustering algorithm should split up the resulting graph in its previous parts at least if not too many nodes were mergedwe conducted experiments to measure cw performance on swgraph mixtures we generated graphs of various sizes merged them by twos to a various extent and measured the amount of cases where clustering with cw leads to the reconstruction of the original partswhen generating swgraphs with the steyverstenenbaum model we fixed m to 10 and varied n and the merge rate r which is the fraction of nodes of the smaller graph that is merged with nodes of the larger graphfigure 8 summarizes the results for equisized mixtures of 300 3000 and 30000 nodes and mixtures of 300 with 30000 nodesit is not surprising that separating the two parts is more difficult for higher r results are not very sensitive to size and size ratio indicating that cw is able to identify clusters even if they differ considerably in size it even performs best at the skewed mixturesat merge rates between 20 and 30 still more then half of the mixtures are separated correctly and can be found when averaging cws outcome over several runsas formally the algorithm does not converge it is important to define a stop criterion or to set the number of iterationsto show that only a few iterations are needed until almostconvergence we measured the normalized 
mutual information 1 between the clustering in the 50th iteration and the clusterings of earlier iterationsthis was conducted for two unweighted swgraphs with 1000 and 10000 nodes m5 and a weighted 7lingual cooccurrence graph with 22805 nodes and 232875 edgestable 1 indicates that for unweighted graphs changes are only small after 2030 iterationsin iterations 4050 the normalized mivalues do not improve any morethe weighted graph converges much faster due to fewer ties and reaches a stable plateau after only 6 iterationsin this section some experiments with graphs originating from natural language data are presentedfirst we define the notion of cooccurrence graphs which are used in sections 41 and 43 two words cooccur if they can both be found in a certain unit of text here a sentenceemploying a significance measure we determine whether their cooccurrences are significant or randomin this case we use the loglikelihood measure as described in we use the words as nodes in the graphthe weight of an 1 defined for two random variables x and y as hhmaxh with h entropya value of 0 denotes indepenence 1 is perfect congruence edge between two words is set to the significance value of their cooccurrence if it exceeds a certain thresholdin the experiments we used significances from 15 onthe entirety of words that are involved in at least one edge together with these edges is called cooccurrence graph in general cw produces a large number of clusters on realworld graphs of which the majority is very smallfor most applications it might be advisable to define a minimum cluster size or something alikethis section shortly reviews the results of where cw was first describedthe task was to separate a multilingual corpus by languages assuming its tokenization in sentencesthe cooccurrence graph of a multilingual corpus resembles the synthetic swgraphs every language forms a separate cooccurrence graph some words that are used in more than one language are members of several graphs connecting themby cwpartitioning the graph is split into its monolingual partsthese parts are used as word lists for wordbased language identification report almost perfect performance on getting 7lingual corpora with equisized parts sorted apart as well as highly skewed mixtures of two languagesin the process languageambiguous words are assigned to only one language which did not hurt performance due to the high redundancy of the taskhowever it would have been possible to use the soft partitioning to acquire a distribution over languages for each wordfor the acquisition of word classes we use a different graph the secondorder graph on neighboring cooccurrencesto set up the graph a cooccurrence calculation is performed which yields significant word pairs based on their occurrence as immediate neighborsthis can be perceived as a bipartite graph figure 9a gives a toy examplenote that if similar words occur in both parts they form two distinct nodesthis graph is transformed into a secondorder graph by comparing the number of common right and left neighbors for two wordsthe similarity between two words is the sum of common neighborsfigure 9b depicts the secondorder graph derived from figure 9a and its partitioning by cwthe wordclassambiguous word drink is responsible for all intracluster edgesthe hypothesis here is that words sharing many neighbors should usually be observed with the same partofspeech and get high weights in the second order graphin figure 9 three clusters are obtained that correspond to different partsofspeech to test 
this on a large scale we computed the secondorder similarity graph for the british national corpus excluding the most frequent 2000 words and drawing edges between words if they shared at least four left and right neighborsthe clusters are checked against a lexicon that contains the most frequent tag for each word in the bncthe largest clusters are presented in table 2 in total cw produced 282 clusters of which 26 exceed a size of 100the weighted average of cluster purity was measured at 888 which exceeds significantly the precision of 53 on word type as reported by schütze on a related taskhow to use this kind of word clusters to improve the accuracy of postaggers is outlined in the task of word sense induction is to find the different senses of a wordthe number of senses is not known in advance therefore has to be determined by the methodsimilar to the approach as presented in we construct a word graphwhile there edges between words are drawn iff words cooccur in enumerations we use the cooccurrence graphdorow and widdows construct a graph for a target word w by taking the subgraph induced by the neighborhood of w and clustering it with mclwe replace mcl by cwthe clusters are interpreted as representations of word sensesto judge results the methodology of is adopted to evaluate word sense induction two subgraphs induced by the neighborhood of different words are mergedthe algorithm ability to separate the merged graph into its previous parts can be measured in an unsupervised waybordag defines four measures we used the same program to compute cooccurrences on the same corpus therefore it is possible to directly compare our results to bordags who uses a tripletbased hierarchical graph clustering approachthe method was chosen because of its appropriateness for unlabelled data without linguistic preprocessing like tagging or parsing only the disambiguation mechanism is measured and not the quality of the preprocessing stepswe provide scores for his test 1 and test 3 data was obtained from bnc raw text evaluation was performed for 45 test wordsresults suggest that both algorithms arrive at about equal overall performance chinese whispers clustering is able to capture the same information as a specialized graphclustering algorithm for wsi given the same inputthe slightly superior performance on rr and rp indicates that cw leaves fewer words unclustered which can be advantageous when using the clusters as clues in word sense disambiguationchinese whispers an efficient graphclustering algorithm was presented and described in theory and practiceexperiments with synthetic graphs showed that for small graphs results can be inconclusive due to its nondeterministic naturebut while there exist plethora of clustering approaches that can deal well with small graphs the power of cw lies in its capability of handling very large graphs in reasonable timethe application field of cw rather lies in size regions where other approaches solutions are intractableon the nlp data discussed cw performs equally or better than other clustering algorithmsas cw like other graph clustering algorithms chooses the number of classes on its own and can handle clusters of different sizes it is especially suited for nlp problems where class distributions are often highly skewed and the number of classes is not known beforehandto relate the partitions it is possible to set up a hierarchical version of cw in the following way the nodes of equal class are joined to hypernodesedge weights between hypernodes are set according to 
the number of interclass edges between the corresponding nodesthis results in flat hierarchiesin further works it is planned to apply cw to other graphs such as the cocitation graph of citeseer the cocitation graph of web pages and the link structure of wikipediathanks go to stefan bordag for kindly providing his wsi evaluation frameworkfurther the author would like to thank sebastian gottwald and rocco gwizdziel for a platformindependent gui implementation of cw which is available for download from the authors homepage
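The core loop of Figure 1 in the paper is short enough to restate as runnable Python. The sketch below is a plain re-implementation under stated assumptions, not the author's released tool: it uses a fixed number of iterations instead of a convergence test (the paper notes a handful of iterations suffices), breaks ties among strongest classes at random, and updates classes continuously, the variant reported to converge fastest. The edges argument is assumed to be a dict mapping node pairs to weights.

```python
import random
from collections import defaultdict

def chinese_whispers(nodes, edges, iterations=20, seed=0):
    """Minimal Chinese Whispers sketch.

    nodes : iterable of hashable node ids.
    edges : dict {(u, v): weight} for an undirected weighted graph.
    Returns a dict mapping each node to its class label (a node id).
    """
    rng = random.Random(seed)
    adj = defaultdict(dict)
    for (u, v), w in edges.items():
        adj[u][v] = w
        adj[v][u] = w

    label = {v: v for v in nodes}          # initialize: one class per node
    order = list(nodes)
    for _ in range(iterations):
        rng.shuffle(order)                 # randomized processing order
        for v in order:
            if not adj[v]:
                continue                   # isolated nodes keep their own class
            scores = defaultdict(float)    # sum of edge weights per neighboring class
            for u, w in adj[v].items():
                scores[label[u]] += w
            best = max(scores.values())
            winners = [c for c, s in scores.items() if s == best]
            label[v] = rng.choice(winners) # inherit the strongest class; ties at random
    return label
```

Because the label dictionary is updated immediately inside the inner loop, classes introduced earlier in the same iteration can already spread, which is the continuous-update behaviour discussed in the paper.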
W06-3812
Chinese Whispers: an efficient graph clustering algorithm and its application to natural language processing problems. We introduce Chinese Whispers, a randomized graph-clustering algorithm which is time-linear in the number of edges. After a detailed definition of the algorithm and a discussion of its strengths and weaknesses, the performance of Chinese Whispers is measured on natural language processing problems as diverse as language separation, acquisition of syntactic word classes, and word sense disambiguation. In doing so, we exploit the fact that the small-world property holds for many graphs in NLP. We introduce the co-occurrence-based graph clustering framework.
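The matrix formulation that relates Chinese Whispers to Markov-chain clustering can also be sketched compactly. The notation in the extracted text above is mangled, so the code below simply implements the intended operation rather than the paper's exact symbols: starting from the identity class matrix, each step lets every node adopt the class with the largest weighted support among its neighbours, using a maxrow operator that keeps only the largest entry per row. The step-wise matrix update can oscillate on ties, which is why the main algorithm updates continuously; isolated nodes are not handled here.

```python
import numpy as np

def maxrow(D):
    """Keep only the largest entry per row (set to 1), zero out the rest."""
    out = np.zeros_like(D)
    out[np.arange(D.shape[0]), D.argmax(axis=1)] = 1.0
    return out

def chinese_whispers_matrix(A, iterations=20):
    """Matrix view of Chinese Whispers (sketch, orientation chosen so that
    (A @ D)[i, c] sums the edge weights from node i to neighbours of class c).

    A : (n x n) symmetric nonnegative adjacency matrix.
    Returns an array of class indices (classes are node indices, since D_0 = I).
    """
    n = A.shape[0]
    D = np.eye(n)                      # each node starts in its own class
    for _ in range(iterations):
        D = maxrow(A @ D)              # every node adopts its strongest neighbouring class
    return D.argmax(axis=1)
```

Keeping a single entry per row is the most aggressive pruning of the MCL inflation step, which is why the per-iteration cost depends only on the number of edges.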
inversion transduction grammar for joint phrasal translation modeling we present a phrasal inversion transduction grammar as an alternative to joint phrasal translation models this syntactic model is similar to its flatstring phrasal predecessors but admits polynomialtime algorithms for viterbi alignment and them training we demonstrate that the consistency constraints that allow flat phrasal models to scale also help itg algorithms producing an 80times faster insideoutside algorithm we also show that the phrasal translation tables produced by the itg are superior to those of the flat joint phrasal model producing up to a 25 point improvement in bleu score finally we explore for the first time the utility of a joint phrasal translation model as a word alignment method statistical machine translation benefits greatly from considering more than one word at a timeone can put forward any number of noncompositional translations to support this point such as the colloquial canadian frenchenglish pair where no clear wordtoword connection can be drawnnearly all current decoding methods have shifted to phrasal representations gaining the ability to handle noncompositional translations but also allowing the decoder to memorize phenomena such as monolingual agreement and shortrange movement taking pressure off of language and distortion modelsdespite the success of phrasal decoders knowledge acquisition for translation generally begins with a wordlevel analysis of the training text taking the form of a word alignmentattempts to apply the same statistical analysis used at the word level in a phrasal setting have met with limited success held back by the sheer size of phrasal alignment spacehybrid methods that combine wellfounded statistical analysis with highconfidence wordlevel alignments have made some headway but suffer from the daunting task of heuristically exploring a still very large alignment spacein the meantime synchronous parsing methods efficiently process the same bitext phrases while building their bilingual constituents but continue to be employed primarily for wordtoword analysis in this paper we unify the probability models for phrasal translation with the algorithms for synchronous parsing harnessing the benefits of both to create a statistically and algorithmically wellfounded method for phrasal analysis of bitextsection 2 begins by outlining the phrase extraction system we intend to replace and the two methods we combine to do so the joint phrasal translation model and inversion transduction grammar section 3 describes our proposed solution a phrasal itgsection 4 describes how to apply our phrasal itg both as a translation model and as a phrasal wordalignersection 5 tests our system in both these capacities while section 6 concludesphrasal decoders require a phrase table which contains bilingual phrase pairs and scores indicating their utilitythe surface heuristic is the most popular method for phrasetable constructionit extracts all consistent phrase pairs from wordaligned bitext the word alignment provides bilingual links indicating translation relationships between wordsconsistency is defined so that alignment links are never broken by phrase boundariesfor each token w in a consistent phrase pair p all tokens linked to w by the alignment must also be included in p each consistent phrase pair is counted as occurring once per sentence pairthe scores for the extracted phrase pairs are provided by normalizing these flat counts according to common english or foreign components 
producing the conditional distributions p and pthe surface heuristic can define consistency according to any word alignment but most often the alignment is provided by giza this alignment system is powered by the ibm translation models in which one sentence generates the otherthese models produce only onetomany alignments each generated token can participate in at most one linkmanytomany alignments can be created by combining two giza alignments one where english generates foreign and another with those roles reversed combination approaches begin with the intersection of the two alignments and add links from the union heuristicallythe growdiagfinal combination heuristic adds links so that each new link connects a previously unlinked tokenthe ibm models that power giza are trained with expectation maximization or them on sentencealigned bitexta translation model assigns probabilities to alignments these alignment distributions are used to count translation events which are then used to estimate new parameters for the translation modelsampling is employed when the alignment distributions cannot be calculated efficientlythis statisticallymotivated process is much more appealing than the flat counting described in section 21 but it does not directly include phrasesthe joint phrasal translation model or jptm applies the same statistical techniques from the ibm models in a phrasal settingthe jptm is designed according to a generative process where both languages are generated simultaneouslyfirst a bag of concepts or cepts c is generatedeach ci e c corresponds to a bilingual phrase pair ci these contiguous phrases are permuted in each language to create two sequences of phrasesinitially marcu and wong assume that the number of cepts as well as the phrase orderings are drawn from uniform distributionsthat leaves a joint translation distribution p to determine which phrase pairs are selectedgiven a lexicon of possible cepts and a predicate l that determines if a bag of cepts c can be bilingually permuted to create the sentence pair the probability of a sentence pair is if left unconstrained will consider every phrasal segmentation of e and f and every alignment between those phraseslater a distortion model based on absolute token positions is added to the jptm faces several problems when scaling up to large training sets all cooccurring phrases observed in the bitextthis is far too large to fit in main memory and can be unwieldly for storage on diskmarcu and wong address point 2 with a lexicon constraint monolingual phrases that are above a length threshold or below a frequency threshold are excluded from the lexiconpoint 3 is handled by hillclimbing to a likely phrasal alignment and sampling around ithowever point 1 remains unaddressed which prevents the model from scaling to large data setsbirch et al handle point 1 directly by reducing the size of the alignment spacethis is f will cover accomplished by constraining the jptm to only use phrase pairs that are consistent with a highconfidence word alignment which is provided by giza intersectionwe refer to this constrained jptm as a cjptmthis strikes an interesting middle ground between the surface heuristic described in section 21 and the jptmlike the surface heuristic a word alignment is used to limit the phrase pairs considered but the cjptm reasons about distributions over phrasal alignments instead of taking flat countsthe consistency constraint allows them to scale their cjptm up to 700000 sentence pairswith this constraint in place the use 
of hillclimbing and sampling during them training becomes one of the largest remaining weaknesses of the cjptmlike the jptm stochastic synchronous grammars provide a generative process to produce a sentence and its translation simultaneouslyinversion transduction grammar or itg is a wellstudied synchronous grammar formalismterminal productions of the form a ef produce a token in each stream or a token in one stream with the null symbol 0 in the otherto allow for movement during translation nonterminal productions can be either straight or invertedstraight productions with their nonterminals inside square brackets produce their symbols in the given order in both streamsinverted productions indicated by angled brackets are output in reverse order in the foreign stream onlythe work described here uses the binary bracketing itg which has a single nonterminal this grammar admits an efficient bitext parsing algorithm and holds no languagespecific biases cannot represent all possible permutations of concepts that may occur during translation because some permutations will require discontinuous constituents this itg constraint is characterized by the two forbidden structures shown in figure 1 empirical studies suggest that only a small percentage of human translations violate these constraints stochastic itgs are parameterized like their pcfg counterparts productions a x are assigned probability prthese parameters can be learned from sentencealigned bitext using the them algorithmthe expectation task of counting productions weighted by their probability is handled with dynamic programming using the insideoutside algorithm extended to bitext this paper introduces a phrasal itg in doing so we combine itg with the jptmitg parsing algorithms consider every possible twodimensional span of bitext each corresponding to a bilingual phrase paireach multitoken span is analyzed in terms of how it could be built from smaller spans using a straight or inverted production as is illustrated in figures 2 and to extend itg to a phrasal setting we add a third option for span analysis that the span under consideration might have been drawn directly from the lexiconthis option can be added to our grammar by altering the definition of a terminal production to include phrases a 6 1this third option is shown in figure 2 the model implied by this extended grammar is trained using insideoutside and themour approach differs from previous attempts to use itgs for phrasal bitext analysiswu used a binary bracketing itg to segment a sentence while simultaneously wordaligning it to its translation but the model was trained heuristically with a fixed segmentationvilar and vidal used itglike dynamic programming to drive both training and alignment for their recursive translation model but they employed a conditional model that did not maintain a phrasal lexiconinstead they scored phrase pairs using ibm model 1our phrasal itg is quite similar to the jptmboth models are trained with them and both employ generative stories that create a sentence and its translation simultaneouslythe similarities become more apparent when we consider the canonicalform binarybracketing itg shown here is employed in place of to reduce redundant alignments and clean up them expectations1 more importantly for our purposes it introduces a preterminal c which generates all phrase pairs or ceptswhen is parameterized as a stochastic itg the conditional distribution p is equivalent to the jptms p both are joint distributions over all possible phrase pairsthe 
distributions conditioned on the remaining three nonterminals assign probability to concept movement by tracking inversionslike the jptms distortion model these parameters grade each movement decision independentlywith terminal productions producing cepts and inversions measuring distortion our phrasal itg is essentially a variation on the jptm with an alternate distortion modelour phrasal itg has two main advantages over the jptmmost significantly we gain polynomialtime algorithms for both viterbi alignment and them expectation through the use of itg parsing and insideoutside algorithmsthese phrasal itg algorithms are no more expensive asymptotically than their wordtoword counterparts since each potential phrase needs to be analyzed anyway during constituent constructionwe hypothesize that using these methods in place of heuristic search and sampling will improve the phrasal translation model learned by themalso we can easily incorporate links to 0 by including the symbol among our terminalsto minimize redundancy we allow only single tokens not phrases to align to 0the jptm does not allow links to 0the phrasal itg also introduces two new complicationsitg viterbi and insideoutside algorithms have polynomial complexity but that polynomial is o where n is the length of the longer sentence in the pairthis is too slow to train on large data sets without massive parallelizationalso itg algorithms explore their alignment space perfectly but that space has been reduced by the itg constraint described in section 23we will address each of these issues in the following two subsectionsfirst we address the problem of scaling itg to large dataitg dynamic programming algorithms work by analyzing each bitext span only once storing its value in a table for future usethere are o of these spans and each analysis takes o timean effective approach to speeding up itg algorithms is to eliminate unlikely spans as a preprocessing step assigning them 0 probability and saving the time spent processing thempast approaches have pruned spans using ibm model 1 probability estimates or using agreement with an existing parse tree the former is referred to as tictactoe pruning because it uses both inside and outside estimateswe propose a new itg pruning method that leverages highconfidence links by pruning all spans that are inconsistent with a provided alignmentthis is similar to the constraint used in the cjptm but we do not just eliminate those spans as potential phrasetophrase links we never consider any itg parse that builds a nonterminal over a pruned span2 this fixedlink pruning will speed up both viterbi alignment and them training by reducing the number of analyzed spans and so long as we trust our highconfidence links it will do so harmlesslywe demonstrate the effectiveness of this pruning method experimentally in section 51our remaining concern is the itg constraintthere are some alignments that we just cannot build and sentence pairs requiring those alignments will occurthese could potentially pollute our training data if the system is unable to build the right alignment the counts it will collect from that pair must be wrongfurthermore if our highconfidence links are not itgcompatible our fixedlink pruning will prevent the aligner from forming any alignments at allhowever these two potential problems cancel each other outsentence pairs containing nonitg translations will tend to have highconfidence links that are also not itgcompatibleour them learner will simply skip these sentence pairs during training 
avoiding pollution of our training datawe can use a lineartime algorithm to detect nonitg movement in our highconfidence links and remove the offending sentence pairs from our training corpusthis results in only a minor reduction in training data in our frenchenglish training set we lose less than 1in the experiments described in section 5 all systems that do not use itg will take advantage of the complete training setany phrasal translation model can be used for two tasks translation modeling and phrasal word alignmentprevious work on jptm has focused on only the first taskwe are interested in phrasal alignment because it may be better suited to heuristic phraseextraction than wordbased modelsthis section describes how to use our phrasal itg first as a translation model and then as a phrasal alignerwe can test our models utility for translation by transforming its parameters into a phrase table for the phrasal decoder pharaoh any joint model can produce the necessary conditional probabilities by conditionalizing the joint table in both directionswe use our p distribution from our stochastic grammar to produce p and p values for its phrasal lexiconpharaoh also includes lexical weighting parameters that are derived from the alignments used to induce its phrase pairs using the phrasal itg as a direct translation model we do not produce alignments for individual sentence pairsinstead we provide a lexical preference with an ibm model 1 feature pm1 that penalizes unmatched words we include both pm1 and pm1we can produce a translation model using insideoutside without ever creating a viterbi parsehowever we can also examine the maximum likelihood phrasal alignments predicted by the trained modeldespite its strengths derived from using phrases throughout training the alignments predicted by our phrasal itg are usually unsatisfyingfor example the fragment pair is aligned as a phrase pair by our system linking every english word to every french wordthis is frustrating since there is a clear compositional relationship between the fragments component wordsthis happens because the system seeks only to maximize the likelihood of its training corpus and phrases are far more efficient than wordtoword connectionswhen aligning text annotators are told to resort to manytomany links only when no clear compositional relationship exists if we could tell our phrasal aligner the same thing we could greatly improve the intuitive appeal of our alignmentsagain we can leverage highconfidence links for helpin the highconfidence alignments provided by giza intersection each token participates in at most one linklinks only appear when two wordbased ibm translation models can agreetherefore they occur at points of high compositionality the two words clearly account for one anotherwe adopt an alignmentdriven definition of compositionality any phrase pair containing two or more highconfidence links is compositional and can be separated into at least two noncompositional phrasesby removing any phrase pairs that are compositional by this definition from our terminal productions we can ensure that our aligner never creates such phrases during training or alignmentdoing so produces far more intuitive alignmentsaligned with a model trained using this noncompositional constraint our example now forms three wordtoword connections rather than a single phrasal onethe phrases produced with this constraint are very small and include only noncompositional contexttherefore we use the constraint only to train models intended for 
viterbi alignment and not when generating phrase tables directly as in section 41in this section we first verify the effectiveness of fixedlink pruning and then test our phrasal itg both as an aligner and as a translation modelwe train all translation models with a frenchenglish europarl corpus obtained by applying a 25 token sentencelength limit to the training set provided for the hltnaacl smt workshop shared task the resulting corpus has 393132 sentence pairs3376 of these are omitted for itg methods because their highconfidence alignments have itgincompatible constructionslike our predecessors we apply a lexicon constraint no monolingual phrase can be used by any phrasal model unless it occurs at least five timeshighconfidence alignments are provided by intersecting giza alignments trained in each direction with 5 iterations each of model 1 hmm and model 4all giza alignments are trained with no sentencelength limit using the full 688k corpusto measure the speedup provided by fixedlink pruning we timed our phrasal insideoutside algorithm on the first 100 sentence pairs in our training set with and without pruningthe results are shown in table 1tictactoe pruning is included for comparisonwith fixedlink pruning on average 95 of the possible spans are pruned reducing running time by two orders of magnitudethis improvement makes itg training feasible even with large bitextsthe goal of this experiment is to compare the viterbi alignments from the phrasal itg to gold standard human alignmentswe do this to validate our noncompositional constraint and to select good alignments for use with the surface heuristicfollowing the lead of we handaligned the first 100 sentence pairs of our training set according to the blinker annotation guidelines we did not differentiate between sure and possible linkswe report precision recall and balanced fmeasure for comparison purposes we include the results of three types of giza combination including the growdiagfinal heuristic we tested our phrasal itg with fixed link pruning and then added the noncompositional constraint during development we determined that performance levels off for both of the itg models after 3 them iterationsthe results are shown in table 2the first thing to note is that giza intersection is indeed very high precisionour confidence in it as a constraint is not misplacedwe also see that both phrasal models have significantly higher recall than any of the giza alignments even higher than the permissive giza unionone factor contributing to this is the phrasal models use of cepts it completely interconnects any phrase pair while giza union and gdf may notits global view of phrases also helps in this regard evidence for a phrase can be built up over multiple sentencesfinally we note that in terms of alignment quality the noncompositional constraint is an unqualified success for the phrasal itgit produces a 25 point improvement in precision at the cost of 2 points of recallthis produces the highest balanced fmeasure observed on our test set but the utility of its alignments will depend largely on ones desired precisionrecall tradeoffin this section we compare a number of different methods for phrase table generation in a french to english translation taskwe are interested in answering three questions with this in mind we test five phrase tablestwo are conditionalized phrasal models each them trained until performance degrades we use the pharaoh decoder with the smt shared task baseline system weights for the loglinear model are set using the 
500sentence tuning set provided for the shared task with minimum error rate training as implemented by venugopal and vogel results on the provided 2000sentence development set are reported using the bleu metric for all methods we report performance with and without ibm model 1 features along with the size of the resulting tables in millions of phrase pairsthe results of all experiments are shown in table 3we see that the phrasal itg surpasses the cjptm by more than 25 bleu pointsa large component of this improvement is due to the itgs use of insideoutside for expectation calculation though there are other differences between the two systems4 this improvement over search and sampling is demonstrated by the itgs larger table size by exploring more thoroughly it is extracting more phrase pairs from the same amount of databoth systems improve drastically with the addition of ibm model 1 features for lexical preferencethese features also narrow the gap between the two systemsto help calibrate the contribution of these features we parameterized the itgs phrase table using only model 1 features which scores 2717although itgm1 comes close neither phrasal model matches the performance of the surface heuristicwhatever the surface heuristic lacks in sophistication it makes up for in sheer coverage as demonstrated by its huge table sizeseven the phrasal itg viterbi alignments which overcommit wildly and have horrible precision score slightly higher than the best phrasal modelthe surface heuristic benefits from capturing as much context as possible while still covering smaller translation events with its flat countsit is not held back by any lexicon constraintswhen giza gdfm1 is forced to conform to a lexicon constraint by dropping any phrase with a frequency lower than 5 from its table it scores only 2926 for a reduction of 135 bleu pointsphrases extracted from our noncompositional viterbi alignments receive the highest bleu score but they are not significantly better than giza gdfthe two methods also produce similarlysized tables despite the itgs higher recall4unlike our system the birch implementation does table smoothing and internal lexical weighting both of which should help improve their resultsthe systems also differ in distortion modeling and 0 handling as described in section 3we have presented a phrasal itg as an alternative to the joint phrasal translation modelthis syntactic solution to phrase modeling admits polynomialtime training and alignment algorithmswe demonstrate that the same consistency constraints that allow joint phrasal models to scale also dramatically speed up itgs producing an 80times faster insideoutside algorithmwe show that when used to learn phrase tables for the pharaoh decoder the phrasal itg is superior to the constrained joint phrasal model producing tables that result in a 25 point improvement in bleu when used alone and a 1 point improvement when used with ibm model 1 featuresthis suggests that itgs perfect expectation counting does matter other phrasal models could benefit from either adopting the itg formalism or improving their sampling heuristicswe have explored for the first time the utility of a joint phrasal model as a word alignment methodwe present a noncompositional constraint that turns the phrasal itg into a highrecall phrasal aligner with an fmeasure that is comparable to gizawith search and sampling no longer a concern the remaining weaknesses of the system seem to lie with the model itselfphrases are just too efficient probabilistically were we to 
remove all lexicon constraints, EM would always align entire sentences to entire sentences. This pressure to always build the longest phrase possible may be overwhelming otherwise strong correlations in our training data. A promising next step would be to develop a prior over lexicon size or phrase size, allowing EM to introduce large phrases at a penalty and removing the need for artificial constraints on the lexicon. Acknowledgments: special thanks to Alexandra Birch for the use of her code and to our reviewers for their comments. The first author is funded by Alberta Ingenuity and iCORE studentships.
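To make the span-based analysis described above concrete, here is a minimal Python sketch of the inside pass for a phrasal binary-bracketing ITG: each two-dimensional span is scored either as a terminal phrase pair or as a straight or inverted combination of two smaller spans, and spans removed by fixed-link pruning are skipped. This is not the authors' implementation; the toy parameter dictionaries, the omission of null links, and the use of raw probabilities instead of log-space arithmetic are simplifying assumptions.

from collections import defaultdict

def inside(src, tgt, phrase_prob, p_straight, p_inverted, pruned):
    """Inside probabilities for every bitext span under a phrasal binary-bracketing ITG.

    src, tgt     -- token lists for the two sides of the sentence pair
    phrase_prob  -- dict: (src_phrase_tuple, tgt_phrase_tuple) -> terminal probability
    p_straight   -- probability of the straight production A -> [A A]
    p_inverted   -- probability of the inverted production A -> <A A>
    pruned       -- set of spans (i, j, k, l) eliminated by fixed-link pruning
    """
    n, m = len(src), len(tgt)
    beta = defaultdict(float)   # beta[(i, j, k, l)] covers src[i:j] x tgt[k:l]

    # Smallest spans first, so sub-span values are available when needed.
    for ws in range(1, n + 1):
        for wt in range(1, m + 1):
            for i in range(n - ws + 1):
                for k in range(m - wt + 1):
                    j, l = i + ws, k + wt
                    if (i, j, k, l) in pruned:
                        continue            # inconsistent with a high-confidence link
                    # Terminal production: the whole span drawn from the phrasal lexicon.
                    total = phrase_prob.get((tuple(src[i:j]), tuple(tgt[k:l])), 0.0)
                    for s in range(i + 1, j):        # source split point
                        for t in range(k + 1, l):    # target split point
                            # Straight: left source half pairs with left target half.
                            total += p_straight * beta[(i, s, k, t)] * beta[(s, j, t, l)]
                            # Inverted: left source half pairs with right target half.
                            total += p_inverted * beta[(i, s, t, l)] * beta[(s, j, k, t)]
                    beta[(i, j, k, l)] = total
    return beta[(0, n, 0, m)], beta

The four nested loops over source and target split points are what give the O(n^6) behaviour noted above; the pruning test is where fixed-link pruning removes most of that work, and an outside pass over the same table yields the EM expectations.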
W07-0403
Inversion transduction grammar for joint phrasal translation modeling. We present a phrasal inversion transduction grammar as an alternative to joint phrasal translation models. This syntactic model is similar to its flat-string phrasal predecessors, but admits polynomial-time algorithms for Viterbi alignment and EM training. We demonstrate that the consistency constraints that allow flat phrasal models to scale also help ITG algorithms, producing an 80-times faster inside-outside algorithm. We also show that the phrasal translation tables produced by the ITG are superior to those of the flat joint phrasal model, producing up to a 2.5-point improvement in BLEU score. Finally, we explore for the first time the utility of a joint phrasal translation model as a word alignment method. We use synchronous ITG and constraints to find non-compositional phrasal equivalences.
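The two alignment-driven constraints highlighted in this summary reduce to simple span tests. The sketch below shows one way to write them, assuming the high-confidence links are supplied as zero-based (source index, target index) pairs; the function names are hypothetical and do not come from the paper.

def span_consistent(links, i, j, k, l):
    """Fixed-link pruning test: keep the bitext span src[i:j] x tgt[k:l] only if no
    high-confidence link crosses its boundary (one end inside, the other outside)."""
    for s, t in links:
        if (i <= s < j) != (k <= t < l):
            return False
    return True

def non_compositional(links, i, j, k, l):
    """Non-compositional constraint: allow the span as a terminal phrase pair only if
    it contains fewer than two high-confidence links; with two or more it is treated
    as compositional and must be built from smaller pieces."""
    contained = sum(1 for s, t in links if i <= s < j and k <= t < l)
    return contained < 2

For example, with links {(0, 0), (1, 2)}, the span covering source positions 0-1 and target position 0 fails span_consistent, because the link (1, 2) leaves the target side of the span.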
ccg supertags in factored statistical machine translation combinatorial categorial grammar supertags present phrasebased machine translation with an opportunity to access rich syntactic information at a word levelthe challenge is incorporating this informa tion into the translation process factoredtranslation models allow the inclusion of supertags as a factor in the source or target language we show that this results in an i am provement in the quality of translation and that the value of syntactic supertags in flat structured phrasebased models is largely due to better local reorderings in largescale machine translation evaluationsphrasebased models generally outperform syntax based models1phrasebased models are effectivebecause they capture the lexical dependencies be tween languageshowever these models which are equivalent to finitestate machines are unable to model long range word order differencesphrasebased models also lack the ability to incorporate the generalisations implicit insyntactic knowledge and they do not respect linguistic phrase boundariesthis makes it difficult to i am prove reordering in phrasebased modelssyntaxbased models can overcome some of theproblems associated with phrasebased models because they are able to capture the long range struc tural mappings that occur in translationrecently 1wwwnistgovspeechtestsmtmt06eval official resultshtml there have been a few syntaxbased models that show performance comparable to the phrasebasedmodels how ever reliably learning powerful rules from parallel data is very difficult and prone to problems with sparsity and noise in the datathese models also suffer from a large search space when decoding with an integrated language model which can lead to search errors in this paper we investigate the idea of incorporating syntax into phrasebased models thereby lever aging the strengths of both the phrasebased models and syntactic structuresthis is done using ccg supertags which provide a rich source of syntactic informationccg contains most of the structure ofthe grammar in the lexicon which makes it possible to introduce ccg supertags as a factor in a factored translation model fac tored models allow words to be vectors of featuresone factor could be the surface form and other fac tors could contain linguistic informationfactored models allow for the easy inclusion of supertags in different waysthe first approach is to generate ccg supertags as a factor in the target and then apply an ngram model over them increasing the probability of more frequently seen sequencesof supertagsthis is a simple way of including syn tactic information in a phrasebased model and has also been suggested by hassan et al for both arabicenglish and our experiments in dutchenglish ngram models over ccg supertags improve the quality of translationby preferring more likely sequences of supertags it is conceivable that the output of the decoder is 9 more grammaticalhowever its not clear exactlyhow syntactic information can benefit a flat structured model the constraints contained within su pertags are not enforced and relationships between supertags are not linearwe perform experiments to explore the nature and limits of the contribution of supertags using different orders of ngram models reordering models and focussed manual evaluationit seems that the benefit of using ngram supertagsequence models is largely from improving reordering as much of the gain is eroded by using a lexi calised reordering modelthis is supported by the manual evaluation which shows 
a 44 improvement in reordering dutchenglish verb final sentencesthe second and novel way we use supertags is to direct the translation processsupertags on thesource sentence allows the decoder to make decisions based on the structure of the inputthe subcategorisation of a verb for instance might help select the correct translationusing multiple dependencies on factors in the source we need a strat egy for dealing with sparse datawe propose using a logarithmic opinion pool to combine the more specific models with more general mod els this paper is the first to suggest this approach for combining multiple information sources in machine translationalthough the addition of supertags to phrase based translation does show some improvement their overall impact is limitedsequence models over supertags clearly result in some improvementsin local reordering but syntactic information con tains long distance dependencies which are simply not utilised in phrasebased modelsinspired by work on factored language models koehn et al extend phrasebased models to incorporate multiple levels of linguistic knowledgeas factorsphrasebased models are limited to se quences of words as their units with no access to additional linguistic knowledgefactors allow for richer translation models for example the gender or tense of a word can be expressedfactors also allow the model to generalise for example the lemma of a word could be used to generalise to unseen inflected formsthe factored translation model combines features in a loglinear fashion the most likely target sentence it is calculated using the decision rule in equation 1 t argmax t mm1 mhm tm m1 mhm where m is the number of features hm are the feature functions over the factors and are the weights which combine the features which areoptimised using minimum error rate training each function depends on a vector sfs1 of source factors and a vector t ft1 of tar get factorsan example of a factored model used in upcoming experiments is tm m1 mhm where sw means the model depends on ource ords and twc means the model generates arget ords and cg supertagsthe model is shown graphically in figure 1wordword ccg source targetfigure 1factored translation with source words deter mining target words and ccg supertagsfor our experiments we used the following fea tures the translation probabilities pr and pr the lexical weights lex and lex and a phrase penalty e which allows the model to learn a preference for longer or shorter phrasesadded to these features 10 is the word penalty e1 which allows the model to learn a preference for longer or shorter sentences the distortion model d that prefers monotone word order and the language model probability prall these features are logged when combined in the loglinear model in order to retain the impact of very unlikely translations or sequencesone of the strengths of the factored model is it allows for ngram distributions over factors on the targetwe call these distributions sequence modelsby analogy with language models for example we can construct a bigram sequence model as follows p p ni2 p where f is a factor and n is the length of the stringsequence models over pos tags or supertags are smaller than language modelsbecause they have restricted lexiconshigher or der more powerful sequence models can therefore be usedapplying multiple factors in the source can lead to sparse data problemsone solution is to break down the translation into smaller steps and translate each factor separately like in the following model where source 
words are translated separately to the source supertags tm m1 mhm nn1 nhn however in many cases multiple dependenciesare desirablefor instance translating ccg supertags independently of words could introduce er rorsmultiple dependencies require some form of backing off to simpler models in order to cover the cases where for instance the word has been seen intraining but not with that particular supertagdif ferent backoff paths are possible and it would beinteresting but prohibitively slow to apply a strat egy similar to generalised parallel backoff which is used in factored language modelsbackoff in factored language models is made more difficult because there is no obvious backoff paththis is compounded for fac tored phrasebased translation models where one has to consider backoff in terms of factors and ngramlengths in both source and target languagesfur thermore the surface form of a word is probably themost valuable factor and so its contribution must al ways be taken into accountwe therefore did not use backoff and chose to use a loglinear combination of features and models insteadour solution is to extract two translation models tm m1 mhm nn1 nhn one model consists of more specific features m and would return log probabilities for example log2pr if the particular word and supertaghad been seen before in trainingotherwise it re turns c a negative constant emulating log2the other model consist of more general features n and always returns log probabilities for example log2prccgs have syntactically rich lexicons and a small set of combinatory operators which assemble the parsetreeseach word in the sentence is assigned a category from the lexicona category may either be atomic or complex np etccomplex categories have the general formor where and are themselves cate goriesan example of a ccg parse is given peter eats apples np np np snp s where the derivation proceeds as follows eatsis combined with applesunder the operation of forward applicationeatscan be thought of as a function that takes a np to the right and returns a snpsimilarly the phrase eats applescan be thought of as a function which takes a noun phrasenp to the left and returns a sentence s this opera tion is called backward applicationa sentence together with its ccg categories al ready contains most of the information present in a full parsebecause these categories are lexicalised 11they can easily be included into factored phrase based translationccg supertags are categories that have been provided by a supertaggersupertagswere introduced by bangalore as a way of in creasing parsing efficiency by reducing the number of structures assigned to each wordclark developed a suppertagger for ccg which uses a conditional maximum entropy model to estimate theprobability of words being assigned particular cat egorieshere is an example of a sentence that has been supertagged in the training corpus we all agree on that np npnp pp ppnp np the verb agreehas been assigned a complex su pertag pp which determines the type and direction of its argumentsthis information can be used to improve the quality of translationthe first set of experiments explores the effect of ccg supertags on the target translating from dutch into englishthe last experiment shows the effect of ccg supertags on the source translating from german into englishthese language pairs present a considerable reordering challengefor exampledutch and german have sovword order in subordi nate clausesthis means that the verb often appears at the end of the clause far from the 
position of the english verb41 experimental setupthe experiments were run using moses2 an opensource factored statistical machine translation systemthe srilm language modelling toolkit was used with modified kneserney discounting and interpolationthe ccg supertagger was pro vided with the cc language processing tools3the supertagger was trained on the ccgbank in english and in german the dutchenglish parallel training data comesfrom the europarl corpus and ex cludes the proceedings from the last quarter of 20002see httpwwwstatmtorgmoses 3see httpsvnaskitusydeduautraccandcwiki this consists of 855677 sentences with a maximum of 50 words per sentence500 sentences of tuning data and the 2000 sentences of test data are takenfrom the aclworkshop on building and using par allel texts4the germanenglish experiments use data from the naacl 2006 workshop on statistical machine translation5the data consists of 751088 sentences of training data 500 sentences of tuning data and3064 sentences of test datathe english and ger man training sets were pos tagged and supertagged before lowercasingthe language models and thesequence models were trained on the europarl train ing datawhere not otherwise specified the postag and supertag sequence models are 5gram mod els and the language model is a 3gram model42 sequence models over supertagsour first dutchenglish experiment seeks to estab lish what effect sequence models have on machinetranslationwe show that supertags improve trans lation qualitytogether with shen et al it is one of the first results to confirm the potential of the factored modelmodel bleu sw tw 2397 sw twp 2411 sw twc 2442 sw twpc 2443 table 1the effect of sequence models on dutchenglishbleu scorefactors are ords os tags cg su pertags on the source s or the target ttable 1 shows that sequence models over ccg su pertags in the target improves over the baseline which has no supertagssupertag sequence models also outperform models which apply pos tag sequence models and interestingly do just as well as models whichapply both pos tag and supertag sequence mod els supertags are more informative than pos tags as they contain the syntactic context of a wordthese experiments were run with the distortion limit set to 6this means that at most 6 words in 4see httpwwwstatmtorgwpt05 5see httpwwwstatmtorgwpt06 12 the source sentence can be skippedwe tried setting the distortion limit to 15 to see if allowing longer distance reorderings with ccg supertag sequence models could further improve performance however it resulted in a decrease in performance to a bleu score of 238443 manual analysisthe bleu score improvement in table 1 does not explain how the supertag sequence models affect the translation processas suggested by callisonburch et al we perform a focussed manual analysis of the output to see what changes have occurredfrom the test set we randomly selected 100 sentences which required reordering of verbs the dutch sentences ended with a verb which had to be moved forward in the english translationwe record whether or not the verb was correctly translated and whether it was reordered to the correct position in the target sentencemodel translated reordered sw tw 81 36 sw twc 87 52 table 2analysis of correct translation and reordering of verbs for dutchenglish translation in table 2 we can see that the addition of the ccgsupertag sequence model improved both the transla tion of the verbs and their reorderinghowever theimprovement is much more pronounced for reorderingthe difference in the reordering 
results is signif icant at p 005 using the 2 significance testthisshows that the syntactic information in the ccg su pertags is used by the model to prefer better word order for the target sentencein figure 2 we can see two examples of dutchenglish translations that have improved with the ap plication of ccg supertag sequence modelsin the first example the verb heeftoccurs at the end of the source sentencethe baseline model does not manage to translate heeftthe model with the ccg supertag sequence model translates it correctly as hasand reorders it correctly 4 placesto the leftthe second example also shows the se quence model correctly translating the dutch verb at the end of the sentence nodigone can see that it is still not entirely grammaticalthe improvements in reordering shown here are reorderings over a relatively short distance two or three positionsthis is well within the 5gram orderof the ccg supertag sequence model and we there fore consider this to be local reordering44 order of the sequence modelthe ccg supertags describe the syntactic context of the word they are attached totherefore theyhave an influence that is greater in scope than sur face words or pos tagsincreasing the order ofthe ccg supertag sequence model should also increase the ability to perform longer distance reorder inghowever at some point the reliability of the predictions of the sequence models is impaired due to sparse countsmodel none 1gram 3gram 5gram 7gram sw twc 2418 2396 2419 2442 2432 sw twpc 2434 2386 2409 2443 2414 table 3blue scores for dutchenglish models which apply ccg supertag sequence models of varying orders in table 3 we can see that the optimal order for the ccg supertag sequence models is 545 language model vs supertagsthe language model makes a great contribution to the correct order of the words in the target sentencein this experiment we investigate whether by using astronger language model the contribution of the sequence model will no longer be relevantthe relative contribution of the language mode and different sequence models is investigated for different lan guage model ngram lengthsmodel none 1gram 3gram 5gram 7gram sw tw 2122 2397 2405 2413 sw twp 2187 2183 2411 2425 2406 sw twc 2175 2170 2442 2467 2460 sw twpc 2199 2207 2443 2448 2442 table 4bleu scores for dutchenglish models which use language models of increasing ngram lengthcolumn none does not apply any language modelmodel sw tw does not apply any sequence models and model sw twpc applies both pos tag and supertag sequence modelsin table 4 we can see that if no language model is present the system benefits slightly from 13 sourcehij kan toch niet beweren dat hij daar geen exacte informatie over heeft reference how can he say he does not have any precise information sw twhe cannot say that he is not an exact information about sw twc he cannot say that he has no precise information on this source wij moeten hun verwachtingen niet beschamen meer dan ooit hebben al die landen thans onze bijstand nodig reference we must not disappoint them in their expectations and now more than ever these countries need our help sw twwe must not fail to their expectations more than ever to have all these countries now our assistance necessary sw twc we must not fail to their expectations more than ever those countries now need our assistance figure 2examples where the ccg supertag sequence model improves dutchenglish translation having access to all the other sequence modelshowever the language model contribution is verystrong and in isolation 
contributes more to transla tion performance than any other sequence modeleven with a high order language model applyingthe ccg supertag sequence model still seems to i am prove performancethis means that even if we usea more powerful language model the structural in formation contained in the supertags continues to be beneficial46 lexicalised reordering vs supertagsin this experiment we investigate using a strongerreordering model to see how it compares to the con tribution that ccg supertag sequence models makemoses implements the lexicalised reordering model described by tillman which learns whetherphrases prefer monotone inverse or disjoint orienta tions with regard to adjacent phraseswe apply this reordering models to the following experimentsmodel none lexreordsw tw 2397 2472 sw twc 2442 2478table 5dutchenglish models with and without a lexi calised reordering modelin table 5 we can see that lexicalised reordering improves translation performance for both modelshowever the improvement that was seen us ing ccg supertags without lexicalised reordering almost disappears when using a stronger reorderingmodelthis suggests that ccg supertagscontribution is similar to that of a reordering modelthe lex icalised reordering model only learns the orientation of a phrase with relation to its adjacent phrase so its influence is very limited in rangeif it can replace ccg supertags it suggests that supertagsinfluence is also within a local range47 ccg supertags on sourcesequence models over supertags improve the performance of phrasebased machine translationhowever this is a limited way of leveraging the rich syn tactic information available in the ccg categorieswe explore the potential of letting supertags direct translation by including them as a factor on the sourcethis is similar to syntaxdirected translation originally proposed for compiling and also used in machine translation information about thesource wordssyntactic function and subcategori sation can directly influence the hypotheses beingsearched in decodingthese experiments were per formed on the german to english translation task in contrast to the dutch to english results given in previous experimentswe use a model which combines more specificdependencies on source words and source ccg su pertags with a more general model which only has dependancies on the source word see equation 4we explore two different ways of balancing the sta tistical evidence from these multiple sourcesthe first way to combine the general and specific sources of information is by considering features from bothmodels as part of one large loglinear modelhowever by including more and less informative features in one model we may transfer too much ex planatory power to the more specific featurestoovercome this problem smith et al demon strated that using ensembles of separately trainedmodels and combining them in a logarithmic opin ion pool leads to better parameter valuesthis approach was used as the second way in which 14 we combined our modelsan ensemble of loglinearmodels was combined using a multiplicative con stant which we train manually using held out datat m m1 mhm typically the two models would need to be normalised before being combined but here the multi plicative constant fulfils this role by balancing theirseparate contributionsthis is the first work suggesting the application of lops to decoding in ma chine translationin the future more sophisticated translation models and ensembles of models willneed methods such as lops in order to balance sta 
tistical evidence from multiple sourcesmodel bleu sw tw 2330 swc tw 1973 single 2329 lop 2346 table 6germanenglish ccg supertags are used as a factor on the sourcethe simple models are combined in two ways either as a single loglinear model or as a lop of loglinear models table 6 shows that the simple general model performs considerably better thanthe simple specific model where there are multi ple dependencies on both words and ccg supertags this is because there are words in the test sentence that have been seen before but not with the ccg supertagstatistical evidence from multiple sources must be combinedthe first wayto combine them is to join them in one single log linear model which is trained over many featuresthis makes finding good weights difficult as the influence of the general model is greater and its dif ficult for the more specific model to discover goodweightsthe second method for combining the in formation is to use the weights from the separately trained simple models and then combine them in a lopheld out data is used to set the multiplicative constant needed to balance the contribution of the two modelswe can see that this second approach is more successful and this suggests that it is importantto carefully consider the best ways of combining dif ferent sources of information when using ensembles of modelshowever the results of this experiment are not very conclusivethere is no uncertainty inthe source sentence and the value of modelling it us ing ccg supertags is still to be demonstratedthe factored translation model allows for the inclusion of valuable sources of information in many dif ferent wayswe have shown that the syntacticallyrich ccg supertags do improve the translation pro cess and we investigate the best way of including them in the factored modelusing ccg supertagsover the target shows the most improvement especially when using targeted manual evaluationhow ever this effect seems to be largely due to improvedlocal reorderingreordering improvements can per haps be more reliably made using better reordering models or larger more powerful language modelsa further consideration is that supertags will always be limited to the few languages for which there are treebankssyntactic information represents embedded structures which are naturally incorporated intogrammarbased modelsthe ability of a flat struc tured model to leverage this information seems to be limitedccg supertagsability to guide translation would be enhanced if the constraints encoded in the tags were to be enforced using combinatory operatorswe thank hieu hoang for assistance with moses ju lia hockenmaier for access to ccgbank lexicons in german and english and stephen clark and james curran for providing the supertaggerthis work was supported in part under the gale program of thedefense advanced research projects agency con tract nohr001106c0022 and in part under theeuromatrix project funded by the european com mission 15
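One way to picture the ensemble combination just described is as two separately trained feature functions pooled in log space. The sketch below is an illustrative reading of that setup rather than the authors' code: the floor constant C for unseen (word, supertag) events, the table layouts, and the exact placement of the multiplicative constant gamma are all assumptions.

import math

# Hypothetical pessimistic score returned by the specific model for events it
# never saw in training, emulating log2 of a very small probability.
C = math.log2(1e-7)

def h_general(general_table, src_words, tgt_words):
    """More general feature: log2 p(target words | source words).  The general
    table is assumed to cover every phrase pair proposed by the decoder."""
    return math.log2(general_table[(src_words, tgt_words)])

def h_specific(specific_table, src_words, src_tags, tgt_words):
    """More specific feature: log2 p(target words | source words, source supertags),
    backing off to the constant C when this exact event was unseen in training."""
    key = (src_words, src_tags, tgt_words)
    return math.log2(specific_table[key]) if key in specific_table else C

def pooled_score(general_table, specific_table, src_words, src_tags, tgt_words, gamma):
    """Logarithmic opinion pool of the two models: in log space the pooled score is a
    weighted sum, with gamma (set on held-out data) balancing the two contributions."""
    return (h_general(general_table, src_words, tgt_words)
            + gamma * h_specific(specific_table, src_words, src_tags, tgt_words))

The point of the design is that the specific model can contribute sharp preferences where it has evidence, without being able to veto phrase pairs it has simply never seen.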
W07-0702
CCG supertags in factored statistical machine translation. Combinatorial Categorial Grammar (CCG) supertags present phrase-based machine translation with an opportunity to access rich syntactic information at a word level. The challenge is incorporating this information into the translation process. Factored translation models allow the inclusion of supertags as a factor in the source or target language. We show that this results in an improvement in the quality of translation, and that the value of syntactic supertags in flat-structured phrase-based models is largely due to better local reorderings. We exploit factored phrase-based translation models to associate each word with a supertag, which contains most of the information needed to build a full parse.
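A sequence model over a target-side factor such as CCG supertags is simply an n-gram model whose vocabulary is the supertag set. The class below shows the shape of such a model under a deliberately naive maximum-likelihood estimate; the paper trains these models with SRILM using modified Kneser-Ney smoothing, so this sketch is illustrative only and the class name is hypothetical.

from collections import Counter
import math

class SupertagSequenceModel:
    def __init__(self, tagged_corpus, order=5):
        """tagged_corpus: an iterable of supertag sequences, one per target sentence."""
        self.order = order
        self.ngrams, self.contexts = Counter(), Counter()
        for tags in tagged_corpus:
            padded = ["<s>"] * (order - 1) + list(tags) + ["</s>"]
            for i in range(order - 1, len(padded)):
                gram = tuple(padded[i - order + 1:i + 1])
                self.ngrams[gram] += 1
                self.contexts[gram[:-1]] += 1

    def log_prob(self, tags):
        """Sum of log2 p(tag | history) over the sequence; -inf for unseen n-grams,
        because no smoothing is applied in this sketch."""
        padded = ["<s>"] * (self.order - 1) + list(tags) + ["</s>"]
        total = 0.0
        for i in range(self.order - 1, len(padded)):
            gram = tuple(padded[i - self.order + 1:i + 1])
            if self.ngrams[gram] == 0:
                return float("-inf")
            total += math.log2(self.ngrams[gram] / self.contexts[gram[:-1]])
        return total

Inside the decoder this score enters the log-linear model as one more weighted feature, so translation candidates whose supertag sequences look well formed are preferred.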
mixturemodel adaptation for smt we describe a mixturemodel approach to adapting a statistical machine translation system for new domains using weights that depend on text distances to mixture components we investigate a number of variants on this approach including crossdomain versus dynamic adaptation linear versus loglinear mixtures language and translation model adaptation different methods of assigning weights and granularity of the source unit being adapted to the best methods achieve gains of approximately one bleu percentage point over a stateofthe art nonadapted baseline system language varies significantly across different genres topics styles etcthis affects empirical models a model trained on a corpus of carrepair manuals for instance will not be well suited to an application in the field of tourismideally models should be trained on text that is representative of the area in which they will be used but such text is not always availablethis is especially the case for bilingual applications because parallel training corpora are relatively rare and tend to be drawn from specific domains such as parliamentary proceedingsin this paper we address the problem of adapting a statistical machine translation system by adjusting its parameters based on some information about a test domainwe assume two basic settingsin crossdomain adaptation a small sample of parallel indomain text is available and it is used to optimize for translating future texts drawn from the same domainin dynamic adaptation no domain information is available ahead of time and adaptation is based on the current source text under translationapproaches developed for the two settings can be complementary an indomain development corpus can be used to make broad adjustments which can then be fine tuned for individual source textsour method is based on the classical technique of mixture modeling this involves dividing the training corpus into different components training a model on each part then weighting each model appropriately for the current contextmixture modeling is a simple framework that encompasses many different variants as described belowit is naturally fairly low dimensional because as the number of submodels increases the amount of text available to train each and therefore its reliability decreasesthis makes it suitable for discriminative smt training which is still a challenge for large parameter sets techniques for assigning mixture weights depend on the settingin crossdomain adaptation knowledge of both source and target texts in the indomain sample can be used to optimize weights directlyin dynamic adaptation training poses a problem because no reference text is availableour solution is to construct a multidomain development sample for learning parameter settings that are intended to generalize to new domains we do not learn mixture weights directly with this method because there is little hope that these would be well suited to new domainsinstead we attempt to learn how weights should be set as a function of distanceto our knowledge this approach to dynamic adaptation for smt is novel and it is one of the main contributions of the papera second contribution is a fairly broad investigation of the large space of alternatives defined by the mixturemodeling framework using a simple genrebased corpus decompositionwe experimented with the following choices crossdomain versus dynamic adaptation linear versus loglinear mixtures language and translation model adaptation various text distance metrics different 
ways of converting distance metrics into weights and granularity of the source unit being adapted tothe remainder of the paper is structured follows section 2 briefly describes our phrasebased smt system section 3 describes mixturemodel adaptation section 4 gives experimental results section 5 summarizes previous work and section 6 concludesour baseline is a standard phrasebased smt system given a source sentence s this tries to find the target sentence t that is the most likely translation of s using the viterbi approximation where alignment a tk are target phrases such that t t1 tk sk are source phrases such that s gj1 sjk and 9k is the translation of the kth target phrase tkto model p we use a standard loglinear approach where each fi is a feature function and weights αi are set using ochs algorithm to maximize the systems bleu score on a development corpusthe features used in this study are the length of t a singleparameter distortion penalty on phrase reordering in a as described in phrase translation model probabilities and 4gram language model probabilities logp using kneserney smoothing as implemented in the srilm toolkitphrase translation model probabilities are features of the form log p ekk1 log pwe use two different estimates for the conditional probabilities p and p relative frequencies and lexical probabilities as described in in both cases the forward phrase probabilities p are not used as features but only as a filter on the set of possible translations for each source phrase s that matches some ngram in s only the 30 topranked translations t according to p are retainedto derive the joint counts c from which p are estimated we use the phrase induction algorithm described in with symmetrized word alignments generated using ibm model 2 our approach to mixturemodel adaptation can be summarized by the following general algorithm from several different domainsset mixture weights as a function of the distances from corpus components to the current source text4combine weighted component models into a single global model and use it to translate as described in the previous sectionwe now describe each aspect of this algorithm in more detailwe partition the corpus into different genres defined as being roughly identical to corpus sourcethis is the simplest way to exploit heterogeneous training material for adaptationan alternative which we have not explored would be to cluster the corpus automatically according to topicwe adapt both language and translation model features within the overall loglinear combination to train translation models on each corpus component we used a global ibm2 model for word alignment then extracted componentspecific relative frequencies for phrase pairslexical probabilities were also derived from the global ibm2 model and were not adaptedthe procedure for training componentspecific language models on the target halves of each corpus component is identical to the procedure for the global model described in section 2in addition to the component models we also used a large static global modelthe most commonlyused framework for mixture models is a linear one where p is either a language or translation model pc is a model trained on component c and λc is the corresponding weightan alternative suggested by the form of the global model is a loglinear combination where we write αc to emphasize that in this case the mixing parameters are global weights like the weights on the other features within the loglinear modelthis is in contrast to linear mixing where the 
combined model p receives a loglinear weight but the weights on the components do not participate in the global loglinear combinationone consequence is that it is more difficult to set linear weights using standard minimumerror training techniques which assume only a flat loglinear modelwe used four standard distance metrics to capture the relation between the current source or target text q and each corpus component1 all are monolingualthey are applied only to source text or only to target textthe tfidf metric commonly used in information retrieval is defined as cos where vr and vq are vectors derived from component c and document q each consisting of elements of the form plog pdoc where p is the relative frequency of word w within the component or document and pdoc is the proportion of components it appears inlatent semantic analysis is a technique for implicitly capturing the semantic properties of texts based on the use of singular value decomposition to produce a rankreduced approximation of an original matrix of word and document frequencieswe applied this technique to all documents in the training corpus reduced the rank to 100 then calculated the projections of the component and document vectors described in the previous paragraph into the reduced spaceperplexity is a standard way of evaluating the quality of a language model on a test textwe define a perplexitybased distance metric pc1q where pc is the probability assigned to q by an ngram language model trained on component c the final distance metric which we call them is based on expressing the probability of q as a wordlevel mixture model p q ec dcpc where q wl wq and pc is the ngram probability of w following word sequence h in component c it is straighforward to use the them algorithm to find the set of weights dc vc that maximizes the likelihood of qthe weight dc is defined as the distance to component c for all experiments described below we used a probability difference threshold of 0001 as the them convergence criterionour focus in this paper is on adaptation via mixture weightshowever we note that the usual loglinear parameter tuning described in section 2 can also be considered adaptation in the crossdomain setting because learned preferences for word penalty relative lmtm weighting etc will reflect the target domainthis is not the case for dynamic adaptation where in the absence of an indomain development corpus the only information we can hope to glean are the weights on adapted models compared to other features of the systemthe method used for adapting mixture weights depends on both the combining framework and the adaptive setting as described belowwhen using a loglinear combining framework as described in section 33 mixture weights are set in the same way as the other loglinear parameters when performing crossdomain adaptationloglinear mixture models were not used for dynamic adaptationfor both adaptive settings linear mixture weights were set as a function of the distance metrics described in section 34given a set of metrics d1 dm let dic be the distance from the current text to component c according to metric dia simple approach to weighting is to choose a single metric di and set the weights in to be proportional to the corresponding distances because different distance metrics may capture complementary information and because optimal weights might be a nonlinear function of distance we also experimented with a linear combination of metrics transformed using a sigmoid function where qi reflects the relative 
predictive power of di and the sigmoid parametes ai and bi can be set to selectively suppress contributions from components that are far awayhere we assume that qi absorbs a normalization constant so that the acs sum to 1in this approach there are three parameters per distance metric to learn qi ai and biin general these parameters are also specific to the particular model being adapted ie the lm or the tmto optimize these parameters we fixed global loglinear weights at values obtained with ochs algorithm using representative adapted models based on a single distance metric in then used the downhill simplex algorithm to maximize bleu score on the development corpusfor tractability we followed standard practice with this technique and considered only monotonic alignments when decoding the two approaches just described avoid conditioning ac explicitly on c this is necessary for dynamic adaptation since any genre preferences learned from the development corpus cannot be expected to generalizehowever it is not necessary for crossdomain adaptation where the genre of the development corpus is assumed to represent the test domaintherefore we also experimented with using downhill simplex optimization to directly learn the set of linear weights ac that yield maximum bleu score on the development corpusa final variant on setting linear mixture weights is a hybrid between crossdomain and dynamic adaptationin this approach both the global loglinear weights and if they are being used the mixture parameters qi ai bi are set to characterize the test domain as in crossdomain adaptationwhen translating however distances to the current source text are used in or instead of distances to the indomain development corpusthis obviously limits the metrics used to ones that depend only on source textall experiments were run on the nist mt evaluation 2006 chinese data settable 1 summarizes the corpora usedthe training corpus was divided into seven components according to genre in all cases these were identical to ldc corpora with the exception of the newswire component which was amalgamated from several smaller corporathe target genre for crossdomain adaptation was newswire for which highquality training material is availablethe crossdomain development set nist04nw is the newswire subset of the nist 2004 evaluation set and the dynamic adaptation development set nist04mix is a balanced mixedgenre subset of nist 2004the nist 2005 evaluation set was used for testing crossdomain adaptation and the nist 2006 evaluation set was used to test dynamic adaptationbecause different development corpora are used for crossdomain and dynamic adaptation we trained one static baseline model for each of these adaptation settings on the corresponding development setall results given in this section are bleu scores newswire sp speeches ed editorial ng newsgroup bn broadcast news and because broadcast conversationtable 2 shows a comparison between linear and loglinear mixing frameworks with uniform weights used in the linear mixtureboth types of mixture model are better than the baseline but the linear mixture is slightly better than the loglinear mixturethis is quite surprising because these results are on the development set the loglinear model tunes its component weights on this set whereas the linear model only adjusts global lm and tm weightswe speculated that this may have been due to nonsmooth component models and tried various smoothing schemes including kneserney phrase table smoothing similar to that described in and binary 
features to indicate phrasepair presence within different componentsnone helped however and we conclude that the problem is most likely that ochs algorithm is unable to find a good maximimum in this settingdue to this result all experiments we describe below involve linear mixtures onlytable 3 compares the performance of all distance metrics described in section 34 when used on their own as defined in the difference between them is fairly small but appears to be consistent across lm and tm adaptation and across source and target side matchingin general lm metrics seem to have a slight advantage over the vector space metrics with them being the best overallwe focus on this metric for most of the experiments that follow the nist04nw development settable 4 shows the performance of the parameterized weighting function described by with sourceside them and lsa metrics as inputsthis is compared to direct weight optimization as both these techniques use downhill simplex for parameter tuningunfortunately neither is able to beat the performance of the normalized sourceside them metric on its own in additional tests we verified that this also holds for the test corpuswe speculate that this disappointing result is due to compromises made in order to run downhill simplex efficiently including holding global weights fixed using only a single starting point and running with monotone decoding tion on the nist04nw development set formance of cross domain adaptation is slightly better for the indomain test set but worse than dynamic adaptation on the two mixeddomain setstable 5 shows results for crossdomain adaptation using the sourceside them metric for linear weightingboth lm and tm adaptation are effective with testset improvements of approximately 1 bleu point over the baseline for lm adaptation and somewhat less for tm adaptationperformance also improves on the nist06 outofdomain test set however combined lm and tm adaptation is not better than lm adaptation on its own indicating that the individual adapted models may be capturing the same informationtable 6 contains results for dynamic adaptation using the sourceside them metric for linear weightingin this setting tm adaptation is much less effective not significantly better than the baseline performance of combined lm and tm adaptation is also lowerhowever lm adaptation improves over the baseline by up to a bleu pointthe pertable 7 shows results for the hybrid approach described at the end of section 352 global weights are learned on nist04nw but linear weights are derived dynamically from the current test fileperformance drops slightly compared to pure crossdomain adaptation indicating that it may be important to have a good fit between global and mixture weightsthe results of the final experiment to determine the effects of source granularity on dynamic adaptation are shown in table 8sourceside them distances are applied to the whole test set to genres within the set and to each document individuallyglobal weights were tuned specifically for each of these conditionsthere appears to be little difference among these approaches although genrebased adaptation perhaps has a slight advantagemixture modeling is a standard technique in machine learning it has been widely used to adapt language models for speech recognition and other applications for instance using crossdomain topic mixtures dynamic topic mixtures hierachical mixtures and cache mixtures most previous work on adaptive smt focuses on the use of ir techniques to identify a relevant subset of 
the training corpus from which an adapted model can be learnedbyrne et al use cosine distance from the current source document to find relevant parallel texts for training an adapted translation model with background information for smoothing alignmentshildebrand et al describe a similar approach but apply it at the sentence level and use it for language model as well as translation model adaptationthey rely on a perplexity heuristic to determine an optimal size for the relevant subsetzhao et al apply a slightly different sentencelevel strategy to language model adaptation first generating an nbest list with a baseline system then finding similar sentences in a monolingual targetlanguage corpusthis approach has the advantage of not limiting lm adaptation to a parallel corpus but the disadvantage of requiring two translation passes ueffing describes a selftraining approach that also uses a twopass algorithma baseline system generates translations that after confidence filtering are used to construct a parallel corpus based on the test setstandard phraseextraction techniques are then applied to extract an adapted phrase table from the systems own outputfinally zhang et al cluster the parallel training corpus using an algorithm that heuristically minimizes the average entropy of sourceside and targetside language models over a fixed number of clusterseach source sentence is then decoded using the language model trained on the cluster that assigns highest likelihood to that sentencethe work we present here is complementary to both the ir approaches and ueffings method because it provides a way of exploiting a preestablished corpus divisionthis has the potential to allow sentences having little surface similarity to the current source text to contribute statistics that may be relevant to its translation for instance by raising the probability of rare but pertinent wordsour work can also be seen as extending all previous approaches in that it assigns weights to components depending on their degree of relevance rather than assuming a binary distinction between relevant and nonrelevant componentswe have investigated a number of approaches to mixturebased adaptation using genres for chinese to english translationthe most successful is to weight component models in proportion to maximumlikelihood weights for the current text given an ngram language model mixture trained on corpus componentsthis resulted in gains of around one bleu pointa more sophisticated approach that attempts to transform and combine multiple distance metrics did not yield positive results probably due to an unsucessful optmization procedureother conclusions are linear mixtures are more tractable than loglinear ones lmbased metrics are better than vsbased ones lm adaptation works well and adding an adapted tm yields no improvement crossdomain adaptation is optimal but dynamic adaptation is a good fallback strategy and source granularity at the genre level is better than the document or testset levelin future work we plan to improve the optimization procedure for parameterized weight functionswe will also look at bilingual metrics for crossdomain adaptation and investigate better combinations of crossdomain and dynamic adaptation
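The EM-based distance metric that performs best in these experiments can be sketched compactly. The version below makes a few simplifying assumptions that the paper does not: the component models are unigram distributions rather than n-gram language models, convergence is tested on the change in log-likelihood instead of the 0.001 probability-difference threshold, and unseen words receive an arbitrary small floor probability.

import math

def em_mixture_weights(component_models, q_tokens, iterations=50, tol=1e-3):
    """component_models: one dict per corpus component mapping word -> probability.
    Returns one weight per component, summing to 1, chosen to maximize the
    likelihood of the current text q under the word-level mixture."""
    k = len(component_models)
    weights = [1.0 / k] * k
    prev_ll = float("-inf")
    for _ in range(iterations):
        # E-step: expected fraction of tokens generated by each component.
        counts = [0.0] * k
        log_like = 0.0
        for w in q_tokens:
            probs = [weights[c] * component_models[c].get(w, 1e-9) for c in range(k)]
            z = sum(probs)
            log_like += math.log(z)
            for c in range(k):
                counts[c] += probs[c] / z
        # M-step: renormalize the expected counts into new weights.
        total = sum(counts)
        weights = [c / total for c in counts]
        if log_like - prev_ll < tol:
            break
        prev_ll = log_like
    return weights

The resulting weights are used directly as the linear mixture coefficients on the component language or translation models for the text being adapted to.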
W07-0717
Mixture-model adaptation for SMT. We describe a mixture-model approach to adapting a statistical machine translation system for new domains, using weights that depend on text distances to mixture components. We investigate a number of variants on this approach, including cross-domain versus dynamic adaptation, linear versus log-linear mixtures, language and translation model adaptation, different methods of assigning weights, and granularity of the source unit being adapted to. The best methods achieve gains of approximately one BLEU percentage point over a state-of-the-art non-adapted baseline system. We conclude that the best approach was to combine sub-models of the same type linearly while combining models of different types log-linearly. We interpolate the in-domain and general-domain phrase tables together, assigning either linear or log-linear weights to the entries in the tables before combining overlapping entries; this is now standard practice.
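The contrast between linear and log-linear combination mentioned in this summary can be illustrated with a short sketch; the probabilities and weights below are invented for illustration and are not taken from the paper.

    import math

    def combine_linear(p_in, p_out, w_in, w_out):
        # Weighted average of in-domain and out-of-domain estimates; weights sum to 1.
        return w_in * p_in + w_out * p_out

    def combine_loglinear(p_in, p_out, lambda_in, lambda_out):
        # Weighted product (a sum in log space), as used inside a log-linear decoder score;
        # typically left unnormalized.
        return math.exp(lambda_in * math.log(p_in) + lambda_out * math.log(p_out))

    # E.g. a phrase pair seen with probability 0.20 in-domain and 0.05 out-of-domain:
    # combine_linear(0.20, 0.05, 0.8, 0.2)    -> 0.17
    # combine_loglinear(0.20, 0.05, 0.8, 0.2) -> ~0.15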
evaluation of machine translation j schroeder ed ac uk abstract this paper evaluates the translation quality of machine translation systems for 8 language pairs translating french german spanish and czech to english and back we carried out an extensive human evaluation which allowed us not only to rank the different mt systems but also to perform higherlevel analysis of the evaluation process we measured timing and intraand interannotator agreement for three types of subjective evaluation we measured the correlation of automatic evaluation metrics with human judgments this metaevaluation reveals surprising facts about the most commonly used methodologies this paper presents the results for the shared translation task of the 2007 acl workshop on statistical machine translationthe goals of this paper are twofold first we evaluate the shared task entries in order to determine which systems produce translations with the highest qualitysecond we analyze the evaluation measures themselves in order to try to determine best practices when evaluating machine translation researchprevious acl workshops on machine translation were more limited in scope the 2005 workshop evaluated translation quality only in terms of bleu scorethe 2006 workshop additionally included a limited manual evaluation in the style of nist machine translation evaluation workshophere we apply eleven different automatic evaluation metrics and conduct three different types of manual evaluationbeyond examining the quality of translations produced by various systems we were interested in examining the following questions about evaluation methodologies how consistent are people when they judge translation qualityto what extent do they agree with other annotatorscan we improve human evaluationwhich automatic evaluation metrics correlate most strongly with human judgments of translation qualitythis paper is organized as follows interannotator agreement figures for the manual evaluation and correlation numbers for the automatic metrics2 shared task overview there are over 30 million words of training data per language from the europarl corpus and 1 million words from the news commentary corpusfigure 1 provides some statistics about the corpora used this yearthis years shared task changed in some aspects from last years similar to the iwslt international workshop on spoken language translation and the nist machine translation evaluation workshop we provide the shared task participants with a common set of training and test data for all language pairsthe major part of data comes from current and upcoming full releases of the europarl data set the data used in this years shared task was similar to the data used in last years shared taskthis years data included training and development sets for the news commentary data which was the surprise outofdomain test set last yearthe majority of the training data for the spanish french and german tasks was drawn from a new version of the europarl multilingual corpusadditional training data was taken from the news commentary corpusczech language resources were drawn from the news commentary dataadditional resources for czech came from the czeng parallel corpus overall to lower the barrier of entrance to the competition we provided a complete baseline mt system along with data resourcesto summarize we provided the performance of this baseline system is similar to the best submissions in last years shared taskthe test data was again drawn from a segment of the europarl corpus from the fourth quarter of 
2000 which is excluded from the training dataparticipants were also provided with three sets of parallel text to be used for system development and tuningin addition to the europarl test set we also collected editorials from the project syndicate website1 which are published in all the five languages of the shared taskwe aligned the texts at a sentence level across all five languages resulting in 2007 sentences per languagefor statistics on this test set refer to figure 1the news commentary test set differs from the europarl data in various waysthe text type are editorials instead of speech transcriptsthe domain is general politics economics and sciencehowever it is also mostly political content and opinionwe received submissions from 15 groups from 14 institutions as listed in table 1this is a slight increase over last years shared task where submissions were received from 14 groups from 11 institutionsof the 11 groups that participated in last years shared task 6 groups returned this yearthis year most of these groups follow a phrasebased statistical approach to machine translationhowever several groups submitted results from systems that followed a hybrid approachwhile building a machine translation system is a serious undertaking we hope to attract more newcomers to the field by keeping the barrier of entry as low as possiblethe creation of parallel corpora such as the europarl the czeng and the news commentary corpora should help in this direction by providing freely available language resources for building systemsthe creation of an open source baseline system should also go a long way towards achieving this goalfor more on the participating systems please refer to the respective system description in the proceedings of the workshopwe evaluated the shared task submissions using both manual evaluation and automatic metricswhile automatic measures are an invaluable tool for the daytoday development of machine translation systems they are an imperfect substitute for human assessment of translation qualitymanual evaluation is time consuming and expensive to perform so comprehensive comparisons of multiple systems are rarefor our manual evaluation we distributed the workload across a number of people including participants in the shared task interested volunteers and a small number of paid annotatorsmore than 100 people participated in the manual evaluation with 75 of those people putting in at least an hours worth of efforta total of 330 hours of labor was invested nearly doubling last years allvolunteer effort which yielded 180 hours of effortbeyond simply ranking the shared task submissions we had a number of scientific goals for the manual evaluationfirstly we wanted to collect data which could be used to assess how well automatic metrics correlate with human judgmentssecondly we wanted to examine different types of manual evaluation and assess which was the besta number of criteria could be adopted for choosing among different types of manual evaluation the ease with which people are able to perform the task their agreement with other annotators their reliability when asked to repeat judgments or the number of judgments which can be collected in a fixed time periodthere are a range of possibilities for how human evaluation of machine translation can be donefor instance it can be evaluated with reading comprehension tests or by assigning subjective scores to the translations of individual sentences we examined three different ways of manually evaluating machine translation quality the 
most widely used methodology when manually evaluating mt is to assign values from two five point scales representing fluency and adequacythese scales were developed for the annual nist machine translation evaluation workshop by the linguistics data consortium the five point scale for adequacy indicates how much of the meaning expressed in the reference translation is also expressed in a hypothesis translation the second five point scale indicates how fluent the translation iswhen translating into english the values correspond to separate scales for fluency and adequacy were developed under the assumption that a translation might be disfluent but contain all the information from the sourcehowever in principle it seems that people have a hard time separating these two aspects of translationthe high correlation between peoples fluency and adequacy scores indicate that the distinction might be falsefigure 2 in constituentbased evaluation the source sentence was parsed and automatically aligned with the reference translation and systems translations another problem with the scores is that there are no clear guidelines on how to assign values to translationsno instructions are given to evaluators in terms of how to quantify meaning or how many grammatical errors separates the different levels of fluencybecause of this many judges either develop their own rules of thumb or use the scales as relative rather than absolutethese are borne out in our analysis of interannotator agreement in section 6because fluency and adequacy were seemingly difficult things for judges to agree on and because many people from last years workshop seemed to be using them as a way of ranking translations we decided to try a separate evaluation where people were simply asked to rank translationsthe instructions for this task were rank each whole sentence translation from best to worst relative to the other choices these instructions were just as minimal as for fluency and adequacy but the task was considerably simplifiedrather than having to assign each translation a value along an arbitrary scale people simply had to compare different translations of a single sentence and rank themin addition to having judges rank the translations of whole sentences we also conducted a pilot study of a new type of evaluation methodology which we call constituentbased evaluationin our constituentbased evaluation we parsed the source language sentence selected constituents from the tree and had people judge the translations of those syntactic phrasesin order to draw judges attention to these regions we highlighted the selected source phrases and the corresponding phrases in the translationsthe corresponding phrases in the translations were located via automatic word alignmentsfigure 2 illustrates the constituent based evaluation when applied to a german source sentencethe german source sentence is parsed and various phrases are selected for evaluationword alignments are created between the source sentence and the reference translation and the source sentence and each of the system translations we parsed the test sentences for each of the languages aside from czechwe used cowan and collins s parser for spanish arun and keller s for french dubey s for german and bikel s for englishthe word alignments were created with giza applied to a parallel corpus containing 200000 sentence pairs of the training data plus sets of 4007 sentence pairs created by pairing the test sentences with the reference translations and the test sentences paired with 
each of the system translationsthe phrases in the translations were located using techniques from phrasebased statistical machine translation which extract phrase pairs from word alignments because the wordalignments were created automatically and because the phrase extraction is heuristic the phrases that were selected may not exactly correspond to the translations of the selected source phrasewe noted this in the instructions to judges rank each constituent translation from best to worst relative to the other choices grade only the highlighted part of each translationplease note that segments are selected automatically and they should be taken as an approximate guidethey might include extra words that are not in the actual alignment or miss words on either endthe criteria that we used to select which constituents were to be evaluated were the final criterion helped reduce the number of alignment errorswe collected judgments using a webbased toolshared task participants were each asked to judge 200 sets of sentencesthe sets consisted of 5 system outputs as shown in figure 3the judges were presented with batches of each type of evaluationwe presented them with five screens of adequacyfluency scores five screens of sentence rankings and ten screens of constituent rankingsthe order of the types of evaluation were randomizedin order to measure intraannotator agreement 10 of the items were repeated and evaluated twice by each judgein order to measure interannotator agreement 40 of the items were randomly drawn from a common pool that was shared across all judges were allowed to select whichever data set they wanted and to evaluate translations into whatever languages they were proficient inshared task participants were excluded from judging their own systemstable 2 gives a summary of the number of judgments that we collected for translations of individual sentencessince we had 14 translation tasks and four different types of scores there were 55 different conditions2 in total we collected over 81000 judgmentsdespite the large number of conditions we managed to collect more than 1000 judgments for most of themthis provides a rich source of data for analyzing the quality of translations produced by different systems the different types of human evaluation and the correlation of automatic metrics with human judgments3 2we did not perform a constituentbased evaluation for czech to english because we did not have a syntactic parser for czechwe considered adapting our method to use bojar s dependency parser for czech but did not have the timethe past two acl workshops on machine translation used bleu as the sole automatic measure of translation qualitybleu was used exclusively since it is the most widely used metric in the field and has been shown to correlate with human judgments of translation quality in many instances however recent work suggests that bleus correlation with human judgments may not be as strong as previously thought the results of last years workshop further suggested that bleu systematically underestimated the quality of rulebased machine translation systems we used the manual evaluation data as a means of testing the correlation of a range of automatic metrics in addition to bleuin total we used eleven different automatic evaluation measures to rank the shared task submissionsthey are against a referenceit flexibly matches words using stemming and wordnet synonymsits flexible matching was extended to french spanish german and czech for this workshop 4the gtm scores presented 
here are an fmeasure with a weight of 01 which counts recall at 10x the level of precisionthe exponent is set at 12 which puts a mild preference towards items with words in the correct orderthese parameters could be optimized empirically for better resultster calculates the number of edits required to change a hypothesis translation into a reference translationthe possible edits in ter include insertion deletion and substitution of single words and an edit which moves sequences of contiguous wordsthe scores produced by these are given in the tables at the end of the paper and described in section 5we measured the correlation of the automatic evaluation metrics with the different types of human judgments on 12 data conditions and report these in section 6the results of the human evaluation are given in tables 9 10 11 and 12each of those tables present four scores there was reasonably strong agreement between these four measures at which of the entries was the best in each data conditionthere was complete 5since different annotators can vary widely in how they assign fluency and adequacy scores we normalized these scores on a perjudge basis using the method suggested by blatz et al in chapter 5 page 97 agreement between them in 5 of the 14 conditions and agreement between at least three of them in 10 of the 14 casestable 3 gives a summary of how often different participants entries were ranked 1 by any of the four human evaluation measuressystrans entries were ranked the best most often followed by university of edinburgh university of catalonia and limsicnrsthe following systems were the best performing for the different language pairs systran was ranked the highest in germanenglish university of catalonia was ranked the highest in spanishenglish limsicnrs was ranked highest in frenchenglish and the university of maryland and a commercial system were the highest for agreement for the different types of manual evaluation czechenglishwhile we consider the human evaluation to be primary it is also interesting to see how the entries were ranked by the various automatic evaluation metricsthe complete set of results for the automatic evaluation are presented in tables 13 14 15 and 16an aggregate summary is provided in table 4the automatic evaluation metrics strongly favor the university of edinburgh which garners 41 of the topranked entries significantly the automatic metrics disprefer systran which was strongly favored in the human evaluationin addition to evaluating the translation quality of the shared task entries we also performed a metaevaluation of our evaluation methodologieswe measured pairwise agreement among annotators using the kappa coefficient which is widely used in computational linguistics for measuring agreement in category judgments it is defined as where p is the proportion of times that the annotators agree and p is the proportion of time that they would agree by chancewe define chance agreement for fluency and adequacy as 5 since they are based on five point scales and for ranking as s since there are three possible out comes when ranking the output of a pair of systems a b a b a b a b or a bfor intraannotator agreement we did similarly but gathered items that were annotated on multiple occasions by a single annotatortable 5 gives k values for interannotator agreement and table 6 gives k values for intraannoator agreementthese give an indication of how often different judges agree and how often single judges are consistent for repeated judgments respectivelythe 
interpretation of kappa varies but according to landis and koch 0 2 is slight 21 4 is fair 416 is moderate 618 is substantial and the rest almost perfectthe k values for fluency and adequacy should give us pause about using these metrics in the futurewhen we analyzed them as they are intended to bescores classifying the translations of sentences into different typesthe interannotator agreement was barely considered fair and the intraannotator agreement was only moderateeven when we reassessed fluency and adequacy as relative ranks the agreements increased only minimallythe agreement on the other two types of manual evaluation that we introduced were considerably betterthe both the sentence and constituent ranking had moderate interannotator agreement and substantial intraannotator agreementbecause the constituent ranking examined the translations of short phrases often times all systems produced the same translationssince these trivially increased agreement we also evaluated the inter and intraannotator agreement when those items were excludedthe agreement remained very high for constituentbased evaluationwe used the web interface to collect timing informationthe server recorded the time when a set of sentences was given to a judge and the time when the judge returned the sentenceswe divided the time that it took to do a set by the number of sentences in the setthe average amount of time that it took to assign fluency and adequacy to a single sentence was 26 seconds6 the average amount of time it took to rank a sentence in a set was 20 secondsthe average amount of time it took to rank a highlighted constituent was 11 secondsfigure 4 shows the distribution of times for these tasks6sets which took longer than 5 minutes were excluded from these calculations because there was a strong chance that annotators were interrupted while completing the taskthese timing figures are promising because they indicate that the tasks which the annotators were the most reliable on were also much quicker to complete than the ones that they were unreliable on this suggests that fluency and adequacy should be replaced with ranking tasks in future evaluation exercisesto measure the correlation of the automatic metrics with the human judgments of translation quality we used spearmans rank correlation coefficient p we opted for spearman rather than pearson because it makes fewer assumptions about the dataimportantly it can be applied to ordinal data spearmans rank correlation coefficient is equivalent to pearson correlation on ranksafter the raw scores that were assigned to systems by an automatic metric and by one of our manual evaluation techniques have been converted to ranks we can calculate p using the simplified equation where di is the difference between the rank for systemi and n is the number of systemsthe possible values of p range between 1 and 1 thus an automatic evaluation metric with a higher value for p is making predictions that are more similar to the human judgments than an automatic evaluation metric with a lower p table 17 reports p for the metrics which were used to evaluate translations into english7table 7 summarizes the results by averaging the correlation numbers by equally weighting each of the data conditionsthe table ranks the automatic evaluation metrics based on how well they correlated with human judgmentswhile these are based on a relatively few number of items and while we have not performed any tests to determine whether the differences in p are statistically significant the 
results are nevertheless interesting since three metrics have higher correlation than bleu tables 18 and 8 report p for the six metrics which were used to evaluate translations into the other languageshere we find that bleu and ter are the closest to human judgments but that overall the correlations are much lower than for translations into englishsimilar to last years workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from four european languages into english and vice versathis year we substantially increased the number of automatic evaluation metrics and were also able to nearly double the efforts of producing the human judgmentsthere were substantial differences in the results results of the human and automatic evaluationswe take the human judgments to be authoritative and used them to evaluate the automatic metricswe measured correlation using spearmans coefficient and found that three less frequently used metrics were stronger predictors of human judgments than bleuthey were semantic role overlap paraevalrecall and meteoralthough we do not claim that our observations are indisputably conclusive they again indicate that the choice of automatic metric can have a significant impact on comparing systemsunderstanding the exact causes of those differences still remains an important issue for future researchthis years evaluation also measured the agreement between human assessors by computing the kappa coefficientone striking observation is that interannotator agreement for fluency and adequacy can be called fair at beston the other hand comparing systems by ranking them manually resulted in much higher interannotator agreementthis work was supported in part by the euromatrix project funded by the european commission and in part by the gale program of the us defense advanced research projects agency contract nohr001106c0022we are grateful to jesus gimenez dan melamed maja popvic ding liu liang zhou and abhaya agarwal for scoring the entries with their automatic evaluation metricsthanks to brooke cowan for parsing the spanish test sentences to josh albrecht for his script for normalizing fluency and adequacy on a per judge basis and to dan melamed rebecca hwa alon lavie colin bannard and mirella lapata for their advice about statistical tests
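A small sketch of the simplified Spearman formula quoted above, rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), applied to per-system scores from an automatic metric and a human evaluation. The scores in the example are invented, and ties are ignored, as in the simplified formula.

    def to_ranks(scores):
        """Convert raw scores to ranks (1 = best); assumes no ties."""
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        ranks = [0] * len(scores)
        for rank, idx in enumerate(order, start=1):
            ranks[idx] = rank
        return ranks

    def spearman_rho(metric_scores, human_scores):
        """rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
        where d_i is the rank difference for system i and n is the number of systems."""
        r1, r2 = to_ranks(metric_scores), to_ranks(human_scores)
        n = len(metric_scores)
        d2 = sum((a - b) ** 2 for a, b in zip(r1, r2))
        return 1 - 6 * d2 / (n * (n ** 2 - 1))

    # Toy example with five systems (identical rankings, so rho = 1.0):
    # spearman_rho([0.31, 0.28, 0.33, 0.25, 0.30], [3.1, 2.9, 3.4, 2.2, 3.0])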
W07-0718
Evaluation of machine translation. This paper evaluates the translation quality of machine translation systems for 8 language pairs: translating French, German, Spanish, and Czech to English and back. We carried out an extensive human evaluation which allowed us not only to rank the different MT systems but also to perform higher-level analysis of the evaluation process. We measured timing and intra- and inter-annotator agreement for three types of subjective evaluation. We measured the correlation of automatic evaluation metrics with human judgments. This meta-evaluation reveals surprising facts about the most commonly used methodologies. We show that ranking sentences gives higher inter-annotator agreement than scoring adequacy and fluency.
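The kappa coefficient used in the agreement analysis above, K = (P(A) - P(E)) / (1 - P(E)), can be computed as sketched below. The chance-agreement values (1/5 for the five-point fluency and adequacy scales, 1/3 for the three possible outcomes when ranking a pair of systems) are an interpretation of the garbled fractions in the extracted text; the judgment data in the example is invented.

    def kappa(pairwise_judgments, p_chance):
        """K = (P(A) - P(E)) / (1 - P(E)).
        pairwise_judgments: list of (label_a, label_b) tuples, one per item judged by two annotators."""
        agree = sum(1 for a, b in pairwise_judgments if a == b)
        p_a = agree / len(pairwise_judgments)
        return (p_a - p_chance) / (1 - p_chance)

    # Five-point fluency/adequacy scores: chance agreement taken as 1/5.
    # Pairwise ranking outcomes (<, =, >): chance agreement taken as 1/3.
    # kappa([(1, 1), (2, 2), (2, 3), (1, 1)], p_chance=1/5)  -> (0.75 - 0.2) / 0.8 = 0.6875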
experiments in domain adaptation for statistical machine translation 2 test set performance of our systems and outputreference length ratio 43 training and decoding parameters we tried to improve performance by increasing some of the limits imposed on the training and decoding setup during training long sentences are removed from the training data to speed up the giza word alignment process traditionally we worked with a sentence length limit of 40 we found that increasing this limit to about 80 gave better results without causing undue problems with running the word alignment we also tried to increase beam sizes and the limit on the number of translation options per coverage span this has shown to be successful in our experiments with arabicenglish and chineseenglish systems surprisingly increasing the maximum stack size to 1000 and ttablelimit to 100 has barely any efon translation performance the changed only by less than 005 and often worsened 44 germanenglish system the germanenglish language pair is especially challenging due to the large differences in word order collins et al suggest a method to reorder the german input before translating using a set of manually crafted rules in our germanenglish submissions this is done both to the training data and the input to the machine translation system 5 conclusions our submission to the wmt 2007 shared task is a fairly straightforward use of the moses mt system using default parameters in a sense we submitted baseline performance of this system for all our systems on the test sets are displayed in table 2 compared to other submitted systems these are very good scores often the best or second highest scores for these tasks we made a special effort in two areas we explored domain adaptation methods for the news commentary test sets and we used reordering rules for the germanenglish language pair the open source moses mt system was originally developed at the university of edinburgh and received a major boost through a 2007 johns hopkins workshopit is now used at several academic institutions as the basic infrastructure for statistical machine translation researchthe moses system is an implementation of the phrasebased machine translation approach in this approach an input sentence is first split into text chunks which are then mapped onetoone to target phrases using a large phrase translation tablephrases may be reordered but typically a reordering limit is usedsee figure 1 for an illustrationphrase translation probabilities reordering probabilities and language model probabilities are combined to give each possible sentence translation a scorethe bestscoring translation is searched for by the decoding algorithm and outputted by the system as the best translationthe different system components hi are combined in a loglinear model to obtain the score for the translation e for an input sentence f the weights of the components ai are set by a discriminative training method on heldout development data the basic components used in our experiments are two phrase translation probabilities and p two word translation probabilities and p phrase count output word count language model distancebased reordering model and lexicalized reordering modelfor a more detailed description of this model please refer to since training data for statistical machine translation is typically collected opportunistically from wherever it is available the application domain for a machine translation system may be very different from the domain of the systems training 
datafor the wmt 2007 shared task the challenge was to use a large amount of outofdomain training data proceedings of the second workshop on statistical machine translation pages 224227 prague june 2007 c2007 association for computational linguistics combined with a much smaller amount of indomain training data to optimize translation performance on that particular domainwe carried out these experiments on frenchenglishthe first baseline system is trained only on the outofdomain europarl corpus which has the following corpus statistics the second baseline system is trained only on the indomain newscommentary corpusthis corpus is much smaller french english sentences 42884 words 1198041 1018503 to make use of all the training data the straightforward way is to simply concatenate the two training corpora and use the combined data for both translation model and language model trainingin our situation however the outofdomain training data overwhelms the indomain training data due to the sheer relative sizehence we do not expect the best performance from this simplistic approachone way to force a drift to the jargon of the target domain is the use of the language modelin our next setup we used only indomain data for training the language modelthis enables the system to use all the translation knowledge from the combined corpus but it gives a preference to word choices that are dominant in the indomain training dataessentially the goal of our subsequent approaches is to make use of all the training data but to include a preference for the indomain jargon by giving more weight to the indomain training datathis and the next approach explore methods to bias the language model while the final approach biases the translation modelwe trained two language models one for each the outofdomain and the indomain training datalanguage modeling software such as the srilm toolkit we used allows the interpolation of these language modelswhen interpolating we give the outofdomain language model a weight in respect to the indomain language modelsince we want to obtain a language model that gives us the best performance on the target domain we set this weight so that the perplexity of the development set from that target domain is optimizedwe searched for the optimal weight setting by simply testing a set of weights and focusing on the most promising range of weightsfigure 2 displays all the weights we explored during this process and the corresponding perplexity of the resulting language model on the development set the optimal weight can be picked out easily from this very smooth curvethe loglinear modeling approach of statistical machine translation enables a straightforward combination of the indomain and outofdomain language modelswe included them as two separate features whose weights are set with minimum error rate trainingthe relative weight for each model is set directly by optimizing translation performancefinally besides biasing the language model to a specific target domain we may also bias the translation modelhere we take advantage of a feature of the moses decoders factored translation model frameworkin factored translation models the representation of words is extended to a vector of factors the mapping of an input phrase to an output phrase is decomposed into several translation and generation steps each using a different translation or generation table respectivelysuch a decomposition is called a decoding patha more recent feature of the factored translation model framework is the possible use of 
multiple alternative decoding pathsthis alternate decoding path model was developed by birch et al for our purposes we use two decoding paths each consisting of only one translation stepone decoding path is the indomain translation table and the other decoding path is the outofdomain translation tableagain respective weights are set with minimum error rate trainingtable 1 shows results of our domain adaptation experiments on the development test set the results suggest that the language model is a useful tool for domain adaptationwhile training on all the data is essential for good performance using an indomain language model alone already gives fairly high performance the performance with the interpolated language model and two language models are similarall perform better than the three baseline approachesthe results also suggest that higher performance can be obtained by using two translation models through the moses decoders alternative decoding path frameworkwe saw our best results under this condition we participated in all categoriesgiven the four language pairs with two translation directions and two test domains this required us to build 14 translation systemswe had access to a fairly large computer cluster to carry out our experiments over the course of a few weekshowever speed issues with the decoder and load issues on the crowded cluster caused us to take a few shortcutsalso a bug crept in to our englishfrench experiments where we used the wrong detokenizer resulting drop of 23 points in bleuminimum error rate training is the most timeconsuming aspects of the training processdue to time constraints we did not carry out this step for all but the czech systems for the other systems we reused weight settings from our last years submissionone of the most crucial outcomes of tuning is a proper weight setting for output length which is especially important for the bleu scoresince the training corpus and tokenization changed our reused weights are not always optimal in this respectbut only in one case we felt compelled to manually adjust the weight for the word count feature since the original setup led to a outputreference length ratio of 088 on the development test setfor the europarl test sets we did not use any domain adaptation techniques but simply used either just the europarl training data or the combined data whatever gave the higher score on the development test set although scores differed by only about 0102 bleuin order to be able to reuse the old weights we were limited to domain adaptation methods that did not change the number of componentswe decided to use the interpolated language model method described in section 25for the different language pairs optimal interpolation weights differed we tried to improve performance by increasing some of the limits imposed on the training and decoding setupduring training long sentences are removed from the training data to speed up the giza word alignment processtraditionally we worked with a sentence length limit of 40we found that increasing this limit to about 80 gave better results without causing undue problems with running the word alignment we also tried to increase beam sizes and the limit on the number of translation options per coverage span this has shown to be successful in our experiments with arabicenglish and chineseenglish systemssurprisingly increasing the maximum stack size to 1000 and ttablelimit to 100 has barely any effect on translation performancethe bleu score changed only by less than 005 and often 
worsenedthe germanenglish language pair is especially challenging due to the large differences in word ordercollins et al suggest a method to reorder the german input before translating using a set of manually crafted rulesin our germanenglish submissions this is done both to the training data and the input to the machine translation systemour submission to the wmt 2007 shared task is a fairly straightforward use of the moses mt system using default parametersin a sense we submitted a baseline performance of this systembleu and nist scores for all our systems on the test sets are displayed in table 2compared to other submitted systems these are very good scores often the best or second highest scores for these taskswe made a special effort in two areas we explored domain adaptation methods for the newscommentary test sets and we used reordering rules for the germanenglish language pairthis work was supported in part under the gale program of the defense advanced research projects agency contract nohr001106c0022 and in part under the euromatrix project funded by the european commission
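The perplexity-based interpolation described above (pick the weight for the out-of-domain language model that minimizes development-set perplexity) can be sketched as a simple grid search. The prob(word, history) interface of the language model objects is an assumption made for illustration; the paper itself relies on the SRILM toolkit's interpolation support.

    import math

    def interpolated_perplexity(lm_out, lm_in, weight_out, dev_tokens):
        """Perplexity of the development text under weight_out * P_out + (1 - weight_out) * P_in.
        lm_out / lm_in are assumed to expose prob(word, history) -> probability."""
        log_sum, history = 0.0, []
        for w in dev_tokens:
            p = weight_out * lm_out.prob(w, history) + (1 - weight_out) * lm_in.prob(w, history)
            log_sum += math.log(p)
            history = (history + [w])[-2:]   # keep a short history; the order used here is illustrative
        return math.exp(-log_sum / len(dev_tokens))

    def tune_interpolation_weight(lm_out, lm_in, dev_tokens, candidates=None):
        """Grid search over candidate weights, mirroring the smooth perplexity curve described above."""
        candidates = candidates or [i / 20 for i in range(1, 20)]   # 0.05 ... 0.95
        return min(candidates, key=lambda w: interpolated_perplexity(lm_out, lm_in, w, dev_tokens))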
W07-0733
Experiments in domain adaptation for statistical machine translation. The special challenge of the WMT 2007 shared task was domain adaptation. We took this opportunity to experiment with various ways of adapting a statistical machine translation system to a special domain when most of the training data is from a different domain. This paper also gives a description of the submission of the University of Edinburgh to the shared task. Factored translation models are used for the integration of domain adaptation. We use two language models and two translation models, one in-domain and one out-of-domain, to adapt the system. We learn mixture weights for language models trained with in-domain and out-of-domain data, respectively, by minimizing the perplexity of a tuning set and interpolating the models.
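For completeness, a sketch of the log-linear model that combines the system components described in the paper, score(e|f) = sum_i lambda_i * h_i(e, f); the feature names, values, and weights below are illustrative placeholders, not the submitted system's settings.

    import math

    def loglinear_score(feature_values, weights):
        """score(e | f) = sum_i lambda_i * h_i(e, f); the decoder searches for the highest-scoring e."""
        return sum(weights[name] * value for name, value in feature_values.items())

    # An illustrative hypothesis with log-probability features and a word-count feature:
    features = {
        "tm_phrase_logprob": math.log(0.02),
        "lm_logprob": math.log(1e-6),
        "word_count": 7,
    }
    weights = {"tm_phrase_logprob": 1.0, "lm_logprob": 0.5, "word_count": -0.3}
    # Under these (made-up) weights, each extra output word lowers the score by 0.3, which is how
    # the word-count weight trades off output length against the other features, as discussed above.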
meteor an automatic metric for mt evaluation with high levels of correlation with human judgments an automatic metric for machine translation evaluation which has been demonstrated to have high levels of correlation with human judgments of translation quality significantly outperforming the more used it is one of several automatic metrics used in this years shared task within the acl wmt07 workshop this paper recaps the technical details underlying the metric and describes recent improvements in the metric the latest release includes improved metric parameters and extends the metric to support evaluation of mt output in spanish french and automatic metrics for mt evaluation have been receiving significant attention in recent yearsevaluating an mt system using such automatic metrics is much faster easier and cheaper compared to human evaluations which require trained bilingual evaluatorsautomatic metrics are useful for comparing the performance of different systems on a common translation task and can be applied on a frequent and ongoing basis during mt system developmentthe most commonly used mt evaluation metric in recent years has been ibms bleu metric bleu is fast and easy to run and it can be used as a target function in parameter optimization training procedures that are commonly used in stateoftheart statistical mt systems various researchers have noted however various weaknesses in the metricmost notably bleu does not produce very reliable sentencelevel scoresmeteor as well as several other proposed metrics such as gtm ter and cder aim to address some of these weaknessesmeteor initially proposed and released in 2004 was explicitly designed to improve correlation with human judgments of mt quality at the segment levelprevious publications on meteor have described the details underlying the metric and have extensively compared its performance with bleu and several other mt evaluation metricsthis paper recaps the technical details underlying meteor and describes recent improvements in the metricthe latest release extends meteor to support evaluation of mt output in spanish french and german in addition to englishfurthermore several parameters within the metric have been optimized on languagespecific training datawe present experimental results that demonstrate the improvements in correlations with human judgments that result from these parameter tuningsmeteor evaluates a translation by computing a score based on explicit wordtoword matches between the translation and a given reference translationif more than one reference translation is available the translation is scored against each reference independently and the best scoring pair is usedgiven a pair of strings to be compared meteor creates a word alignment between the two stringsan alignment is mapping between words such that every word in each string maps to at most one word in the other stringthis alignment is incrementally produced by a sequence of wordmapping modulesthe exact module maps two words if they are exactly the samethe porter stem module maps two words if they are the same after they are stemmed using the porter stemmerthe wn synonymy module maps two words if they are considered synonyms based on the fact that they both belong to the same synsetquot in wordnetthe wordmapping modules initially identify all proceedings of the second workshop on statistical machine translation pages 228231 prague june 2007 c2007 association for computational linguistics possible word matches between the pair of stringswe then identify 
the largest subset of these word mappings such that the resulting set constitutes an alignment as defined aboveif more than one maximal cardinality alignment is found meteor selects the alignment for which the word order in the two strings is most similar the order in which the modules are run reflects wordmatching preferencesthe default ordering is to first apply the exact mapping module followed by porter stemming and then wn synonymyonce a final alignment has been produced between the system translation and the reference translation the meteor score for this pairing is computed as followsbased on the number of mapped unigrams found between the two strings the total number of unigrams in the translation and the total number of unigrams in the reference we calculate unigram precision p mt and unigram recall are mrwe then compute a parameterized harmonic mean of p and are precision recall and fmean are based on singleword matchesto take into account the extent to which the matched unigrams in the two strings are in the same word order meteor computes a penalty for a given alignment as followsfirst the sequence of matched unigrams between the two strings is divided into the fewest possible number of chunks such that the matched unigrams in each chunk are adjacent and in identical word orderthe number of chunks and the number of matches is then used to calculate a fragmentation fraction frag chmthe penalty is then computed as the value of y determines the maximum penalty the value of 0 determines the functional relation between fragmentation and the penaltyfinally the meteor score for the alignment between the two strings is calculated as score fmean in all previous versions of meteor the values of the three parameters mentioned above were set to be a 09 0 30 and y 05 based on experimentation performed in early 2004in the latest release we tuned these parameters to optimize correlation with human judgments based on more extensive experimentation as reported in section 4we have recently expanded the implementation of meteor to support evaluation of translations in spanish french and german in addition to englishtwo main languagespecific issues required adaptation within the metric languagespecific wordmatching modules and languagespecific parameter tuningthe wordmatching component within the english version of meteor uses stemming and synonymy modules in constructing a wordtoword alignment between translation and referencethe resources used for stemming and synonymy detection for english are the porter stemmer and english wordnet in order to construct instances of meteor for spanish french and german we created new languagespecific stemming moduleswe use the freely available perl implementation packages for porter stemmers for the three languages unfortunately we have so far been unable to obtain freely available wordnet resources for these three languagesmeteor versions for spanish french and german therefore currently include only exact and stemming matching moduleswe are investigating the possibility of developing new synonymy modules for the various languages based on alternative methods which could then be used in place of wordnetthe second main languagespecific issue which required adaptation is the tuning of the three parameters within meteor described in section 4the original version of meteor has instantiated values for three parameters in the metric one for controlling the relative weight of precision and recall in computing the fmean score one governing the shape of the penalty as 
a function of fragmentation and one for the relative weight assigned to the fragmentation penalty in all versions of meteor to date these parameters were instantiated with the values a 09 0 30 and y 05 based on early data experimentationwe recently conducted a more thorough investigation aimed at tuning these parameters based on several available data sets with the goal of finding parameter settings that maximize correlation with human judgmentshuman judgments come in the form of adequacy and fluencyquot quantitative scoresin our experiments we looked at optimizing parameters for each of these human judgment types separately as well as optimizing parameters for the sum of adequacy and fluencyparameter adaptation is also an issue in the newly created meteor instances for other languageswe suspected that parameters that were optimized to maximize correlation with human judgments for english would not necessarily be optimal for other languagesfor english we used the nist 2003 arabictoenglish mt evaluation data for training and the nist 2004 arabictoenglish evaluation data for testingfor spanish german and french we used the evaluation data provided by the shared task at last years wmt workshopsizes of various corpora are shown in table 1some but not all of these data sets have multiple human judgments per translation hypothesisto partially address human bias issues we normalize the human judgments which transforms the raw judgment scores so that they have similar distributionswe use the normalization method described in multiple judgments are combined into a single number by taking their averagewe performed a hill climbing search to find the parameters that achieve maximum correlation with human judgments on the training setwe use pearsons correlation coefficient as our measure of correlationwe followed a leave one out training procedure in order to avoid overfittingwhen n systems were available for a particular language we train the parameters n times leaving one system out in each training and pooling the segments from all other systemsthe final parameter values are calculated as the mean of the n sets of trained parameters that were obtainedwhen evaluating a set of parameters on test data we compute segmentlevel correlation with human judgments for each of the systems in the test set and then report the mean over all systemsthe optimal parameter values found are somewhat different than our previous metric parameters the new parameters result in moderate but noticeable improvements in correlation with human judgments on both training and testing datatests for statistical significance using bootstrap sampling indicate that the differences in correlation levels are all significant at the 95 levelanother interesting observation is that precision receives slightly more weight when optimizing correlation with fluency judgments recall however is still given more weight than precisionanother interesting observation is that the value of y is higher for fluency optimizationsince the fragmentation penalty reflects wordordering which is closely related to fluency these results are consistent with our expectationswhen optimizing correlation with the sum of adequacy and fluency optimal values fall in between the values found for adequacy and fluencysimilar to english we trained parameters for spanish french and german on the available wmt06 training datawe optimized for maximum correlation with human judgments of adequacy fluency and for the sum of the tworesulting parameters are shown in table 432for 
all three languages the parameters that were found to be optimal were quite different than those that were found for english and using the languagespecific optimal parameters results in significant gains in pearson correlation levels with human judgments on the training data note that the training sets used for these optimizations are comparatively very small and that we currently do not have unseen test data to evaluate the parameters for these three languagesfurther validation will need to be performed once additional data becomes availablein this paper we described newly developed languagespecific instances of the meteor metric and the process of optimizing metric parameters for different human measures of translation quality and for different languagesour evaluations demonstrate that parameter tuning improves correlation with human judgmentsthe stability of the optimized parameters on different data sets remains to be investigated for languages other than englishwe are currently exploring broadening the set of features used in meteor to include syntaxbased features and alternative notions of synonymythe latest release of meteor is freely available on our website at httpwwwcscmuedualaviemeteorthe work reported in this paper was supported by nsf grant iis0534932
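A sketch of the score computation described in the paper: unigram precision and recall over the aligned words, the parameterized harmonic mean F = P*R / (alpha*P + (1-alpha)*R), the fragmentation penalty gamma * (chunks/matches)^beta, and the final score F * (1 - penalty). The exact formulas are reconstructed here because the equations are garbled in the extracted text, so treat them as this sketch's assumption; the alignment step itself (exact, stem, and synonym matching) is not reimplemented.

    def meteor_like_score(matches, chunks, hyp_len, ref_len, alpha=0.9, beta=3.0, gamma=0.5):
        """Score one hypothesis against one reference, given a word alignment.
        matches: number of aligned unigrams; chunks: number of contiguous, order-preserving match groups."""
        if matches == 0:
            return 0.0
        precision = matches / hyp_len
        recall = matches / ref_len
        fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)
        frag = chunks / matches
        penalty = gamma * (frag ** beta)
        return fmean * (1 - penalty)

    # E.g. 6 matched words falling into 2 chunks, hypothesis of 8 words, reference of 7 words:
    # precision = 0.75, recall ~ 0.857, penalty = 0.5 * (2/6)**3 ~ 0.019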
W07-0734
METEOR: an automatic metric for MT evaluation with high levels of correlation with human judgments. METEOR is an automatic metric for machine translation evaluation which has been demonstrated to have high levels of correlation with human judgments of translation quality, significantly outperforming the more commonly used BLEU metric. It is one of several automatic metrics used in this year's shared task within the ACL WMT07 workshop. This paper recaps the technical details underlying the metric and describes recent improvements in the metric. The latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French, and German, in addition to English. In an MT evaluation setting, sense clusters have been integrated into an MT evaluation metric and brought about an increase of the metric's correlation with human judgments of translation quality in different languages.
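A rough sketch of the parameter tuning described in the paper: hill-climb over (alpha, beta, gamma) to maximize segment-level Pearson correlation between metric scores and human judgments on a training set. The neighborhood step, iteration count, and data are invented, and a faithful reimplementation would add the leave-one-out pooling over systems and the per-judge normalization of the human scores.

    import math
    import random

    def pearson_r(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        vy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (vx * vy) if vx > 0 and vy > 0 else 0.0

    def hill_climb(score_fn, segments, human, start=(0.9, 3.0, 0.5), step=0.05, iters=500):
        """Greedy hill climbing over (alpha, beta, gamma);
        score_fn(params, segment) returns the metric score for one segment."""
        best = list(start)
        best_r = pearson_r([score_fn(best, s) for s in segments], human)
        for _ in range(iters):
            cand = [max(0.0, p + random.choice([-step, 0.0, step])) for p in best]
            r = pearson_r([score_fn(cand, s) for s in segments], human)
            if r > best_r:
                best, best_r = cand, r
        return best, best_r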
the third pascal recognizing textual entailment challenge we would like to thank the people and organizations that made these sources available for the challenge in addition we thank idan szpektor and roy bar haim from barilan university for their assistance and advice and valentina bruseghini celct for managing the rte3 we would also like to acknowledge the people and organizations involved in creating and annotating the data pamela forner errol hayman cameron fordyce from celct and courtenay hendricks adam savel and annika hamalainen this work was supported in part by the ist programme of the european community under the network of ist2002 506778 we wish to thank the managers of the pascal challenges program michele sebag and florence dalchebuc for their efforts and support which made this challenge possible we also thank david askey who helped manage the rte 3 website the goal of the rte challenges has been to create a benchmark task dedicated to textual entailment recognizing that the meaning of one text is entailed ie can be inferred by another1in the recent years this task has raised great interest since applied semantic inference concerns many practical natural language processing applications such as question answering information extraction summarization machine translation and paraphrasing and certain types of queries in information retrieval more specifically the rte challenges have aimed to focus research and evaluation on this common underlying semantic inference task and separate it from other problems that different nlp applications need to handlefor example in addition to textual entailment qa systems need to handle issues such as answer retrieval and question type recognitionby separating out the general problem of textual entailment from these taskspecific problems progress on semantic inference for many application areas can be promotedhopefully research on textual entailment will finally lead to the development of entailment engines which can be used as a standard module in many applications in the following sections a detailed description of rte3 is presentedafter a quick review of the previous challenges section 2 describes the preparation of the datasetin section 3 the evaluation process and the results are presented together with an analysis of the performance of the participating systemsthe first rte challenge2 aimed to provide the nlp community with a new benchmark to test progress in recognizing textual entailment and to compare the achievements of different groupsthis goal proved to be of great interest and the community response encouraged the gradual expansion of the scope of the original taskthe second rte challenge3 built on the success of the first with 23 groups from around the world submitting the results of their systemsrepresentatives of participating groups presented their work at the pascal challenges workshop in april 2006 in venice italythe event was successful and the number of participants and their contributions to the discussion demonstrated that textual entailment is a quickly growing field of nlp researchin addition the workshops spawned an impressive number of publications in major conferences with more work in progressanother encouraging sign of the growing interest in the rte challenge was represented by the increase in the number of downloads of the challenge datasets with about 150 registered downloads for the rte2 development setrte3 followed the same basic structure of the previous campaigns in order to facilitate the participation of 
newcomers and to allow quotveteransquot to assess the improvements of their systems in a comparable test exercisenevertheless some innovations were introduced on the one hand to make the challenge more stimulating and on the other to encourage collaboration between system developersin particular a limited number of longer texts ie up to a paragraph in length were incorporated in order to move toward more comprehensive scenarios which incorporate the need for discourse analysishowever the majority of examples remained similar to those in the previous challenges providing pairs with relatively short textsanother innovation was represented by a resource pool4 where contributors had the possibility to share the resources they usedin fact one of the key conclusions at the second rte challenge workshop was that entailment modeling requires vast knowledge resources that correspond to different types of entailment reasoningmoreover entailment systems also utilize general nlp tools such as pos taggers parsers and namedentity recognizers sometimes posing specialized requirements to such toolsin response to these demands the rte resource pool was built which may serve as a portal and forum for publicizing and tracking resources and reporting on their usein addition an optional pilot task called quotextending the evaluation of inferences from textsquot was set up by the us national institute of standards and technology in order to explore two other subtasks closely related to textual entailment differentiating unknown entailments from identified contradictions and providing justifications for system decisionsin the first subtask the idea was to drive systems to make more precise informational distinctions taking a threeway decision between quotyesquot quotnoquot and quotunknown so that a hypothesis being unknown on the basis of a text would be distinguished from a hypothesis being shown falsecontradicted by a textas for the other subtask the goal for providing justifications for decisions was to explore how eventual users of tools incorporating entailment can be made to understand how decisions were reached by a system as users are unlikely to trust a system that gives no explanation for its decisionsthe pilot task exploited the existing rte3 challenge infrastructure and evaluation process by using the same test set while utilizing human assessments for the new subtasks jobs but opponents say it is vicious and endangers the species also threatened by global warming ir the italian parliament may approve a draft law allow italian royal fam no ing descendants of the exiled royal family to return ily returns home homethe family was banished after the second world war because of the king collusion with the fascist regime but moves were introduced this year to allow their returnqa aeschylus is often called the father of greek tragedy quotthe persiansquot yes he wrote the earliest complete plays which survive from was written by ancient greecehe is known to have written more than aeschylus90 plays though only seven survivethe most famous of these are the trilogy known as orestiaalso wellknown are the persians and prometheus boundsum a pentagon committee and the congressionally char bush will meet no tered iraq study group have been preparing reports for the presidents of bush and iran has asked the presidents of iraq and iraq and syria in syria to meet in tehrantehranthe textual entailment recognition task required the participating systems to decide given two text snippets t and h whether t entails h 
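A minimal sketch of how the text-hypothesis pairs and judgments illustrated above might be represented and scored for the two-way task; the field names and the in-code example are assumptions made for illustration, not the official RTE-3 data format.

    from dataclasses import dataclass

    @dataclass
    class RTEPair:
        pair_id: int
        task: str          # one of "IE", "IR", "QA", "SUM"
        text: str
        hypothesis: str
        gold: str          # "YES" / "NO" (the pilot task adds "UNKNOWN")

    def accuracy(pairs, predictions):
        """Fraction of pairs whose predicted entailment label matches the gold annotation."""
        correct = sum(1 for p in pairs if predictions[p.pair_id] == p.gold)
        return correct / len(pairs)

    # E.g. the QA example above ("The Persians" was written by Aeschylus) could be stored as
    # RTEPair(1, "QA", "Aeschylus is often called the father of Greek tragedy ...",
    #         '"The Persians" was written by Aeschylus.', "YES")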
textual entailment is defined as a directional relation between two text fragments called text and hypothesis so that a human being with common understanding of language and common background knowledge can infer that h is most likely true on the basis of the content of t as in the previous challenges the rte3 dataset consisted of 1600 texthypothesis pairs equally divided into a development set and a test setwhile the length of the hypotheses was the same as in the past datasets a certain number of texts were longer than in previous datasets up to a paragraphthe longer texts were marked as l after being selected automatically when exceeding 270 bytesin the test set they were about 17 of the totalas in rte2 four applications namely ie ir qa and sum were considered as settings or contexts for the pairs generation 200 pairs were selected for each application in each datasetalthough the datasets were supposed to be perfectly balanced the number of negative examples were slightly higher in both development and test sets positive entailment examples where t entailed h were annotated yes the negative ones where entailment did not hold noeach pair was annotated with its related task and entailment judgment table 1 shows some examples taken from the development setthe examples in the dataset were based mostly on outputs of webbased systemsin order to avoid copyright problems input data was limited to either what had already been publicly released by official competitions or else was drawn from freely available sources such as wikinews and wikipediain choosing the pairs the following judgment criteria and guidelines were considered as entailment is a directional relation the hypothesis must be entailed by the given text but the text need not be entailed by the hypothesis the hypothesis must be fully entailed by the textjudgment must be no if the hypothesis includes parts that cannot be inferred from the text cases in which inference is very probable were judged as yes common world knowledge was assumed eg the capital of a country is situated in that country the prime minister of a state is also a citizen of that state and so onas in rte2 human annotators generated th pairs within 4 application settingsthe ie task was inspired by the information extraction application where texts and structured templates were replaced by th pairsas in the 2006 campaign the pairs were generated using four different approaches the common aim of all these processes was to simulate the need of ie systems to recognize that the given text indeed entails the semantic relation that is expected to hold between the candidate template slot fillersin the ir application setting the hypotheses were propositional ir queries which specify some statement egrobots are used to find avalanche victimsthe hypotheses were adapted and simplified from standard ir evaluation datasets texts that did or did not entail the hypotheses were selected from documents retrieved by different search engines for each hypothesisin this application setting it was assumed that relevant documents should entail the given propositional hypothesisfor the qa task annotators used questions taken from the datasets of official qa competitions such as trec qa and qaclef datasets and the corresponding answers extracted from the web by actual qa systemsthen they transformed the questionanswer pairs into th pairs as follows an answer term of the expected answer type was picked from the answer passage either a correct or an incorrect one the question was turned into an 
affirmative sentence plugging in the answer term th pairs were generate using the affirmative sentences as hypotheses and the original answer passages as texts for example given the question how high is mount everest and a text the above mentioned expedition team comprising of 10 members was permitted to climb 8848m high mteverest from normal route for the period of 75 days from 15 april 2007 under the leadership of mr wolf herbert of austria the annotator extracting the piece of information 8848m from the text would turn the question into an the affirmative sentence mount everest is 8848m high generating a positive entailment pairthis process simulated the need of a qa system to verify that the retrieved passage text actually entailed the provided answerin the sum setting the entailment pairs were generated using two proceduresin the first one ts and hs were sentences taken from a news document cluster a collection of news articles that describe the same news itemannotators were given the output of multidocument summarization systems including the document clusters and the summary generated for each clusterthen they picked sentence pairs with high lexical overlap preferably where at least one of the sentences was taken from the summary for positive examples the hypothesis was simplified by removing sentence parts until it was fully entailed by t negative examples were simplified in a similar mannerin alternative pyramids produced for the experimental evaluation mehod in duc 2005 were exploitedin this new evaluation method humans select subsentential content units in several manually produced summaries on a subject and collocate them in a pyramid which has at the top the scus with the higher frequency ie those which are present in most summarieseach scu is identified by a label a sentence in natural language which expresses the contentafterwards the annotators individuate the scus present in summaries generated automatically and link them to the ones present in the pyramid in order to assign each peer a weightin this way the scus in the automatic summaries linked to the scus in the higher tiers of the pyramid are assigned a heavier weight than those at the bottomfor the sum setting the rte3 annotators selected relevant passages from the peers and used them as ts meanwhile the labels of the corresponding scus were used as hssmall adjustments were allowed whenever the texts were not grammatically acceptablethis process simulated the need of a summarization system to identify information redundancy which should be avoided in the summaryeach pair of the dataset was judged by three annotatorsas in previous challenges pairs on which the annotators disagreed were filteredouton the test set the average agreement between each pair of annotators who shared at least 100 examples was 878 with an average kappa level of 075 regarded as substantial agreement according to landis and koch 192 of the pairs in the dataset were removed from the test set due to disagreementthe disagreement was generally due to the fact that the h was more specific than the t for example because it contained more information or made an absolute assertion where t proposed only a personal opinionin addition 94 of the remaining pairs were discarded as they seemed controversial too difficult or too similar when compared to other pairsas far as the texts extracted from the web are concerned spelling and punctuation errors were sometimes fixed by the annotators but no major change was allowed so that the language could be 
grammatically and stylistically imperfectthe hypotheses were finally doublechecked by a native english speakerthe evaluation of all runs submitted in rte3 was automaticthe judgments returned by the system were compared to the gold standard compiled by the human assessorsthe main evaluation measure was accuracy ie the percentage of matching judgmentsfor systems that provided a confidenceranked list of the pairs in addition to the yesno judgment an average precision measure was also computedthis measure evaluates the ability of systems to rank all the th pairs in the test set according to their entailment confidence average precision is computed as the average of the system precision values at all points in the ranked list in which recall increases that is at all points in the ranked list for which the gold standard annotation is yes or more formally where n is the number of the pairs in the test set r is the total number of positive pairs in the test set e is 1 if the ith pair is positive and 0 otherwise and i ranges over the pairs ordered by their rankingin other words the more the system was confident that t entails h the higher was the ranking of the paira perfect ranking would have placed all the positive pairs before all the negative ones yielding an average precision value of 1twentysix teams participated in the third challenge three more than in previous yeartable 2 presents the list of the results of each submitted runs and the components used by the systemsoverall we noticed a move toward deep approaches with a general consolidation of approaches based on the syntactic structure of text and hypothesisthere is an evident increase of systems using some form of logical inferences however these approaches with few notably exceptions do not seem to be consolidated enough as several systems show results not still at the state of art for many systems an open issue is the availability and integration of different and complex semantic resourcesa more extensive and fine grained use of specific semantic phenomena is also emergingas an example tatu and moldovan carry on a sophisticated analysis of named entities in particular person names distinguishing first names from last namessome form of relation extraction either through manually built patterns or through the use of an information extraction system have been introduced this year even if still on a small scale on the other hand rte3 confirmed that both machine learning using lexicalsyntactic features and transformationbased approaches on dependency representations are well consolidated techniques to address textual entailmentthe extension of transformationbased approaches toward probabilistic settings is an interesting direction investigated by some systems on the side of light approaches to textual entailment malakasiotis and androutpoulos provide a useful baseline for the task using only pos tagging and then applying stringbased measures to estimate the similarity between text and hypothesisas far as resources are concerned lexical databases are still widely usedextended wordnet is also a common resource and the extended wordnet knowledge base has been successfully used in verboriented resources are also largely present in several systems including framenet verbnet and propbank it seems that the use of the web as a resource is more limited when compared to the previous rte workshophowever as in rte2 the use of large semantic resources is still a crucial factor affecting the performance of systems finally an interesting aspect is that 
stimulated by the percentage of longer texts included this year a number of participating systems addressed anaphora resolution the accuracy achieved by the participating systems ranges from 49 to 80 while most of the systems obtained a score in between 59 and 66one submission hickl and bensley achieved 80 accuracy scoring 8 higher than the second system and obtaining the best absolute result achieved in the three rte challengesas far as the pertask results are concerned the trend registered in rte2 was confirmed in that there was a marked difference in the performances obtained in different task settingsin fact the average accuracy achieved in the qa setting was 20 points higher than that achieved in the ie setting the average accuracy in the ir and sum settings was 066 and 058 respectivelyin rte2 the best results were achieved in sum while the lower score was always recorded in ieas already pointed out by barhaim these differences should be further investigated as they could lead to a sensible improvement of the performanceas for the long pairs which represented a new element of this years challenge no substantial difference was noted in the systems performances the average accuracy over the long pairs was 5872 compared to 6193 over the short onesat its third round the recognizing textual entailment task has reached a noticeable level of maturity as the very high interest in the nlp community and the continuously increasing number of participants in the challenges demonstratethe relevance of textual entailment recognition to different applications such as the ave5 track at qa at clef6 has also been acknowledgedfurthermore the debates and the numerous publications about the textual entailment have contributed to the better understanding the task and its natureto keep a good balance between the consolidated main task and the need for moving forward longer texts were introduced in the dataset in order to make the task more challenging and a pilot task was proposedthe third rte challenge have also confirmed that the methodology for the creation of the datasets developed in the first two campaigns is robustoverall the transition of the challenge coordination from barilan which organized the first two challenges to celct was successful though some problems were encountered especially in the preparation of the data setthe systems which took part in rte3 showed that the technology applied to entailment recognition has made significant progress confirmed by the results which were generally better than last yearin particular visible progress in defining several new principled scenarios for rte was represented such as hickls commitmentbased approach bar haims proof system harmelings probabilistic model and standfords use of natural logicif on the one hand the success that rte has had so far is very encouraging on the other it incites to overcome certain current limitations and to set realistic and at the same time stimulating goals for the futurefirst at all theoretical refinements both of the task and the models applied to it need to be developedin particular more efforts are required to improve knowledge acquisition as little progress has been made on this front so faralso the data set generation and the evaluation methodology need to be refined and extendeda major problem in the current setting of the data collection is that the distribution of the examples is arbitrary to a large extent being determined by manual selectiontherefore new evaluation methodologies which can reflect realistic 
distributions should be investigated as well as the possibility of evaluating textual entailment recognition within additional concrete application scenarios following the spirit of the qa answer validation exercisethe following sources were used in the preparation of the data httpwww1cscolumbiaeduaniduc2005 we would like to thank the people and organizations that made these sources available for the challengein addition we thank idan szpektor and roy bar haim from barilan university for their assistance and advice and valentina bruseghini from celct for managing the rte3 websitewe would also like to acknowledge the people and organizations involved in creating and annotating the data pamela forner errol hayman cameron fordyce from celct and courtenay hendricks adam savel and annika hamalainen this work was supported in part by the ist programme of the european community under the pascal network of excellence ist2002506778we wish to thank the managers of the pascal challenges program michele sebag and florence dalchebuc for their efforts and support which made this challenge possiblewe also thank david askey who helped manage the rte 3 website
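The evaluation section above defines two measures: accuracy over the YES/NO judgments, and average precision over a confidence-ranked list, i.e. the average of the system's precision values at every rank where the gold annotation is YES. The sketch below is one reading of that definition, not the official scorer.

```python
def accuracy(gold, predicted):
    """Fraction of pairs whose predicted YES/NO judgment matches the gold standard."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

def average_precision(gold, confidences):
    """Rank pairs by the system's entailment confidence (most confident first)
    and average the precision values at every rank where the gold label is YES.
    A perfect ranking (all positives before all negatives) yields 1.0."""
    ranked = [g for _, g in sorted(zip(confidences, gold),
                                   key=lambda x: x[0], reverse=True)]
    hits, precisions = 0, []
    for i, label in enumerate(ranked, start=1):
        if label == "YES":
            hits += 1
            precisions.append(hits / i)   # precision at this rank
    return sum(precisions) / len(precisions) if precisions else 0.0

if __name__ == "__main__":
    gold = ["YES", "NO", "YES", "NO"]
    pred = ["YES", "YES", "YES", "NO"]
    conf = [0.9, 0.8, 0.4, 0.1]            # hypothetical per-pair confidences
    print(accuracy(gold, pred))             # 0.75
    print(average_precision(gold, conf))    # (1/1 + 2/3) / 2 ≈ 0.83
```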
W07-1401
The Third PASCAL Recognizing Textual Entailment Challenge. This paper presents the third PASCAL Recognising Textual Entailment challenge, providing an overview of the dataset creation methodology and the submitted systems. In creating this year's dataset, a number of longer texts were introduced to make the challenge more oriented to realistic scenarios. Additionally, a pool of resources was offered so that the participants could share common tools. A pilot task was also set up, aimed at differentiating unknown entailments from identified contradictions and at providing justifications for overall system decisions. Twenty-six participants submitted 44 runs, using different approaches, generally presenting new entailment models and achieving higher scores than in the previous challenges. The task of recognizing textual entailment is to decide whether the hypothesis sentence can be entailed by the premise sentence. Textual entailment has become a prominent paradigm for modeling semantic inference, capturing the needs of a broad range of text understanding applications.
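The dataset section above reports an average pairwise annotator agreement of 87.8% with a kappa of 0.75; kappa corrects observed agreement for the agreement two annotators would reach by chance. A minimal two-annotator version for YES/NO judgments might look as follows (a sketch of the statistic itself, not the annotation tooling actually used for RTE-3).

```python
from collections import Counter

def cohen_kappa(ann1, ann2, labels=("YES", "NO")):
    """Cohen's kappa for two annotators over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # Chance agreement: probability both annotators pick the same label independently.
    chance = sum((c1[l] / n) * (c2[l] / n) for l in labels)
    if chance == 1:          # degenerate case: everyone always uses one label
        return 1.0
    return (observed - chance) / (1 - chance)

if __name__ == "__main__":
    a1 = ["YES", "YES", "NO", "NO", "YES", "NO"]
    a2 = ["YES", "NO", "NO", "NO", "YES", "NO"]
    print(round(cohen_kappa(a1, a2), 3))   # 0.667
```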
detection of grammatical errors involving prepositions this paper presents ongoing work on the detection of preposition errors of nonnative speakers of english since prepositions account for a substantial proportion of all grammatical errors by esl learners developing an nlp application that can reliably detect these types of errors will provide an invaluable learning resource to esl students to address this problem we use a maximum entropy classifier combined with rulebased filters to detect preposition errors in a corpus of student essays although our work is preliminary we achieve a precision of 08 with a recall of 03 the national clearinghouse for english language acquisition estimates that 96 of the students in the us public school population speak a language other than english and have limited english proficiencyclearly there is a substantial and increasing need for tools for instruction in english as a second language in particular preposition usage is one of the most difficult aspects of english grammar for nonnative speakers to masterpreposition errors account for a significant proportion of all esl grammar errorsthey represented the largest category about 29 of all the errors by 53 intermediate to advanced esl students and 18 of all errors reported in an intensive analysis of one japanese writer preposition errors are not only prominent among error types they are also quite frequent in esl writingdalgish analyzed the essays of 350 esl college students representing 15 different native languages and reported that preposition errors were present in 18 of sentences in a sample of text produced by writers from first languages as diverse as korean greek and spanishthe goal of the research described here is to provide software for detecting common grammar and usage errors in the english writing of nonnative english speakersour work targets errors involving prepositions specifically those of incorrect preposition selection such as arrive to the town and those of extraneous prepositions as in most ofpeoplewe present an approach that combines machine learning with rulebased filters to detect preposition errors in a corpus of esl essayseven though this is work in progress we achieve precision of 08 with a recall of 03the paper is structured as follows in the next section we describe the difficulty in learning english preposition usage in section 3 we discuss related work in sections 47 we discuss our methodology and evaluationwhy are prepositions so difficult to masterperhaps it is because they perform so many complex rolesin english prepositions appear in adjuncts they mark the arguments of predicates and they combine with other parts of speech to express new meaningsthe choice of preposition in an adjunct is largely constrained by its object and the intended meaning since adjuncts are optional and tend to be flexible in their position in a sentence the task facing the learner is quite complexprepositions are also used to mark the arguments of a predicateusually the predicate is expressed by a verb but sometimes it takes the form of an adjective a noun or a nominalization the choice of the preposition as an argument marker depends on the type of argument it marks the word that fills the argument role the particular word used as the predicate and whether the predicate is a nominalizationeven with these constraints there are still variations in the ways in which arguments can be expressedlevin catalogs verb alternations such as they loaded hay on the wagon vsthey loaded the wagon with hay 
which show that depending on the verb an argument may sometimes be marked by a preposition and sometimes notenglish has hundreds of phrasal verbs consisting of a verb and a particle to complicate matters phrasal verbs are often used with prepositions phrasal verbs are particularly difficult for nonnative speakers to master because of their noncompositionality of meaning which forces the learner to commit them to rote memoryif mastering english prepositions is a daunting task for the second language learner it is even more so for a computerto our knowledge only three other groups have attempted to automatically detect errors in preposition usageeegolofsson et al used 31 handcrafted matching rules to detect extraneous omitted and incorrect prepositions in swedish text written by native speakers of english arabic and japanesethe rules which were based on the kinds of errors that were found in a training set of text produced by nonnative swedish writers targeted spelling errors involving prepositions and some particularly problematic swedish verbsin a test of the system 11 of 40 preposition errors were correctly detectedizumi et al and used errorannotated transcripts of japanese speakers in an interviewbased test of spoken english to train a maximum entropy classifier to recognize 13 different types of grammatical and lexical errors including errors involving prepositionsthe classifier relied on lexical and syntactic featuresoverall performance for the 13 error types reached 251 precision with 71 recall on an independent test set of sentences from the same source but the researchers do not separately report the results for preposition error detectionthe approach taken by izumi and colleagues is most similar to the one we have used which is described in the next sectionmore recently used a language model and stochastic grammar to replace prepositions removed from a dialogue corpuseven though they reported a precision of 088 and recall of 078 their evaluation was on a very restricted domain with only a limited number of prepositions nouns and verbsa preposition error can be a case of incorrect preposition selection use of a preposition in a context where it is prohibited or failure to use a preposition in a context where it is obligatory to detect the first type of error incorrect selection we have employed a maximum entropy model to estimate the probability of each of 34 prepositions based on the features in their local contextsthe me principle says that the best model will satisfy the constraints found in the training and for those situations not covered in the training the best model will assume a distribution of maximum entropythis approach has been shown to perform well in combining heterogeneous forms of evidence as in word sense disambiguation it also has the desirable property of handling interactions among features without having to rely on the assumption of feature independence as in a naive bayesian modelour me model was trained on 7 million events consisting of an outcome and its associated context these 7 million prepositions and their contexts were extracted from the metametrics corpus of 1100 and 1200 lexile text and newspaper text from the san jose mercury newsthe sentences were then postagged and then chunked into noun phrases and verb phrases by a heuristic chunkerthe maximum entropy model was trained with 25 contextual featuressome of the features represented the words and tags found at specific locations adjacent to the preposition others represented the head words and tags 
of phrases that preceded or followed the prepositiontable 1 shows a subset of the feature listsome features had only a few values while others had manyphr pre is the preceding phrase feature that indicates whether the preposition was preceded by a noun phrase or a verb phrase in the example in table 2 the preposition into is preceded by an npin a sentence that begins after the crowd was whipped up into a frenzy of anticipation the preposition into is preceded by a vpthere were only two featurevalue pairs for this feature phr prenp and phr prevpother features had hundreds or even thousands of different values because they represented the occurrence of specific words that preceded or followed a prepositionany featurevalue pairs which occurred with very low frequency in the training were eliminated to avoid the need for smoothing their probabilitieslemma forms of words were used as feature values to further reduce the total number and to allow the model to generalize across inflectional variantseven after incorporating these reductions the number of values was still very largeas table 1 indicates tgr the word sequence including the preposition and the two words to its right had 54906 different valuessumming across all features the model contained a total of about 388000 featurevalue pairstable 2 shows an example of where some of the features are derived fromthe model was tested on 18157 preposition contexts extracted from 12 files randomly selected from a portion of 1100 lexile text that had not been used for trainingfor each context the model predicted the probability of each preposition given the contextual representationthe highest probability preposition was then compared to the preposition that had actually been used by the writerbecause the test corpus consisted of published edited text we assumed that this material contained few if any errorsin this and subsequent tests the model was used to classify each context as one of 34 classes results of the comparison between the classifier and the test set showed that the overall proportion of agreement between the text and the classifier was 069the value of kappa was 064when we examined the errors we discovered that frequently the classifiers most probable preposition differed from the second most probable by just a few percentage pointsthis corresponded to a situation in which two or more prepositions were likely to be found in a given contextconsider the context they thanked him for his consideration this matter where either of or in could fill the blankbecause the classifier was forced to make a choice in this and other close cases it incurred a high probability of making an errorto avoid this situation we reran the test allowing the classifier to skip any preposition if its top ranked and second ranked choices differed by less than a specified amountin other words we permitted it to respond only when it was confident of its decisionwhen the difference between the first and second ranked choices was 060 or greater 50 of the cases received no decision but for the remaining half of the test cases the proportion of agreement was 090 and kappa was 088this suggests that a considerable improvement in performance can be achieved by using a more conservative approach based on a higher confidence level for the classifierto evaluate the me models suitability for analyzing ungrammatical text 2000 preposition contexts were extracted from randomly selected essays written on esl tests by native speakers of chinese japanese and russianthis set of 
materials was used to look for problems that were likely to arise as a consequence of the mismatch between the training corpus and the testing corpus when the model was used to classify prepositions in the esl essays it became obvious almost immediately that a number of new performance issues would have to be addressedthe student essays contained many misspelled wordsbecause misspellings were not in the training the model was unable to use the features associated with them in its decision makingthe tagger was also affected by spelling errors so to avoid these problems the classifier was allowed to skip any context that contained misspelled words in positions adjacent to the preposition or in its adjacent phrasal headsa second problem resulted from punctuation errors in the student writingthis usually took the form of missing commas as in i disagree because from my point of view there is no evidencein the training corpus commas generally separated parenthetical expressions such as from my point of view from the rest of the sentencewithout the comma the model selected of as the most probable preposition following because instead of froma set of heuristics was used to locate common sites of comma errors and skip these contextsthere were two other common sources of classification error antonyms and benefactivesthe model very often confused prepositions with opposite meanings so when the highest probability preposition was an antonym of the one produced by the writer we blocked the classifier from marking the usage as an errorbenefactive phrases of the form for personorganization were also difficult for the model to learn most likely because as adjuncts they are free to appear in many different places in a sentence and the preposition is not constrained by its object resulting in their frequency being divided among many different contextswhen a benefactive appeared in an argument position the models most probable preposition was generally not the preposition forin the sentence they described a part for a kid the preposition of has a higher probabilitythe classifier was prevented from marking for personorganization as a usage error in such contextsto summarize the classifier consisted of the me model plus a program that blocked its application in cases of misspelling likely punctuation errors antonymous prepositions and benefactivesanother difference between the training corpus and the testing corpus was that the latter contained grammatical errorsin the sentence this was my first experience about choose friends there is a verb error immediately following the prepositionarguably the preposition is also wrong since the sequence about choose is illformedwhen the classifier marked the preposition as incorrect in an ungrammatical context it was credited with correctly detecting a preposition errornext the classifier was tested on the set of 2000 preposition contexts with the confidence threshold set at 09each preposition in these essays was judged for correctness of usage by one or two human ratersthe judged rate of occurrence of preposition errors was 0109 for rater 1 and 0098 for rater 2 ie about 1 out of every 10 prepositions was judged to be incorrectthe overall proportion of agreement between rater1 and rater 2 was 0926 and kappa was 0599table 3 shows the results for the classifier vs rater 1 using rater 1 as the gold standardnote that this is not a blind test of the classifier inasmuch as the classifiers confidence threshold was adjusted to maximize performance on this setthe overall 
proportion of agreement was 0942 but kappa was only 0365 due to the high level of agreement expected by chance as the classifier used the response category of correct more than 97 of the timewe found similar results when comparing the judgements of the classifier to rater 2 agreement was high and kappa was lowin addition for both raters precision was much higher than recallas noted earlier the table does not include the cases that the classifier skipped due to misspelling antonymous prepositions and benefactivesboth precision and recall are low in these comparisons to the human raterswe are particularly concerned about precision because the feedback that students receive from an automated writing analysis system should above all avoid false positives ie marking correct usage as incorrectwe tried to improve precision by adding to the system a naive bayesian classifier that uses the same features found in table 1as expected its performance is not as good as the me model but when the bayesian classifier was given a veto over the decision of the me classifier overall precision did increase substantially though with a reduction in recall to address the problem of low recall we have targeted another type of esl preposition error extraneous prepositionsour strategy of training the me classifier on grammatical edited text precluded detection of extraneous prepositions as these did not appear in the training corpusof the 500600 errors in the esl test set 142 were errors of this typeto identify extraneous preposition errors we devised two rulebased filters which were based on analysis of the development setboth used pos tags and chunking informationplural quantifier constructions this filter addresses the second most common extraneous preposition error where the writer has added a preposition in the middle of a plural quantifier construction for example some ofpeoplethis filter works by checking if the target word is preceded by a quantifier and if the head noun of the quantifier phrase is pluralthen if there is no determiner in the phrase the target word is deemed an extraneous preposition errorrepeated prepositions these are cases such as people can find friends with with the same interests where a preposition occurs twice in a rowrepeated prepositions were easily screened by checking if the same lexical item and pos tag were used for both wordsthese filters address two types of extraneous preposition errors but there are many other types even though these filters cover just one quarter of the 142 extraneous errors they did improve precision from 0778 to 0796 and recall from 0259 to 0304 we have presented a combined machine learning and rulebased approach that detects preposition errors in esl essays with precision of 080 or higher our work is novel in that we are the first to report specific performance results for a preposition error detector trained and evaluated on general corporawhile the training for the me classifier was done on a separate corpus and it was this classifier that contributed the most to the high precision it should be noted that some of the filters were tuned on the evaluation corpuscurrently we are in the course of annotating additional esl essays for preposition errors in order to obtain a largersized test setwhile most nlp systems are a balancing act between precision and recall the domain of designing grammatical error detection systems is distinguished in its emphasis on high precision over high recallessentially a false positive ie an instance of an error detection 
system informing a student that a usage is incorrect when in fact it is indeed correct must be reduced at the expense of a few genuine errors slipping through the system undetectedgiven this we chose to set the threshold for the system so that it ensures high precision which in turn resulted in a recall figure that leaves us much room for improvementour plans for future system development include 1using more training dataeven a cursory examination of the training corpus reveals that there are many gaps in the dataseven million seems like a large number of examples but the selection of prepositions is highly dependent on the presence of other specific words in the contextmany fairly common combinations of verbprepositionnoun or nounprepositionnoun are simply not attested even in a sizable corpusconsistent with this there is a strong correlation between the relative frequency of a preposition and the classifiers ability to predict its occurrence in edited textthat is prediction is better for prepositions that have many examples in the training set and worse for those with fewer examplesthis suggests the need for much more data model in this study contains no semantic informationone way to extend and improve its coverage might be to include features of verbs and their noun arguments from sources such as framenet which detail the semantics of the frames in which many english words appear
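The two rule-based filters for extraneous prepositions described above (plural quantifier constructions such as "some of people", and repeated prepositions such as "with with") lend themselves to a simple token/POS implementation. The sketch below assumes the sentence is already POS-tagged as (token, tag) pairs and uses a simplified quantifier list and tag checks; it illustrates the idea rather than reproducing the authors' chunker-based code.

```python
# Illustrative filters for extraneous prepositions over a POS-tagged sentence
# (Penn Treebank-style tags). The quantifier list is an assumption.

QUANTIFIERS = {"some", "most", "many", "few", "all"}
DETERMINER_TAGS = {"DT", "PRP$"}

def repeated_preposition_errors(tagged):
    """Flag positions where the same preposition occurs twice in a row,
    e.g. 'friends with with the same interests'."""
    errors = []
    for i in range(len(tagged) - 1):
        (w1, t1), (w2, t2) = tagged[i], tagged[i + 1]
        if t1 == "IN" and (w1.lower(), t1) == (w2.lower(), t2):
            errors.append(i + 1)        # the second, extraneous preposition
    return errors

def plural_quantifier_errors(tagged):
    """Flag 'quantifier + of + plural noun' with no determiner,
    e.g. 'some of people'."""
    errors = []
    for i in range(1, len(tagged) - 1):
        word, tag = tagged[i]
        if tag == "IN" and word.lower() == "of" and tagged[i - 1][0].lower() in QUANTIFIERS:
            rest = tagged[i + 1:i + 4]  # small window covering the quantified NP
            has_det = any(t in DETERMINER_TAGS for _, t in rest)
            has_plural = any(t == "NNS" for _, t in rest)
            if has_plural and not has_det:
                errors.append(i)        # 'of' is flagged as extraneous
    return errors

if __name__ == "__main__":
    sent = [("some", "DT"), ("of", "IN"), ("people", "NNS"), ("agree", "VBP")]
    print(plural_quantifier_errors(sent))      # [1]
    sent2 = [("friends", "NNS"), ("with", "IN"), ("with", "IN"),
             ("the", "DT"), ("same", "JJ"), ("interests", "NNS")]
    print(repeated_preposition_errors(sent2))  # [2]
```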
W07-1604
Detection of Grammatical Errors Involving Prepositions. This paper presents ongoing work on the detection of preposition errors of non-native speakers of English. Since prepositions account for a substantial proportion of all grammatical errors by ESL learners, developing an NLP application that can reliably detect these types of errors will provide an invaluable learning resource to ESL students. To address this problem, we use a maximum entropy classifier combined with rule-based filters to detect preposition errors in a corpus of student essays. Although our work is preliminary, we achieve a precision of 0.8 with a recall of 0.3. Chodorow et al. employed a maximum entropy model to estimate the probability of 34 prepositions based on 25 local context features, ranging from words to NP/VP chunks. A context is represented by 25 lexical features and 4 combination features: lexical token and POS n-grams in a two-word window around the preposition, plus the head verb in the preceding verb phrase, the head noun in the preceding noun phrase, and the head noun in the following noun phrase, when available.
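As summarized above, the classifier only proposes an error when it is confident: it estimates a probability for each candidate preposition from contextual features, skips a decision when the top two probabilities are too close, and blocks flags in cases such as antonymous prepositions. The decision rule, independent of how the probabilities are produced, can be sketched as follows; the model output, threshold values and antonym table below are placeholders, not the paper's tuned settings.

```python
def choose_preposition(prob_by_prep, min_top=0.9, min_margin=0.6):
    """Return the model's preferred preposition only when it is confident:
    the top probability must reach `min_top` and the gap between the first-
    and second-ranked prepositions must reach `min_margin`; otherwise return
    None (no decision). Threshold values here are placeholders."""
    ranked = sorted(prob_by_prep.items(), key=lambda kv: kv[1], reverse=True)
    (best, p1), (_, p2) = ranked[0], ranked[1]
    if p1 >= min_top and (p1 - p2) >= min_margin:
        return best
    return None

def flag_error(prob_by_prep, writers_preposition, antonyms=None, **kw):
    """Flag the writer's preposition as a possible error only if the model is
    confident in a different choice and that choice is not an antonym of the
    writer's preposition (one of the blocking heuristics described above)."""
    antonyms = antonyms or {}
    choice = choose_preposition(prob_by_prep, **kw)
    if choice is None or choice == writers_preposition:
        return False
    if antonyms.get(writers_preposition) == choice:
        return False
    return True

if __name__ == "__main__":
    # Hypothetical model output for one preposition context.
    probs = {"to": 0.95, "from": 0.03, "of": 0.02}
    print(flag_error(probs, "in"))                            # True
    print(flag_error(probs, "from", antonyms={"from": "to"})) # blocked -> False
```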
semeval2007 task 02 evaluating word sense induction and discrimination systems the goal of this task is to allow for comparison across senseinduction and discrim ination systems and also to compare thesesystems to other supervised and knowledgebased systems in total there were 6 participating systems we reused the semeval 2007 english lexical sample subtask of task17 and set up both clusteringstyle unsuper vised evaluation and a supervised evaluation we provide a comparison to the results ofthe systems participating in the lexical sam ple subtask of task 17 word sense disambiguation is a key enablingtechnologysupervised wsd techniques are the best performing in public evaluations butneed large amounts of handtagging dataexist ing handannotated corpora like semcor which is annotated with wordnetsenses allow for a small improve ment over the simple most frequent sense heuristicas attested in the allwords track of the last senseval competition in the ory larger amounts of training data would improve the perfor mance of supervised wsd but no current projectexists to provide such an expensive resourcean other problem of the supervised approach is that theinventory and distribution of senses changes dra matically from one domain to the other requiring additional handtagging of corpora supervised wsd is based on the fixedlist of sensesparadigm where the senses for a target wordare a closed list coming from a dictionary or lex iconlexicographers and semanticists have long warned about the problems of such an approachwhere senses are listed separately as discrete entities and have argued in favor of more complex rep resentations where for instance senses are dense regions in a continuum unsupervised word sense induction and discrimination has followed this line of think ing and tries to induce word senses directly fromthe corpustypical wsid systems involve clustering techniques which group together similar examplesgiven a set of induced clusters each new occurrence of the target word will be compared to the clusters and the most similar cluster will be selected as its senseone of the problems of unsupervised systems isthat of managing to do a fair evaluationmost of cur rent unsupervised systems are evaluated inhouse with a brief comparison to a reimplementation of aformer system leading to a proliferation of unsuper vised systems with little ground to compare amongthemthe goal of this task is to allow for comparison across senseinduction and discrimination systems and also to compare these systems to other su pervised and knowledgebased systemsthe paper is organized as followssection 2 presents the evaluation framework used in this tasksection 3 presents the systems that participated in 1wsid approaches prefer the term word usesto word sensesin this paper we use them interchangeably to refer to both the induced clusters and to the word senses from some reference lexicon7 the task and the official resultsfinally section 5 draws the conclusionsall wsid algorithms need some addition in orderto be evaluatedone alternative is to manually de cide the correctness of the clusters assigned to each occurrence of the wordsthis approach has twomain disadvantagesfirst it is expensive to manually verify each occurrence of the word and dif ferent runs of the algorithm need to be evaluatedin turnsecond it is not an easy task to manu ally decide if an occurrence of a word effectively corresponds with the use of the word the assignedcluster refers to especially considering that the person is given a short list of 
words linked to the clus terwe also think that instead of judging whether the cluster returned by the algorithm is correct theperson should have independently tagged the occur rence with his own senses which should have been then compared to the cluster returned by the systemthis is paramount to compare a corpus which has been handtagged with some reference senses with the clustering resultthe gold standard tags are taken to be the def inition of the classes and standard measures from the clustering literature can be used to evaluate the clusters against the classesa second alternative would be to devise a method to map the clusters returned by the systems to thesenses in a lexiconpantel and lin automatically map the senses to wordnet and then mea sure the quality of the mappingmore recently themapping has been used to test the system on publicly available benchmarks a third alternative is to evaluate the systems ac cording to some performance in an application eg information retrieval this is a veryattractive idea but requires expensive system devel opment and it is sometimes difficult to separate the reasons for the good performancein this task we decided to adopt the first two alternatives since they allow for comparison over pub licly available systems of any kindwith this goal onmind we gave all the participants an unlabeled cor pus and asked them to induce the senses and create a clustering solution on itwe evaluate the results according to the following types of evaluation 1evaluate the induced senses as clusters of examplesthe induced clusters are compared to the sets of examples tagged with the given gold standard word senses and evaluated using the fscore measure for clusterswe will call this evaluation unsupervised2map the induced senses to gold standardsenses and use the mapping to tag the test cor pus with gold standard tagsthe mapping is automatically produced by the organizers and the resulting results evaluated according to theusual precision and recall measures for super vised word sense disambiguation systemswe call this evaluation supervisedwe will see each of them in turn21 unsupervised evaluationin this setting the results of the systems are treated as clusters of examples and gold standard senses are classesin order to compare the clusters with the classes hand annotated corpora is neededthe testset is first tagged with the induced sensesa per fect clustering solution will be the one where each cluster has exactly the same examples as one of the classes and vice versafollowing standard cluster evaluation practice we consider the fs core measure for measuring the performance of the systemsthe fscore is used in a similar fashion to information retrieval exercises with precisionand recall defined as the percentage of correctly retrievedexamples for a cluster and recall as the percentage of correctly retrievedexamples for a cluster given a particular class sr of size nr and a cluster hi of size ni suppose nir examples in the class sr belong to hithe f value of this class and cluster is defined to be f 2p r p r where p nir nr is the precision value and are nir ni is the recall value defined for classsr and cluster hithe fscore of class sr is the max imum f value attained at any cluster that is 8 f max hi f and the fscore of the entire clustering solution is fscore cr1 nr n f where q is the number of classes and n is the size of the clustering solutionif the clustering is theidentical to the original classes in the datasets fs core will be equal to one which means that the 
higher the fscore the better the clustering is for the sake of completeness we also include thestandard entropy and purity measures in the unsupervised evaluationthe entropy measure consid ers how the various classes of objects are distributedwithin each clusterin general the smaller the entropy value the better the clustering algorithm per formsthe purity measure considers the extent to which each cluster contained objects from primarilyone classthe larger the values of purity the bet ter the clustering algorithm performsfor a formal definition refer to 22 supervised evaluationwe have followed the supervised evaluation frame work for evaluating wsid systems as described in first we split the corpus intoa traintest partusing the handannotated sense in formation in the train part we compute a mappingmatrix m that relates clusters and senses in the fol lowing waysuppose there are m clusters and n senses for the target wordthen m mij 1 i m 1 j n and each mij p that is mij is the probability of a word having sense jgiven that it has been assigned cluster i this probability can be computed counting the times an occur rence with sense sj has been assigned cluster hi in the train corpusthe mapping matrix is used to transform any cluster score vector h returned by the wsid algorithm into a sense score vector s it suffices to multiply the score vector by m ie s hm we use the m mapping matrix in order to convert the cluster score vector of each test corpus instance into a sense score vector and assign the sense with all nouns verbs train 22281 14746 9773 test 4851 2903 2427 all 27132 17649 12200 table 1 number of occurrences for the 100 target words in the corpus following the traintest splitmaximum score to that instancefinally the result ing test corpus is evaluated according to the usual precision and recall measures for supervised word sense disambiguation systemsin this section we will introduce the gold standard and corpus used the description of the systems andthe results obtainedfinally we provide some mate rial for discussiongold standardthe data used for the actual evaluation was bor rowed from the semeval2007 english lexical sample subtaskof task 17the texts come from the wall street journal corpus and were handannotated with ontonotes senses note that ontonotes senses are coarser than wordnet senses and thus the number of senses to be induced is smaller in this caseparticipants were provided with information about 100 target words each target word having a set of contexts where the word appearsafter removing the sense tags from the train corpus the train and test parts were joined into the official corpus and given to the participantsparticipants had to tag with the induced senses all the examples in this corpustable 1 summarizes the size of the corpusparticipant systems in total there were 6 participant systemsone of them was not a sense induction system but rather a knowledgebased wsd systemwe include their data in the results section below for coherence with the official results submitted to participants but we will not mention it herei2r this team used a cluster validation method to estimate the number of senses of a target word in untagged data and then grouped the instances of thistarget word into the estimated number of clusters us ing the sequential information bottleneck algorithm9 ubcas a two stage graphbased clustering where a cooccurrence graph is used to compute similarities against contextsthe context similarity matrix is pruned and the resulting associated graphis 
clustered by means of a randomwalk type al gorithmthe parameters of the system are tuned against the senseval3 lexical sample dataset and some manual tuning is performed in order to reduce the overall number of induced sensesnote that thissystem was submitted by the organizersthe orga nizers took great care in order to participate under the same conditions as the rest of participantsumnd2 a system which clusters the second or der cooccurrence vectors associated with each word in a contextclustering is done using kmeans and the number of clusters was automatically discoveredusing the adapted gap statisticno parameter tun ing is performedupv si a selfterm expansion method based oncoocurrence where the terms of the corpus are ex panded by its best coocurrence terms in the samecorpusthe clustering is done using one implemen tation of the kstar method where the stop criterionhas been modifiedthe trial data was used for de termining the corpus structureno further tuning is performeduoy a graph based system which creates a co occurrence hypergraph modelthe hypergraph isfiltered and weighted according to some associa tion rulesthe clustering is performed by selecting the nodes of higher degree until a stop criterion isreachedwsd is performed by assigning to each in duced cluster a score equal to the sum of weights of hyperedges found in the local context of the target wordthe system was tested and tuned on 10 nouns of senseval3 lexicalsampleofficial results participants were required to induce the senses of the target words and cluster all target word contextsaccordingly2table 2 summarizes the average num ber of induced senses as well as the real senses in the gold standard2they were allowed to label each context with a weighted score vector assigning a weight to each induced sensein the unsupervised evaluation only the sense with maximum weightwas considered but for the supervised one the whole score vector was usedhowever none of the participating systems la beled any instance with more than one sensesystem all nouns verbs i2r 308 311 306 ubcas132 163 115 umnd2 136 171 117 upv si 557 72 469 uoy 928 1128 82 gold standard test 287 286 286 train 36 391 343 all 368 394 354table 2 average number of clusters as returned by the par ticipants and number of classes in the gold standardnote that ubcasis the system submitted by the organizers of the tasksystem r all nouns verbs fscpurentrfscfsc1c1word 1 789 798 454 807 768 ubcas2 787 805 438 808 763 upv si 3 663 838 332 699 622 umnd2 4 661 817 405 671 650 i2r 5 639 840 328 680 593 uofl6 615 822 378 623 605 uoy 7 561 861 271 658 451 random 8 379 861 277 381 377 1c1inst 9 95 100 0 66 127 table 3 unsupervised evaluation on the test corpus including 3 baselinespurity and entropy are also providedubcaswas submitted by the organizersuoflis not a sense induction systemsystem rank supervised evaluation all nouns verbs i2r 1 816 868 757 umnd2 2 806 845 762 upv si 3 791 825 753 mfs 4 787 809 762 ubcas5 785 807 760 uoy 6 777 816 733 uofl7 771 805 733table 4 supervised evaluation as recallubcaswas submitted by the organizersuoflis not a sense induction sys temtable 3 shows the unsupervised evaluation of the systems on the test corpuswe also include three baselines the one cluster per wordbaseline which groups all instances of a word intoa single cluster the one cluster per instancebaseline where each instance is a distinct clus ter and a random baseline where the induced wordsenses and their associated weights have been ran domly producedthe random baseline figures in 
this paper are averages over 10 runsas shown in table 3 no system outperforms the 1c1word baseline which indicates that this baseline 10is quite strong perhaps due the relatively small num ber of classes in the gold standardhowever all systems outperform by far the random and 1c1instbaselines meaning that the systems are able to in duce correct sensesnote that the purity and entropy measures are not very indicative in this settingfor completeness we also computed the fscore usingthe complete corpus the re sults are similar and the ranking is the samewe omit them for brevitythe results of the supervised evaluation can be seen in table 4the evaluation is also performed over the test corpusapart from participants we also show the most frequent sense which tags every test instance with the sense that occurredmost often in the training partnote that the su pervised evaluation combines the information in theclustering solution implicitly with the mfs information via the mapping in the training partpre vious senseval evaluation exercises have shown thatthe mfs baseline is very hard to beat by unsuper vised systemsin fact only three of the participant systems are above the mfs baseline which showsthat the clustering information carries over the map ping successfully for these systemsnote that the1c1word baseline is equivalent to mfs in this settingwe will review the random baseline in the dis cussion section belowfurther results table 5 shows the results of the best systems from the lexical sample subtask of task 17the best sense induction system is only 69 percentage points belowthe best supervised and 35 percentage points be low the best semisupervised systemif the sense induction system had participated it would be deemed as semisupervised as it uses albeit in ashallow way the training data for mapping the clusters into sensesin this sense our supervised evalu ation does not seek to optimize the available training dataafter the official evaluation we realized that con trary to previous lexical sample evaluation exercises task 17 organizers did not follow a random traintest splitwe decided to produce a random traintestsplit following the same 8218 proportion as the of ficial split and reevaluated the systemsthe results are presented in table 6 where we can see that all system supervised evaluation best supervised 887 best semisupervised 851 best induction 816 mfs 787 best unsupervised 538 table 5 comparing the best induction system in this task with those of task 17system supervised evaluation i2r 822 uoy 813 umnd2 801 upv si 799 ubcas 790 mfs 784 table 6 supervised evaluation as recall using a random traintest splitparticipants are above the mfs baseline showingthat all of them learned useful clustering informa tionnote that uoy was specially affected by the original splitthe distribution of senses in this split did not vary finally we also studied the supervised evalua tion of several random clustering algorithms which can attain performances close to mfs thanks to the mapping informationthis is due to the fact that therandom clusters would be mapped to the most fre quent sensestable 7 shows the results of random solutions using varying numbers of clusters random2 is only 01 below mfs but as the number of clusters increases some clusters do not get mapped and the recall of the random baselines decreasethe evaluation of clustering solutions is not straightforwardall measures have some bias towards cer tain clustering strategy and this is one of the reasonsof adding the supervised evaluation as 
a complemen tary information to the more standard unsupervised evaluationin our case we noticed that the fscore penal ized the systems with a high number of clusters and favored those that induce less sensesgiven the fact that fscore tries to balance precision and recall this was not expectedwe were also surprised to see that no system could 11 system supervised evaluation random2 786 random10 776 ramdom100 642 random1000 318 table 7 supervised evaluation of several random baselinesbeat the one cluster one wordbaselinean expla nation might lay in that the goldstandard was based on the coarsegrained ontonotes senseswe also noticed that some words had hundreds of instancesand only a single sensewe suspect that the partic ipating systems would have beaten all baselines if a finegrained sense inventory like wordnet had been used as was customary in previous wsd evaluation exercisessupervised evaluation seems to be more neutral regarding the number of clusters as the ranking of systems according to this measure include diverse cluster averageseach of the induced clusters is mapped into a weighted vector of senses and thus inducing a number of clusters similar to the number of senses is not a requirement for good resultswith this measure some of the systems3 are able to beat all baselineswe have presented the design and results of thesemeval2007 task 02 on evaluating word sense induction and discrimination systems6 systems participated but one of them was not a sense induction systemwe reused the data from the semeval 2007 english lexical sample subtask of task 17 andset up both clusteringstyle unsupervised evaluation and a su pervised evaluation we also provide a compari son to the results of the systems participating in the lexical sample subtask of task 17evaluating clustering solutions is not straightfor wardthe unsupervised evaluation seems to besensitive to the number of senses in the gold stan dard and the coarse grained sense inventory usedin the gold standard had a great impact in the resultsthe supervised evaluation introduces a mapping step which interacts with the clustering solu tionin fact the ranking of the participating systems 3all systems in the case of a random traintest split varies according to the evaluation method usedwe think the two evaluation results should be taken to be complementary regarding the information learned by the clustering systems and that the evaluation of word sense induction and discrimination systemsneeds further developments perhaps linked to a cer tain application or purposeacknowledgments we want too thank the organizers of semeval2007 task 17 for kindly letting us use their corpuswe are also grateful to ted pedersen for his comments on the evaluation resultsthis work has been partially funded by the spanish education ministry and by the regional government of gipuzkoa
W07-2002
SemEval-2007 Task 02: Evaluating Word Sense Induction and Discrimination Systems. The goal of this task is to allow comparison across sense induction and discrimination systems, and also to compare these systems to other supervised and knowledge-based systems. In total there were 6 participating systems. We reused the SemEval-2007 English lexical sample subtask of Task 17 and set up both a clustering-style unsupervised evaluation and a supervised evaluation; we provide a comparison to the results of the systems participating in the lexical sample subtask of Task 17. The object of the sense induction task of SensEval-4 is to cluster 27,132 instances of 100 different words into senses, or classes. Graph-based methods have been employed for word sense induction.
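As a companion to the clustering-style unsupervised evaluation mentioned in this abstract, here is a sketch of the set-matching clustering F-score we assume underlies it: each gold sense class is paired with the induced cluster that maximises the harmonic mean of precision and recall, and the per-class scores are weighted by class size. The exact definition used by the official scorer may differ in detail; the id-keyed dictionaries gold and pred are our own convention.

from collections import Counter

def clustering_fscore(gold, pred):
    # gold: instance id -> gold sense; pred: instance id -> induced cluster.
    classes = Counter(gold.values())
    clusters = Counter(pred.values())
    joint = Counter((gold[i], pred[i]) for i in gold)
    total = len(gold)
    score = 0.0
    for cls, cls_size in classes.items():
        best = 0.0
        for clu, clu_size in clusters.items():
            n = joint[(cls, clu)]
            if n:
                p, r = n / clu_size, n / cls_size
                best = max(best, 2 * p * r / (p + r))
        score += (cls_size / total) * best
    return score

A definition of this shape also explains the behaviour reported in the task paper: spreading instances over many small clusters keeps per-class recall low, so systems inducing many senses are penalised, while the one-cluster-per-word solution scores well when gold classes are few and dominated by one sense.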
semeval2007 task 07 coarsegrained english allwords task this paper presents the coarsegrained en glish allwords task at semeval2007 we describe our experience in producing acoarse version of the wordnet sense inven tory and preparing the sensetagged corpusfor the task we present the results of participating systems and discuss future direc tions it is commonly thought that one of the major obstacles to highperformance word sense disambiguation is the fine granularity of sense inventoriesstateoftheart systems attained a disam biguation accuracy around 65 in the senseval3 allwords task wherewordnet was adopted as a ref erence sense inventoryunfortunately wordnet is a finegrained resource encoding sense distinctionsthat are difficult to recognize even for human an notators makingwsd an enabling technique for endtoend applications clearly depends on the ability to deal with rea sonable sense distinctionsthe aim of this task was to explicitly tackle the granularity issue and study the performance of wsd systems on an allwords basis when a coarser set of senses is provided for the target wordsgiven the need of the nlp community to work on freelyavailable resources the solution of adopting a dif ferent computational lexicon is not viableon the other hand the production of a coarsegrained sense inventory is not a simple taskthe main issue is certainly the subjectivity of sense clustersto overcome this problem different strategies can be adoptedfor instance in the ontonotes project senses are grouped until a 90 interannotator agreement is achievedin contrast as we describe in this paper our approach is based on a mapping to a previously existing inventory which encodes sense distinctions at different levelsof granularity thus allowing to induce a sense clus tering for the mapped senseswe would like to mention that another semeval 2007 task dealt with the issue of sense granularityfor wsd namely task 17 coarse grained english lexical sample wsdin this paper we report our experience in organizing task 07the task required participating systems to annotate openclass words in a test corpus with the most appropriate sense from a coarsegrained version of the wordnet sense inventory21 test corpusthe test data set consisted of 5377 words of run ning text from five different articles the first three were obtained from the wsj corpus the fourth was the wikipedia entry for computer programming1 the fifth was an excerpt of amy steedmans knights of the art biographies of italian painters2we decided to add the last two 1httpenwikipediaorgwikicomputer programming 2httpwwwgutenbergorgetext529 30 article domain words annotated d001 journalism 951 368 d002 book review 987 379 d003 travel 1311 500 d004 computer science 1326 677 d005 biography 802 345 total 5377 2269 table 1 statistics about the five articles in the test data settexts to the initial dataset as we wanted the corpus to have a size comparable to that of previous editions of allwords tasksin table 1 we report the domain number of run ning words and number of annotated words for the five articleswe observe that articles d003 and d004 are the largest in the corpus 22 creation of a coarsegrained senseinventory to tackle the granularity issue we produced acoarsergrained version of the wordnet sense inven tory3 based on the procedure described by naviglithe method consists of automatically map ping wordnet senses to top level numbered entries in the oxford dictionary of english the semantic mapping be tween wordnet and ode entries was obtained intwo steps first 
we disambiguated with the ssi algo rithm the definitions ofthe two dictionaries together with additional infor mation second foreach wordnet sense we determined the best match ing ode coarse entryas a result wordnet senses mapped to the same ode entry were assigned to the same sense clusterwordnet senses with no match were associated with a singleton sensein contrast to the automatic method above thesense mappings for all the words in our test cor pus were manually produced by the third author anexpert lexicographer with the aid of a mapping in terfacenot all the words in the corpus could be mapped directly for several reasons lacking entries in ode 3we adopted wordnet 21 available from httpwordnetprincetonedu different spellings derivatives in most of the cases weasked the lexicographer to map senses of the orig inal word to senses of lexicallyrelated words when this mapping was not straightforward we just adopted the wordnet sense inventory for that wordwe released the entire sense groupings and made them available to the participants23 sense annotationall openclass words with an existing sense in the wordnetinventory were manually annotated by the third authormultiword expressions were explicitly iden tified in the test set and annotated as such we excluded auxiliary verbs uncovered phrasal and idiomatic verbs exclamatory uses etc the annotator was allowed to tag words with multiple coarse senses but was asked to make a single sense assignment whenever possiblethe lexicographer annotated an overall numberof 2316 content words47 of them were excluded because no wordnet sense was deemed ap propriatethe remaining 2269 content words thusconstituted the test data setonly 8 of them were as signed more than one sense specifically two coarse senses were assigned to a single word instance4 and two distinct finegrained senses were assigned to 7 word instancesthis was a clear hint that the sense clusters were not ambiguous for the vast majority of wordsin table 2 we report information about the polysemy of the word instances in the test setover all 2988 of the word instances weremonosemous the average polysemy of the test set with the coarsegrained sense inventory was 306 compared to an average polysemy with the wordnet inventory 4d005s004t015 31 polysemy n v a r all monosemous 358 86 141 93 678 polysemous 750 505 221 115 1591 total 1108 591 362 208 2269 table 2 statistics about the test set polysemy of 61824 interannotator agreementrecent estimations of the interannotator agreement when using the wordnet inventory report figures of 725 agreement in the preparation of the english allwords test set at senseval3 and 673 on the open mind word expert an notation exercise as the interannotator agreement is often considered an upper bound for wsd systems it was de sirable to have a much higher number for our task given its coarsegrained natureto this end besidethe expert lexicographer a second author indepen dently performed part of the manual sense mapping described in section 22the pairwise agreement was 8644we repeated the same agreement evaluation onthe sense annotation task of the test corpusa sec ond author independently annotated part of the test set the pairwise agreement between the two authors was 9380this figure compared to those in the literature for finegrained human annotations gives us a clear indication that the agreement of human annotators strictly depends on the granularity of the adopted sense inventorywe calculated two baselines for the test corpus a random baseline in which senses 
are chosen at random and the most frequent baseline in which we assign the first wordnet sense to each word in the datasetformally the accuracy of the random baseline was calculated as follows blrand 1t t i1 1 coarsesenses where t is our test corpus wi is the ith word instance in t and coarsesenses is the set ofcoarse senses for wi according to the sense cluster ing we produced as described in section 22the accuracy of the mfs baseline was calculated as blmfs 1t t i1 where equals 1 when the kth sense ofword wi belongs to the cluster manually associ ated by the lexicographer to word wi notice that our calculation of the mfs is based on the frequencies in the semcor corpus as we exploit wordnet sense rankings12 teams submitted 14 systems overall according to the semeval policy for task organizers we remark that the system labelled as uorssi was submitted by the first author with a lexical knowledge base composed by word net and approximately 70000 relatedness edgeseven though we did not specifically enrich the al gorithms knowledge base on the task at hand we list the system separately from the overall rankingthe results are shown in table 3we calculated a mfs baseline of 7889 and a random baseline of 5243in table 4 we report the f1 mea sures for all systems where we used the mfs as abackoff strategy when no sense assignment was at tempted comparedto previous results on finegrained evaluation exer cises the systemsresults are much higheron the other hand the difference in performancebetween the mfs baseline and stateoftheart sys tems on coarsegrained disambiguationis comparable to that of the senseval3 allwords ex ercisehowever given the novelty of the task webelieve that systems can achieve even better perfor 32 system a p r f1 nuspt 1000 8250 8250 8250 nusml 1000 8158 8158 8158 lccwsd 1000 8145 8145 8145 gplsi 1000 7955 7955 7955 blmfs 1000 7889 7889 7889 upvwsd 1000 7863 7863 7863 tkbuo 1000 7021 7021 7021 pubcd 901 6972 6280 6608 racaisynwsd 1000 6571 6571 6571 sussxfr 728 7173 5223 6044 usyd 953 5879 5602 5737 uofl 927 5259 4874 5060 sussxcwd 728 5454 3971 4596 sussxcr 728 5430 3953 4575 uorssi1000 8321 8321 8321 table 3 system scores sorted by f1 measure mance by heavily exploiting the coarse nature of the sense inventoryin table 5 we report the results for each of the five articlesthe interesting aspect of the table is that documents from some domains seem to havepredominant senses different from those in sem corspecifically the mfs baseline performs more poorly on documents d004 and d005 from the computer science and biography domains respectivelywe believe this is due to the fact that these documents have specific predominant senses which correspond less often to the most frequent sense in semcor than for the other three documentsit is also interesting to observe that different systemsperform differently on the five documents finally we calculated the systemsperformance by part of speechthe results are shown in table 6again we note that different systems show different performance depending on the partofspeechtaganother interesting aspect is that the performance of the mfs baseline is very close to stateof theart systems for adjectives and adverbs whereas it is more than 3 points below for verbs and around 5 for nounssystem f1 nuspt 8250 nusml 8158 lccwsd 8145 gplsi 7955 blmfs 7889 upvwsd 7863 sussxfr 7704 tkbuo 7021 pubcd 6972 racaisynwsd 6571 sussxcwd 6452 sussxcr 6435 usyd 5879 uofl 5461 uorssi8321 table 4 system scores sorted by f1 measure with mfs adopted as a backoff strategy 
when no sense assignment is attempted systems affected are marked in boldsystem n v a r nuspt 8231 7851 8564 8942 nusml 8141 7817 8260 9038 lccwsd 8069 7817 8536 8798 gplsi 8005 7445 8232 8654 blmfs 7744 7530 8425 8750 upvwsd 7933 7276 8453 8125 tkbuo 7076 6261 7873 7404 pubcd 7141 5969 6657 5567 racaisynwsd 6402 6210 7155 7500 sussxfr 6809 5102 5738 4938 usyd 5606 6043 5800 5431 uofl 5765 4882 2587 6080 sussxcwd 5218 3564 4295 4630 sussxcr 5187 3544 4295 4630 uorssi8412 7834 8536 8846 table 6 system scores by partofspeech tag sorted by overall f1 measure 33 d001 d002 d003 d004 d005 system p r p r p r p r p r nuspt 8832 8832 8813 8813 8340 8340 7607 7607 8145 8145 nusml 8614 8614 8839 8839 8140 8140 7666 7666 7913 7913 lccwsd 8750 8750 8760 8760 8140 8140 7548 7548 8000 8000 gplsi 8342 8342 8654 8654 8040 8040 7371 7371 7797 7797 blmfs 8560 8560 8470 8470 7780 7780 7519 7519 7420 7420 upvwsd 8424 8424 8074 8074 7600 7600 7711 7711 7710 7710 tkbuo 7880 7880 7256 7256 6940 6940 7075 7075 5855 5855 pubcd 7716 6794 7552 6755 6496 5820 6886 6174 6442 6087 racaisynwsd 7147 7147 7282 7282 6680 6680 6086 6086 5971 5971 sussxfr 7910 5761 7372 5330 7486 5240 6797 4889 6520 5159 usyd 6253 6169 5978 5726 6097 5780 6057 5628 4715 4551 uofl 6141 5924 5593 5224 4800 4560 5342 4727 4438 4116 sussxcwd 6642 4837 6131 4433 5514 3860 5072 3648 4213 3333 sussxcr 6605 4810 6058 4380 5914 4140 4867 3501 4029 3188 uorssi8614 8614 8549 8549 7960 7960 8685 8685 7565 7565 table 5 system scores by article in order to allow for a critical and comparative inspection of the system results we asked the partici pants to answer some questions about their systemsthese included information about whether 1the system used semanticallyannotated and unannotated resources 2the system used the mfs as a backoff strategy 3the system used the coarse senses provided by the organizers 4the system was trained on some corpuswe believe that this gives interesting information to provide a deeper understanding of the resultswesummarize the participantsanswers to the questionnaires in table 7we report about the use of semantic resources as well as semantically annotated corpora as well as information about the use of unannotated corpora training mfs and the coarse senses provided by the organizers as expected several systems used lexicosemantic information from the wordnet semantic networkandor were trained on the semcor semantically annotated corpusfinally we point out that all the systems perform ing better than the mfs baseline adopted it as a backoff strategy when they were not able to output a sense assignmentit is commonly agreed that word sense disambiguation needs emerge and show its usefulness in end toend applications after decades of research in the field it is still unclear whether wsd can provide a relevant contribution to realworld applications such as information retrieval question answeringetc in previous senseval evaluation exercises state oftheart systems achieved performance far below70 and even the agreement between human annotators was discouragingas a result of the discus sion at the senseval3 workshop in 2004 one of the aims of semeval2007 was to tackle the problems at the roots of wsdin this task we dealt with the granularity issue which is a major obstacle to bothsystem and human annotatorsin the hope of over coming the current performance upper bounds we 34 system sc dso se omwe xwn wn wnd other uc tr mfs cs gplsi lccwsd nusml nuspt parallel corpus pubcd racaisynwsd sussxcwd sussxcr sussxfr tkbuo uofl uorssissi lkb 
upvwsd usyd table 7 information about participating systems proposed the adoption of a coarsegrained sense inventorywe found the results of participating sys tems interesting and stimulatinghowever some questions arisefirst it is unclear whether given the novelty of the task systems really achieved the state of the art or can still improve their performancebased on a heavier exploitation of coarse and finegrained information from the adopted sense inven torywe observe that on a technical domain suchas computer science most supervised systems per formed worse due to the nature of their training setsecond we still need to show that coarse senses can be useful in real applicationsthird a full coarse sense inventory is not yet available this is a majorobstacle to largescale in vivo evaluationswe be lieve that these aspects deserve further investigation in the years to comeacknowledgments this work was partially funded by the interop noe 6th european union fpwe would like to thank martha palmer for providing us the first three texts of the test corpus
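The two baseline formulas given in the paper, the random baseline BL_rand = (1/|T|) * sum_i 1/|CoarseSenses(w_i)| and the MFS baseline that scores 1 whenever the first WordNet sense of w_i falls in the coarse cluster the lexicographer assigned to it, come through somewhat garbled in the extracted text above; the following sketch restates them as code. The per-instance field names (coarse_senses, first_sense, gold_cluster) are our own assumptions about how the data would be represented, not the official scorer's format.

def random_baseline(instances):
    # Expected accuracy of picking one of the word's coarse senses at random.
    return sum(1.0 / len(inst["coarse_senses"]) for inst in instances) / len(instances)

def mfs_baseline(instances):
    # Score 1 when the first (SemCor-ranked) WordNet sense of the word belongs
    # to the coarse cluster manually associated with that instance.
    hits = sum(1 for inst in instances if inst["first_sense"] in inst["gold_cluster"])
    return hits / len(instances)

On the released test set these two quantities correspond to the 52.43 random and 78.89 MFS figures reported in the paper.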
W07-2006
SemEval-2007 Task 07: Coarse-Grained English All-Words Task. This paper presents the coarse-grained English all-words task at SemEval-2007. We describe our experience in producing a coarse version of the WordNet sense inventory and preparing the sense-tagged corpus for the task. We present the results of participating systems and discuss future directions. We show that the performance of WSD systems clearly indicates that WSD is not easy unless one adopts a coarse-grained approach, and that even then systems tagging all words perform at best a few percentage points above the most frequent sense heuristic.
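To show how the two result tables in the task paper above relate, here is a hedged sketch of the attempted/precision/recall/F1 scoring, with an optional back-off to the first WordNet sense for unanswered instances, which is how Table 4 is derived from Table 3. The dictionary-based interface and names (answers, gold_clusters, first_senses) are illustrative assumptions, not the official scorer.

def score(answers, gold_clusters, first_senses=None):
    # answers: instance id -> chosen sense (missing key = no attempt).
    # gold_clusters: instance id -> set of senses in the annotated coarse cluster.
    total = len(gold_clusters)
    attempted = correct = 0
    for inst, cluster in gold_clusters.items():
        answer = answers.get(inst)
        if answer is None and first_senses is not None:
            answer = first_senses[inst]          # MFS back-off
        if answer is None:
            continue
        attempted += 1
        correct += answer in cluster
    p = correct / attempted if attempted else 0.0
    r = correct / total
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return attempted / total, p, r, f1

With the back-off enabled every instance is attempted, so precision and recall coincide, which is why partial-coverage systems such as SUSSX-FR improve their F1 between the two tables.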
semeval2007 task 10 english lexical substitution task in this paper we describe the english lexical substitution task for semeval in the task annotators and systems find an alternative substitute word or phrase for a target word in context the task involves both finding the synonyms and disambiguating the contextparticipating systems are free to use any lexical resource there is a subtask which re quires identifying cases where the word isfunctioning as part of a multiword in the sen tence and detecting what that multiword is word sense disambiguation has been de scribed as a task in need of an applicationwhilstresearchers believe that it will ultimately prove useful for applications which need some degree of se mantic interpretation the jury is still out on this pointone problem is that wsd systems have been tested on finegrained inventories rendering the task harder than it need be for many applications another significant problem is that there is no clear choice of inventory for any given task the lexical substitution task follows on fromsome previous ideas to examine the capabilities of wsd systems built by re searchers on a task which has potential for nlp applicationsfinding alternative words that canoccur in given contexts would potentially be useful to many applications such as question answer ing summarisation paraphrase acquisition text simplification and lexical acquisi tion crucially this task does not specify the inventory for use beforehand to avoidbias to one predefined inventory and makes it eas ier for those using automatically acquired resources to enter the arenaindeed since the systems in semeval did not know the candidate substitutes for a word before hand the lexical resource is evaluatedas much as the context based disambiguation com ponentthe task involves a lexical sample of nouns verbsadjectives and adverbsboth annotators and sys tems select one or more substitutes for the target word in the context of a sentencethe data wasselected from the english internet corpus of english produced by sharoff from the inter net this is a balanced corpus similar in flavour to the bnc though with less bias to british english obtained by sampling data from the webannotators are notprovided with the pos but the systems areannotators can provide up to three substitutes but all should be equally as goodthey are instructed that they can provide aphrase if they cannot think of a good single word sub stitutethey can also use a slightly more general word if that is close in meaningthere is a nameresponse if the target is part of a proper name and nilresponse if annotators cannot think of a good substitutethe subjects are also asked to identify 48 if they feel the target word is an integral part of a phrase and what that phrase wasthis optionwas envisaged for evaluation of multiword detectionannotators did sometimes use it for paraphras ing a phrase with another phrasehowever for an item to be considered a constituent of a multiword a majority of at least 2 annotators had to identify the same multiword1 the annotators were 5 native english speakers from the ukthey each annotated the entire datasetall annotations were semiautomatically lemma tised unless the lemmatised version would change the meaning of the substitute or if it was not obvious what the canonical version of the multiword should be21 data selectionthe data set comprises 2010 sentences 201 target words each with 10 sentenceswe released 300 for the trial data and kept the remaining 1710 for the test release298 of the trial and 
1696 of the test release remained after filtering items with less than 2 non nil and non name responses and a few witherroneous pos tagsthe words included were se lected either manually from examinationof a variety of lexical resources and corpora or au tomatically using information in these lexical resourceswords were selected from those having a number of different meanings each with at least onesynonymsince typically the distribution of mean ings of a word is strongly skewed for the test set we randomly selected 20 words ineach pos for which we manually selected the sen tences 2 whilst forthe remaining words the sentences were se lected randomly22 inter annotator agreementsince we have sets of substitutes for each item andannotator pairwise agreement was calculated between each pair of sets from each pos sible pairing as p1p2p p1p2 p1p2 p 1full instructions given to the annotators are posted at httpwwwinformaticssusxacukresearchnlpmccarthyfiles instructionspdf2there were only 19 verbs due to an error in automatic selection of one of the verbs picked for manual selection of sen tencespairwise interannotator agreement was 27757393 had modes and pairwise agreement with themode was 5067agreement is increased if we remove one annotator who typically gave 2 or 3 sub stitutes for each item which increased coverage butreduced agreementwithout this annotator inter annotator agreement was 3113 and 647 with modemultiword detection pairwise agreement was 9230 and agreement on the identification of the exact form of the actual multiword was 4413we have 3 separate subtasks 1 best 2 oot and 3 mw which we describe below3 in the equations and results tables that follow we use p for precision r for recall and mode p and mode r where we calculate precision and recall against the substitute chosen by the majority of annotators provided that there is a majoritylet h be the set of annotators t be the set of test items with 2 or more responses and hi be the set of responses for an item i t for annotator h h for each i t we calculate the mode ie the most frequent response provided that there is a response more frequent than the othersthe set of items where there is such a mode is referred to as tm let a be the set of items from t where the system provides at least one sub stitutelet ai i a be the set of guesses from the system for item i for each i we calculate the multiset union for all hi for all h h and for each unique type in hi willhave an associated frequency for the num ber of times it appears in hifor example given an item for happya supposing the annotators had supplied answers as follows annotator responses 1 glad merry 2 glad 3 cheerful glad 4 merry 5 jovial3the scoring measures are as described in the doc ument at httpnlpcsswarthmoreedusemevaltaskstask10 task10documentationpdf released with our trial data49 then hi would be glad glad glad merry merry cheerful jovialthe res with associated frequencies would be glad 3 merry 2 cheerful 1 and jovial 1best measures this requires the best file produced by the system which gives as many guesses as the system believes are fitting but where the credit for each correct guess is divided by the number of guessesthe first guess in the list is taken as the best guess p aiia resai freqres ai hi a are aiit resai freqres ai hi t mode p bgiam 1 if bg mi am mode are bgitm 1 if bg mi tm a system is permitted to provide more than one response just as the annotators werethey cando this if they are not sure which response is bet ter however systems will maximise the 
score if theyguess the most frequent response from the annotatorsfor p and r the credit is divided by the number of guesses that a system makes to prevent a system simply hedging its bets by providing many re sponsesthe credit is also divided by the number of responses from annotatorsthis gives higher scores to items with less variationwe want to emphasise test items with better agreementusing the example for happya id 9999 above ifthe systems responses for this item was glad cheer ful the credit for a9999 in the numerator of p and r would be 31 2 7 286 for mode p and mode r we use the systemsfirst guess and compare this to the mode of the anno tators responses on items where there was a response more frequent than the othersoot measures this allows a system to make up to 10 guessesthe credit for each correct guess is not divided by the number of guessesthis allows for the fact that there is a lot of variation for the task and we only have 5 annotatorswith 10 guesses there is a better chance that the systems find the responses of these 5 annotatorsthere is no ordering of the guesses and the mode scores give credit where the mode was found in one of the systems 10 guessesp aiia resai freqres hi a are aiit resai freqres hi t mode p aiiam 1 if any guess ai mi am mode are aiitm 1 if any guess ai mi tm mw measures for this measure a system must identify items where the target is part of a multiword and what the multiword is the annotators do not all have linguistics background they are simply asked if the target is an integral part of a phrase and if so what the phrase is sometimes this option is usedby the subjects for paraphrasing a phrase of the sentence but typically it is used when there is a mul tiwordfor scoring a multiword item is one with a majority vote for the same multiword with more than 1 annotator identifying the multiwordlet mw be the subset of t for which there is such a multiword response from a majority of at least 2 annotatorslet mwi mw be the multiword identified by majority vote for item i let mwsys be the subset of t for which there is a multiword response from the system and mwsysi be a multiword specified by the system for item i detection p mwsysimwsys 1 if mwi exists at i mwsys detection are mwsysimw 1 if mwi exists at i mw identification p mwsysimwsys 1 if mwsysi mwi mwsys 50 identification are mwsysimw 1 if mwsysi mwi mw 31 baselineswe produced baselines using wordnet 21 and a number of distributional similarity measuresfor the wordnet best baseline we found the best ranked synonym using the criteria 1 to 4 below in orderfor wordnet oot we found up to 10 synonyms using criteria 1 to 4 in order until 10 were found 1synonyms from the first synset of the targetword and ranked with frequency data obtained from the bnc 2synonyms from the hypernyms or closely related classes of that first synset ranked with the frequency data3synonyms from all synsets of the target wordand ranked using the bnc frequency data4synonyms from the hypernyms or closely related classes of all synsets of the target ranked with the bnc frequency datawe also produced best and oot baselines usingthe distributional similarity measures l1 jaccard co sine lin and sd 4we tookthe word with the largest similarity for best and the top 10 for ootfor mw detection and identification we used wordnet to detect if a multiword in wordnet which includes the target word occurs within a window of 2 words before and 2 words after the target word9 teams registered and 8 participated and two ofthese teams each entered 
two sys tems we distinguish the first and second systems with a 1 and 2 suffix respectivelythe systems all used 1 or more predefined inven toriesmost used web queries or web data to obtain counts fordisambiguation with some using algorithms to derive domain or cooccurrence infor mation from the bncmost systems did not use sense tagged data for disambiguation though melbdid use semcor for filtering in frequent synonyms and unt used a semisupervised word sense disambiguation combined with a host ofother techniques including machine translation en ginesin tables 1 to 3 we have ordered systems according to r on the best task and in tables 4 to 6 according to r on ootwe show all scores as per centages ie we multiply the scores in section 3 by 100in tables 3 and 6 we show results using the subset of items which were i not identified asmultiwords ii scored only on non multi word substitutes from both annotators and systems unfortunately we do not have space to show the analysis for the man and rand subsets hereplease refer to the task website for these results5 we retain the same ordering for the further analysis tables when we look at subsets of the dataalthough there are further differences in the systems which would warrant reranking on anindividual analysis since we combined the subanal yses in one table we keep the order as for 1 and 4 respectively for ease of comparisonthere is some variation in rank order of the sys tems depending on which measures are used6 kuis highest ranking on r for bestunt is best at find ing the mode particularly on oot though it is the most complicated system exploiting a great many knowledge sources and componentsirst2 does well at finding the mode in bestthe irst2 best r score is lower because it supplied many answers for each item however it achieves the best r score on the oot taskthe baselines are outperformed by most systemsthe wordnet baseline outperformsthose derived from distributional methodsthe dis tributional methods especially lin show promising results given that these methods are automatic and 5the task website is at httpwwwinformaticssussexacuk researchnlpmccarthytask10indexhtml6there is not a big difference between p and r because systems typically supplied answers for most items51 systems p r mode p mode r ku 1290 1290 2065 2065 unt 1277 1277 2073 2073 melb 1268 1268 2041 2041 hit 1135 1135 1886 1886 usyd 1123 1088 1822 1764 irst1 806 806 1309 1309 irst2 695 694 2033 2033 tor 298 298 472 472 table 1 best results systems p r mode p mode r wordnet 995 995 1528 1528 lin 884 853 1469 1423 l1 811 782 1335 1293 lee 699 674 1134 1098 jaccard 684 660 1117 1081 cos 507 489 764 740 table 2 best baseline results do not require handcrafted inventoriesas yet we have not combined the baselines with disambiguation methodsonly hit attempted the mw taskit outperforms all baselines from wordnet51 post hoc analysischoosing a lexical substitute for a given word is not clear cut and there is inherently variation in thetasksince it is quite likely that there will be syn onyms that the five annotators do not think of we conducted a post hoc analysis to see if the synonyms selected by the original annotators were better on the whole than those in the systems responseswe randomly selected 100 sentences from the subset of items which had more than 2 single word substitutes no name responses and where the target word was nmwt nmws systems p r p r ku 1339 1339 1433 1398 unt 1346 1346 1379 1379 melb 1335 1335 1419 1382 hit 1197 1197 1255 1238 usyd 1168 1134 1248 1210 irst1 
844 844 898 892 irst2 725 724 767 766 tor 322 322 332 332 table 3 further analysis for best systems p r mode p mode r irst2 6903 6890 5854 5854 unt 4919 4919 6626 6626 ku 4615 4615 6130 6130 irst1 4123 4120 5528 5528 usyd 3607 3496 4366 4228 swag2 3780 3466 5018 4602 hit 3388 3388 4691 4691 swag1 3553 3283 4741 4382 tor 1119 1119 1463 1463 table 4 oot results systems p r mode p mode r wordnet 2970 2935 4057 4057 lin 2770 2672 4047 3919 l1 2409 2323 3610 3496 lee 2009 1938 2981 2886 jaccard 1823 1758 2687 2602 cos 1407 1358 2082 2016 table 5 oot baseline results nmwt nmws systems p r p r irst2 7204 7190 7619 7606 unt 5113 5113 5401 5401 ku 4843 4843 4972 4972 irst1 4311 4308 4513 4511 usyd 3726 3617 4013 3889 swag2 3995 3651 4097 3775 hit 3560 3560 3663 3663 swag1 3749 3464 3836 3567 tor 1177 1177 1222 1222 table 6 further analysis for oot hit wordnet bl p r p r detection 4534 5615 4364 3692 identification 4161 5154 4000 3385 table 7 mw results 52 good reasonable bad sys 907 1908 7185 origa 3736 4101 2163 table 8 post hoc resultsnot one of those identified as a multiword we then mixed the substitutes from the human annotators with those of the systemsthree fresh annotators7 were given the test sentence and asked to categorise the randomly ordered substitutes as good reasonable or badwe take the majority verdict for each substitute but if there is one reasonable and one good verdict thenwe categorise the substitute as reasonablethe per centage of substitutes for systems and original annotators categorised as good reasonableand bad by the post hoc annotators are shown in ta ble 8we see the substitutes from the humans have a higher proportion of good or reasonable responsesby the post hoc annotators compared to the substi tutes from the systemswe think this task is an interesting one in which to evaluate automatic approaches of capturing lexical meaningthere is an inherent variation in the task because several substitutes may be possible for a given contextthis makes the task hard and scoring is less straightforward than a task which has fixed choiceson the other hand we believe the task taps into human understanding of word meaning and we hope that computers that perform well on this task will have potential in nlp applicationssince a predefined inventory is not used the task allows usto compare lexical resources as well as disambiguation techniques without a bias to any predefined inventoryit is possible for those interested in disam biguation to focus on this rather than the choice of substitutes by using the union of responses from the annotators in future experimentswe acknowledge support from the royal society uk for fund ing the annotation for the project and for a dorothy hodgkin 7again these were native english speakers from the ukfellowship to the first authorwe also acknowledge support to the second author from interop noe we thank the annotators for their hard workwe thank serge sharoff for the use of his internet corpus julie weeds for the software we used for producing the distributional similarity baselines and suzanne stevenson for suggesting the oot task
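The precision and recall definitions for the best subtask are hard to read in the extracted text above, so the following sketch restates them: the credit for an item is the summed annotator frequency of the system's guesses, divided by the number of guesses and by the number of annotator responses; precision averages this credit over the items answered, recall over all items. Here gold is assumed to map item ids to a multiset (Counter) of annotator responses and system to map item ids to guess lists; these names are ours.

from collections import Counter

def best_score(system, gold):
    def credit(guesses, hi):
        # hi is the multiset of annotator responses for the item.
        return sum(hi[g] for g in guesses) / (len(guesses) * sum(hi.values()))
    answered = {i: g for i, g in system.items() if g}
    total_credit = sum(credit(g, gold[i]) for i, g in answered.items())
    p = total_credit / len(answered) if answered else 0.0
    r = total_credit / len(gold)
    return p, r

On the paper's "happy" example, where the annotators supplied glad three times, merry twice, and cheerful and jovial once each, a system guessing glad and cheerful receives (3 + 1) / (2 * 7), roughly 0.286, matching the worked example in the text; the out-of-ten (oot) measure simply drops the division by the number of guesses.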
W07-2009
SemEval-2007 Task 10: English Lexical Substitution Task. In this paper we describe the English lexical substitution task for SemEval. In the task, annotators and systems find an alternative substitute word or phrase for a target word in context. The task involves both finding the synonyms and disambiguating the context. Participating systems are free to use any lexical resource. There is a subtask which requires identifying cases where the word is functioning as part of a multiword in the sentence and detecting what that multiword is. In the lexical substitution task a system attempts to generate a word to replace a target word such that the meaning of the sentence is preserved. We establish a benchmark for context-sensitive lexical similarity models.
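The pairwise inter-annotator agreement measure described in the task paper above (the overlap of two annotators' substitute sets divided by their union, averaged over all annotator pairs and items) is also garbled by the extraction; a small sketch of it follows. The nested dictionary layout, item ids and annotator ids are our own assumptions about how the annotations would be stored.

from itertools import combinations

def pairwise_agreement(annotations):
    # annotations: item id -> {annotator id: set of substitutes}.
    scores = []
    for per_annotator in annotations.values():
        for a1, a2 in combinations(sorted(per_annotator), 2):
            s1, s2 = per_annotator[a1], per_annotator[a2]
            if s1 | s2:
                scores.append(len(s1 & s2) / len(s1 | s2))
    return sum(scores) / len(scores) if scores else 0.0

Computed this way over the five annotators, the paper reports agreement of around 27.75%, rising to 31.13% when the most prolific annotator is excluded.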
the semeval2007 weps evaluation establishing a benchmark for the web people search task this paper presents the task definition resources participation and comparative re sults for the web people search task which was organized as part of the semeval2007 evaluation exercise this task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name finding information about people in the world wide web is one of the most common activities of internetusersperson names however are highly ambigu ousin most cases the results for a person name search are a mix of pages about different peoplesharing the same namethe user is then forced ei ther to add terms to the query orto browse every document in order to filter the infor mation about the person he is actually looking forin an ideal system the user would simply type aperson name and receive search results clustered ac cording to the different people sharing that nameand this is in essence the weps task we have proposed to semeval2007 participants systems receive a set of web pages and they have to cluster them in as many sets as entities sharing the namethis task has close links with word sense disambiguation which is generally formulated as the taskof deciding which sense a word has in a given contextin both cases the problem addressed is the resolution of the ambiguity in a natural language expressiona couple of differences make our problem differentwsd is usually focused on open class words the first difference is that boundaries be tween word senses in a dictionary are often subtle or even conflicting making binary decisions harderand sometimes even useless depending on the ap plicationin contrast distinctions between people should be easier to establishthe second differenceis that wsd usually operates with a dictionary con taining a relatively small number of senses that can be assigned to each wordour task is rather a case of word sense discrimination because the number of senses is unknown a priori and it is in average much higher than in the wsd taskthere is also a strong relation of our proposedtask with the coreference resolution problem fo cused on linking mentions ina textour task can be seen as a coreference resolution problem where the focus is on solving inter document coreference disregarding the linking of all the mentions of an entity inside each documentan early work in name disambiguation uses the similarity between doc uments in a vector space using a bag of wordsrepresentationan alternative approach by mann and yarowsky is based on a rich feature space of automatically extracted biographic informationfleischman and hovy propose a maximum entropy model trained to give the probability that 64 two names refer to the same individual 1the paper is organized as followssection 2 provides a description of the experimental methodology the training and test data provided to the par ticipants the evaluation measures baseline systemsand the campaign designsection 3 gives a description of the participant systems and provides the evaluation resultsfinally section 4 presents some con clusions21 datafollowing the general semeval guidelines we have prepared trial training and test data sets for the task which are described below211 trial datafor this evaluation campaign we initially deliv ered a trial corpus for the potential participantsthe trial data consisted of an adapted version of the weps corpus described in the predominant feature of this corpus is a high 
number of entities in each document set due to the fact that the ambiguous names were extracted from the most common names in the us censusthis corpus did not completely match task specifications because it did not consider documents with internal ambiguity nor it did consider nonperson entities but it was however a costeffective way of releasing data toplay around withduring the first weeks after releasing this trial data to potential participants some annotation mistakes were noticedwe preferred how ever to leave the corpus as is and concentrate our efforts in producing clean training and test datasets rather than investing time in improving trial data212 training data in order to provide different ambiguity scenarios we selected person names from different sources us censuswe reused the web03 corpus which contains 32 names randomly picked from the us census and was well suited for the taskwikipediaanother seven names were sampledfrom a list of ambiguous person names in the en glish wikipediathese were expected to have a1for a comprehensive bibliography on person name disam biguation refer to httpnlpunedesweps few predominant entities and therefore a lower ambiguity than the previous setecdlfinally ten additional names were ran domly selected from the program committee listing of a computer science conference this set offers a scenario of potentially low am biguity with the added value of the a priori knowl edge of a domain specific type of entity present in the dataall datasets consist of collections of web pages obtained from the 100 top results for a person name query to an internet search engine 2note that 100 is an upper bound because in some occasions the url returned by the search engine no longer existsthe second and third datasets consist of 17 person names and 1685 associated documents in total each web page was down loaded and stored for offline processingwe also stored the basic metadata associated to each search result including the original url title position in the results ranking and the corresponding snippet generated by the search enginein the process of generating the corpus the selection of the names plays an important role poten tially conditioning the degree of ambiguity that will be found later in the web search resultsthe reasonsfor this variability in the ambiguity of names are diverse and do not always correlate with the straight forward census frequencya much more decisivefeature is for instance the presence of famous en tities sharing the ambiguous name with less popular peopleas we are considering top search results these can easily be monopolized by a single entity that is popular in the internetafter the annotation of this data we found our predictions about the averageambiguity of each dataset not to be completely ac curatein table 1 we see that the ecdl06 average ambiguity is indeed relatively low wikipedia names have an average ambiguity of 2314 entities2we used the yahooapi from yahoosearch web ser vices 65 name entities documents discarded wikipedia names john kennedy 27 99 6 george clinton 27 99 6 michael howard 32 99 8 paul collins 37 98 6 tony abbott 7 98 9 alexander macomb 21 100 14 david lodge 11 100 9 average 2314 9900 829 ecdl06 names edward fox 16 100 36 allan hanbury 2 100 32 donna harman 7 98 6 andrew powell 19 98 48 gregory crane 4 99 17 jane hunter 15 99 59 paul clough 14 100 35 thomas baker 60 100 31 christine borgman 7 99 11 anita coleman 9 99 28 average 1530 9920 3030 web03 corpus tim whisler 10 33 8 roy tamashiro 5 23 6 cynthia voigt 1 
405 314 miranda bollinger 2 2 0 guy dunbar 4 51 34 todd platts 2 239 144 stacey doughty 1 2 0 young dawkins 4 61 35 luke choi 13 20 6 gregory brennan 32 96 38 ione westover 1 4 0 patrick karlsson 10 24 8 celeste paquette 2 17 2 elmo hardy 3 55 15 louis sidoti 2 6 3 alexander markham 9 32 16 helen cawthorne 3 46 13 dan rhone 2 4 2 maile doyle 1 13 1 alice gilbreath 8 74 30 sidney shorter 3 4 0 alfred schroeder 35 112 58 cathie ely 1 2 0 martin nagel 14 55 31 abby watkins 13 124 35 mary lemanski 2 152 78 gillian symons 3 30 6 pam tetu 1 4 2 guy crider 2 2 0 armando valencia 16 79 20 hannah bassham 2 3 0 charlotte bergeron 5 21 8 average 590 4720 1800 global average 1076 7102 2600 table 1 training data per name which is higher than for the ecdl setthe web03 corpus has the lowest ambiguity for two reasons first randomly picked names belong predominantly to the long tail of unfrequent person names which per se have low ambiguitybeing rare names implies that in averagethere are fewer documents returned by the search engine which also reduces the pos sibilities to find ambiguity213 test data for the test data we followed the same process described for the trainingin the name selection wetried to maintain a similar distribution of ambiguity degrees and scenariofor that reason we ran domly extracted 10 person names from the english wikipedia and another 10 names from participantsin the acl06 conferencein the case of the us census names we decided to focus on relatively com mon names to avoid the problems explained aboveunfortunately after the annotation was finished wefound a major increase in the ambiguity degrees of all data setswhile we expected a raise in the case of the us census names the other two cases just show that there is a high variability which would require much larger data sets to have reliable population samplesthis has made the task particularly challenging for participants because naive learning strategies might be misleaded by the training set214 annotationthe annotation of the data was performed separately in each set of documents related to an ambiguous namegiven this set of approximately 100 documents that mention the ambiguous name the an notation consisted in the manual clustering of eachdocument according to the actual entity that is re ferred on itwhen non person entities were found the annotation was performed without any special rulegenerally the annotator browses documents following the original ranking in the search results after reading a document he will decide whether thementions of the ambiguous name refer to a new en tity or to a entity previously identifiedwe asked the annotators to concentrate first on mentions that strictly contained the search string and then to pay attention to the coreferent variations of the namefor instance john edward foxor edward fox smithwould be valid mentionsedward j fox however breaks the original search string and we do not get into name variation detection so it will be considered valid only if it is coreferent to a valid 66 name entities documents discarded wikipedia names arthur morgan 19 100 52 james morehead 48 100 11 james davidson 59 98 16 patrick killen 25 96 4 william dickson 91 100 8 george foster 42 99 11 james hamilton 81 100 15 john nelson 55 100 25 thomas fraser 73 100 13 thomas kirk 72 100 20 average 5650 9930 1750 acl06 names dekang lin 1 99 0 chris brockett 19 98 5 james curran 63 99 9 mark johnson 70 99 7 jerry hobbs 15 99 7 frank keller 28 100 20 leon barrett 33 98 9 robert moore 38 98 28 sharon goldwater 2 97 4 
stephen clark 41 97 39 average 3100 9840 1280 us census names alvin cooper 43 99 9 harry hughes 39 98 9 jonathan brooks 83 97 8 jude brown 32 100 39 karen peterson 64 100 16 marcy jackson 51 100 5 martha edwards 82 100 9 neil clark 21 99 7 stephan johnson 36 100 20 violet howard 52 98 27 average 5030 9910 1490 global average 4593 9893 1507 table 2 test data mentionin order to perform the clustering the annotatorwas asked to pay attention to objective facts and to be conservative when making decisionsthefinal result is a complete clustering of the docu ments where each cluster contains the documentsthat refer to a particular entityfollowing the pre vious example in documents for the name edward foxthe annotator found 16 different entities with that namenote that there is no a priori knowledge about the number of entities that will be discovered in a document setthis makes the task specially difficult when there are many different entities and a high volume of scattered biographical information to take into accountin cases where the document does not offer enough information to decide whether it belongs to a cluster or is a new entity it is discarded from the evaluation process another common reason for discarding documents was theabsence of the person name in the document usu ally due to a mismatch between the search engine cache and the downloaded urlwe found that in many cases different entities were mentioned using the ambiguous name within asingle documentthis was the case when a doc ument mentions relatives with names that contain the ambiguous string another common case ofintradocument ambiguity is that of pages contain ing database search results such as book lists from amazon actors from imdb etc a similar case is that of pages that explicitly analyze the ambiguity of a person name the way this situation was handled in terms of the annotation was to assign each document to as many clusters as entities were referred to on it with the ambiguous name22 evaluation measuresevaluation was performed in each document set of the data distributed as testthe human annotation was used as the gold standard for the evaluationeach system was evaluated using the standard pu rity and inverse purity clustering measures purity isrelated to the precision measure well known in in formation retrievalthis measure focuses on the frequency of the most common category in eachcluster and rewards the clustering solutions that in troduce less noise in each clusterbeing c the set of clusters to be evaluated l the set of categories and n the number of clustered elements purity is computed by taking the weighted average of maximal precision values purity i ci n max precisionwhere the precision of a cluster ci for a given cat egory lj is defined as precision ci lj ciinverse purity focuses on the cluster with maximum recall for each category rewarding the clus tering solutions that gathers more elements of each category in a corresponding single clusterinverse purity is defined as 67 inverse purity i li n max precisionfor the final ranking of systems we used the har monic mean of purity and inverse purity f05 the f measure is defined as follows f 1 1purity 1 inverse purityf02 is included as an additional measure giv ing more importance to the inverse purity aspectthe rationale is that for a search engine user it should be easier to discard a few incorrect web pages in a cluster containing all the informationneeded than having to collect the relevant infor mation across many different clusterstherefore achieving a 
high inverse purity should be rewarded more than having high purity23 baselinestwo simple baseline approaches were applied to the test datathe allinone baseline provides aclustering solution where all the documents are assigned to a single clusterthis has the effect of always achieving the highest score in the inverse purity measure because all classes have their docu ments in a single clusteron the other hand the purity measure will be equal to the precision of thepredominant class in that single clusterthe one inone baseline gives another extreme clusteringsolution where every document is assigned to a dif ferent clusterin this case purity always gives its maximum value while inverse purity will decrease with larger classes24 campaign designthe schedule for the evaluation campaign was set by the semeval organisation as follows release task description and trial data set release of training and test participants send their answers to the task organizers the task organizers evaluate the answers and send the resultsthe task description and the initial trial data set were publicly released before the start of the official evaluationthe official evaluation period started with the simultaneous release of both training and test data to gether with a scoring script with the main evaluation measures to be usedthis period spanned five weeksin which teams were allowed to register and down load the dataduring that period results for a giventask had to be submitted no later than 21 days af ter downloading the training data and no later than 7days after downloading the test dataonly one sub mission per team was allowedtraining data included the downloaded webpages their associated metadata and the human clustering of each document set providing a develop ment testbed for the participants systemswe also specified the source of each ambiguous name in the training data test data only included the downloaded web pages and their metadatathis section of thecorpus was used for the systems evaluationpartici pants were required to send a clustering for each test document setfinally after the evaluation period was finishedand all the participants sent their data the task orga nizers sent the evaluation for the test data29 teams expressed their interest in the task thisnumber exceeded our expectations for this pilot ex perience and confirms the potential interest of theresearch community in this highly practical prob lemout of them 16 teams submitted results within the deadline their results are reported below31 results and discussiontable 3 presents the macroaveraged results ob tained by the sixteen systems plus the two baselineson the test datawe found macroaverage 3 preferable to microaverage 4 because it has a clear inter pretation if the evaluation measure is f then we should calculate f for every test case and then average over all trialsthe interpretation of microaverage f is less clearthe systems are ranked according to the scores obtained with the harmonic mean measure f05 of 3macroaverage f consists of computing f for every test set and then averaging over all test sets4microaverage f consists of computing the average p and ip and then calculating f with these figures68 macroaveraged scores fmeasures rank teamid 5 2 pur inv pur 1 cu comsem 78 83 72 882 irstbp 75 77 75 803 psnus 75 78 73 825 shef 66 73 60 826 fico 64 76 53 907 unn 62 67 60 738 oneinone 61 52 100 479 aug 60 73 50 8810 swativ 58 64 55 7111 uazsa 58 60 58 6412 titpi 57 71 45 8913 jhu113 53 65 45 8214 dfki2 50 63 39 8315 wit 49 66 36 9316 
uc3m 13 48 66 35 9517 ubcas 40 55 30 9118 allinone 40 58 29 100table 3 team rankingpurity and inverse purityconsidering only the par ticipant systems the average value for the ranking measure was 0 60 and its standard deviation 0 11results with f02 are not substantially different there are some ranking swaps but gen erally only within close pairsthe good performance of the oneinone baseline system is indicative of the abundance of singleton entities this situation increases the inverse purity score for this system giving a harmonic measure higher than the expectedthe weps task ended with considerable success in terms of participation and we believe that a careful analysis of the contributions made by participants will be an interesting reference for future re searchin addition all the collected and annotated dataset will be publicly available 5 as a benchmark for web people search systemsat the same time it is clear that building a re liable testbed for the task is not simplefirst ofall the variability across test cases is large and un predictable and a system that works well with the 5httpnlpunedeswepsnames in our test bed may not be reliable in practi cal open search situationspartly because of thatour testbed happened to be unintentionally challenging for systems with a large difference be tween the average ambiguity in the training and test datasetssecondly it is probably necessary to think about specific evaluation measures beyond standard clustering metrics such as purity and inverse puritywhich are not tailored to the task and do not be have well when multiple classification is allowedwe hope to address these problems in a forthcom ing edition of the weps taskthis research was supported in part by the national science foundation of united states under grantiis00325657 and by a grant from the spanish government under project textmess this paper does not necessarily reflect the po sition of the yous government
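The purity, inverse purity and F definitions used for the WePS ranking appear in fragmented form in the text above; the sketch below restates them. Clusters and gold classes are represented as lists of sets of document ids, which is our own convention, and no attempt is made to reproduce the official scoring script.

def purity(clusters, classes, n):
    # Purity = sum over clusters C_i of |C_i|/n * max over classes L_j of |C_i ∩ L_j| / |C_i|.
    return sum((len(c) / n) * max(len(c & l) / len(c) for l in classes)
               for c in clusters)

def f_measure(clusters, classes, n, alpha=0.5):
    # F_alpha = 1 / (alpha / purity + (1 - alpha) / inverse purity);
    # alpha = 0.5 is the harmonic mean used for the official ranking, and
    # alpha = 0.2 gives the extra weight to inverse purity discussed in the paper.
    pur = purity(clusters, classes, n)
    inv = purity(classes, clusters, n)   # inverse purity swaps clusters and classes
    return 1.0 / (alpha / pur + (1.0 - alpha) / inv)

Swapping the two arguments is all that is needed for inverse purity, since the two measures are symmetric in the roles of clusters and gold classes.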
W07-2012
The SemEval-2007 WePS Evaluation: Establishing a Benchmark for the Web People Search Task. This paper presents the task definition, resources, participation, and comparative results for the Web People Search task, which was organized as part of the SemEval-2007 evaluation exercise. The task consists of clustering a set of documents that mention an ambiguous person name according to the actual entities referred to using that name. We consider the problem of disambiguating person names in a web searching scenario. The goal of the Web People Search task is to assign web pages to groups, where each group contains all pages that refer to one unique entity. Our Web People Search task has created a benchmark dataset.
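For completeness, here is a sketch of the two baseline clusterings described in the task paper; fed into the purity/F sketch above they reproduce the behaviour discussed there. The function names and the list-of-sets representation are ours.

def one_in_one(doc_ids):
    # Every document in its own cluster: purity is maximal, while inverse
    # purity falls as gold classes grow larger.
    return [{d} for d in doc_ids]

def all_in_one(doc_ids):
    # A single cluster holding every document: inverse purity is maximal,
    # and purity equals the share of the predominant entity.
    return [set(doc_ids)]

The strong showing of ONE-IN-ONE in the official ranking (F_0.5 of 0.61, outscoring more than half of the participating systems) follows directly from the abundance of singleton entities in the test data, as the paper notes.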
semeval2007 task 15 tempeval temporal relation identification the tempeval task proposes a simple way to evaluate automatic extraction of temporalrelations it avoids the pitfalls of evaluating a graph of interrelated labels by defining three sub tasks that allow pairwise eval uation of temporal relations the task not only allows straightforward evaluation italso avoids the complexities of full tempo ral parsing newspaper texts narratives and other texts describe events that occur in time and specify the temporallocation and order of these eventstext comprehen sion amongst other capabilities clearly requires the capability to identify the events described in a text and locate these in timethis capability is crucial to a wide range of nlp applications from document summarization and question answering to machine translationrecent work on the annotation of events and temporal relations has resulted in both a defacto stan dard for expressing these relations and a handbuiltgold standard of annotated textstimeml is an emerging iso standard for annotation of events temporal expressions and the anchoring and ordering relations between themtimebank was originally conceived of as aproof of concept that illustrates the timeml lan guage but has since gone through several rounds of revisions and can now be considered a gold standard for temporal informationtimeml and timebank have already been used as the basis for automatic time event and temporal relation annotation tasks in a number of research projects in recent years an open evaluation challenge in the area of temporal annotation should serve to drive research forward as it has in other areas of nlpthe automatic identification of all temporal referring expres sions events and temporal relations within a text is the ultimate aim of research in this areahowever addressing this aim in a first evaluation challenge was judged to be too difficult both for organizers and participants and a staged approach was deemedmore effectivethus we here present an initial eval uation exercise based on three limited tasks that webelieve are realistic both from the perspective of as sembling resources for development and testing and from the perspective of developing systems capable of addressing the tasksthey are also tasks whichshould they be performable automatically have ap plication potentialthe tasks as originally proposed were modified slightly during the course of resource development for the evaluation exercise due to constraints on dataand annotator availabilityin the following we de scribe the tasks as they were ultimately realized in the evaluationthere were three tasks a b and c for allthree tasks the data provided for testing and train ing includes annotations identifying sentence boundaries all temporal referring expression as 75 specified by timex3 all events as specifiedin timeml selected instances of temporal re lations as relevant to the given taskfor tasks a and b a restricted set of event terms were identified those whose stems occurred twenty times or more in timebankthis set is referred to as the event target list or etltask a this task addresses only the temporal re lations holding between time and event expressions that occur within the same sentencefurthermore only event expressions that occur within the etl areconsideredin the training and test data tlink an notations for these temporal relations are provided the difference being that in the test data the relation type is withheldthe task is to supply this labeltask b this task addresses 
only the temporal relations holding between the document creation time and event expressionsagain onlyevent expressions that occur within the etl are con sideredas in task a tlink annotations for these temporal relations are provided in both training and test data and again the relation type is withheld in the test data and the task is to supply this labeltask c task c relies upon the idea of their beinga main event within a sentence typically the syn tactically dominant verbthe aim is to assign thetemporal relation between the main events of adja cent sentencesin both training and test data the main events are identified and tlinks between these main events are suppliedas for tasks a and b the task here is to supply the correct relation label for these tlinksthe tempeval annotation language is a simplifiedversion of timeml 1for tempeval we use the fol lowing five tags tempeval s timex3 event and tlinktempeval is the document root and s marks sentence boundariesall sentence tags in the tempeval data are automatically created using the alembic natural language processing toolsthe other three tags are discussed here in more detail1see httpwwwtimemlorg for language specifica tions and annotation guidelines timex3tags the time expressions in the textit is identical to the timex3 tag in timemlsee the timeml specifications and guidelines for further details on this tag and its attributeseach document has one special timex3 tagthe document creation time which is inter preted as an interval that spans a whole dayeventtags the event expressions in the textthe interpretation of what an event is is taken from timeml where an event is a cover term for predicates describing situations that happen or occur as well as some but not all stative predicatesevents can be denoted by verbsnouns or adjectivesthe tempeval event an notation scheme is somewhat simpler than thatused in timeml whose complexity was designed to handle event expressions that intro duced multiple event instances thiscomplication was not necessary for the tem peval datathe most salient attributes encodetense aspect modality and polarity informa tionfor tempeval task c one extra attribute is added mainevent with possible values yes and no tlinkthis is a simplified version of the timeml tlink tagthe relation types for the timeml version form a finegrained set based on james allens interval logic for tempeval we use only six relation typesincluding the three core relations before after and overlap the two less specific relations beforeoroverlap and overlap orafter for ambiguous cases and finally therelation vague for those cases where no partic ular relation can be establishedas stated above the tlinks of concern for each task are explicitly included in the training and in thetest datahowever in the latter the reltype at tribute of each tlink is set to unknownfor each task the system must replace the unknown values with one of the six allowed values listed abovethe event and timex3 annotations were takenverbatim from timebank version 122 the annota 2timebank 12 is available for free through the linguistic data consortium see httpwwwtimemlorg for more 76tion procedure for tlink tags involved dual annotation by seven annotators using a webbased anno tation interfaceafter this phase three experiencedannotators looked at all occurrences where two an notators differed as to what relation type to select and decided on the best optionfor task c there was an extra annotation phase where the main events were marked upmain events are those events that 
are syntactically dominant in the sentences. it should be noted that annotation of temporal relations is not an easy task for humans due to rampant temporal vagueness in natural language. as a result interannotator agreement scores are well below the often kickedaround threshold of 90% both for the timeml relation set as well as the tempeval relation set. for timeml temporal links an interannotator agreement of 0.77 was reported where agreement was measured by the average of precision and recall. the numbers for tempeval are even lower with an agreement of 0.72 for anchorings of events to times and an agreement of 0.65 for event orderings. obviously numbers like this temper the expectations for automatic temporal linking. the lower number for tempeval came a bit as a surprise because after all there were fewer relations to choose from. however the tempeval annotation task is different in the sense that it did not give the annotator the option to ignore certain pairs of events and made it therefore impossible to skip hardtoclassify temporal relations. in full temporal annotation evaluation of temporal annotation runs into the same issues as evaluation of anaphora chains simple pairwise comparisons may not be the best way to evaluate. in temporal annotation for example one may wonder how the response in (2) should be evaluated given the key in (1)

(1) a before b   a before c   b equals c
(2) a after b    a after c    b equals c

scoring the response at 0.33 precision misses the interdependence between the temporal relations. what we need to compare is not individual judgements but two partial orders. for tempeval however the tasks are defined in such a way that a simple pairwise comparison is possible since we do not aim to create a full temporal graph and judgements are made in isolation. recall that there are three basic temporal relations as well as three disjunctions over this set. the addition of these disjunctions raises the question of how to score a response of for example before given a key of beforeoroverlap. we use two scoring schemes strict and relaxed. the strict scoring scheme only counts exact matches as success. for example if the key is overlap and the response beforeoroverlap then this is counted as failure. we can use standard definitions of precision and recall

  precision = rc / r        recall = rc / k

where rc is the number of correct answers in the response, r the total number of answers in the response and k the total number of answers in the key. for the relaxed scoring scheme precision and recall are defined as

  precision = rcw / r       recall = rcw / k

where rcw reflects the weighted number of correct answers. a response is not simply counted as 1 or 0 but is assigned one of the values in table 1

       b     o     a     bo    oa    v
  b    1     0     0     0.5   0     0.33
  o    0     1     0     0.5   0.5   0.33
  a    0     0     1     0     0.5   0.33
  bo   0.5   0.5   0     1     0.5   0.67
  oa   0     0.5   0.5   0.5   1     0.67
  v    0.33  0.33  0.33  0.67  0.67  1

  table 1 evaluation weights

this scheme gives partial credit for disjunctions but not so much that noncommitment edges out precise assignments. for example assigning vague as the relation type for every temporal relation results in a precision of 0.33. six teams participated in the tempeval tasks. three of the teams used statistics exclusively one used a rulebased system and the other two employed a hybrid approach. this section gives a short description of the participating systems. cutmp trained three support vector machine models one for each task. all models used the goldstandard timebank features for events and times as well as syntactic features derived from the text. additionally the relation types obtained by running the task b system on the training data for
task a and task c were added as a feature to the two latter systemsa subset of features was selectedusing crossvalidations on the training data discarding features whose removal improved the cross validation fscorewhen applied to the test data the task b system was run first in order to supplythe necessary features to the task a and task c sys temslccte automatically identifies temporal refer ring expressions events and temporal relations in text using a hybrid approach leveraging variousnlp tools and linguistic resources at lccfor tem poral expression labeling and normalization they used a syntactic pattern matching tool that deploys a large set of handcrafted finite state rulesfor event detection they used a small set of heuristics as well as a lexicon to determine whether or not a token is an event based on the lemma part of speech and wordnet sensesfor temporal relation discovery lccte used a large set of syntactic and semantic features as input to a machine learning componentsnaistjapan defined the temporal relation iden tification task as a sequence labeling model in which the target pairs a timex3 and an eventare linearly ordered in the documentfor analyz ing the relative positions they used features fromdependency trees which are obtained from a dependency parserthe relative position between the tar get event and a word in the target timex3 is used as a feature for a machine learning based relation identifierthe relative positions between a word inthe target entities and another word are also intro ducedthe usfd system uses an offtheshelf machine learning suite treating the assignment of temporal relations as a simple classification taskthe features used were the ones provided in thetempeval data annotation together with a few features straightforwardly computed from the docu ment without any deeper nlp analysiswvalis approach for discovering intra sentence temporal relations relies on sentencelevelsyntactic tree generation bottomup propaga tion of the temporal relations between syntactic constituents a temporal reasoning mechanism that relates the two targeted temporal entities to their closest ancestor and then to each other and on conflict resolution heuristicsin establishing the temporal relation between an event and thedocument creation time the temporal ex pressions directly or indirectly linked to that event are first analyzed and if no relation is detected the temporal relation with the dct is propagatedtopdown in the syntactic treeintersentence tem poral relations are discovered by applying several heuristics and by using statistical data extracted from the training corpusxrcet used a rulebased system that relies on a deep syntactic analyzer that was extended to treattemporal expressionstemporal processing is inte grated into a more generic tool a general purpose linguistic analyzer and is thus a complement for a better general purpose text understanding systemtemporal analysis is intertwined with syntacticosemantic text processing like deep syntactic analysis and determination of thematic rolestempeval specific treatment is performed in a postprocessing stagethe results for the six teams are presented in tables 2 3 and 4team strict relaxed p r f p r f cutmp 061 061 061 063 063 063 lccte 059 057 058 061 060 060 naist 061 061 061 063 063 063 usfd 059 059 059 060 060 060 wvali 062 062 062 064 064 064 xrcet 053 025 034 063 030 041 average 059 054 056 062 057 059 stddev 003 013 010 001 012 008 table 2 results for task a 78 team strict relaxed p r f p r f cutmp 075 075 075 076 
076 076 lccte 075 071 073 076 072 074 naist 075 075 075 076 076 076 usfd 073 073 073 074 074 074 wvali 080 080 080 081 081 081 xrcet 078 057 066 084 062 071 average 076 072 074 078 074 075 stddev 003 008 005 003 006 003 table 3 results for task b team strict relaxed p r f p r f cutmp 054 054 054 058 058 058 lccte 055 055 055 058 058 058 naist 049 049 049 053 053 053 usfd 054 054 054 057 057 057 wvali 054 054 054 064 064 064 xrcet 042 042 042 058 058 058 average 051 051 051 058 058 058 stddev 005 005 005 004 004 004 table 4 results for task c all tables give precision recall and fmeasure for both the strict and the relaxed scoring scheme aswell as averages and standard deviation on the pre cision recall and fmeasure numbersthe entry for usfd is starred because the system developers are coorganizers of the tempeval task3 for task a the fmeasure scores range from 034 to 062 for the strict scheme and from 041 to 063 for the relaxed schemefor task b the scores range from 066 to 080 and 071 to 081 finally task c scores range from 042 to 055 and from 056 to 066 the differences between the systems is not spec tacularwvalis hybrid approach outperforms the other systems in task b and using relaxed scoring in task c as wellbut for task a the winners barely edge out the rest of the fieldsimilarly for task c using strict scoring there is no system that clearly separates itself from the fieldit should be noted that for task a and in lesser ex tent for task b the xrcet system has recall scores that are far below all other systemsthis seemsmostly due to a choice by the developers to not as sign a temporal relation if the syntactic analyzer did not find a clear syntactic relation between the two 3there was a strict separation between people assisting in the annotation of the evaluation corpus and people involved in system developmentelements that needed to be linked for the tempeval taskevaluationthe evaluation approach of tempeval avoids the in terdependencies that are inherent to a network of temporal relations where relations in one part of the network may constrain relations in any other part ofthe networkto accomplish that tempeval delib erately focused on subtasks of the larger problem of automatic temporal annotationone thing we may want to change to the present tempeval is the definition of task a currently it instructs to temporally link all events in a sentence to all time expressions in the same sentencein the future we may consider splitting this into two tasks where one subtask focuses on those anchorings thatare very local like white house spokesman marlin fitzwater said late yesterday thatwe expect both interannotator agreement and system per formance to be higher on this subtaskthere are two research avenues that loom beyondthe current tempeval definition of other subtasks with the ultimate goal of establishing a hierar chy of subtasks ranked on performance of automatictaggers and an approach to evaluate entire time linessome other temporal linking tasks that can be considered are ordering of consecutive events in a sentence ordering of events that occur in syntacticsubordination relations ordering events in coordi nations and temporal linking of reporting events to the document creation timeonce enough temporallinks from all these subtasks are added to the entire temporal graph it becomes possible to let confidence scores from the separate subtasks drive a con straint propagation algorithm as proposed in in effect using highprecision relations to constrain lowerprecision relations 
elsewhere in the graphwith this more complete temporal annotation it is no longer possible to simply evaluate the entire graph by scoring pairwise comparisonsinstead the entire timeline must be evaluatedinitial ideas regarding this focus on transforming the temporal graph of a document into a set of partial orders built 79 around precedence and inclusion relations and then evaluating each of these partial orders using some kind of edit distance measure4 we hope to have taken the first baby steps with the three tempeval taskswe would like to thank all the people who helped prepare the data for tempeval listed here in no particular order amber stubbs jessica littman hongyuan qiu emin mimaroglu emma barker catherine havasi yonit boussany roser saur and anna rumshiskythanks also to all participants to this new task steven bethard and james martin congmin min munirathnam srikanth and abraham fowler yuchang cheng masayuki asa hara and yuji matsumoto mark hepple andrea setzer and rob gaizauskas carolinehagege and xavier tannier and georgiana puscasyou part of the work in this paper was funded bythe dtoaquaint program under grant num ber n6133906c0140 and part funded by the eu vikef project
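the strict and relaxed scoring schemes described above reduce to a simple weighted count once the table 1 weights are in hand. the sketch below is a minimal illustration of those definitions, not the official tempeval scorer: the six labels (abbreviated b, o, a, bo, oa, v as in table 1) and the weights come from the text, while the function names and the input format (parallel lists of key and response labels, with none marking an unanswered tlink) are assumptions.

    # minimal sketch of tempeval strict and relaxed scoring (see table 1 above);
    # input format and function names are illustrative, not the official scorer.
    RELATIONS = ("b", "o", "a", "bo", "oa", "v")

    # table 1 weights; the matrix is symmetric, so only one triangle is listed
    _W = {
        ("b", "bo"): 0.5, ("o", "bo"): 0.5, ("o", "oa"): 0.5, ("a", "oa"): 0.5,
        ("bo", "oa"): 0.5, ("b", "v"): 0.33, ("o", "v"): 0.33, ("a", "v"): 0.33,
        ("bo", "v"): 0.67, ("oa", "v"): 0.67,
    }

    def weight(key, response):
        """weighted credit for a response label judged against the key label."""
        if key == response:
            return 1.0
        return _W.get((key, response), _W.get((response, key), 0.0))

    def evaluate(keys, responses):
        """return {'strict': (p, r), 'relaxed': (p, r)} for parallel, non-empty
        label lists; a response of None counts as an unanswered tlink."""
        answered = [(k, r) for k, r in zip(keys, responses) if r is not None]
        exact = sum(1.0 for k, r in answered if k == r)
        weighted = sum(weight(k, r) for k, r in answered)
        n_response, n_key = len(answered), len(keys)
        return {
            "strict": (exact / n_response, exact / n_key),
            "relaxed": (weighted / n_response, weighted / n_key),
        }

    # answering vague everywhere against core-relation keys gives the relaxed
    # precision of 0.33 mentioned in the text
    print(evaluate(["b", "o", "a"], ["v", "v", "v"]))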
W07-2014
semeval2007 task 15 tempeval temporal relation identification. the tempeval task proposes a simple way to evaluate automatic extraction of temporal relations. it avoids the pitfalls of evaluating a graph of interrelated labels by defining three subtasks that allow pairwise evaluation of temporal relations. the task not only allows straightforward evaluation it also avoids the complexities of full temporal parsing. temporal information processing is a topic of natural language processing boosted by our evaluation campaign tempeval. tempeval07 integrated 14 tlink relations into three before after and overlap
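as described in the tempeval task definitions above, systems receive tlink tags whose relation type is set to unknown and must replace it with one of the six allowed values. a minimal sketch of that step is below; the element and attribute names (tlink, reltype) follow the tag descriptions in the text, but the file handling, the casing and the constant-label baseline are assumptions about the data format rather than the released distribution.

    import xml.etree.ElementTree as ET

    ALLOWED = {"BEFORE", "AFTER", "OVERLAP",
               "BEFORE-OR-OVERLAP", "OVERLAP-OR-AFTER", "VAGUE"}

    def label_tlinks(path_in, path_out, predict):
        """replace every unknown tlink relation type with a predicted label.

        `predict` is any callable mapping a tlink element to one of the six
        allowed values; the attribute name `relType` is an assumption here.
        """
        tree = ET.parse(path_in)
        for tlink in tree.iter("TLINK"):
            if tlink.get("relType", "UNKNOWN").upper() == "UNKNOWN":
                label = predict(tlink)
                assert label in ALLOWED, "must be one of the six tempeval labels"
                tlink.set("relType", label)
        tree.write(path_out)

    # a trivial baseline that always answers OVERLAP:
    # label_tlinks("test.tml", "test.labeled.tml", lambda tlink: "OVERLAP")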
semeval2007 task17 english lexical sample srl and all words this paper describes our experience in preparing the data and evaluating the results for three subtasks of semeval2007 task17 lexical sample semantic role labeling and allwords respectively we tab ulate and analyze the results of participating systems correctly disambiguating words and correctly identifying the semantic relationships be tween those words is an important step forbuilding successful natural language processing applications such as text summarization question an swering and machine translationsemeval2007task17 focuses on both of these challenges wsd and srl using annotated english text taken from the wall street journal and the brown corpusit includes three subtasks i the traditional allwords task comprising finegrained word sense dis ambiguation using a 3500 word section of the wall street journal annotated with wordnet 21 sense tags ii a lexical sample task for coarsegrainedword sense disambiguation on a selected set of lex emes and iii semantic role labeling using two different types of arguments on the same subset of lexemes21 english finegrained allwordsin this task we measure the ability of systems to identify the correct finegrained wordnet 21 wordsense for all the verbs and head words of their argu ments211 data preparation we began by selecting three articles wsj 0105mrg wsj 0186mrg and wsj 0239mrg from a section of the wsj corpus that has been treebanked and propbankedall instances of verbs were identified using thetreebank partofspeech tags and also the head words of their noun arguments the locations of the sentences containing them as well as the locations of the verbs and the nouns within these sentences were recorded for subsequent senseannotationa total of 465 lemmas were selected from about 3500 words of textwe use a tool called stamp written by benjamin snyder for senseannotation of these instancesstamp accepts a list of pointers to the in stances that need to be annotatedthese pointers consist of the name of the file where the instance is located the sentence number of the instance and finally the word number of the ambiguous word within that sentencethese pointers were obtained as described in the previous paragraphstamp also requires a sense inventory which must be stored in xml formatthis sense inventory was obtained by querying wordnet 21 and storing the output as a 87set of xml files prior to taggingstamp works by displayingto the user the sentence to be annotated with the tar get word highlighted along with the previous and the following sentences and the senses from the sense inventorythe user can select one of the senses and move on to the next instancetwo linguistics students annotated the words with wordnet 21 sensesour annotators examined each instance upon which they disagreed and resolvedtheir disagreementsfinally we converted the re sulting data to the senseval formatfor this dataset we got an interannotator agreement of 72 on verbs and 86 for nouns212 results a total of 14 systems were evaluated on the all words taskthese results are shown in table 1we used the standard senseval scorer scorer21 to score the systemsall the fscores2 in this table as well as other tables in this paper are accompanied by a 95 confidence interval calculated using the bootstrap resampling procedure22 ontonotes english lexical sample wsdit is quite well accepted at this point that it is dif ficult to achieve high interannotator agreement onthe finegrained wordnet style senses and without a corpus with high 
annotator agreement auto matic learning methods cannot perform at a levelthat would be acceptable for a downstream applica tionontonotes is a project that has annotated several layers of semantic information including word senses at a high interannotator agreement of over 90therefore we decided to use this data for the lexical sample task221 data all the data for this task comes from the 1m wordwsj treebankfor the convenience of the partici pants who wanted to use syntactic parse information as features using an offtheshelf syntactic parser we decided to compose the training data of sections 0221for the test sets we use data from sections 1httpwwwcseunteduradasensevalsenseval3scoring 2scorer2 reports precision and recall scores for each systemfor a sys tem that attempts all the words both precision and recall are the samesince a few systems had missing answers they got different precision and recall scorestherefore for ranking purposes we consolidated them into an fscoretrain test total verb 8988 2292 11280 noun 13293 2559 15852 total 22281 4851 table 2 the number of instances for verbs andnouns in the train and test sets for the lexical sam ple wsd task01 22 23 and 24fortunately the distribution ofwords was amenable to an acceptable number of in stances for each lemma in the test setwe selecteda total of 100 lemmas con sidering the degree of polysemy and total instances that were annotatedthe average ita for these is over 90the training and test set composition is described in table 2the distribution across all the verbs and nouns is displayed in table 4 222 resultsa total of 13 systems were evaluated on the lexi cal sample tasktable 3 shows the precisionrecall for all these systemsthe same scoring software was used to score this task as well223 discussion for the all words task the baseline performanceusing the most frequent wordnet sense for the lemmas is 514the topperforming system was a supervised system that used a maximum entropy clas sifier and got a precisionrecall of 591 about 8 points higher than the baselinesince the coarse and finegrained disambiguation tasks have been part ofthe two previous senseval competitions and we happen to have access to that data we can take this op portunity to look at the disambiguation performancetrendalthough different test sets were used for ev ery evaluation we can get a rough indication of the trendfor the finegrained all words sense tagging task which has always used wordnet the systemperformance has ranged from our 59 to 652 to 69 because of time constraints on the data preparation this years task has proportionally more verbs and fewer nouns thanprevious allwords english tasks which may ac count for the lower scoresas expected the lexical sample task using coarse 88 rank participant system id classifier f 1 stephen tratz pnnl maxent 591452 hwee tou ng nuspt svm 587453 rada mihalcea untyahoo memorybased 583454 cai junfu nusml naive bayes 576455 oier lopez de lacalle ubcalm knn 544456 david martinez ubcumb2 knn 540457 jonathan chang pubcd exponential model 539458 radu ion racai unsupervised 527459 most frequent wordnet sense baseline na 5144510 davide buscaldi upvwsd unsupervised 4694511 sudip kumar naskar jusknsb unsupervised 4024512 david martinez ubcumb1 unsupervised 3994514 rafael berlanga tkbuo unsupervised 3254515 jordan boydgraber putop unsupervised 13245table 1 system performance for the allwords taskrank participant system classifier f 1 cai junfu nusml svm 887122 oier lopez de lacalle ubcalm svdknn 869125 lucia specia uspibm1 ilp 
851125 deniz yuret ku semisupervised 851126 saarikoski oe naive bayes svm 838127 university of technology brno vutbr naive bayes 803128 ana zelaia ubczas svdknn 799129 carlo strapparava itcirst svm 7961210 most frequent sense in training baseline na 7801211 toby hawker usyd svm 7431212 siddharth patwardhan umnd1 unsupervised 5381213 saif mohammad tor unsupervised 52112 toby hawker usydsvm 89112 carlo strapparava itcsvm 89112 table 3 system performance for the ontonotes lexical sample tasksystems marked with an were postcompetition bugfix submissionsgrained senses provides consistently higher per formance than previous more finegrained lexicalsample tasksthe high scores here were foreshad owed in an evaluation involving a subset of the data last summer note that the best system performance is now closely approaching the ita for this data of over 90table 4 shows theperformance of the top 8 systems on all the indi vidual verbs and nouns in the test setowing to space constraints we have removed some lemmas that have perfect or almost perfect accuraciesat theright are mentioned the average minimum and max imum performances of the teams per lemma and atthe bottom are the average scores per lemma and broken down by verbs and nounsa gap of about 10 points between the verb and noun performance seems to indicate that in general the verbs were more difficult than the nounshowever this might just be owing to this particular test sample having more verbs with higher perplexities and maybe even ones that are indeed difficult to disambiguate in spite of highhuman agreementthe hope is that better knowledge sources can overcome the gap still existing between the system performance and human agree mentoverall however this data indicates that theapproach suggested by and that is be ing adopted in the ongoing ontonotes project does result in higher system perfor mancewhether or not the more coarsegrained senses are effective in improving natural language processing applications remains to be seen89 lemma s s t t 1 2 3 4 5 6 7 8 average min max turnv 13 8 340 62 58 61 40 55 52 53 27 44 49 27 61 gov 12 6 244 61 64 69 38 66 43 46 31 39 49 31 69 comev 10 9 186 43 49 46 56 60 37 23 23 49 43 23 60 setv 9 5 174 42 62 50 52 57 50 57 36 50 52 36 62 holdv 8 7 129 24 58 46 50 54 54 38 50 67 52 38 67 raisev 7 6 147 34 50 44 29 26 44 26 24 12 32 12 50 workv 7 5 230 43 74 65 65 65 72 67 46 65 65 46 74 keepv 7 6 260 80 56 54 52 64 56 52 48 51 54 48 64 startv 6 4 214 38 53 50 47 55 45 42 37 45 47 37 55 leadv 6 6 165 39 69 69 85 69 51 69 36 46 62 36 85 seev 6 5 158 54 56 54 46 54 57 52 48 48 52 46 57 askv 6 3 348 58 84 72 72 78 76 52 67 66 71 52 84 findv 5 3 174 28 93 93 86 89 82 82 75 86 86 75 93 fixv 5 3 32 2 50 50 50 50 50 0 0 50 38 0 50 buyv 5 3 164 46 83 80 80 83 78 76 70 76 78 70 83 beginv 4 2 114 48 83 65 75 69 79 56 50 56 67 50 83 killv 4 1 111 16 88 88 88 88 88 88 88 81 87 81 88 joinv 4 4 68 18 44 50 50 39 56 57 39 44 47 39 57 endv 4 3 135 21 90 86 86 90 62 87 86 67 82 62 90 dov 4 2 207 61 92 90 90 93 93 90 85 84 90 84 93 examinev 3 2 26 3 100 100 67 100 100 67 100 33 83 33 100 reportv 3 2 128 35 89 91 91 91 91 91 91 86 90 86 91 regardv 3 3 40 14 93 93 86 86 64 86 57 93 82 57 93 recallv 3 1 49 15 100 100 87 87 93 87 87 87 91 87 100 provev 3 2 49 22 90 88 82 80 90 86 70 74 82 70 90 claimv 3 2 54 15 67 73 80 80 80 80 80 87 78 67 87 buildv 3 3 119 46 74 67 74 61 54 74 61 72 67 54 74 feelv 3 3 347 51 71 69 69 74 76 69 61 71 70 61 76 carev 3 3 69 7 43 43 43 43 100 29 57 57 52 29 100 contributev 2 2 35 18 67 72 72 67 50 61 
50 67 63 50 72 maintainv 2 2 61 10 80 80 70 100 80 90 90 80 84 70 100 complainv 2 1 32 14 93 86 86 86 86 86 86 79 86 79 93 proposev 2 2 34 14 100 86 100 86 100 93 79 79 90 79 100 promisev 2 2 50 8 88 88 75 88 75 75 62 88 80 62 88 producev 2 2 115 44 82 82 77 73 75 75 77 80 78 73 82 preparev 2 2 54 18 94 83 89 89 83 86 83 83 86 83 94 explainv 2 2 85 18 94 89 94 89 94 89 89 94 92 89 94 believev 2 2 202 55 87 78 78 86 84 78 74 80 81 74 87 occurv 2 2 47 22 86 73 91 96 86 96 86 82 87 73 96 grantv 2 2 19 5 100 80 80 80 40 80 60 80 75 40 100 enjoyv 2 2 56 14 50 57 57 50 64 57 50 57 55 50 64 needv 2 2 195 56 89 82 86 89 86 78 70 70 81 70 89 disclosev 1 1 55 14 93 93 93 93 93 93 93 93 93 93 93 pointn 9 6 469 150 91 91 89 91 92 87 84 79 88 79 92 positionn 7 6 268 45 78 78 78 53 56 65 58 64 66 53 78 defensen 7 7 120 21 57 48 52 43 48 29 48 48 46 29 57 carriern 7 3 111 21 71 71 71 71 67 71 71 62 70 62 71 ordern 7 4 346 57 93 95 93 91 93 92 90 91 92 90 95 exchangen 5 3 363 61 92 90 92 85 90 88 82 79 87 79 92 systemn 5 3 450 70 79 73 66 67 59 63 63 61 66 59 79 sourcen 5 5 152 35 86 80 80 63 83 68 60 29 69 29 86 spacen 5 2 67 14 93 100 93 93 93 86 86 71 89 71 100 basen 5 4 92 20 75 80 75 50 65 40 50 75 64 40 80 authorityn 4 3 90 21 86 86 81 62 71 33 71 81 71 33 86 peoplen 4 4 754 115 96 96 95 96 95 90 91 91 94 90 96 chancen 4 3 91 15 60 67 60 60 67 73 20 73 60 20 73 partn 4 3 481 71 90 90 92 97 90 74 66 66 83 66 97 hourn 4 2 187 48 83 85 92 83 77 90 58 92 83 58 92 developmentn 3 3 180 29 100 79 86 79 76 62 79 62 78 62 100 presidentn 3 3 879 177 98 97 98 97 93 96 97 85 95 85 98 networkn 3 3 152 55 91 87 98 89 84 88 87 82 88 82 98 futuren 3 3 350 146 97 96 94 97 83 98 89 85 92 83 98 effectn 3 2 178 30 97 93 80 93 80 90 77 83 87 77 97 staten 3 3 617 72 85 86 86 83 82 79 83 82 83 79 86 powern 3 3 251 47 92 87 87 81 77 77 77 74 81 74 92 billn 3 3 404 102 98 99 98 96 90 96 96 22 87 22 99 arean 3 3 326 37 89 73 65 68 84 70 68 65 73 65 89 jobn 3 3 188 39 85 80 77 90 80 82 69 82 80 69 90 managementn 2 2 284 45 89 78 87 73 98 76 67 64 79 64 98 conditionn 2 2 132 34 91 82 82 56 76 78 74 76 77 56 91 policyn 2 2 331 39 95 97 97 87 95 97 90 64 90 64 97 raten 2 2 1009 145 90 88 92 81 92 89 88 91 89 81 92 drugn 2 2 205 46 94 94 96 78 94 94 87 78 89 78 96 average overall 86 83 83 82 82 79 76 77 verbs 78 75 73 76 73 70 65 70 nouns 89 87 86 81 83 80 77 76 table 4 all supervised system performance per predicate90 3 semantic role labelingsubtask 2 evaluates semantic role labeling systems where the goal is to locate the constituents which are arguments of a given verb and to assign them appropriate semantic roles that describe howthey relate to the verbsrl systems are an important building block for many larger semantic systemsfor example in order to determine that ques tion is answered by sentence but not by sentence we must determine the relationships between the relevant verbs and their arguments a what do lobsters like to eatb recent studies have shown that lobsters pri marily feed on live fish dig for clams sea urchins and feed on algae and eelgrassc in the early 20th century mainers would only eat lobsters because the fish they caught was too valuable to eat themselvestraditionally srl systems have been trained on either the propbank corpus for two years the conll workshop has made this their shared task or the framenet corpus senseval3 used this for their shared taskhowever there is still little con sensus in the linguistics and nlp communities about what set of role labels are most appropriatethepropbank corpus avoids 
this issue by using theory agnostic labels and by defining those labels to have only verbspecific meaningsunder this scheme propbank can avoidmaking any claims about how any one verbs ar guments relate to other verbsarguments or aboutgeneral distinctions between verb arguments and ad junctshowever there are several limitations to this ap proachthe first is that it can be difficult to make inferences and generalizations based on role labels that are only meaningful with respect to a single verbsince each role label is verbspecific we can not confidently determine when two different verbsarguments have the same role and since no encoded meaning is associated with each tag we can notmake generalizations across verb classesin con trast the use of a shared set of role labels such system type precision recall f ubcupc open 8451 8224 833605 ubcupc closed 8504 8207 835205 rtv closed 8182 7037 756606 without sayubcupc open 7857 7470 766008 ubcupc closed 7867 7394 762308 rtv closed 7415 5785 650009table 5 system performance on propbank argu mentsas verbnet roles would facilitate both inferencingand generalizationverbnet has more traditional la bels such as agent patient theme beneficiary etctherefore we chose to annotate the corpus us ing two different role label sets the propbank role set and the verbnet role setverbnet roles were generated using the semlink mapping which provides a mapping between prop bank and verbnet role labelsin a small number of cases no verbnet role was available in those cases the propbank role label was used insteadwe proposed two levels of participation in thistask i closed the systems could use only the an notated data provided and nothing elseii open where systems could use propbank data from sec tions 0221 as well as any other resource for training their labelers31 datawe selected 50 verbs from the 65 in the lexical sam ple task for the srl taskthe partitioning into train and test set was done in the same fashion as for the lexical sample tasksince propbank does not tag any noun predicates none of the 35 nouns from the lexical sample task were part of this data32 resultsfor each system we calculated the precision re call and fmeasure for both role label setsscores were calculated using the srlevalpl script from the conll2005 scoring package only two teams chose to perform the srl subtaskthe performance of these two teams is shown in table 5 and table 691 system type precision recall f ubcupc open 8531 8208 836605 ubcupc closed 8531 8208 836605 rtv closed 8158 7016 754406 without sayubcupc open 7923 7388 764608 ubcupc closed 7923 7388 764608 rtv closed 7363 5744 645309 table 6 system performance on verbnet roles33 discussiongiven that only two systems participated in the task it is difficult to form any strong conclusionsit should be noted that since there was no additional verbnet role data to be used by the open system the performance of that on propbank arguments as well as verbnet roles is exactly identicalit can be seenthat there is almost no difference between the perfor mance of the open and closed systems for tagging propbank argumentsthe reason for this is the factthat all the instances of the lemma under consider ation was selected from the propbank corpus and probably the number of training instances for each lemma as well as the fact that the predicate is such an important feature combine to make the difference negligiblewe also realized that more than half of the test instances were contributed by the predicate saythe performance over whose arguments is in 
the high 90sto remove the effect of saywe alsocomputed the performances after excluding exam ples of sayfrom the test setthese numbers are shown in the bottom half of the two tablestheseresults are not directly comparable to the conll 2005 shared task since i this test set comprisessections 01 22 23 and 24 as opposed to just sec tion 23 and ii this test set comprises data for only 50 predicates as opposed to all the verb predicates in the conll2005 shared taskthe results in the previous discussion seem to confirm the hypothesis that there is a predictable correlation between human annotator agreement and sys tem performancegiven high enough ita rates we can can hope to build sense disambiguation systemsthat perform at a level that might be of use to a con suming natural language processing applicationitis also encouraging that the more informative verb net roles which have betterdirect applicability indownstream systems can also be predicted with al most the same degree of accuracy as the propbank arguments from which they are mappedwe gratefully acknowledge the support of the defense advanced research projects agency under the gale program darpacmo contract nohr001106c0022 national science foundation grant nsf0415923 word sense disambiguation the dtoaquaint nbchc040036 grant under the university of illinois subcontract to university of pennsylvania 20030791101 and nsfitr0325646 domainindependent semantic interpretation
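the f-scores in the task 17 tables above are reported with 95% confidence intervals obtained by bootstrap resampling. the sketch below shows one common way to compute such an interval; the per-instance input (a list of 1/0 correctness flags, whose mean equals the f-score for a system that attempts every instance) and the number of resamples are assumptions, not the organizers' exact procedure.

    import random

    def bootstrap_f_ci(correct, n_resamples=1000, alpha=0.05, seed=0):
        """bootstrap percentile interval for a score that is the mean of
        per-instance 1/0 correctness flags (precision == recall == f when
        every instance is attempted)."""
        rng = random.Random(seed)
        n = len(correct)
        scores = sorted(
            sum(correct[rng.randrange(n)] for _ in range(n)) / n
            for _ in range(n_resamples)
        )
        lo = scores[int((alpha / 2) * n_resamples)]
        hi = scores[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

    # e.g. roughly the top all-words system: 275 correct out of 465 instances
    flags = [1] * 275 + [0] * 190
    print(bootstrap_f_ci(flags))  # approximately (0.55, 0.64)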
W07-2016
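the sense inventory used for the all-words annotation above was produced by querying wordnet 2.1 and storing the output as xml for the stamp tool. a rough equivalent using the wordnet interface shipped with nltk is sketched below; note that nltk bundles a later wordnet release, and the xml layout here is purely illustrative, not the actual stamp input format.

    from nltk.corpus import wordnet as wn
    from xml.sax.saxutils import escape

    def sense_inventory_xml(lemma, pos=wn.VERB):
        """build a small xml sense inventory for one lemma, one <sense> entry
        per wordnet synset (sense number, synset name, synonyms, gloss)."""
        lines = ['<inventory lemma="%s" pos="%s">' % (lemma, pos)]
        for i, synset in enumerate(wn.synsets(lemma, pos=pos), start=1):
            synonyms = ", ".join(lem.name() for lem in synset.lemmas())
            lines.append('  <sense n="%d" synset="%s" synonyms="%s">%s</sense>'
                         % (i, synset.name(), escape(synonyms),
                            escape(synset.definition())))
        lines.append("</inventory>")
        return "\n".join(lines)

    # e.g. print(sense_inventory_xml("eat")) lists the wordnet verb senses of "eat"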
semeval2007 task17 english lexical sample srl and all words. this paper describes our experience in preparing the data and evaluating the results for three subtasks of semeval2007 task17 lexical sample semantic role labeling and allwords respectively. we tabulate and analyze the results of participating systems. the use of coarsegrained sense groups has led to considerable advances in wsd performance with accuracies of around 90%
semeval2007 task 19 frame semantic structure extraction this task consists of recognizing words and phrases that evoke semantic frames as defined in the framenet project and their semantic dependents which are usually but not always their syntacticdependents the train ing data was fn annotated sentences in testing participants automatically annotated three previously unseen texts to match goldstandard annotation including pre dicting previously unseen frames and roles precision and recall were measured both for matching of labels of frames and fes and for matching of semantic dependency trees based on the annotation the task of labeling frameevoking words with ap propriate frames is similar to wsd while the task of assigning frame elements is called semantic role labeling and has been the subject of several shared tasks at acl and conllfor example in the sentence matilde said i rarely eat rutabagasaid evokes the statement frame and eat evokes the ingestion framethe role of speaker in the statement frame is filled by matilda and the roleof message by the whole quotationin the inges tion frame i is the ingestor and rutabaga fills theingestibles rolesince the ingestion event is con tained within the message of the statement event we can represent the fact that the message conveyed was about ingestion just by annotating the sentence with respect to these two framesafter training on fn annotations the participantssystems labeled three new texts automaticallythe evaluation measured precision and recall for frames and frame elements with partial credit for incorrect but closely related framestwo types of evaluation were carried out label matching evaluation in which the participants labeled data was compareddirectly with the gold standard labeled data and se mantic dependency evaluation in which both thegold standard and the submitted data were first converted to semantic dependency graphs in xml for mat and then these graphs were comparedthere are three points that make this task harder and more interesting than earlier srl tasks while previous tasks focused on role assignment the current task also comprises the identification of the appropriate framenet frame similar to wsd the task comprises not only the labeling of individual predicates and their arguments but also the integration of all labels into an overall semantic depen dency graph a partial semantic representation of the overall sentence meaning based on frames and roles and the test data includes occurrences of frames that are not seen in the training datafor these cases participant systems have to identify theclosest known framethis is a very realistic scenario encouraging the development of robust systems showing graceful degradation in the face of un known events99the basic concept of frame semantics is that many words are best understood as part of a group of terms that are related to a particular type of eventand the participants and propsinvolved in it the classes of events are the semantic frames of the lexical units that evoke them and the roles associated with the event are referred to as frame elements the same type of analysis applies not only to events butalso to relations and states the frameevoking expressions may be single words or multiword ex pressions which may be of any syntactic categorynote that these fe names are quite framespecific generalizations over them are expressed via explicit fefe relationsthe berkeley framenet project is creating a computer and humanreadable lexical resource for english based on the 
theory of frame semantics and supported by corpus evidencethe current release of the framenet data which has been freely available for instructional and research purposes since the fall of 2006 includes roughly 780 frames with roughly 10000 word senses it also contains roughly 150000 annotation sets of which 139000are lexicographic examples with each sentence an notated for a single predicatorthe remainder are from fulltext annotation in which each sentence isannotated for all predicators 1700 sentences are annotated in the fulltext portion of the database ac counting for roughly 11700 annotation sets or 68 predicators per sentencenearly all of the frames are connected into a single graph by frametoframe relations almost all of which have associated fetofe relations 21 frame semantics of textsthe ultimate goal is to represent the lexical se mantics of all the sentences in a text based onthe relations between predicators and their depen dents including both phrases and clauses which may in turn include other predicators although this has been a longstanding goal of fn automatic means of doing this are only now becoming availableconsider a sentence from one of the testing texts this geography is important in understanding dublinin the frame semantic analysis of this sentence there are two predicators which fn has analyzed important and understanding as well as one which we have not yet analyzed geographyin additiondublin is recognized by the ner system as a loca tionin the gold standard annotation we have the annotation shown in for the importance frame evoked by the target important and the annotationshown in for the grasp frame evoked by under standing factor this geography cop is important undertaking in understanding dublininterested party ini this geography is important in under standing phenomenon dublincognizercni the definitions of the two frames begin like this importance a factor affects the outcome of anundertaking which can be a goaloriented activ ity or the maintenance of a desirable state the work in a field or something portrayed as affecting an interested partygrasp a cognizer possesses knowledge about the workings significance or meaning of an idea or object which we call phenomenon and is able to make predictions about the behavior or occurrence of the phenomenonusing these definitions and the labels and the fact that the target and fes of one frame are subsumedby an fe of the other we can compose the meanings of the two frames to produce a detailed para phrase of the meaning of the sentence something denoted by this geography is a factor which affects the outcome of the undertaking of understanding the location called dublinby any interested partywe have not dealt with geography as a frameevoking expression although we would eventually like toin the interested party is not a label onany part of the text rather it is marked ini for indefinite null instantiation meaning that it is con ceptually required as part of the frame definition absent from the sentence and not recoverable from the context as being a particular individualmeaning 100that this geography is important for anyone in generals understanding of dublinin the cog nizer is constructionally null instantiated as thegerund understanding licenses omission of its sub jectthe marking of null instantiations is important in handling text coherence and was part of the goldstandard but as far as we know none of the participants attempted it and it was ignored in the evalua tionnote that we have collapsed the two null 
instantiated fes the interested party of the impor tance frame and the cognizer in the grasp frame since they are not constrained to be distinct22 semantic dependency graphssince the role fillers are dependents of the predicators the full framenet annotation of a sentence is roughly equivalent to a dependency parse in which some of the arcs are labeled with rolenames and a dependency graph can be derived algorithmically from framenet annotation an early ver sion of this was proposed by fig1 shows the semantic dependency graph derived from sentence this graphical representa tion was derived from a semantic dependency xml file it shows that the top frame in this sentence is evoked by the word important although the syntactic head is the copula is the labels on thearcs are either the names of frame elements or indications of which of the daughter nodes are seman tic heads which is important in some versions of the evaluationthe labels on nodes are either frame names syntactic phrases types or the names of certain other syntactic connectors in this case marker and support31 training datathe major part of the training data for the task con sisted of the current data release from framenet described in sec2 this was supple mented by additional training data made availablethrough semeval to participants in this taskin addition to updated versions of some of the fulltext an notation from release 13 three files from the anc were included from slatecom stephanopoulos importance important marker in undertaking np factor grasp understanding semhead this geography head nelocation dublin denotedfe location phenomenon supp is head semhead figure 1 sample semantic dependency graph crimesand entrepreneur as madonna and from the berlitz travel guides history of jerusalem32 testing datathe testing data was made up of three texts none of which had been seen before the gold standard consisted of manual annotations of these texts for all frame evoking expressions and the fillers of the associated frame elementsall annotation of the testing data was carefully reviewed by the fn staff to insure its cor rectnesssince most of the texts annotated in the fn database are from the nti website we decided to take two of the three test ing texts from there alsoone china overview was very similar to other annotated texts such as taiwan introduction russia overview etc available in release 13the other nti text work advances while in the same domain was shorter and closer to newspaper style than the rest of the nti textsfinally the introduction to 101 sents nes frames tokens types work 14 31 174 77 china 39 90 405 125 dublin 67 86 480 165 totals 120 207 1059 272 table 1 summary of testing datadublin taken from the american national cor pus berlitz travel guides is of quite a different genre although the history of jerusalemtext in the training data was somewhat similartable 1 gives some statistics on the three testing filesto give a flavor of the texts here are two sentences frame evoking words are in boldface from work advances the iranians are now willing to accept the installation of cameras only outside the cascade halls which will not enable the iaea to monitor the entire uranium enrichment processthe diplomat saidfrom introduction to dublin and in this city where literature and theater have historicallydominated the scene visual arts are finally com ing into their own with the new museum of modern art and the many galleries that display the work of modern irish artistsa number of groups downloaded the training or test 
ing data but in the end only three groups submitted results the utdsrl group and the lth group who submitted full results and the clr group who submitted results for frames onlyit should also be noted that the lth group had the testing data for longer than the 10 days allowed by the rules of the exercise which means that the results of the two teams are not exactly comparablealso the results from the clr group were initially formatted slightly differently from the gold standard with regard to character spacing a later reformatting allowed their results to be scored with the other groupsthe lth system used only svm classifiers while the utdsrl system used a combination of svm and me classifiers determined experimentallythe clr system did not use classifiers but handwritten symbolic rulesplease consult the separate system papers for details about the features usedthe labelsonly matching was similar to previousshared tasks but the dependency structure evaluation deserves further explanation the xml seman tic dependency structure was produced by a program called fttosem implemented in perl which goes sentence by sentence through a framenet fulltext xml file taking lu fe and other labels and using them to structure a syntactically unparsed piece of a sentence into a syntacticsemantic treetwo basic principles allow us to produce this tree lus are the sole syntactic head of a phrase whose semantics is expressed by their frame and each label span is interpreted as the boundaries of a syntactic phrase so that when a larger label span subsumes a smaller one the larger span can be interpreted as a the highernode in a hierarchical treethere are a fair num ber of complications largely involving identifyingmismatches between syntactic and semantic headednesssome of these are annotated in the data with their own labels while others must be identified using simple syntactic heuristics and partofspeech tagsfor this evaluation a nonframe node counts as matching provided that it includes the head of the gold standard whether or not nonhead children ofthat node are includedfor frame nodes the partici pants got full credit if the frame of the node matched the gold standard51 partial credit for related framesone of the problems inherent in testing against un seen data is that it will inevitably contain lexical units that have not previously been annotated in framenet so that systems which do not generalizewell cannot get them rightin principle the deci sion as to what frame to add a new lu to should be helped by the same criteria that are used to assign polysemous lemmas to existing frameshoweverin practice this assignment is difficult precisely be because unlike wsd there is no assumption that all the senses of each lemma are defined in advance if 102 the system cannot be sure that a new use of a lemma is in one of the frames listed for that lemma thenit must consider all the 800 frames as possibili tiesthis amounts to the automatic induction of finegrained semantic similarity from corpus data a notoriously difficult problem for lus which clearly do not fit into any exist ing frames the problem is still more difficultin the course of creating the gold standard annotation of the three testing texts the fn team created almost 40 new frameswe cannot ask that participants hit uponthe new frame name but the new frames are not created in a vacuum as mentioned above they are almost always added to the existing structure of frame toframe relations this allows us to give credit for assignment to frames which are not 
the precise onein the gold standard but are close in terms of frame toframe relationswhenever participantsproposed frames were wrong but connected to the right frameby frame relations partial credit was given decreas ing by 20 for each link in the frameframe relationgraph between the proposed frame and the gold stan dardfor fes each frame element had to match the gold standard frame element and contain at least the same head word in order to gain full credit again partial credit was given for frame elements related via fetofe relationstext group recall precf1 dublin utdsrl 04188 07716 05430 china utdsrl 05498 08009 06520 work utdsrl 05251 08382 06457 dublin lth 05184 07156 06012 china lth 06261 07731 06918 work lth 06606 08642 07488 dublin clr 03984 06469 04931 china clr 04621 06302 05332 work clr 05054 07452 06023 table 2 frame recognition onlythe strictness of the requirement of exact boundary matching is compounded by the cascading effect of semantic classification errors as seen by comparing text group recall precf1 label matching only dublin utdsrl 027699 055663 036991 china utdsrl 031639 051715 039260 work utdsrl 031098 062408 041511 dublin lth 036536 055065 043926 china lth 039370 054958 045876 work lth 041521 061069 049433 semantic dependency matching dublin utdsrl 026238 053432 035194 china utdsrl 031489 053145 039546 work utdsrl 030641 061842 040978 dublin lth 036345 054857 043722 china lth 040995 057410 047833 work lth 045970 067352 054644table 3 results for combined frame and fe recog nition the fscores in table 3 with those in table 2the difficulty of the task is reflected in the fscores of around 35 for the most difficult text in the most difficult condition but participants still managed to reach fscores as high as 75 for the more limited task of frame identification which more closely matches traditional senseval tasks despite the lack of a full sense inventorythe difficulty posed by having such an unconstrained task led to understandably low recall scores in all participants the systems submitted by the teams differed in their sensitivity to differences in the texts utdsrls system varied by around 10 across texts while lths varied by 15there are some rather encouraging results alsothe participants rather consistently performed bet ter with our more complex but also more useful andrealistic scoring including partial credit and grad ing on semantic dependency rather than exact span match the participants all performed relatively well onthe framerecognition task with precision scores av eraging 63 and topping 85the testing data for this task turned out to be espe cially challenging with regard to new frames since in an effort to annotate especially thoroughly almost 10340 new frames were created in the process of an notating these three specific passagesone result of this was that the test passages had more unseenframes than a random unseen passage which prob ably lowered the recall on framesit appears that this was not entirely compensated by giving partial credit for related framesthis task is a more advanced and realistic version of the automatic semantic role labeling task of senseval3 unlike that task the testing data was previously unseen participants had to determine the correct frames as a first step and participants also had to determine fe boundaries which were given in the senseval3a crucial difference from similar approaches such as srl with propbank roles is that by identifying relations as part of a frame you have identified a gestalt of relations that enables 
far more inference and sentences from the same passage that use other words from the same frame will be easier to link togetherthus the fn srl results are translatable fairly directly intoformal representations which can be used for rea soning question answering etcdespite the problems with recall the participants have expressed a determination to work to improvethese results and the fn staff are eager to collabo rate in this efforta project is now underway at icsi to speed up frame and lu definition and another tospeed up the training of srl systems is just begin ning so the prospects for improvement seem goodthis material is based in part upon work sup ported by the national science foundation under grant noiis0535297
W07-2018
semeval2007 task 19 frame semantic structure extraction. this task consists of recognizing words and phrases that evoke semantic frames as defined in the framenet project and their semantic dependents which are usually but not always their syntactic dependents. the training data was fn annotated sentences. in testing participants automatically annotated three previously unseen texts to match gold standard annotation including predicting previously unseen frames and roles. precision and recall were measured both for matching of labels of frames and fes and for matching of semantic dependency trees based on the annotation. our shared task shows that framesemantic srl of running text is a hard problem partly due to the fact that running text is bound to contain many frames for which no or little annotated training data are available
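the task 19 scoring above gives partial credit when a proposed frame is wrong but connected to the gold frame through frame-to-frame relations, decreasing by 20% per link. the sketch below reads that as subtracting 0.2 per link on the shortest path (a multiplicative 0.8-per-link reading is equally plausible); the adjacency-dict graph representation, the breadth-first search and the toy relation graph are assumptions, not the released scorer or the real framenet hierarchy.

    from collections import deque

    def frame_credit(proposed, gold, relations):
        """credit for a proposed frame label against the gold frame.

        `relations` maps a frame name to the set of frames it is directly
        related to (inheritance, using, subframe, ...). exact matches earn
        1.0 and each link on the shortest path costs 0.2.
        """
        if proposed == gold:
            return 1.0
        seen, queue = {gold}, deque([(gold, 0)])
        while queue:
            frame, dist = queue.popleft()
            if dist >= 5:                 # longer paths cannot earn credit
                continue
            for neighbour in relations.get(frame, ()):
                if neighbour == proposed:
                    return max(0.0, 1.0 - 0.2 * (dist + 1))
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, dist + 1))
        return 0.0

    # toy graph: a frame one link away from the gold frame earns 0.8
    toy = {"Ingestion": {"Manipulation"}, "Manipulation": {"Ingestion"}}
    print(frame_credit("Manipulation", "Ingestion", toy))  # 0.8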
on the complexity of nonprojective datadriven dependency parsing in this paper we investigate several nonprojective parsing algorithms for dependency parsing providing novel polynomial time solutions under the assumption that each dependency decision is independent of all the others called here the edgefactored model we also investigate algorithms for nonprojective parsing that account for nonlocal information and present several hardness results this suggests that it is unlikely that exact nonprojective dependency parsing is tractable for any model richer than the edgefactored model dependency representations of natural language are a simple yet flexible mechanism for encoding words and their syntactic dependencies through directed graphsthese representations have been thoroughly studied in descriptive linguistics and have been applied in numerous language processing tasksfigure 1 gives an example dependency graph for the sentence mr tomash will remain as a director emeritus which has been extracted from the penn treebank each edge in this graph represents a single syntactic dependency directed from a word to its modifierin this representation all edges are labeled with the specific syntactic function of the dependency eg sbj for subject and nmod for modifier of a nounto simplify computation and some important definitions an artificial token is inserted into the sentence as the left most word and will always represent the root of the dependency graphwe assume all dependency graphs are directed trees originating out of a single node which is a common constraint the dependency graph in figure 1 is an example of a nested or projective graphunder the assumption that the root of the graph is the left most word of the sentence a projective graph is one where the edges can be drawn in the plane above the sentence with no two edges crossingconversely a nonprojective dependency graph does not satisfy this propertyfigure 2 gives an example of a nonprojective graph for a sentence that has also been extracted from the penn treebanknonprojectivity arises due to long distance dependencies or in languages with flexible word orderfor many languages a significant portion of sentences require a nonprojective dependency analysis thus the ability to learn and infer nonprojective dependency graphs is an important problem in multilingual language processingsyntactic dependency parsing has seen a number of new learning and inference algorithms which have raised stateoftheart parsing accuracies for many languagesin this work we focus on datadriven models of dependency parsingthese models are not driven by any underlying grammar but instead learn to predict dependency graphs based on a set of parameters learned solely from a labeled corpusthe advantage of these models is that they negate the need for the development of grammars when adapting the model to new languagesone interesting class of datadriven models are those that assume each dependency decision is independent modulo the global structural constraint that dependency graphs must be treessuch models are commonly referred to as edgefactored since their parameters factor relative to individual edges of the graph edgefactored models have many computational benefits most notably that inference for nonprojective dependency graphs can be achieved in polynomial time the primary problem in treating each dependency as independent is that it is not a realistic assumptionnonlocal information such as arity and neighbouring dependencies can be crucial to obtaining high 
parsing accuracies however in the datadriven parsing setting this can be partially adverted by incorporating rich feature representations over the input the goal of this work is to further our current understanding of the computational nature of nonprojective parsing algorithms for both learning and inference within the datadriven settingwe start by investigating and extending the edgefactored model of mcdonald et al in particular we appeal to the matrix tree theorem for multidigraphs to design polynomialtime algorithms for calculating both the partition function and edge expectations over all possible dependency graphs for a given sentenceto motivate these algorithms we show that they can be used in many important learning and inference problems including minrisk decoding training globally normalized loglinear models syntactic language modeling and unsupervised learning via the them algorithm none of which have previously been known to have exact nonprojective implementationswe then switch focus to models that account for nonlocal information in particular arity and neighbouring parse decisionsfor systems that model arity constraints we give a reduction from the hamiltonian graph problem suggesting that the parsing problem is intractable in this casefor neighbouring parse decisions we extend the work of mcdonald and pereira and show that modeling vertical neighbourhoods makes parsing intractable in addition to modeling horizontal neighbourhoodsa consequence of these results is that it is unlikely that exact nonprojective dependency parsing is tractable for any model assumptions weaker than those made by the edgefactored modelsthere has been extensive work on datadriven dependency parsing for both projective parsing and nonprojective parsing systems these approaches can often be classified into two broad categoriesin the first category are those methods that employ approximate inference typically through the use of linear time shiftreduce parsing algorithms in the second category are those that employ exhaustive inference algorithms usually by making strong independence assumptions as is the case for edgefactored models recently there have also been proposals for exhaustive methods that weaken the edgefactored assumption including both approximate methods and exact methods through integer linear programming or branchandbound algorithms for grammar based models there has been limited work on empirical systems for nonprojective parsing systems notable exceptions include the work of wang and harper theoretical studies of note include the work of neuhaus and boker showing that the recognition problem for a minimal dependency grammar is hardin addition the work of kahane et al provides a polynomial parsing algorithm for a constrained class of nonprojective structuresnonprojective dependency parsing can be related to certain parsing problems defined for phrase structure representations as for instance immediate dominance cfg parsing and shakeandbake translation independently of this work koo et al and smith and smith showed that the matrixtree theorem can be used to train edgefactored loglinear models of dependency parsingboth studies constructed implementations that compare favorably with the stateoftheartthe work of meila and jaakkola is also of notein that study they use the matrix tree theorem to develop a tractable bayesian learning algorithms for tree belief networks which in many ways are closely related to probabilistic dependency parsing formalisms and the problems we address herelet l 
l1 ll be a set of permissible syntactic edge labels and x x0x1 x be a sentence such that x0rootfrom this sentence we construct a complete labeled directed graph gx such that gx is a graph where each word in the sentence is a node and there is a directed edge between every pair of nodes for every possible labelby its definition gx is a multidigraph which is a digraph that may have more than one edge between any two nodeslet k represent the kth edge from i to j gx encodes all possible labeled dependencies between the words of xthus every possible dependency graph of x must be a subgraph of gxlet i j be a relation that is true if and only if there is a nonempty directed path from node i to node j in some graph under considerationa directed spanning tree1 of a graph g that originates define t as the set of all directed spanning trees for a graph g as mcdonald et al noted there is a onetoone correspondence between spanning trees of gx and labeled dependency graphs of x ie t is exactly the set of all possible projective and nonprojective dependency graphs for sentence xthroughout the rest of this paper we will refer to any t t as a valid dependency graph for a sentence xthus by definition every valid dependency graph must be a treein this section we examine the class of models that assume each dependency decision is independentwithin this setting every edge in an induced graph gx for a sentence x will have an associated weight wk ij 0 that maps the kth directed edge from node i to node j to a real valued numerical weightthese weights represents the likelihood of a dependency occurring from word wi to word wj with label lkdefine the weight of a spanning tree t as the product of the edge weights it is easily shown that this formulation includes the projective model of paskin and the nonprojective model of mcdonald et al the definition of wk ij depends on the context in which it is being usedfor example in the work of mcdonald et al it is simply a linear classifier that is a function of the words in the dependency the label of the dependency and any contextual features of the words in the sentencein a generative probabilistic model it could represent the conditional probability of a word wj being generated with a label lk given that the word being modified is wi we will attempt to make any assumptions about the form wk ij clear when necessaryfor the remainder of this section we discuss three crucial problems for learning and inference while showing that each can be computed tractably for the nonprojective casethe first problem of interest is finding the highest weighted tree for a given input sentence x mcdonald et al showed that this can be solved in o for unlabeled parsing using the chuliuedmonds algorithm for standard digraphs unlike most exact projective parsing algorithms which use efficient bottomup chart parsing algorithms the chuliuedmonds algorithm is greedy in natureit begins by selecting the single best incoming dependency edge for each node jit then postprocesses the resulting graph to eliminate cycles and then continues recursively until a spanning tree results for detailsthe algorithm is trivially extended to the multidigraph case for use in labeled dependency parsingfirst we note that if the maximum directed spanning tree of a multidigraph gx contains any edge k then we must have k k argmaxk wkijotherwise we could simply substitute k in place of k and obtain a higher weighted treetherefore without effecting the solution to the argmax problem we can delete all edges in gx that do not 
satisfy this propertythe resulting digraph is no longer a multidigraph and the chuliuedmonds algorithm can be applied directlythe new runtime is oas a side note the kbest argmax problem for digraphs can be solved in o this can also be easily extended to the multidigraph case for labeled parsinga common step in many learning algorithms is to compute the sum over the weight of all the possible outputs for a given input xthis value is often referred to as the partition function due to its similarity with a value by the same name in statistical mechanicswe denote this value as zx to compute this sum it is possible to use the matrix tree theorem for multidigraphs matrix tree theorem let g be a multidigraph with nodes v 10 1 n and edges e define matrix q as a x matrix indexed from 0 to n for all i and j define if the ith row and column are removed from q to produce the matrix qi then the sum of the weights of all directed spanning trees rooted at node i is equal to qi thus if we construct q for a graph gx then the determinant of the matrix qc is equivalent to zxthe determinant of an nxn matrix can be calculated in numerous ways most of which take o the most efficient algorithms for calculating the determinant of a matrix use the fact that the problem is no harder than matrix multiplication matrix multiplication currently has known o implementations and it has been widely conjectured that it can be solved in o however most algorithms with subo running times require constants that are large enough to negate any asymptotic advantage for the case of dependency parsingas a result in this work we use o as the runtime for computing zxsince it takes o to construct the matrix q the entire runtime to compute zx is oanother important problem for various learning paradigms is to calculate the expected value of each edge for an input sentence x where ik t is an indicator function that is one when the edge k is in the tree t to calculate the expectation for the edge k we can simply eliminate all edges k0 k from gx and calculate zxzx will now be equal to the sum of the weights of all trees that contain k a naive implementation to compute the expectation of all ln2 edges takes o since calculating zx takes o for a single edgehowever we can reduce this considerably by constructing q a single time and only making modifications to it when necessaryan algorithm is given in figure 33 that has a runtime of othis algorithm works by first constructing qit then considers edges from the node i to the node jnow assume that there is only a single edge from i to j and that that edge has a weight of 1furthermore assume that this edge is the only edge directed into the node jin this case q should be modified so that qjj 1 qij 1 and qi0j 0 vi i j the value of zx under this new q will be equivalent to the weight of all trees containing the single edge from i to j with a weight of 1for a specific edge k its expectation is simply wk ijzx since we can factor out the weight 1 edge from i to j in all the trees that contribute to zx and multiply through the actual weight for the edgethe algorithm then reconstructs q and continuesfollowing the work of koo et al and smith and smith it is possible to compute all expectations in o through matrix inversionto make this paper self contained we report here their algorithm adapted to our notationfirst consider the equivalence we assume the rows and columns of q0 are indexed from 1 so that the indexes of q and q0 coincideto calculate k when i j 0 we can use the fact that log xxij ji and that zx 
and 1 can both be calculated a single time each taking ousing these values each expectation is computed in ocoupled with with the fact that we need to construct q and compute the expectation for all ln2 possible edges in total it takes o time to compute all edge expectationsprojective dependency parsing algorithms are well understood due to their close connection to phrasebased chart parsing algorithmsthe work of eisner showed that the argmax problem for digraphs could be solved in o using a bottomup dynamic programming algorithm similar to ckypaskin presented an o insideoutside algorithm for projective dependency parsing using the eisner algorithm as its backboneusing this algorithm it is trivial to calculate both zx and each edge expectationcrucially the nested property of projective structures allows edge expectations to be computed in o from the insideoutside valuesit is straightforward to extend the algorithms of eisner and paskin to the labeled case adding only a factor of otable 1 gives an overview of the computational complexity for the three problems considered here for both the projective and nonprojective casewe see that the nonprojective case compares favorably for all three problemsto motivate the algorithms from section 3 we present some important situations where each calculation is requiredmany learning paradigms can be defined as inferencebased learningthese include the perceptron and its largemargin variants in these settings a models parameters are iteratively updated based on the argmax calculation for a single or set of training instances under the current parameter settingsthe work of mcdonald et al showed that it is possible to learn a highly accurate nonprojective dependency parser for multiple languages using the chuliuedmonds algorithm for unlabeled parsingin minrisk decoding the goal is to find the dependency graph for an input sentence x that on average has the lowest expected risk where r is a risk function measuring the error between two graphsminrisk decoding has been studied for both phrasestructure parsing and dependency parsing in that work as is common with many minrisk decoding schemes t is not the entire space of parse structuresinstead this set is usually restricted to a small number of possible trees that have been preselected by some baseline systemin this subsection we show that when the risk function is of a specific form this restriction can be droppedthe result is an exact minrisk decoding procedurelet r be the hamming distance between two dependency graphs for an input sentence x this is a common definition of risk between two graphs as it corresponds directly to labeled dependency parsing accuracy k we ij can directly solve this problem using the edge expectation algorithm described in section 33 and the argmax algorithm described in section 31conditional random fields are global discriminative learning algorithms for problems with structured output spaces such as dependency parsingfor dependency parsing crfs would define the conditional probability of a dependency graph t for a sentence x as a globally norhere the weights wk ij are potential functions over each edge defined as an exponentiated linear classifier with weight vector w e right now and feature vector f e right now where fu e r represents a single dimension of the vector f the denominator which is exactly the sum over all graph weights is a normalization constant forcing the conditional probability distribution to sum to onecrfs set the parameters w to maximize the loglikelihood of 
the conditional probability over a training set of examples t t this optimization can be solved through a variety of iterative gradient based techniquesmany of these require the calculation of feature expectations over the training set under model parameters for the previous iterationfirst we note that the feature functions factor over edges ie fu ket fubecause of this we can use edge expectations to compute the expectation of every feature fulet α represent the expectation of feature fu for the training instance xα thus we can calculate the feature expectation per training instance using the algorithms for computing z and edge expectationsusing this we can calculate feature expectations over the entire training set where p is typically set to 1t a generative probabilistic dependency model over some alphabet e consists of parameters pkxy associated with each dependency from word x e e to word y e e with label lk e l in addition we impose 0 0let xαi be the ith word of xαby solving the above constrained optimization problem with the usual lagrange multipliers method one gets where for each xα the expectation kx is defined as in section 3 but with the weight w replaced by the probability distribution pthe above l e2 relations represent a nonlinear system of equationsthere is no closed form solution in the general case and one adopts the expectation maximization method which is a specialization of the standard fixedpoint iteration method for the solution of nonlinear systemswe start with some initial assignment of the parameters and at each iteration we use the induced distribution p to compute a refined value for the parameters themselveswe are always guaranteed that the kullbackliebler divergence between two approximated distributions computed at successive iterations does not increase which implies the convergence of the method to some local maxima observe that at each iteration we can compute quantities kx and zx in polynomial time using the algorithms from section 3 with pkxixj in place of wkijfurthermore under some standard conditions the fixedpoint iteration method guarantees a constant number of bits of precision gain for the parameters at each iteration resulting in overall polynomial time computation in the size of the input and in the required number of bits for the precisionas far as we know this is the first them learning algorithm for the model in paskin working in the nonprojective casethe projective case has been investigated in paskin we have shown that several computational problems related to parsing can be solved in polynomial time for the class of nonprojective dependency models with the assumption that dependency relations are mutually independentthese independence assumptions are unwarranted as it has already been established that modeling nonlocal information such as arity and nearby parsing decisions improves the accuracy of dependency models in the spirit of our effort to understand the nature of exact nonprojective algorithms we examine dependency models that introduce arity constraints as well as permit edge decisions to be dependent on a limited neighbourhood of other edges in the graphboth kinds of models can no longer be considered edgefactored since the likelihood of a dependency occurring in a particular analysis is now dependent on properties beyond the edge itselfone feature of the edgefactored models is that no restriction is imposed on the arity of the nodes in the dependency treesas a consequence these models can generate dependency trees of unbounded 
aritywe show below that this is a crucial feature in the development of the complexity results we have obtained in the previous sectionslet us assume a graph g x is now defined as the set of all directed spanning trees for g x rooted in node 0 such that every node i e vx has arity smaller than or equal to 0we now introduce a construction that will be used to establish several hardness results for the computational problems discussed in this paperrecall that a hamiltonian path in a directed graph g is a directed path that visits all of the nodes of g exactly once we add an edge 1 to exwe set w1 1 if there is an edge from i to j in g or else if i or j is the root node 0 and w1 0 otherwisefurthermore we set 0 1 for each i e vxthis construction can be clearly carried out in logspacenote that each t e t x must be a monadic tree with weight equal to either 0 or 1it is not difficult to see that if w 1 then when we remove the root node 0 from t we obtain a hamiltonian path in g conversely each hamiltonian path in g can be extended to a spanning tree t e t x with w 1 by adding the root node 0using the above observations it can be shown that the solution of the argmax problem for g xprovides some hamiltonian directed path in g the latter search problem is fnphard and is unlikely to be solved in polynomial timefurthermore quantity zx provides the count of the hamiltonian directed paths in g and for each i e v the expectation 1x provides the count of the hamiltonian directed paths in g starting from node iboth these counting problems are phard and very unlikely to have polynomial time solutionsthis result helps to relate the hardness of datadriven models to the commonly known hardness results in the grammardriven literature given by neuhaus and boker in that work an arity constraint is included in their minimal grammarin general we would like to say that every dependency decision is dependent on every other edge in a graphhowever modeling dependency parsing in such a manner would be a computational nightmareinstead we would like to make a markov assumption over the edges of the tree in a similar way that a markov assumption can be made for sequential classification problems in order to ensure tractable learning and inferenceklein and manning distinguish between two kinds of markovization for unlexicalized cfg parsingthe first is vertical markovization which makes the generation of a nonterminal dependent on other nonterminals that have been generated at different levels in the phrasestructure treethe second is horizontal markovization which makes the generation of a nonterminal dependent on other nonterminals that have been generated at the same level in the treefor dependency parsing there are analogous notions of vertical and horizontal markovization for a given edge k first let us define the vertical and horizontal neighbourhoods of k the vertical neighbourhood includes all edges in any path from the root to a leaf that passes through k the horizontal neighbourhood contains all edges kfigure 4 graphically displays the vertical and horizontal neighbourhoods for an edge in the dependency graph from figure 1vertical and horizontal markovization essentially allow the score of the graph to factor over a larger scope of edges provided those edges are in the same vertical or horizontal neighbourhooda dth order factorization is one in which the score factors only over the d nearest edges in the neighbourhoodsin mcdonald and pereira it was shown that nonprojective dependency parsing with horizontal 
markovization is fnphardin this study we complete the picture and show that vertical markovization is also fnphardconsider a firstorder vertical markovization in which the score for a dependency graph factors over pairs of vertically adjacent edges2 where k hiwk0 ij is the weight of including both edges k and k0 in the dependency graphnote that this formulation does not include any contributions from dependencies that have no vertically adjacent neighbours ie any edge k such that there is no edge k0 in the graphwe can easily rectify this by inserting a second root node say 00 and including the weights k000wk0 0ito ensure that only valid dependency graphs get a weight greater than zero we can set k hiwk0 now consider the npcomplete 3dmatching problem as input we are given three sets of size m call them a b and c and a set s c_ a x b x c the 3dm problem asks if there is a set s0 c_ s such that s0 m and for any two tuples e s0 it is the case that a a0 b b0 and c c02mcdonald and pereira define this as a secondorder markov assumptionthis is simply a difference in terminology and does not represent any meaningful distinctionwe can reduce the 3dmatching problem to the firstorder vertical markov parsing problem by constructing a graph g such that l aubucv 000 you a you b you c and e k i j e v k e lthe set e contains multiple edges between ever pair of nodes each edge taking on a label representing a single element of the set a you b you c now define k 000wk0 we show below that there exists a bijection between the set of valid 3dms for s and the set of nonzero weighted dependency graphs in tfirst it is easy to show that for any 3dm s0 there is a representative dependency graph that has a weight of 1this graph simply consists of the edges b c and c for all e s0 plus an arbitrarily labeled edge from 00 to 0to prove the reverse consider a graph with weight 1this graph must have a weight 1 edge into the node a of the form b since the graph must be spanningby the definition of the weight function in any nonzero weighted tree a must have a single outgoing edge and that edge must be directed into the node b let us say that this edge is c then again by the weighting function in any nonzero weighted graph b must have a single outgoing edge that is directed into c in particular the edge c thus for any node a there is a single path directed out of it to a single leaf c e c we can then state that the only nonzero weighted dependency graph is one where each a e a b e b and c e c occurs in exactly one of m disjoint paths from the root of the form 0 a b c this is because the label of the single edge going into node a will determine exactly the node b that the one outgoing edge from a must go intothe label of that edge determines exactly the single outgoing edge from b into some node c now since the weighting function ensures that the only nonzero weighted paths into any leaf node c correspond directly to elements of s each of the m disjoint paths represent a single tuple in a 3dmthus if there is a nonzero weighted graph in t then it must directly correspond to a valid 3dm which concludes the proofnote that any dth order markovization can be embedded into a d 1th markovizationthus this result also holds for any arbitrary markovizationin this paper we have shown that many important learning and inference problems can be solved efficiently for nonprojective edgefactored dependency models by appealing to the matrix tree theorem for multidigraphsthese results extend the work of mcdonald et al and help to further our 
understanding of when exact nonprojective algorithms can be employedwhen this analysis is coupled with the projective parsing algorithms of eisner and paskin we begin to get a clear picture of the complexity for datadriven dependency parsing within an edgefactored frameworkto further justify the algorithms presented here we outlined a few novel learning and inference settings in which they are requiredhowever for the nonprojective case moving beyond edgefactored models will almost certainly lead to intractable parsing problemswe have provided further evidence for this by proving the hardness of incorporating arity constraints and horizontalvertical edge markovization both of which incorporate information unavailable to an edgefactored modelthe hardness results provided here are also of interest since both arity constraints and markovization can be incorporated efficiently in the projective case through the straightforward augmentation of the underlying chart parsing algorithms used in the projective edgefactored modelsthis highlights a fundamental difference between the nature of projective parsing algorithms and nonprojective parsing algorithmson the projective side all algorithms use a bottomup chart parsing framework to search the space of nested constructionson the nonprojective side algorithms are either greedyrecursive in nature or based on the calculation of the determinant of a matrix thus the existence of bottomup chart parsing algorithms for projective dependency parsing provides many advantagesas mentioned above it permits simple augmentation techniques to incorporate nonlocal information such as arity constraints and markovizationit also ensures the compatibility of projective parsing algorithms with many important natural language processing methods that work within a bottomup chart parsing framework including information extraction and syntaxbased machine translation the complexity results given here suggest that polynomial chartparsing algorithms do not exist for the nonprojective caseotherwise we should be able to augment them and move beyond edgefactored models without encountering intractability just like the projective casean interesting line of research is to investigate classes of nonprojective structures that can be parsed with chartparsing algorithms and how these classes relate to the languages parsable by other syntactic formalismsthanks to ben taskar for pointing out the work of meila and jaakkola thanks to david smith noah smith and michael collins for making drafts of their emnlp papers available
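To make the core computation of the paper above concrete, the following is a minimal sketch (in Python with NumPy, purely illustrative and not the authors' code) of the O(n^3) Matrix-Tree procedure it describes: given an (n+1) x (n+1) matrix of non-negative edge scores in which label weights have already been summed per head-modifier pair and node 0 is the artificial root, the partition function Z_x is the determinant of the root-deleted Laplacian and every edge expectation comes from a single matrix inverse. The function names, the label-summed score matrix, and the choice to return normalised marginals (multiply by Z to recover the paper's unnormalised edge expectations) are assumptions of this sketch. A brute-force enumeration over head assignments on a toy sentence is included as a sanity check of the one-to-one correspondence between spanning trees and dependency graphs.

```python
import numpy as np
from itertools import product

def partition_and_marginals(s):
    """Matrix-Tree computation for an edge-factored non-projective model.

    s : (n+1) x (n+1) array, s[i, j] = summed weight of all labelled edges
        from head i to modifier j; node 0 is the root.  The diagonal and
        column 0 are ignored (no self-loops, nothing may govern the root).

    Returns (Z, mu): Z is the total weight of all spanning trees rooted at
    node 0, and mu[i, j] is the fraction of that weight carried by trees
    containing the edge i -> j.  A per-label marginal is obtained by
    replacing s[i, j] with the label-specific weight in the formulas below.
    """
    s = np.asarray(s, dtype=float).copy()
    np.fill_diagonal(s, 0.0)
    s[:, 0] = 0.0

    # Laplacian restricted to the non-root nodes 1..n (row/column 0 deleted).
    L = -s[1:, 1:]
    np.fill_diagonal(L, s[:, 1:].sum(axis=0))

    Z = np.linalg.det(L)      # partition function; use slogdet for long inputs
    B = np.linalg.inv(L)      # one O(n^3) inverse yields every expectation

    n1 = s.shape[0]
    mu = np.zeros_like(s)
    for j in range(1, n1):
        mu[0, j] = s[0, j] * B[j - 1, j - 1]
        for i in range(1, n1):
            if i != j:
                mu[i, j] = s[i, j] * (B[j - 1, j - 1] - B[j - 1, i - 1])
    return Z, mu

def brute_force(s):
    """Enumerate every head assignment, keep those forming a tree rooted at 0."""
    n1 = len(s)
    Z, mu = 0.0, np.zeros((n1, n1))
    for heads in product(range(n1), repeat=n1 - 1):   # heads[j-1] = head of j
        ok = True
        for j in range(1, n1):                        # reject cyclic assignments
            i, seen = j, set()
            while i != 0:
                if i in seen:
                    ok = False
                    break
                seen.add(i)
                i = heads[i - 1]
            if not ok:
                break
        if not ok:
            continue
        w = 1.0
        for j in range(1, n1):
            w *= s[heads[j - 1]][j]
        Z += w
        for j in range(1, n1):
            mu[heads[j - 1], j] += w
    return Z, mu / Z

toy = np.random.rand(4, 4)                 # artificial root plus three words
Z1, m1 = partition_and_marginals(toy)
Z2, m2 = brute_force(toy)
assert np.isclose(Z1, Z2) and np.allclose(m1, m2)
```

On a three-word toy sentence the exhaustive enumeration and the determinant agree exactly, which is the content of the Matrix-Tree theorem as used in the paper; for realistic sentence lengths only the determinant-and-inverse route is feasible.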
W07-2216
on the complexity of nonprojective datadriven dependency parsing. in this paper we investigate several nonprojective parsing algorithms for dependency parsing, providing novel polynomial time solutions under the assumption that each dependency decision is independent of all the others, called here the edgefactored model. we also investigate algorithms for nonprojective parsing that account for nonlocal information and present several hardness results. this suggests that it is unlikely that exact nonprojective dependency parsing is tractable for any model richer than the edgefactored model. we claim that the main obstacle is that nonprojective parsing is nphard beyond arcfactored models
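One of the learning applications discussed in the paper above is training a globally normalised log-linear model (a CRF) over non-projective trees, whose gradient needs exactly the feature expectations that the Matrix-Tree machinery provides. Below is a hedged sketch of that per-sentence objective and gradient for the unlabelled case; it reuses the partition_and_marginals function from the earlier sketch, and the names, the shape conventions for the feature tensor, and the restriction to a single label are all assumptions made here for brevity (labels only add a sum over k).

```python
import numpy as np

def crf_loglik_and_gradient(w, feats, gold_heads):
    """Per-sentence log-likelihood and gradient for an edge-factored CRF.

    w          : weight vector, shape (d,)
    feats      : feats[i, j] = feature vector of edge i -> j, shape (n+1, n+1, d)
    gold_heads : gold_heads[j] = head of word j for j = 1..n (entry 0 unused)

    Relies on partition_and_marginals(s) from the Matrix-Tree sketch above.
    """
    n1 = feats.shape[0]
    s = np.exp(feats @ w)            # exponentiated linear edge potentials
    np.fill_diagonal(s, 0.0)
    s[:, 0] = 0.0

    Z, mu = partition_and_marginals(s)

    grad = np.zeros_like(w)
    loglik = -np.log(Z)
    for j in range(1, n1):
        h = gold_heads[j]
        loglik += float(feats[h, j] @ w)
        grad += feats[h, j]                      # observed feature counts
        for i in range(n1):
            grad -= mu[i, j] * feats[i, j]       # expected feature counts
    return loglik, grad
```

Summing these per-sentence gradients over a training set (plus any regulariser) is what a gradient-based optimiser would consume; the same edge marginals also drive the min-risk decoder and the EM updates discussed in the paper.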
further metaevaluation of machine translation j schroeder ed ac uk abstract this paper analyzes the translation quality of machine translation systems for 10 language pairs translating between czech english french german hungarian and spanish we report the translation quality of over 30 diverse translation systems based on a largescale manual evaluation involving hundreds of hours of effort we use the human judgments of the systems to analyze automatic evaluation metrics for translation quality and we report the strength of the correlation with human judgments at both the systemlevel and at the sentencelevel we validate our manual evaluation methodology by measuring intraand interannotator agreement and collecting timing information this paper presents the results the shared tasks of the 2008 acl workshop on statistical machine translation which builds on two past workshops there were two shared tasks this year a translation task which evaluated translation between 10 pairs of european languages and an evaluation task which examines automatic evaluation metricsthere were a number of differences between this years workshop and last years workshop newspaper articles from a number of different sourcesthis outofdomain test set contrasts with the indomain europarl test set new language pairs we evaluated the quality of hungarianenglish machine translationhungarian is a challenging language because it is agglutinative has many cases and verb conjugations and has freer word ordergermanspanish was our first language pair that did not include english but was not manually evaluated since it attracted minimal participation of rulebased mt systems and provided their output which were also treated as fully fledged entries in the manual evaluationthree additional groups were invited to apply their system combination algorithms to all systems ation metrics with human judgments at the system level we also measured how consistent they were with the human rankings of individual sentencesthe remainder of this paper is organized as follows section 2 gives an overview of the shared translation task describing the test sets the materials that were provided to participants and a list of the groups who participatedsection 3 describes the manual evaluation of the translations including information about the different types of judgments that were solicited and how much data was collectedsection 4 presents the results of the manual evaluationsection 5 gives an overview of the shared evaluation task describes which automatic metrics were submitted and tells how they were evaluatedsection 6 presents the results of the evaluation tasksection 7 validates the manual evaluation methodology2 overview of the shared translation task the shared translation task consisted of 10 language pairs english to german german to english english to spanish spanish to english english to french french to english english to czech czech to english hungarian to english and german to spanisheach language pair had two test sets drawn from the proceedings of the european parliament or from newspaper articles1 the test data for this years task differed from previous years datainstead of only reserving a portion of the training data as the test set we hired people to translate news articles that were drawn from a variety of sources during november and december of 2007we refer to this as the news test seta total of 90 articles were selected 15 each from a variety of czech english french german hungarianand spanishlanguage news sites2 hungarian 
napi index origo nepszabadsag hvg uniospez the translations were created by the members of euromatrix consortium who hired a mix of professional and nonprofessional translatorsall translators were fluent or native speakers of both languages and all translations were proofread by a native speaker of the target languageall of the translations were done directly and not via an intermediate languageso for instance each of the 15 hungarian articles were translated into czech english french german and spanishthe total cost of creating the 6 test sets consisting of 2051 sentences in each language was approximately 17200 euros having a test set that is balanced in six different source languages and translated across six languages raises some interesting questionsfor instance is it easier when the machine translation system translates in the same direction as the human translatorwe found no conclusive evidence that shows thiswhat is striking however that the parts differ dramatically in difficulty based on the original source languagefor instance the edinburgh frenchenglish system has a bleu score of 268 on the part that was originally spanish but a score of on 97 on the part that was originally hungarianfor average scores for each original language see table 1in order to remain consistent with previous evaluations we also created a europarl test setthe europarl test data was again drawn from the transcripts of eu parliamentary proceedings from the fourth quarter of 2000 which is excluded from the europarl training dataour rationale behind investing a considerable sum to create the news test set was that we believe that it more accurately represents the quality of systems translations than when we simply hold out a portion of the training data as the test set as with the europarl setfor instance statistical systems are heavily optimized to their training data and do not perform as well on outofdomain data having both the news test set and the europarl test set allows us to contrast the performance of systems on indomain and outofdomain data and provides a fairer comparison between systems trained on the europarl corpus and systems that were developed without itto lower the barrier of entry for newcomers to the field we provided a complete baseline mt system along with data resourceswe provided the performance of this baseline system is similar to the best submissions in last years shared taskthe training materials are described in figure 1we received submissions from 23 groups from 18 institutions as listed in table 2we also evaluated seven additional commercial rulebased mt systems bringing the total to 30 systemsthis is a significant increase over last years shared task where there were submissions from 15 groups from 14 institutionsof the 15 groups that participated in last years shared task 11 groups returned this yearone of the goals of the workshop was to attract submissions from newcomers to the field and we are please to have attracted many smaller groups some as small as a single graduate student and her adviserthe 30 submitted systems represent a broad range of approaches to statistical machine translationthese include statistical phrasebased and rulebased systems and also hybrid machine translation and statistical treebased systemsfor most language pairs we assembled a solid representation of the state of the art in machine translationin addition to individual systems being entered this year we also solicited a number of entries which combined the results of other systemswe invited 
researchers at bbn carnegie mellon university and the university of edinburgh to apply their system combination algorithms to all of the systems submitted to shared translation taskwe designated the translations of the europarl set as the development data for combination techniques which weight each system3 cmu combined the frenchenglish systems bbn combined the frenchenglish and germanenglish systems and edinburgh submitted combinations for the frenchenglish and germanenglish systems as well as a multisource system combination which combined all systems which translated from any language pair into english for the news test setthe university of saarland also produced a system combination over six commercial rbmt systems saarland graciously provided the output of these systems which we manually evaluated alongside all other entriesfor more on the participating systems please refer to the respective system descriptions in the proceedings of the workshopas with last years workshop we placed greater emphasis on the human evaluation than on the automatic evaluation metric scoresit is our contention that automatic measures are an imperfect substitute for human assessment of translation qualitytherefore rather than select an official automatic evaluation metric like the nist machine translation workshop does we define the manual evaluation to be primary and use 3since the performance of systems varied significantly between the europarl and news test sets such weighting might not be optimalhowever this was a level playing field since none of the individual systems had development data for the news set eithereuroparl corpus and from the project syndicate a web site which collects political commentary in multiple languagesfor czech and hungarian we use other available parallel corporanote that the number of words is computed based on the provided tokenizer and that the number of distinct words is the based on lowercased tokens the human judgments to validate automatic metricsmanual evaluation is time consuming and it requires a monumental effort to conduct it on the scale of our workshopwe distributed the workload across a number of people including shared task participants interested volunteers and a small number of paid annotatorsmore than 100 people participated in the manual evaluation with 75 people putting in more than an hours worth of effort and 25 putting in more than four hoursa collective total of 266 hours of labor was investedwe wanted to ensure that we were using our annotators time effectively so we carefully designed the manual evaluation processin our analysis of last years manual evaluation we found that the niststyle fluency and adequacy scores were overly time consuming and inconsistent4 we therefore abandoned this method of evaluating the translationswe asked people to evaluate the systems output in three different ways the manual evaluation software asked for repeated judgments from the same individual and had multiple people judge the same item and logged the time it took to complete each judgmentthis allowed us to measure intra and interannotator agreement and to analyze the average amount of time it takes to collect the different kinds of judgmentsour analysis is presented in section 7ranking translations relative to each other is a relatively intuitive and straightforward taskwe therefore kept the instructions simplethe instructions for this task were 4it took 26 seconds on average to assign fluency and adequacy scores to a single sentence and the interannotator agreement 
had a kappa of between 22525 meaning that annotators assigned the same scores to identical sentences less than 40 of the timerank each whole sentence translation from best to worst relative to the other choices ranking several translations at a time is a variant of force choice judgments where a pair of systems is presented and an annotator is asked is a better than b worse than b or equal to b in our experiments annotators were shown five translations at a time except for the hungarian and czech language pairs where there were fewer than five system submissionsin most cases there were more than 5 systems submissionswe did not attempt to get a complete ordering over the systems and instead relied on random selection and a reasonably large sample size to make the comparisons fairwe continued the constituentbased evaluation that we piloted last year wherein we solicited judgments about the translations of short phrases within sentences rather than whole sentenceswe parsed the source language sentence selected syntactic constituents from the tree and had people judge the translations of those syntactic phrasesin order to draw judges attention to these regions we highlighted the selected source phrases and the corresponding phrases in the translationsthe corresponding phrases in the translations were located via automatic word alignmentsfigure 2 illustrates how the source and reference phrases are highlighted via automatic word alignmentsthe same is done for sentence and each of the system translationsthe english french german and spanish test sets were automatically parsed using high quality parsers for those languages the word alignments were created with giza applied to a parallel corpus containing the complete europarl training data plus sets of 4051 sentence pairs created by pairing the test sentences with the reference translations and the test sentences paired with each of the system translationsthe phrases in the translations were located using standard phrase extraction techniques because the wordalignments were created automatically and because the phrase extraction is heuristic the phrases that were selected may not exactly correspond to the translations of the selected source phrasewe noted this in the instructions to judges rank each constituent translation from best to worst relative to the other choices grade only the highlighted part of each translationplease note that segments are selected automatically and they should be taken as an approximate guidethey might include extra words that are not in the actual alignment or miss words on either end76 the criteria that we used to select which constituents to evaluate were the final criterion helped reduce the number of alignment errors but may have biased the sample to phrases that are more easily alignedthis year we introduced a variant on the constituentbased evaluation where instead of asking judges to rank the translations of phrases relative to each other we asked them to indicate which phrasal translations were acceptable and which were notdecide if the highlighted part of each translation is acceptable given the referencethis should not be a relative judgment against the other system translationsthe instructions also contained the same caveat about the automatic alignments as abovefor each phrase the judges could click on yes no or not sure the number of times people clicked on not sure varied by language pair and taskit was selected as few as 5 of the time for the englishspanish news task to as many as 125 for the 
czechenglish news taskwe collected judgments using a webbased tool that presented judges with batches of each type of evaluationwe presented them with five screens of sentence rankings ten screens of constituent rankings and ten screen of yesno judgmentsthe order of the types of evaluation were randomizedin order to measure intraannotator agreement 10 of the items were repeated and evaluated twice by each judgein order to measure interannotator agreement 40 of the items were randomly drawn from a common pool that was shared across all annotators so that we would have items that were judged by multiple annotatorsjudges were allowed to select whichever data set they wanted and to evaluate translations into whatever languages they were proficient inshared task participants were excluded from judging their own systemsin addition to evaluation each language pair individually we also combined all system translations into english for the news test set taking advantage of the fact that our test sets were parallel across all languagesthis allowed us to gather interesting data about the difficulty of translating from different languages into englishtable 3 gives a summary of the number of judgments that we collected for translations of individual sentenceswe evaluated 14 translation tasks with three different types of judgments for most of them for a total of 46 different conditionsin total we collected over 75000 judgmentsdespite the large number of conditions we managed to collect between 10002000 judgments for the constituentbased evaluation and several hundred to several thousand judgments for the sentence ranking taskstables 4 5 and 6 summarize the results of the human evaluation of the quality of the machine translation systemstable 4 gives the results for the manual evaluation which ranked the translations of sentencesit shows the average number of times that systems were judged to be better than or equal to any other systemtable 5 similarly summarizes the results for the manual evaluation which ranked the translations of syntactic constituentstable 6 shows how many times on average a systems translated constituents were judged to be acceptable in the yesno evaluationthe bolded items indicate the system that performed the best for each task under that particular evaluate metrictable 7 summaries the results for the allenglish task that we introduced this yearappendix c gives an extremely detailed pairwise comparison between each of the systems along with an indication of whether the differences are statistically significantthe highest ranking entry for the allenglish task was the university of edinburghs system combination entryit uses a technique similar to rosti et al to perform system combinationlike the other system combination entrants it was tuned on the europarl test set and tested on the news test set using systems that submitted entries to both tasksthe university of edinburghs system combination went beyond other approaches by combining output from multiple languages pairs resulting in 37 component systemsrather than weighting individual systems it incorporated weighted features that indicated which language the system was originally translating fromthis entry was part of ongoing research in multilingual multisource translationsince there was no official multilingual system combination track this entry should be viewed only as a contrastive data pointwe analyzed the allenglish judgments to see which source languages were preferred more often thinking that this might be a good 
indication of how challenging it is for current mt systems to translate from each of the languages into englishfor this analysis we collapsed all of the entries derived from one source language into an equivalence class and judged them against the otherstherefore all french systems were judged against all german systems and so onwe found that french systems were judged to be better than or equal to other systems 69 of the time spanish systems 64 of the time german systems 47 of the time czech systems 39 of the time and hungarian systems 29 of the timewe performed a similar analysis by collapsing the rbmt systems into one equivalence class and the other systems into anotherwe evaluated how well these two classes did on the sentence ranking task for each language pair and test set and found that rbmt was a surprisingly good approach in many of the conditionsrbmt generally did better on the news test set and for translations into german suggesting that smts forte is in test sets where it has appropriate tuning data and for language pairs with less reordering than between german and english system was judged to be better than or equal to all other systems in the sentence ranking task for the allenglish conditionthe subscript indicates the source language of the systemthe manual evaluation data provides a rich source of information beyond simply analyzing the quality of translations produced by different systemsin particular it is especially useful for validating the automatic metrics which are frequently used by the machine translation research communitywe continued the shared task which we debuted last year by examining how well various automatic metrics correlate with human judgmentsin addition to examining how well the automatic evaluation metrics predict human judgments at the systemlevel this year we have also started to measure their ability to predict sentencelevel judgmentsthe automatic metrics that were evaluated in this years shared task were the following some of the allowable variation in translationwe use a single reference translation in our experiments wordsthey calculate bleu and fmeasure by matching part of speech 4grams in a hypothesis translation against the reference translationin addition to the above metrics which scored the translations on both the systemlevel5 and the sentencelevel there were a number of metrics which focused on the sentencelevel system translations features included in duh s training were sentencelevel bleu scores and intraset ranks computed from the entire set of translations usaars evaluation metric uses giza to align outputs of multiple systems with the corresponding reference translations with a bias towards identical onetoone alignments through a suitably augmented corpusthe model4 log probabilities in both directions are added and normalized to a scale between 0 and 1to measure the correlation of the automatic metrics with the human judgments of translation quality at the systemlevel we used spearmans rank correlation coefficient p we converted the raw scores assigned each system into rankswe assigned a ranking to the systems for each of the three types of manual evaluation based on the percent of time that the sentences it produced were judged to be better than or equal to the translations of any other system the percent of time that its constituent translations were judged to be better than or equal to the translations of any other system the percent of time that its constituent translations were judged to be acceptablewe calculated p three 
times for each automatic metric comparing it to each type of human evaluationsince there were no ties p can be calculated using the simplified equation where di is the difference between the rank for systemi and n is the number of systemsthe possible values of p range between 1 and 1 thus an automatic evaluation metric with a higher value for p is making predictions that are more similar to the human judgments than an automatic evaluation metric with a lower p measuring sentencelevel correlation under our human evaluation framework was made complicated by the fact that we abandoned the fluency and adequacy judgments which are intended to be absolute scalessome previous work has focused on developing automatic metrics which predict human ranking at the sentencelevel such work generally used the 5point fluency and adequacy scales to combine the translations of all sentences into a single ranked listthis list could be compared against the scores assigned by automatic metrics and used to calculate correlation coefficientswe did not gather any absolute scores and thus cannot compare translations across different sentencesgiven the seemingly unreliable fluency and adequacy assignments that people make even for translations of the same sentences it may be dubious to assume that their scoring will be reliable across sentencesthe data points that we have available consist of a set of 6400 human judgments each ranking the output of 5 systemsit is straightforward to construct a ranking of each of those 5 systems using the scores automatic evaluation metrics on translations into french german and spanish assigned to their translations of that sentence by the automatic evaluation metricswhen the automatic scores have been retrieved we have 6400 pairs of ranked lists containing 5 itemshow best to treat these is an open discussion and certainly warrants further thoughtit does not seem like a good idea to calculate p for each pair of ranked list because 5 items is an insufficient number to get a reliable correlation coefficient and its unclear if averaging over all 6400 lists would make sensefurthermore many of the human judgments of 5 contained ties further complicating matterstherefore rather than calculating a correlation coefficient at the sentencelevel we instead ascertained how consistent the automatic metrics were with the human judgmentsthe way that we calculated consistency was the following for every pairwise comparison of two systems on a single sentence by a person we counted the automatic metric as being consistent if the relative scores were the same we divided this by the total number of pairwise comparisons to get a percentagebecause the systems generally assign real numbers as scores we excluded pairs that the human annotators ranked as tiestables 8 and 9 report the systemlevel p for each automatic evaluation metric averaged over all translations directions into english and out of english6 for the into english direction the meteor score with its parameters tuned on adequacy judgments had the strongest correlation with ranking the translations of whole sentencesit was tied with the combined method of gimenez and marquez for the highest correlation over all three types of human judgmentsbleu was the second to lowest ranked overall though this may have been due in part to the fact that we were using test sets which had only a single reference translation since the cost of creating multiple references was prohibitively expensive in the reverse direction for translations out of english into 
the other languages bleu does considerably better placing second overall after the partofspeech variant on it proposed by popovic and ney yet another variant of bleu which utilizes meteors flexible matching has the strongest correlation for sentencelevel rankingappendix b gives a break down of the correlations for each of the lan6tables 8 and 9 exclude the spanishenglish news task since it had a negative correlation with most of the automatic metricssee tables 19 and 20 guage pairs and test setstables 10 and 11 report the consistency of the automatic evaluation metrics with human judgments on a sentencebysentence basis rather than on the system levelfor the translations into english the ulc metric had the strongest correlation with human judgments correctly predicting the human ranking of a each pair of system translations of a sentence more than half the timethis is dramatically higher than the chance baseline which is not 5 since it must correctly rank a list of systems rather than a pairfor the reverse direction meteorranking performs very stronglythe svnrank which had the lowest overall correlation at the system level does the best at consistently predicting the translations of syntactic constituents into other languagesin addition to scoring the shared task entries we also continued on our campaign for improving the process of manual evaluationwe measured pairwise agreement among annotators using the kappa coefficient which is widely used in computational linguistics for measuring agreement in category judgments it is defined as annotator agreement for the different types of manual evaluation where p is the proportion of times that the annotators agree and p is the proportion of time that they would agree by chancewe define chance agreement for ranking tasks as s since there are three possible outcomes when ranking the output of a pair of systems a b a b a b a b or a bfor intraannotator agreement we did similarly but gathered items that were annotated on multiple occasions by a single annotatortable 12 gives k values for interannotator agreement and table 13 gives k values for intraannotator agreementthese give an indication of how often different judges agree and how often single judges are consistent for repeated judgments respectivelythe interpretation of kappa varies but according to landis and koch 02 is slight 2 4 is fair 4 6 is moderate 6 8 is substantial and the rest almost perfectthe interannotator agreement for the sentence ranking task was fair for the constituent ranking it was moderate and for the yesno judgments it was substantial7 for the intraannotator agreement k indicated that people had moderate consistency with their previous judgments on the sentence ranking task substantial consistency with their previous constituent ranking judgments and nearly perfect consistency with their previous yesno judgmentsthese k values indicate that people are able to more reliably make simple yesno judgments about the translations of short phrases than they are to rank phrases or whole sentenceswhile this is an interesting observation we do not recommend doing away with the sentence ranking judgmentsthe higher agreement on the constituentbased evaluation may be influenced based on the selection criteria for which phrases were selected for evaluation additionally the judgments of the short phrases are not a great substitute for sentencelevel rankings at least in the way we collected themthe average correlation coefficient between the constituentbased judgments with the sentence 
ranking judgments is only p 051tables 19 and 20 give a detailed break down of the correlation of the different types of human judgments with each other on each translation taskit may be possible to select phrases in such a way that the constituentbased evaluations are a better substitute for the sentencebased ranking for instance by selecting more of constituents from each sentence or attempting to cover most of the words in each sentence in a phrasebyphrase mannerthis warrants further investigationit might also be worthwhile to refine the instructions given to annotators about how to rank the translations of sentences to try to improve their agreement which is currently lower than we would like it to be we used the web interface to collect timing informationthe server recorded the time when a set of sentences was given to a judge and the time when the judge returned the sentencesit took annotators an average of 18 seconds per sentence to rank a list of sentences8 it took an average of 10 seconds per sentence for them to rank constituents and an average of 85 seconds per sentence for them to make yesno judgmentsfigure 3 shows the distribution of times for these tasksthese timing figures indicate that the tasks which the annotators were the most reliable on were also much quicker to complete than the ones they were less reliable on given that they are faster at judging short phrases they can do proportionally more of themfor instance we could collect 211 yesno judgments in the same amount of time that it would take us to collect 100 sentence ranking judgmentshowever this is partially offset by the fact that many of the translations of shorter phrases are identical which means that we have to collect more judgments in order to distinguish between two systems8sets which took longer than 5 minutes were excluded from these calculations because there was a strong chance that annotators were interrupted while completing the taskone strong advantage of the yesno judgments over the ranking judgments is their potential for reusewe have invested hundreds of hours worth of effort evaluating the output of the translation systems submitted to this years workshop and last years workshopwhile the judgments that we collected provide a wealth of information for developing automatic evaluation metrics we cannot not reuse them to evaluate our translation systems after we update their parameters or change their behavior in anywaythe reason for this is that altered systems will produce different translations than the ones that we have judged so our relative rankings of sentences will no longer be applicablehowever the translations of short phrases are more likely to be repeated than the translations of whole sentencestherefore if we collect a large number of yesno judgments for short phrases we could build up a database that contains information about what fragmentary translations are acceptable for each sentence in our test corpuswhen we change our system and want to evaluate it we do not need to manually evaluate those segments that match against the database and could instead have people evaluate only those phrasal translations which are newaccumulating these judgments over time would give a very reliable idea of what alternative translations were allowablethis would be useful because it could alleviate the problems associated with bleu failing to recognize allowable variation in translation when multiple reference translations are not available a large database of human judgments might also be useful as an 
objective function for minimum error rate training or in other system development taskssimilar to previous editions of this workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from european languages into english and vice versaone important aspect in which this years shared task differed from previous years was the introduction of an additional newswire test set that was different in nature to the training datawe also added new language pairs to our evaluation hungarianenglish and germanspanishas in previous years we were pleased to notice an increase in the number of participantsthis year we received submissions from 23 groups from 18 institutionsin addition we evaluated seven commercial rulebased mt systemsthe goal of this sharedtask is twofold first we want to compare stateoftheart machine translation systems and secondly we aim to measure to what extent different evaluation metrics can be used to assess mt qualitywith respect to mt quality we noticed that the introduction of test sets from a different domain did have an impact on the ranking of systemswe observed that rulebased systems generally did better on the news test setoverall it cannot be concluded that one approach clearly outperforms other approaches as systems performed differently on the various translation tasksone general observation is that for the tasks where statistical combination approaches participated they tended to score relatively high in particular with respect to bleuwith respect to measuring the correlation between automated evaluation metrics and human judgments we found that using meteor and ulch resulted in the highest spearman correlation scores on average when translating into englishwhen translating from english into french german and spanish bleu and posbleu resulted in the highest correlations with human judgmentsfinally we investigated inter and intraannotator agreement of human judgments using kappa coefficientswe noticed that ranking whole sentences results in relatively low kappa coefficients meaning that there is only fair agreement between the assessorsconstituent ranking and acceptability judgments on the other hand show moderate and substantial interannotator agreement respectivelyintraannotator agreement was substantial to almost perfect except for the sentence ranking assessment where agreement was only moderatealthough it is difficult to draw exact conclusions from this one might wonder whether the sentence ranking task is simply too complex involving too many aspects according to which translations can be rankedthe huge wealth of the data generated by this workshop including the human judgments system translations and automatic scores is available at httpwwwstatmtorgwmt08 for other researchers to analyzethis work was supported in parts by the euromatrix project funded by the european commission the gale program of the us defense advanced research projects agency contract nohr001106c0022 and the us national science foundation under grant iis0713448we are grateful to abhaya agarwal john henderson rebecca hwa alon lavie mark przybocki stuart shieber and david smith for discussing different possibilities for calculating the sentencelevel correlation of automatic evaluation metrics with human judgments in absence of absolute scoresany errors in design remain the responsibility of the authorsthank you to eckhard bick for parsing the spanish test setsee httpbetavislsdudk for more information about the constraintbased parserthanks 
to greg hanneman and anttiveikko rosti for applying their system combination algorithms to our data
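the agreement figures above are reported as kappa coefficients. a minimal sketch of that computation, assuming judgments are categorical labels over the same items and using empirical label frequencies for the chance-agreement term (the workshop's exact protocol may differ):

```python
from collections import Counter

def kappa(labels_a, labels_b):
    """Cohen-style kappa between two annotators over the same items.

    labels_a, labels_b: equal-length lists of categorical judgments,
    e.g. 'better' / 'equal' / 'worse' for a pairwise ranking task.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement P(A)
    p_a = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement P(E) from the empirical label distributions
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    labels = set(labels_a) | set(labels_b)
    p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (p_a - p_e) / (1 - p_e)

# toy usage: two annotators ranking system X vs system Y on five sentences
a = ['better', 'equal', 'worse', 'better', 'better']
b = ['better', 'worse', 'worse', 'better', 'equal']
print(kappa(a, b))
```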
W08-0309
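the text above and the summary below report spearman rank correlation between automatic metrics and human judgments at the system level. a minimal sketch of that statistic, assuming systems are ranked by score with no ties (the simplified formula is only exact in that case):

```python
def spearman_rho(metric_scores, human_scores):
    """Spearman rank correlation between two score lists for the same systems.

    Uses the simplified formula rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    which assumes there are no tied ranks.
    """
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i], reverse=True)
        r = [0] * len(xs)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    n = len(metric_scores)
    rm, rh = ranks(metric_scores), ranks(human_scores)
    d2 = sum((a - b) ** 2 for a, b in zip(rm, rh))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# toy usage: metric scores vs. human scores for four systems
print(spearman_rho([0.31, 0.28, 0.25, 0.22], [0.62, 0.55, 0.58, 0.40]))
```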
further metaevaluation of machine translation. this paper analyzes the translation quality of machine translation systems for 10 language pairs translating between czech english french german hungarian and spanish. we report the translation quality of over 30 diverse translation systems based on a largescale manual evaluation involving hundreds of hours of effort. we use the human judgments of the systems to analyze automatic evaluation metrics for translation quality and we report the strength of the correlation with human judgments at both the systemlevel and at the sentencelevel. we validate our manual evaluation methodology by measuring intra and interannotator agreement and collecting timing information. thus the human annotation for the wmt 2008 dataset was collected in the form of binary pairwise preferences that are considerably easier to make. traditionally human ratings for mt quality have been collected in the form of absolute scores on a five or sevenpoint likert scale but low reliability numbers for this type of annotation have raised concerns
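the text above argues that yes-no judgments of short phrases can be reused across system versions because phrasal translations repeat. a sketch of such a judgment database is below; the keying scheme and function names are illustrative assumptions, not the workshop's actual tooling:

```python
# judgments: {(sentence_id, source_phrase, candidate_translation): True/False}
judgments = {}

def record_judgment(sent_id, phrase, translation, acceptable):
    judgments[(sent_id, phrase, translation)] = acceptable

def split_for_annotation(sent_id, phrase, candidate_translations):
    """Return cached verdicts for known translations and the list of
    translations that still need to be sent to human annotators."""
    known, needs_annotation = {}, []
    for t in candidate_translations:
        key = (sent_id, phrase, t)
        if key in judgments:
            known[t] = judgments[key]
        else:
            needs_annotation.append(t)
    return known, needs_annotation

# toy usage
record_judgment(17, "zum beispiel", "for example", True)
known, todo = split_for_annotation(17, "zum beispiel",
                                   ["for example", "to the example"])
print(known)   # {'for example': True}
print(todo)    # ['to the example']
```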
optimizing chinese word segmentation for machine translation performance previous work has shown that chinese word segmentation is useful for machine translation to english yet the way different segmentation strategies affect mt is still poorly understood in this paper we demonstrate that optimizing segmentation for an existing segmentation standard does not always yield better mt performance we find that other factors such as segmentation consistency and granularity of chinese words can be more important for machine translation based on these findings we implement methods inside a conditional random field segmenter that directly optimize segmentation granularity with respect to the mt task providing an improvement of 073 bleu we also show that improving segmentation consistency using external lexicon and proper noun features yields a 032 bleu increase word segmentation is considered an important first step for chinese natural language processing tasks because chinese words can be composed of multiple characters but with no space appearing between wordsalmost all tasks could be expected to benefit by treating the character sequence it together with the meaning smallpox rather than dealing with the individual characters and it without a standardized notion of a word traditionally the task of chinese word segmentation starts from designing a segmentation standard based on linguistic and task intuitions and then aiming to building segmenters that output words that conform to the standardone widely used standard is the penn chinese treebank segmentation standard it has been recognized that different nlp applications have different needs for segmentationchinese information retrieval systems benefit from a segmentation that breaks compound words into shorter words paralleling the ir gains from compound splitting in languages like german whereas automatic speech recognition systems prefer having longer words in the speech lexicon however despite a decade of very intense work on chinese to english machine translation the way in which chinese word segmentation affects mt performance is very poorly understoodwith current statistical phrasebased mt systems one might hypothesize that segmenting into small chunks including perhaps even working with individual characters would be optimalthis is because the role of a phrase table is to build domain and application appropriate larger chunks that are semantically coherent in the translation processfor example even if the word for smallpox is treated as two onecharacter words they can still appear in a phrase like itsmallpox so that smallpox will still be a candidate translation when the system translates itnevertheless xu et al show that an mt system with a word segmenter outperforms a system working with individual characters in an alignment template approachon different language pairs and showed that datadriven methods for splitting and preprocessing can improve arabicenglish and germanenglish mtbeyond this there has been no finergrained analysis of what style and size of word segmentation is optimal for mtmoreover most discussion of segmentation for other tasks relates to the size units to identify in the segmentation standard whether to join or split noun compounds for instancepeople generally assume that improvements in a systems word segmentation accuracy will be monotonically reflected in overall system performancethis is the assumption that justifies the concerted recent work on the independent task of chinese word segmentation evaluation at sighan 
and other venueshowever we show that this assumption is false aspects of segmenters other than error rate are more critical to their performance when embedded in an mt systemunless these issues are attended to simple baseline segmenters can be more effective inside an mt system than more complex machine learning based models with much lower word segmentation error ratein this paper we show that even having a basic word segmenter helps mt performance and we analyze why building an mt system over individual characters does not function as wellbased on an analysis of baseline mt results we pin down four issues of word segmentation that can be improved to get better mt performance while a featurebased segmenter like a support vector machine or conditional random field model may have very good aggregate performance inconsistent contextspecific segmentation decisions can be quite harmful to mt system performance a perceived strength of featurebased systems is that they can generate outofvocabulary words but these can hurt mt performance when they could have been split into subparts from which the meaning of the whole can be roughly compositionally derived conversely splitting oov words into noncompositional subparts can be very harmful to an mt system it is better to produce such oov items than to split them into unrelated character sequences that are known to the systemone big source of such oov words is named entities since the optimal granularity of words for phrasebased mt is unknown we can benefit from a model which provides a knob for adjusting average word sizewe build several different models to address these issues and to improve segmentation for the benefit of mtfirst we emphasize lexiconbased features in a featurebased sequence classifier to deal with segmentation inconsistency and overgenerating oov wordshaving lexiconbased features reduced the mt training lexicon by 295 reduced the mt test data oov rate by 341 and led to a 038 bleu point gain on the test data second we extend the crf label set of our crf segmenter to identify proper nounsthis gives 33 relative improvement on the oov recall rate and a 032 improvement in bleufinally we tune the crf model to generate shorter or longer words to directly optimize the performance of mtfor mt we found that it is preferred to have words slightly shorter than the ctb standardthe paper is organized as follows we describe the experimental settings for the segmentation task and the task in section 2in section 31 we demonstrate that it is helpful to have word segmenters for mt but that segmentation performance does not directly correlate with mt performancewe analyze what characteristics of word segmenters most affect mt performance in section 32in section 4 and 5 we describe how we tune a crf model to fit the word granularity and also how we incorporate external lexicon and information about named entities for better mt performancefor directly evaluating segmentation performance we train each segmenter with the sighan bakeoff 2006 training data and then evaluate on the test datathe training data contains 509k words and the test data has 155k wordsthe percentage of words in the test data that are unseen in the training data is 88detail of the bakeoff data sets is in to understand how each segmenter learns about oov words we will report the f measure the invocabulary recall rate as well as oov recall rate of each segmenterthe mt system used in this paper is moses a stateoftheart phrasebased system we build phrase translations by first acquiring 
bidirectional giza alignments and using moses growdiag alignment symmetrization heuristic1 we set the maximum phrase length to a large value because some segmenters described later in this paper will result in shorter words therefore it is more comparable if we increase the maximum phrase lengthduring decoding we incorporate the standard eight feature functions of moses as well as the lexicalized reordering modelwe tuned the parameters of these features with minimum error rate training on the nist mt03 evaluation data set and then test the mt performance on nist mt03 and mt05 evaluation data we report the mt performance using the original bleu metric all bleu scores in this paper are uncasedthe mt training data was subsampled from gale year 2 training data using a collection of character 5grams and smaller ngrams drawn from all segmentations of the test datasince the mt training data is subsampled with character ngrams it is not biased towards any particular word segmentationthe mt training data contains 1140693 sentence pairs on the chinese side there are 60573223 nonwhitespace characters and the english sentences have 40629997 wordsour main source for training our fivegram language model was the english gigaword corpus and we also included close to one million english sentences taken from ldc parallel texts gale year 1 training data sinorama asianet and hong kong newswe restricted the gigaword corpus to a subsample of 25 million sentences because of memory constraintsin this section we experiment with three types of segmenters characterbased lexiconbased and featurebased to explore what kind of characteristics are useful for segmentation for mtthe training data for the segmenter is two orders of magnitude smaller than for the mt system it is not terribly well matched to it in terms of genre and variety and the information an mt system learns about alignment of chinese to english might be the basis for a task appropriate segmentation style for chineseenglish mta phrasebased mt system like moses can extract phrases from a word alignment and the system can construct the words that are usefulthese observations suggest the first hypothesisobservation in the experiments we conducted we found that the phrase table cannot capture everything a chinese word segmenter can do and therefore having word segmentation helps phrasebased mt systems2 to show that having word segmentation helps mt we compare a lexiconbased maximummatching segmenter with characterbased segmentation the lexiconbased segmenter finds words by greedily matching the longest words in the lexicon in a lefttoright fashionwe will later refer to this segmenter as maxmatchthe maxmatch segmenter is a simple and common baseline for the chinese word segmentation taskthe segmentation performance of maxmatch is not very satisfying because it cannot generalize to capture words it has never seen beforehowever having a basic segmenter like maxmatch still gives the phrasebased mt system a win over the characterbased segmentation we will refer to the characterbased segmentation as charbasedin table 1 we can see that on the chinese word segmentation task having maxmatch is obviously better than not trying to identify chinese words at all as for mt performance in table 1 we see that having a segmenter even as sim2different phrase extraction heuristics might affect the resultsin our experiments growdiag outperforms both onetomany and manytoone for both maxmatch and charbasedwe report the results only on growdiag ple as maxmatch can help phrasebased 
mt system by about 137 bleu points on all 1082 sentences of the test data also we tested the performance on 828 sentences of mt05 where all elements are in vocabulary3 for both maxmatch and charbasedmaxmatch achieved 3209 bleu and charbased achieved 3028 bleu which shows that on the sentences where all elements are in vocabulary there maxmatch is still significantly better than charbasedtherefore hypothesis 1 is refutedanalysis we hypothesized in hypothesis 1 that the phrase table in a phrasebased mt system should be able to capture the meaning by building phrases on top of character sequencesbased on the experimental result in table 1 we see that using characterbased segmentation actually performs reasonably well which indicates that the phrase table does capture the meaning of character sequences to a certain extenthowever the results also show that there is still some benefit in having word segmentation for mtwe analyzed the decoded output of both systems on the development set we found that the advantage of maxmatch over charbased is twofold lexical it enhances the ability to disambiguate the case when a character has very different meaning in different contexts and reordering it is easier to move one unit around than having to move two consecutive units at the same timehaving words as the basic units helps the reordering modelfor the first advantage one example is the character ➐ which can both mean intelligence or an abbreviation for chile the comparison between charbased and maxmatch is listed in table 2the word ➈➐ is unknown for both segmentershowever maxmatch gave a better translation of the character ➐the issue here is not that the ➐intelligence entry never appears in the phrase table of charbasedthe real issue is when ➐ means chile it is usually followed by the character ⑤so by grouping them together maxmatch avoided falsely increasing the probability of translating the standalone ➐ into chilebased on our analysis this ambiguity occurs the most when the characterbased system is dealing with a rare or unseen character sequence in the training data and also occurs more often when dealing with transliterationsthe reason is that characters composing a transliterated foreign named entity usually does not preserve their meanings they are just used to compose a chinese word that sounds similar to the original word much more like using a character segmentation of english wordsanother example of this kind is the mt system using charbased segmentation tends to translate some characters individually and drop others while the system using maxmatch segmentation is more likely to translate it rightthe second advantage of having a segmenter like the lexiconbased maxmatch is that it helps the reordering modelresults in table 1 are with the linear distortion limit defaulted to 6since words in charbased are inherently shorter than maxmatch having the same distortion limit means charbased is limited to a smaller context than maxmatchto make a fairer comparison we set the linear distortion limit in moses to unlimited removed the lexicalized reordering model and retested both systemswith this setting maxmatch is 046 bleu point better than charbased on mt03this result suggests that having word segmentation does affect how the reordering model works in a phrasebased systemhypothesis 2better segmentation performance should lead to better mt performance observation we have shown in hypothesis 1 that it is helpful to segment chinese texts into words firstin order to decide a segmenter to use the most 
intuitive thing to do is to find one that gives higher f measure on segmentationour experiments show that higher f measure does not necessarily lead to higher bleu scorein order to contrast with the simple maximum matching lexiconbased model we built another segmenter with a crf modelcrf is a statistical sequence modeling framework introduced by lafferty et al and was first used for the chinese word segmentation task by peng et al who treated word segmentation as a binary decision taskwe optimized the parameters with a quasinewton method and used gaussian priors to prevent overfittingthe probability assigned to a label sequence for a particular sequence of characters by a crf is given by the equation x is a sequence of t unsegmented characters z is the partition function that ensures that equation 1 is a probability distribution fkokay1 is a set of feature functions and y is the sequence of binary predictions for the sentence where the prediction yt 1 indicates the tth character of the sequence is preceded by a space and where yt 1 indicates there is nonewe trained a crf model with a set of basic features character identity features of the current character previous character and next character and the conjunction of previous and current characters in the zeroorder templateswe will refer to this segmenter as crfbasictable 3 shows that the featurebased segmenter crfbasic outperforms the lexiconbased maxmatch by 59 relative f measurecomparing the oov recall rate and the iv recall rate the reason is that crfbasic wins a lot on the oov recall ratewe see that a featurebased segmenter like crfbasic clearly has stronger ability to recognize unseen wordson mt performance however crfbasic is 038 bleu points worse than maxmatch on the test setin section 32 we will look at how the mt training and test data are segmented by each segmenter and provide statistics and analysis for why certain segmenters are better than othersin section 31 we have refuted two hypothesesnow we know that phrase table construction does not fully capture what a word segmenter can dothus it is useful to have word segmentation for mt a higher f measure segmenter does not necessarily outperforms on the mt taskto understand what factors other than segmentation f measure can affect mt performance we introduce another crf segmenter crflex that includes lexiconbased features by using external lexiconsmore details of crflex will be described in section 51from table 3 we see that the segmentation f measure is that crflex crfbasic maxmatchand now we know that the better segmentation f measure does not always lead to better mt bleu score because of in terms of mt performance crflex maxmatch crfbasicin table 4 we list some statistics of each segmenter to explain this phenomenonfirst we look at the lexicon size of the mt training and test datawhile segmenting the mt data crfbasic generates an mt training lexicon size of 583k unique word tokens and maxmatch has a much smaller lexicon size of 39kcrflex performs best on mt but the mt training lexicon size and test lexicon oov rate is still pretty high compared to maxmatchonly examining the mt training and test lexicon size still does not fully explain why crflex outperforms maxmatchmaxmatch generates a smaller mt lexicon and lower oov rate but for mt it was not better than crflex which has a bigger lexicon and higher oov ratein order to understand why maxmatch performs worse on mt than crflex but better than crfbasic we use conditional entropy of segmentation variations to measure 
consistencywe use the gold segmentation of the sighan test data as a guidelinefor every work type wi we collect all the different pattern variations vij in the segmentation we want to examinefor example for a word abc in the gold segmentation we look at how it is segmented with a segmenterthere are many possibilitiesif we use cx and cy to indicate other chinese characters and to indicate white spaces cx abc cy is the correct segmentation because the three characters are properly segmented from both sides and they are concatenated with each otherit can also be segmented as cx a because cy which means although the boundary is correct the first character is separated from the other twoor it can be segmented as cxa bccy which means the first character was actually part of the previous word while because are the beginning of the next wordevery time a particular word type wi appears in the text we consider a segmenter more consistent if it can segment wi in the same way every time but it does not necessarily have to be the same as the gold standard segmentationfor example if abc is a chinese person name which appears 100 times in the gold standard data and one segmenter segment it as cx a because cy 100 times then this segmenter is still considered to be very consistent even if it does not exactly match the gold standard segmentationusing this intuition the conditional entropy of segmentation variations h is defined as follows now we can look at the overall conditional entropy h to compare the consistency of each segmenterin table 4 we can see that even though maxmatch has a much smaller mt lexicon size than crflex when we examine the consistency of how maxmatch segments in context we find the conditional entropy is much higher than crflexwe can also see that crfbasic has a higher conditional entropy than the other twothe conditional entropy h shows how consistent each segmenter is and it correlates with the mt performance in table 4note that consistency is only one of the competing factors of how good a segmentation is for mt performancefor example a characterbased segmentation will always have the best consistency possible since every word abc will just have one pattern cx a b c cybut from section 31 we see that charbased performs worse than both maxmatch and crfbasic on mt because having word segmentation can help the granularity of the chinese lexicon match that of the english lexiconin conclusion for mt performance it is helpful to have consistent segmentation while still having a word segmentation matching the granularity of the segmented chinese lexicon and the english lexiconwe have shown earlier that wordlevel segmentation vastly outperforms character based segmentation in mt evaluationssince the word segmentation standard under consideration was neither specifically designed nor optimized for mt it seems reasonable to investigate whether any segmentation granularity in continuum between characterlevel and ctbstyle segmentation is more effective for mtin this section we present a technique for directly optimizing a segmentation propertycharacters per token average for translation quality which yields significant improvements in mt performancein order to calibrate the average word length produced by our crf segmenterie to adjust the rate of word boundary predictions we apply a relatively simple technique originally devised for adjusting the precisionrecall tradeoff of any sequential classifierspecifically the weight vector w and feature vector of a trained linear sequence classifier are 
augmented at test time to include new classconditional feature functions to bias the classifier towards particular class labelsin our case since we wish to increase the frequency of word boundaries we add a feature function its weight 10 controls the extent of which the classifier will make positive predictions with very large positive 4 values causing only positive predictions and large negative values effectively disabling segmentation boundariestable 5 displays how changes of the bias parameter λ0 affect segmentation granularity4 since we are interested in analyzing the different regimes of mt performance between ctb segmentation and characterbased we performed a grid search in the range between λ0 0 and λ0 32 for each λ0 value we ran an entire mt training and testing cycle ie we resegmented the entire training data ran giza acquired phrasal translations that abide to this new segmentation and ran mert and evaluations on segmented data using the same 4note that characterpertoken averages provided in the table consider each nonchinese word as one character since our segmentation postprocessing prevents these tokens from being segmented tive bias values slightly improves segmentation performancewe also notice that raising λ0 yields relatively consistent improvements in mt performance yet causes segmentation performance to be increasingly worsewhile the latter finding is not particularly surprising it further confirms that segmentation and mt evaluations can yield rather different outcomeswe chose the λ0 2 on another dev set on the test set mt05 λ0 2 yields 3147 bleu which represents a quite large improvement compared to the unbiased segmenter further reducing the average number of characters per token yields gradual drops of performance until characterlevel segmentation here are some examples of how setting λ0 2 shortens the words in a way that can help mtin section 31 we showed that a statistical sequence model with rich features can generalize better than maximum matching segmentershowever it also inconsistently overgenerates a big mt training lexicon and oov words in mt test data and thus causes a problem for mtto improve a featurebased sequence model for mt we propose 4 different approaches to deal with named entities optimal length of word for mt and joint search for segmentation and mt decodingone way to improve the consistency of the crf model is to make use of external lexicons to add lexiconbased featuresall the features we use are listed in table 6our linguistic features are adopted from and there are three categories of features character identity ngrams morphological and character reduplication featuresour lexiconbased features are adopted from where lbegin lmid and lend represent the maximum length of words found in a lexicon that contain the current character as either the first middle or last character and we group any length equal or longer than 6 togetherthe linguistic features help capturing words that were unseen to the segmenter while the lexiconbased features constrain the segmenter with external knowledge of what sequences are likely to be wordswe built a crf segmenter with all the features listed in table 6 the external lexicons we used for the lexiconbased features come from various sources including named entities collected from wikipedia and the chinese section of the un website named entities collected by harbin institute of technology the adso dictionary emm news explorer online chinese tools online dictionary from peking university and hownetthere are 423224 
distinct entries in all the external lexiconsthe mt lexicon consistency of crflex in table 4 shows that the mt training lexicon size has been reduced by 295 and the mt test data oov rate is reduced by 341named entities are an important source for oov words and in particular are ones which it is bad to break into pieces therefore we use the proper noun partofspeech tag information from ctb to extend the label sets of our crf model from 2 to 4 this is similar to the allatonce characterbased pos tagging in except that we are only tagging proper nounswe call the 4label extension crflexnrthe segmentation and mt performance of crflexnr is listed in table 7with the 4label extension the oov recall rate improved by 329 while the iv recall rate stays the samesimilar to we found the overall f measure only goes up a tiny bit but we do find a significant oov recall rate improvementon the mt performance crflexnr has a 032 bleu gain on the test set mt05in addition to the bleu improvement crflexnr also provides extra information about proper nouns which can be combined with postprocessing named entity translation modules to further improve mt performancein this paper we investigated what segmentation properties can improve machine translation performancefirst we found that neither characterbased nor a standard word segmentation standard are optimal for mt and show that an intermediate granularity is much more effectiveusing an already competitive crf segmentation model we directly optimize segmentation granularity for translation quality and obtain an improvement of 073 bleu point on mt05 over our lexiconbased segmentation baselinesecond we augment our crf model with lexicon and proper noun features in order to improve segmentation consistency which provide a 032 bleu point improvementthe authors would like to thank menqgiu wang and huihsin tseng for useful discussionsthis paper is based on work funded in part by the defense advanced research projects agency through ibm
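the consistency measure used above, the conditional entropy of segmentation variations, lost its formula in extraction. a sketch of one way to compute it, assuming the variation patterns for each gold word type have already been collected as counts:

```python
import math
from collections import Counter

def segmentation_variation_entropy(variations):
    """Conditional entropy H(V | W) of segmentation variations.

    variations: {word_type: Counter({variation_pattern: count})}, where a
    variation pattern records how a gold-standard word type was segmented
    in context.  Lower values mean the segmenter treats the same word type
    more consistently.
    """
    total = sum(sum(c.values()) for c in variations.values())
    h = 0.0
    for word, counts in variations.items():
        n_w = sum(counts.values())
        p_w = n_w / total
        # entropy of the variation distribution for this word type
        h_w = -sum((c / n_w) * math.log(c / n_w, 2) for c in counts.values())
        h += p_w * h_w
    return h

# toy usage: one word type always segmented the same way, one split two ways
variations = {
    "abc": Counter({"abc": 100}),
    "xyz": Counter({"xyz": 50, "x yz": 50}),
}
print(segmentation_variation_entropy(variations))  # entropy comes only from 'xyz'
```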
W08-0336
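the crf probability in the text above also came through garbled. in standard linear-chain form (following lafferty et al) it reads roughly as follows, where y is the sequence of per-character boundary labels, x the character sequence, f_k the feature functions and z_w(x) the partition function; the paper's exact feature templates may differ:

```latex
p_w(\mathbf{y} \mid \mathbf{x}) \;=\;
  \frac{1}{Z_w(\mathbf{x})}
  \exp\!\Big( \sum_{t=1}^{T} \sum_{k} w_k \, f_k(\mathbf{x}, y_{t-1}, y_t, t) \Big),
\qquad
Z_w(\mathbf{x}) \;=\;
  \sum_{\mathbf{y}'} \exp\!\Big( \sum_{t=1}^{T} \sum_{k} w_k \, f_k(\mathbf{x}, y'_{t-1}, y'_t, t) \Big)
```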
optimizing chinese word segmentation for machine translation performance. previous work has shown that chinese word segmentation is useful for machine translation to english yet the way different segmentation strategies affect mt is still poorly understood. in this paper we demonstrate that optimizing segmentation for an existing segmentation standard does not always yield better mt performance. we find that other factors such as segmentation consistency and granularity of chinese words can be more important for machine translation. based on these findings we implement methods inside a conditional random field segmenter that directly optimize segmentation granularity with respect to the mt task providing an improvement of 0.73 bleu. we also show that improving segmentation consistency using external lexicon and proper noun features yields a 0.32 bleu increase. we develop the crfbased stanford chinese segmenter that is trained on the segmentation of the chinese treebank for consistency. we enhance a crf segmentation model in mt tasks by tuning the word granularity and improving the segmentation consistency
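a minimal sketch of the lexicon-based maximum-matching baseline (maxmatch) described in the text above: greedily match the longest lexicon word left to right, falling back to a single character. the lexicon is assumed to be a plain set of words and max_len an assumed cap on word length:

```python
def maxmatch_segment(sentence, lexicon, max_len=6):
    """Greedy left-to-right longest-match word segmentation."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in lexicon:
                # fall back to a single character if no longer word matches
                words.append(candidate)
                i += length
                break
    return words

# toy usage with placeholder "characters"
lexicon = {"ab", "abc", "cd"}
print(maxmatch_segment("abcde", lexicon))  # ['abc', 'd', 'e']
```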
parallel implementations of word alignment tool training word alignment models on large corpora is a very timeconsuming processes this paper describes two parallel implementations of giza that accelerate this word alignment process one of the implementations runs on computer clusters the other runs on multiprocessor system using multithreading technology results show a nearlinear speedup according to the number of cpus used and alignment quality is preserved training stateoftheart phrasebased statistical machine translation systems requires several stepsfirst word alignment models are trained on the bilingual parallel training corporathe most widely used tool to perform this training step is the wellknown gizathe resulting word alignment is then used to extract phrase pairs and perhaps other information to be used in translation systems such as block reordering modelsamong the procedures more than 23 of the time is consumed by word alignment speeding up the word alignment step can dramatically reduces the overall training time and in turn accelerates the development of smt systemswith the rapid development of computing hardware multiprocessor servers and clusters become widely availablewith parallel computing processing time can often be cut down by one or two orders of magnitudetasks which require several weeks on a single cpu machine may take only a few hours on a clusterhowever giza was designed to be singleprocess and singlethreadto make more efficient use of available computing resources and thereby speed up the training of our smt system we decided to modify giza so that it can run in parallel on multiple cpusthe word alignment models implemented in giza the socalled ibm and hmm alignment models are typical implementation of the them algorithm that is to say that each of these models run for a number of iterationsin each iteration it first calculates the best word alignment for each sentence pairs in the corpus accumulating various counts and then normalizes the counts to generate the model parameters for the next iterationthe word alignment stage is the most timeconsuming part especially when the size of training corpus is largeduring the aligning stage all sentences can be aligned independently of each other as model parameters are only updated after all sentence pairs have been alignedmaking use of this property the alignment procedure can be parallelizedthe basic idea is to have multiple processes or threads aligning portions of corpus independently and then merge the counts and perform normalizationthe paper implements two parallelization methodsthe pgiza implementation which is based on uses multiple aligning processeswhen all the processes finish a master process starts to collect the counts and normalizes them to produce updated modelschild processes are then restarted for the new iterationthe pgiza does not limit the number of cpus being used whereas it needs to transfer large amounts of data between processestherefore its performance also depends on the speed of the network infrastructurethe mgiza implementation on the other hand starts multiple threads on a common address space and uses a mutual locking mechanism to synchronize the access to the memoryalthough mgiza can only utilize a single multiprocessor computer which limits the number of cpus it can use it avoids the overhead of slow network iothat makes it an equally efficient solution for many tasksthe two versions of alignment tools are available online at httpwwwcscmueduqinggizathe paper will be organized as 
follows section 2 provides the basic algorithm of giza and section 3 describes the pgiza implementationsection 4 presents the mgiza implementation followed by the profile and evaluation results of both systems in section 5finally conclusion and future work are presented in section 6giza aligns words based on statistical modelsgiven a source string fj1 f1 fj fj and a target string ei1 e1 ei ei an alignment a of the two strings is defined as a c j 1ji 0i in case that i 0 in some e a it represents that the source word j aligns to an empty target word e0in statistical world alignment the probability of a source sentence given target sentence is written as in which aj1 denotes the alignment on the sentence pairin order to express the probability in statistical way several different parametric forms of p pθ have been proposed and the parameters θ can be estimated using maximum likelihood estimation on a training corpusthe best alignment of the sentence pair giza is an implementation of ml estimators for several statistical alignment models including ibm model 1 through 5 hmm and model 6 although ibm model 5 and model 6 are sophisticated they do not give much improvement to alignment qualityibm model 2 has been shown to be inferior to the hmm alignment model in the sense of providing a good starting point for more complex models so in this paper we focus on model 1 hmm model 3 and 4when estimating the parameters the them algorithm is employedin the estep the counts for all the parameters are collected and the counts are normalized in mstepfigure 1 shows a highlevel view of the procedure in gizatheoretically the estep requires summing over all the alignments of one sentence pair which could be j alignments in totalwhile presents algorithm to implement counting over all the alignments for model 12 and hmm it is prohibitive to do that for models 3 through 6therefore the counts are only collected for a subset of alignmentsfor example suggested two different methods using only the alignment with the maximum probability the socalled viterbi alignment or generating a set of alignments by starting from the viterbi alignment and making changes which keep the alignment probability highthe later is called pegging proposed to use the neighbor alignments of the viterbi alignment and it yields good results with a minor speed overheadduring training we starts from simple models use the simple models to bootstrap the more complex onesusually people use the following sequence model 1 hmm model 3 and finally model 4table 1 lists all the parameter tables needed in each stage and their data structures1among these models the lexicon probability table is the largestit should contain all the p entries which means the table will have an entry for every distinct source and target word pair fi7 ej that cooccurs in at least one sentence pair in the corpushowever to keep the size of this table manageable low probability entries are prunedstill when training the alignment models on large corpora this statistical lexicon often consumes several gigabytes of memorythe computation time of aligning a sentence pair obviously depends on the sentence lengtheg for ibm 1 that alignment is o for the hmm alignment it is o with j the number of words in the source sentence and i the number of words in the target sentencehowever given that the maximum sentence length is fixed the time complexity of the estep grows linearly with the number of sentence pairsthe time needed to perform the mstep is dominated by renormalizing the lexicon 
probabilitiesthe worst case time complexity is o where vf is the size of the source vocabulary and ve is the size of the target vocabularytherefore the time complexity of the mstep is polynomial in the vocabulary size which typically grows logarithmic in corpus sizeas a result the alignment stage consumes most of the overall processing time when the number of sentences is largebecause the parameters are only updated during the mstep it will be no difference in the result whether we perform the word alignment in the estep sequentially or in parallel2these character2however the rounding problem will make a small differistics make it possible to build parallel versions of gizafigure 2 shows the basic idea of parallel gizawhile working on the required modification to giza to run the alignment step in parallel we identified a bug which needed to be fixedwhen training the hmm model the matrix for the hmm trellis will not be initialized if the target sentence has only one wordtherefore some random numbers are added to the countsthis bug will also crash the system when linking against pthread librarywe observe different alignment and slightly lower perplexity after fixing the bug 3a natural idea of parallelizing giza is to separate the alignment and normalization procedures and spawn multiple alignment processeseach process aligns a chunk of the prepartitioned corpus and outputs partial countsa master process takes these counts and combines them and produces the normalized model parameters for the next iterationthe architecture of pgiza is shown in figure 3 ence in the results even when processing the sentences sequentially but in different orderin order to ensure that the next iteration has the correct model all the information that may affect the alignment needs to be stored and sharedit includes model files and statistics over the training corpustable 1 is a summary of tables used in each modelin addition to these models the summation of sentence weight of the whole corpus should be storedgiza allows assigning a weight wi for each sentence pair si sto indicate the number of occurrence of the sentence pairthe weight is normalized by pi wi ei wi so that ei pi 1then the pi serves as a prior probability in the objective functionas each child processes only see a portion of training data it is required to calculate and share the ei wi among the children so the values can be consistentthe tables and count tables of the lexicon probabilities can be extremely large if not pruned before being written outpruning the count tables when writing them into a file will make the result slightly differenthowever as we will see in section 5 the difference does not hurt translation performance significantlytable 2 shows the size of count tables written by each child process in an experiment with 10 million sentence pairs remember there are more than 10 children writing the the count tables and the master would have to read all these tables the amount of io is significantly reduced by pruning the count tablesthe other issue is the master control scriptthe script should be able to start processes in other nodestherefore the implementation varies according to the software environmentwe implemented three versions of scripts based on secure she will condor and mauialso the master must be notified when a child process finishesin our implementation we use signal files in the network file systemwhen the child process finishes it will touch a predefined file in a shared folderthe script keeps watching the folder and when 
all the children have finished the script runs the normalization process and then starts the next iterationone of the advantages of pgiza is its scalability it is not limited by the number of cpus of a single machineby adding more nodes the alignment speed can be arbitrarily fast4also by splitting the corpora into multiple segments each child process only needs part of the lexicon which saves memorythe other advantage is that it can adopt different resource management systems such as condor and mauitorqueby splitting the corpus into very small segments and submitting them to a scheduler we can get most out of clustershowever pgiza also has significant drawbacksfirst of all each process needs to load the models of the previous iteration and store the counts of the current step on shared storagetherefore io becomes a bottleneck especially when the number of child processes is largealso the normalization procedure needs to read all the count files from network storageas the number of child processes increases the time spent on readingwriting will also increasegiven the fact that the io demand will not increase as fast as the size of corpus grows pgiza can only provide significant speed up when the size of each training corpus chunk is large enough so that the alignment time is significantly longer than normalization timealso one obvious drawback of pgiza is its complexity in setting up the environmentone has to write scripts specially for the schedulerresource management softwarebalancing the load of each child process is another issueif any one of the corpus chunks takes longer to complete the master has to wait for itin other words the speed of pgiza is actually determined by the slowest child processanother implementation of parallelism is to run several alignment threads in a single processthe threads share the same address space which means it can access the model parameters concurrently without any io overheadthe main thread spawns a number of threads using the same entry functioneach thread will ask a provider for the next sentence pairthe sentence provider is synchronizedthe request of sentences are queued and each sentence pair is guaranteed to be assigned to only one threadthe threads do alignment in their own stacks and read required probabilities from global parameter tables such as the ttable which reside on the heapbecause no update on these global tables will be performed during this stage the reading can be concurrentafter aligning the sentence pairs the counts need to be collectedfor hmmtable and d4table which use maps as their data structure we cannot allow concurrent readwrite to the table because the map structure may be changed when inserting a new entryso we must either put mutual locks to postpone reading until writing is complete or duplicate the tables for each thread and merge them afterwardslocking can be very inefficient because it may block other threads so the duplicatemerge method is a much better solutionhowever for the ttable the size is too large to have multiple copiesinstead we put a lock on every target word so only when two thread try to write counts for the same target word will a collisions happenwe also have to put mutual locks on the accumulators used to calculate the alignment perplexityeach thread outputs the alignment into its own output filesentences in these files are not in sequential ordertherefore we cannot simply concatenate them but rather have to merge them according to the sentence idbecause all the threads within a process share the same 
address space no data needs to be transferred which saves the io time significantlymgiza is more resourcethrifty comparing to pgiza it do not need to load copies of models into memoryin contrast to pgiza mgiza has a much simpler interface and can be treated as a dropin replacement for giza except that one needs to run a script to merge the final alignment filesthis property makes it very simple to integrate mgiza into machine translation packages such as mosesone major disadvantage of mgiza is also obvious lack of scalabilityaccelerating is limited by the number of cpus the node hascompared to pgiza on the speedup factor by each additional cpu mgiza also shows some deficiencydue to the need for synchronization there are always some cpu time wasted in waitingfor pgiza we performed training on an chineseenglish translation taskthe dataset consists of approximately 10 million sentence pairs with 231 million chinese words and 258 million english wordswe ran both giza and pgiza on the same training corpus with the same parameters then ran pharaoh phrase extraction on the resulting alignmentsfinally we tuned our translation systems on the nist mt03 test set and evaluate them on nist mt06 test setthe experiment was performed on a cluster of several xeon cpus the storage of corpora and models are on a central nfs serverthe pgiza uses condor as its scheduler splitting the training data into 30 fragments and ran training in both direction concurrentlythe scheduler assigns 11 cpus on average to the taskswe ran 5 iterations of model 1 training 5 iteration of hmm 3 model 3 iterations and 3 model 4 iterationsto compare the performance of system we recorded the total training time and the bleu score which is a standard automatic measurement of the translation qualitythe training time and bleu scores are shown in table 4 5 the results show similar bleu scores when using giza and pgiza and a 4 times speed upalso we calculated the time used in normalizationthe average time of each normalization step is shown in table 5as we can see if we rule out the time spent in normalization the speed up is almost linearhigher order models require less time in the normalization step mainly due to the fact that the lexicon becomes smaller and smaller with each models and the wsmt news commentary devtest set 2007 then we used the first parameter set to decode wsmt 2006 europal test set and used the second on wsmt news commentary test set 2007 6table 6 shows the comparison of bleu scores of both systems listed in table 6 note that when decoding using the phrase table resulting from training with mgiza we used the parameter tuned for a phrase table generated from giza alignment which may be the because of lower bleu score in the tuning sethowever the major difference in the training comes from fixing the hmm bug in giza as mentioned beforeto profile the speed of the system according to the number of cpus it use we ran mgiza on 1 2 and 4 cpus of the same speedwhen it runs on 1 cpu the speed is the same as for the original gizatable 7 and figure 5 show the running time of each stage when using 4 cpus the system uses only 41 time comparing to one threadcomparing to pgiza mgiza does not have as high an acceleration ratethat is mainly because of the required locking mechanismhowever the acceleration is also significant especially for small training corpora as we will see in next experimentin order to compare the acceleration rate of pgiza and mgiza we also ran pgiza in the same dataset as described in the previous section with 4 
childrento avoid the delay of starting the children processes we chose to use ssh to start remote tasks directly instead of using schedulersthe results are listed in table 8there is nearly no speedup observed and in model 1 training we observe a loss in the speedagain by investigating the time spent in normalization the phenomenon can be explained even after ruling out the normalization time the speed up factor is smaller than mgizathat is because of reading models when child processes start and writing models when child processes finishfrom the experiment we can conclude that pgiza is more suited to train on large corpora than on small or moderate size corporait is also important to determine whether to use pgiza rather than mgiza according to the speed of network storage infrastructureto compare the difference in final viterbi alignment output we counted the number of sentences that have different alignments in these systemswe use giza with the bug fixed as the referencethe results of all other systems are listed in table 10 from the comparison we can see that pgiza has larger difference in the generated alignmentthat is partially because of the pruning on count tablesto also compare the alignment score in the different systemsfor each sentence pair i 1 2 n assume two systems b and c have viterbi alignment scores sb sz we define the residual r as the residuals of the three systems are listed in table 11the residual result shows that the mgiza has a very small difference in alignment scores while pgiza has a larger residualthe results of experiments show the efficiency and also the fidelity of the alignment generated by the two versions of parallel gizahowever there are still small differences in the final alignment result especially for pgizatherefore one should consider which version to choose when building systemsgenerally speaking mgiza provides smoother integration into other packages easy to set up and also more precisepgiza will not perform as good as mgiza on smallsize corporahowever pgiza has good performance on large data and should be considered when building very large scale systemsthe paper describes two parallel implementations of the wellknown and widely used word alignment tool gizapgiza does alignment on a number of independent processes uses network file system to collect counts and performs normalization by a master processmgiza uses a multithreading mechanism to utilize multiple cores and avoid network transportationthe experiments show that the two implementation produces similar results with original giza but lead to a significant speedup in the training processwith compatible interface mgiza is suitable for a dropin replacement for giza while pgiza can utilize huge computation resources which is suitable for building large scale systems that cannot be built using a single machinehowever improvements can be made on both versionsfirst a combination of the two implementation is reasonable ie running multithreaded child processes inside pgizas architecturethis could reduce the io significantly when using the same number of cpussecondly the mechanism of assigning sentence pairs to the child processes can be improved in pgizaa server can take responsibility to assign sentence pairs to available child processes dynamicallythis would avoid wasting any computation resource by waiting for other processes to finishfinally the huge model files which are responsible for a high io volume can be reduced by using binary formatsa first implementation of a simple binary format for 
the ttable resulted in files only about 13 in size on disk compared to the plain text formatthe recent development of mapreduce framework shows its capability to parallelize a variety of machine learning algorithms and we are attempting to port word alignment tools to this frameworkcurrently the problems to be addressed is the io bottlenecks and memory usage and an attempt to use distributed structured storage such as hypertable to enable fast access to large tables and also performing filtering on the tables to alleviate the memory issue
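a toy sketch of the multithreaded e-step described above: worker threads pull sentence pairs from a synchronized queue, accumulate counts in thread-local tables, and the tables are merged afterwards (the duplicate-and-merge scheme). this only illustrates the data flow on a toy ibm model 1 without the null word, not mgiza's actual c++ implementation, and cpython's gil means it would not actually run faster:

```python
import threading, queue
from collections import defaultdict

def e_step_threaded(corpus, t_table, num_threads=4):
    """corpus: list of (source_words, target_words) pairs.
    t_table: dict {(f, e): prob}.  Returns merged counts for the M-step."""
    jobs = queue.Queue()
    for pair in corpus:
        jobs.put(pair)

    local_counts = [defaultdict(float) for _ in range(num_threads)]
    local_totals = [defaultdict(float) for _ in range(num_threads)]

    def worker(tid):
        while True:
            try:
                f_sent, e_sent = jobs.get_nowait()  # synchronized sentence provider
            except queue.Empty:
                return
            for f in f_sent:
                norm = sum(t_table.get((f, e), 1e-12) for e in e_sent)
                for e in e_sent:
                    frac = t_table.get((f, e), 1e-12) / norm
                    local_counts[tid][(f, e)] += frac
                    local_totals[tid][e] += frac

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_threads)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()

    # merge the thread-local tables (the "duplicate and merge" scheme)
    counts, totals = defaultdict(float), defaultdict(float)
    for tid in range(num_threads):
        for k, v in local_counts[tid].items():
            counts[k] += v
        for k, v in local_totals[tid].items():
            totals[k] += v
    return counts, totals
```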
W08-0509
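the alignment definitions near the start of the text above were garbled by extraction. in the standard notation of och and ney they read roughly:

```latex
A \subseteq \{(j, i) : j = 1, \dots, J;\; i = 0, \dots, I\},
\qquad
\Pr(f_1^J \mid e_1^I) \;=\; \sum_{a_1^J} p_\theta(f_1^J, a_1^J \mid e_1^I),
\qquad
\hat{a}_1^J \;=\; \operatorname*{argmax}_{a_1^J}\; p_{\hat\theta}(f_1^J, a_1^J \mid e_1^I)
```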
parallel implementations of word alignment tool. training word alignment models on large corpora is a very timeconsuming process. this paper describes two parallel implementations of giza that accelerate this word alignment process. one of the implementations runs on computer clusters the other runs on a multiprocessor system using multithreading technology. results show a nearlinear speedup according to the number of cpus used and alignment quality is preserved. we use a multithreaded version of the giza tool. this speeds up the process and corrects an error of giza that can appear with rare words
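the pgiza design described above has each child process write partial counts which a master then merges and normalizes. a minimal sketch of that master step for the lexicon table, with counts shown as in-memory dicts rather than the count files pgiza actually reads, and a pruning threshold that is an illustrative assumption:

```python
from collections import defaultdict

def merge_and_normalize(partial_counts, prune_threshold=1e-7):
    """partial_counts: list of dicts {(f, e): fractional count}, one per child.
    Returns a normalized lexicon table t(f | e), pruning tiny entries."""
    merged = defaultdict(float)
    for chunk in partial_counts:
        for (f, e), c in chunk.items():
            merged[(f, e)] += c

    totals = defaultdict(float)
    for (f, e), c in merged.items():
        totals[e] += c

    t_table = {}
    for (f, e), c in merged.items():
        p = c / totals[e]
        if p > prune_threshold:   # pruning keeps the table (and IO) manageable
            t_table[(f, e)] = p
    return t_table

# toy usage: two children each processed part of the corpus
child1 = {("haus", "house"): 2.0, ("haus", "building"): 0.5}
child2 = {("haus", "house"): 1.5}
print(merge_and_normalize([child1, child2]))
```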
the stanford typed dependencies representation this paper examines the stanford typed dependencies representation which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding for such purposes we argue that dependency schemes must follow a simple design and provide semantically contentful information as well as offer an automatic procedure to extract the relations we consider the underlying design principles of the stanford scheme from this perspective and compare it to the gr and parc representations finally we address the question of the suitability of the stanford scheme for parser evaluation the stanford typed dependencies representation was designed to provide a simple description of the grammatical relationships in a sentence that could easily be understood and effectively used by people without linguistic expertise who wanted to extract textual relationsthe representation was not designed for the purpose of parser evaluationnevertheless we agree with the widespread sentiment that dependencybased evaluation of parsers avoids many of the problems of the traditional parseval measures and to the extent that the stanford dependency representation is an effective representation for the tasks envisioned it is perhaps closer to an appropriate taskbased evaluation than some of the alternative dependency representations availablein this paper we examine the representation and its underlying design principles look at how this representation compares with other dependency representations in ways that reflect the design principles and consider its suitability for parser evaluationa major problem for the natural language processing community is how to make the very impressive and practical technology which has been developed over the last two decades approachable to and usable by everyone who has text understanding needsthat is usable not only by computational linguists but also by the computer science community more generally and by all sorts of information professionals including biologists medical researchers political scientists law firms business and market analysts etcthinking about this issue we were struck by two factsfirst we noted how frequently wordnet gets used compared to other resources such as framenet or the penn treebank we believe that much of the explanation for this fact lies in the difference of complexity of the representation used by the resourcesit is easy for users not necessarily versed in linguistics to see how to use and to get value from the straightforward structure of wordnetsecond we noted the widespread use of minipar and the link parser this clearly shows that it is very easy for a nonlinguist thinking in relation extraction terms to see how to make use of a dependency representation and the availability of high quality easytouse tools is essential for driving broader use of nlp tools1 1on the other hand evaluation seems less important to the best of our knowledge there has never been a convincing and thorough evaluation of either minipar or the link grammar this paper advocates for the stanford typed dependencies representation being a promising vehicle for bringing the breakthroughs of the last 15 years of parsing research to this broad potential user communitythe representation aims to provide a simple habitable designall information is represented as binary relationsthis maps straightforwardly on to common representations of potential users including the logic forms of 
moldovan and rus 2 semantic web resource description framework triples and graph representations unlike many linguistic formalisms excessive detail is viewed as a defect information that users do not understand or wish to process detracts from uptake and usabilitythe usercentered design process saw the key goal as representing semantically contentful relations suitable for relation extraction and more general information extraction usesthe design supports this use by favoring relations between content words by maintaining semantically useful closed class word information while ignoring linguistic decisions less relevant to users and by not representing less used material about linguistic features such as tense and agreementthe sd scheme thus provides a semantic representation simple and natural enough for people who are not linguists but can benefit from nlp toolsthe style of the sd representation bears a strong intellectual debt to the framework of lexicalfunctional grammar and more directly it owes a debt to both the sets of grammatical relations and the naming defined in two representations that follow an lfg style the gr and parc schemesthese were used as a starting point for developing the stanford dependencies but where the sd scheme deviates from gr parc and its lfg roots is that it has been designed to be a practical model of sentence representation particularly in the context of relation extraction taskssd makes available two options suited to different use cases in one every word of the original sentence is present as a node with relations between it and other nodes whereas in the latter certain words are collapsed out of the representation making such changes as turning prepositions into relationsthe former is useful when a close parallelism to the source text words must be maintained whereas the latter is intended to be more useful for relation extraction and shallow language understanding taskshere we discuss only the latter representation see for a discussion of both options and the precise relationship between themthe intended use cases of usability by people who are not linguists and suitability for relation extraction applications led sd to try to adhere to the following design principles we illustrate many of them in the rest of this section using example sentences which were made available for the parser evaluation shared taskthe grammatical relations of sd are arranged in a hierarchy rooted with the most generic relation dependentthe hierarchy contains 56 grammatical relationswhen the relation between a head and its dependent can be identified more precisely relations further down in the hierarchy are used but when it is unclear more generic dependencies are possible for example the dependent relation can be specialized to aux arg or mod the arg relation is further divided into the subj relation and the comp relation and so onthe backbone of this hierarchy is quite similar to that in gr but there are some crucial differencesthe sd scheme is not concerned with the argumentadjunct distinction which is largely useless in practicein contrast npinternal relations are an inherent part of corpus texts and are critical in realworld applicationsthe sd scheme therefore includes many relations of this kind appos nn num number and abbrev etcfor instance in the sentence i feel like a little kid says a gleeful alex de castro a car salesman who has stopped by a workout of the suns to slip six campaneris cards to the great man himself to be autographed we obtain the following relations 
under the sd representation sd appos num nn the numeric modifier relation between cards and six is also standard in the parc and gr schemesparc provides an apposition relation between salesman and alex de castro whereas gr only identifies salesman as a text adjunct of castrobut on the whole sd makes more finegrained distinctions in the relations which are needed in practicethe adjunct dependency of the parc scheme lumps together different relationsfor example the adjectival modifier gleeful in the sentence above will not be marked distinctively from the preposition modifying workout nor from the relation between the verbs stop and slip parc adjunct adjunct adjunct adjunct the sd output for the relations between these words looks as follows sd amod amod xcomp prep of the comparison between the two outputs shows that sd proposes a larger set of dependencies capturing relation differences which can play a role in applications while sticking to notions of traditional grammar the sd scheme also chooses content words as heads of the dependencies auxiliaries complementizers and so on are dependents of themthis choice in design is driven by the kind of information that is useful for applicationsfor instance in the sentence considered as a whole mr lane said the filings required under the proposed rules will be at least as effective if not more so for investors following transactions effective is chosen as the head of the quoted phrasethis enables the representation to have a direct dependency between the key content words effective and filingssuch a link is more difficult to infer from the gr scheme where be is chosen as the headhowever the relation between effective and filings is key to extracting the gist of the sentence semantics and it is therefore important for applications to be able to retrieve it easilyalso in the case of structures involving copular verbs a direct link between the subject and the complement enables equivalent representations across languages such parallel representations should presumably help machine translation and this was a further motivation for choosing content words as headsanother instance where direct links between content words is useful is the case of prepositional complementsthe sd scheme offers the option of collapsing dependencies involving a preposition in the example above instead of having two relations adjunct and obj as in parc or ncmod and dobj as in gr sd provides a direct relation between the content words prep ofprepositions often work as role markers and this type of link facilitates the extraction of how the two content words are related and thus these links are often used by downstream applications the usefulness of the representation is exemplified in the sentence a similar technique is almost impossible to apply to other crops such as cotton soybeans and rice for which sd gives direct links between the entities joined through the preposition such as sd prep such as prep such as prep such as a similar collapsing treatment takes place for conjuncts consider the following sentence bell based in los angeles makes and distributes electronic computer and building products figures 1 and 2 give the full dependency output from sd and gr respectivelythe numbers after the words in the sd representation indicate the word position in the sentence3 from the sd representation one can easily see that the sentence talks about electronic products and computer products as well as building productsby collapsing the dependencies involving conjuncts the output 
produced is closer to the semantics of the sentence and this facilitates information extraction this information is not straightforwardly apparent in the gr scheme nor in the parc scheme which follows a similar treatment of conjunctsanother choice in the design has been to consistently have binary relations all the dependencies form a triple a grammatical relation holding between two words this gives uniformity to the representation and renders it very readable critical features for a usercentered designfurthermore all the information can be represented by a directed graph enabling the creation of both a limpid visual representation for humans and a canonical data structure for softwaremoreover it maps straightforwardly on to semantic web representations such as owl and rdf triples as exploited in this design choice limits the kind of information offered by the sd schemefor instance the parc scheme contains much more information gr about individual words such as verb tense and aspect noun number and person type of ne for proper nouns pronoun form adjective degree etcfor the sentence in figures 1 and 2 the following information is available for the word los angeles in the parc scheme this kind of information is indubitably valuable but is often less used in practice and does not per se pertain to dependency dataadding it lengthens an output already complex enough and impedes readability and conveniencethus sd does not provide such overwhelming detail we feel that turning prepositions into relations is useful for 98 of users 98 of the timenevertheless opting for usability in this way causes the sd scheme to sacrifice some linguistic fidelityone instance is that modifiers of prepositions are dependent on the verb and not on the preposition itselfin bill went over the river and right through the woods right will be an adverbial modifier of wentin he had laughed simultaneously mocking the stupidity of government by cosmetics and confessing that he was also a part of it just as he was part of government by voice coach and acting coach just which modifies as will be a dependent of the head of the adverbial clause ie partthis induces some distortion in the exact semantics of the sentencethe interaction between preposition collapsing and pp conjunction is another instance in which the sd treatment slightly alters the semantics of the sentenceconsider again the sentence bill went over the river and right through the woodsboth prepositions over and through are governed by the verb wentto avoid disjoint subgraphs when collapsing the relations examples like this are transformed into vp coordination which requires making a copy of the word wentthis gives the following representation which corresponds to a sentence like bill went over the river and went right through the woods sd prep over prep through conj and not collapsing the relations in such a case would prevent the alteration of the semantics but would lead to a nonuniform treatment of prepositionsuniformity is key for readability and user convenienceit seems therefore reasonable to use a representation which sacrifices the exact semantics of the original sentence by producing a sentence roughly equivalent but which ensures uniformity across relationstwo vital conditions for the success of a dependency scheme are to provide a suitable representation for users as well as a tool that is easy to usesagae et al note that the availability of an automatic procedure to convert phrase structure parses to sd is the reason for its use in evaluations of 
parsers in the biomedical domainthe primary focus of the sd scheme however has been to offer grammatical relations appropriate for endusersthe stanford parser4 comes with a tool described in which provides for the rapid extraction of the grammatical relations from phrase structure parsesstructural configurations are used to define grammatical roles the semantic head of each constituent of the parse is identified using rules akin to the collins head rules but modified to retrieve the semantic head of the constituent rather than the syntactic headas mentioned content words are chosen as heads and all the other words in the constituent depend on this headto retrieve adequate heads from a semantic point of view heuristics are used to inject more structure when the penn treebank gives only flat constituents as is often the case for conjuncts eg and qp constituents eg then for each grammatical relation patterns are defined over the phrase structure parse tree using the treeexpression syntax defined by tregex conceptually each pattern is matched against every tree node and the matching pattern with the most specific grammatical relation is taken as the type of the dependencythe automatic extraction of the relations is not infalliblefor instance in the sentence behind their perimeter walls lie freshly laundered flowers verdant grass still sparkling from the last shower yew hedges in an ecstasy of precision clipping the system will erroneously retrieve apposition relations between flowers and grass as well as between flowers and hedges whereas these should be conj and relationsthe system is clueless when there is no overt maker of conjunctionanother limitation of the tool is the treatment of longdistance dependencies such as whmovement and controlraising the system cannot handle longdistance dependencies that cross clausesin a sentence like what does he think the system will correctly find that what is a direct object of think sd dobj aux nsubj however in a sentence such as who the hell does he think he is kidding the automatic extraction will fail to find that who is the direct object of kiddinghere it is vital to distinguish between sd as a representation versus the extant conversion toollongdistance dependencies are not absent from the formalism but the tool does not accurately deal with them5sd has been successfully used by researchers in different domainsin the pascal recognizing 5as possible future work we have thought of using a tool such as the one of levy and manning to correctly determine long distance dependencies as input to the current dependency conversion systemthis would presumably be effective but would make the conversion process much heavier weighttextual entailment challenges the increase in the use of sd is clearly apparentthe goal in these challenges consists of identifying whether one sentence follows from a piece of text and general background knowledge according to the intuitions of an intelligent human readerin 2007 out of the 21 systems which participated in the challenge 5 used the sd representation whereas the year before only the stanford entry was using itsd is also widely present in the bioinformatic world where it is used with success fundel et al found that in extraction of relations between genes and proteins a system based on the sd scheme greatly outperformed the previous best system on the lll challenge dataset airola et al provide more systematic results on a number of proteinprotein interaction datasetstheir graph kernel approach uses an alldependencypaths 
kernel which allows their system to consider full dependency graphstheir system is based on the sd scheme and they demonstrate stateoftheart performance for this approachin the biomedical domain sd has recently been used in evaluations of parsers pyysalo et al assessed the suitability of the sd scheme over the link grammar dependency scheme in an applicationoriented evaluationthe link parser indeed uses a very finegrained set of relations which often makes distinctions of a structural rather than a semantic natureone example is the mx relation which connects modifying phrases with commas to preceding nouns the link parser uses a different set of dependency types for dependencies appearing in questions and relative clausesanother example is the prepositional phrase where alternative attachment structures are indicated by different relationsmany of these distinctions are too fine and nonsemantic to be of practical valuethe sd scheme by aiming for an intermediate level of granularity and targeting semantic dependencies provides a more adequate representation for applicationstherefore to increase the usability of the bioinfer corpus which provides manually annotated data for information extraction in the biomedical domain and originally followed the link grammar scheme pyysalo et al developed a version of the corpus annotated with the sd schemethey also made available a program and conversion rules that they used to transform link grammar relations into sd graphs which were then handcorrected while a limited amount of gold standard annotated data was prepared for the parser evaluation shared task this is the main source of goldstandard sd data which is currently availablein other domains zhuang et al uses the representation to extract opinions about features in reviews and meena and prabhakar uses it to improve the quality of sentencelevel sentiment analysisthe open information extraction system textrunner also makes use of the sd graph representation its first module uses the stanford parser and the dependency tool to automatically identify and label trustworthy and untrustworthy extractionseven in theoretical linguistic work sd has proven very useful it has hugely facilitated data extraction from corpora in the context of the nsffunded project dynamics of probabilistic grammar carried out at the stanford linguistics departmentwhen seeking a goldstandard dependency scheme for parser evaluation the ultimate goal of such an evaluation is an important questionit is necessary to contrast the two different forms that evaluation can take extrinsic taskbased evaluation and intrinsic evaluationwe tend to agree with molla and hutchinson that intrinsic evaluations have limited value and that taskbased evaluation is the correct approachsome of the results of the previous section at least broadly support the utility of the sd scheme for practical use in higherlevel tasksnevertheless given the current trend in the nlp community as well as in other fields such as bioinformatics where the advantage of dependency representations for shallow text understanding tasks has become salient we would argue following clegg and shepherd that dependencybased evaluation is close to typical user tasksmoreover it avoids some of the known deficiencies of other parser evaluation measures such as parseval recent work on parser evaluation using dependency graphs in the biomedical domain confirms that researchers regard dependencybased evaluation as a more useful surrogate for extrinsic taskbased evaluation in their evaluation 
clegg and shepherd aimed at analyzing the capabilities of syntactic parsers with respect to semantically important tasks crucial to biological information extraction systemsto do so they used the sd scheme which provides a de facto standard for comparing a variety of constituent parsers and treebanks at the dependency level and they assessed its suitability for evaluationthey found that the sd scheme better illuminates the performance differences between higher ranked parsers and lower ranked parsers their parser evaluation accommodates user needs they used the collapsed version of the dependency graphs offered by the sd scheme arguing that this is the kind of graph one would find most useful in an information extraction projectalthough clegg and shepherd also favor dependency graph representations for parser evaluation they advocate retention of parse trees so information lost in the dependency structures can be accessedin essence any existing dependency scheme could be adopted as the goldstandard for evaluationhowever if one believes in ultimately valuing extrinsic taskbased evaluation a dependency representation which proposes a suitable design for users and user tasks is probably the best surrogate for intrinsic evaluationmoreover the existence of tools for automatically generating and converting dependency representations has aided greatly in making parser comparison possible across different formalismswe believe that the sd scheme approaches these goalsif one accepts the goals set here in order to enforce uniformity between application and evaluation it seems sensible to have a unique scheme for both purposessome of the positive results from use of the sd representation as well as the evaluations carried out in the biomedical field point to the usability of the sd scheme for both purposeswe wish to thank andrew brian clegg and sampo pyysalo for their useful feedback on the dependency extraction tooltheir comments enabled us to improve the toolwe also thank the workshop reviewers for their helpful comments
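Because every piece of information in the scheme described above is a binary relation, a parse can be handled as a plain list of (relation, governor, dependent) triples. Under that assumption, the sketch below shows how the "collapsed" treatment of prepositions works: a basic prep relation plus the preposition's object are merged into a single prep_<word> relation between the two content words. This is an illustration written for this document, not the Stanford converter itself, and the word-position indices that real SD output carries (went-2, river-5, ...) are omitted for brevity.

```python
from typing import List, Tuple

Dep = Tuple[str, str, str]  # (relation, governor, dependent)

def collapse_prepositions(deps: List[Dep]) -> List[Dep]:
    # object of each preposition word: prep_word -> object word
    pobj = {gov: dep for rel, gov, dep in deps if rel == "pobj"}
    prep_words = {dep for rel, gov, dep in deps if rel == "prep"}
    collapsed = []
    for rel, gov, dep in deps:
        if rel == "prep" and dep in pobj:
            # e.g. prep(went, over) + pobj(over, river) -> prep_over(went, river)
            collapsed.append((f"prep_{dep}", gov, pobj[dep]))
        elif rel == "pobj" and gov in prep_words:
            continue  # already folded into the collapsed relation above
        else:
            collapsed.append((rel, gov, dep))
    return collapsed

deps = [("nsubj", "went", "Bill"),
        ("prep", "went", "over"),
        ("pobj", "over", "river"),
        ("det", "river", "the")]
print(collapse_prepositions(deps))
# [('nsubj', 'went', 'Bill'), ('prep_over', 'went', 'river'), ('det', 'river', 'the')]
```

The same list-of-triples view is what makes the representation easy to load into graph or RDF-style tools, which is the usability argument the paper makes.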
W08-1301
the stanford typed dependencies representation. this paper examines the stanford typed dependencies representation, which was designed to provide a straightforward description of grammatical relations for any user who could benefit from automatic text understanding. for such purposes we argue that dependency schemes must follow a simple design and provide semantically contentful information, as well as offer an automatic procedure to extract the relations. we consider the underlying design principles of the stanford scheme from this perspective and compare it to the gr and parc representations. finally we address the question of the suitability of the stanford scheme for parser evaluation. stanford dependencies provide a simple description of relations between pairs of words in a sentence.
tag dynamic programming and the perceptron for efficient featurerich parsing we describe a parsing approach that makes use of the perceptron algorithm in conjunction with dynamic programming methods to recover full constituentbased parse trees the formalism allows a rich set of parsetree features including pcfgbased features bigram and trigram dependency features and surface features a severe challenge in applying such an approach to full syntactic parsing is the efficiency of the parsing algorithms involved we show that efficient training is feasible using a tree adjoining grammar based parsing formalism a lowerorder dependency parsing model is used to restrict the search space of the full model thereby making it efficient experiments on the penn wsj treebank show that the model achieves stateoftheart performance for both constituent and dependency accuracy in global linear models for structured prediction the optimal label y for an input x is where y is the set of possible labels for the input x f e rd is a feature vector that represents the pair and w is a parameter vectorthis paper describes a glm for natural language parsing trained using the averaged perceptronthe parser we describe recovers full syntactic representations similar to those derived by a probabilistic contextfree grammar a key motivation for the use of glms in parsing is that they allow a great deal of flexibility in the features which can be included in the definition of fa critical problem when training a glm for parsing is the computational complexity of the inference problemthe averaged perceptron requires the training set to be repeatedly decoded under the model under even a simple pcfg representation finding the arg max in eq1 requires o time where n is the length of the sentence and g is a grammar constantthe average sentence length in the data set we use is over 23 words the grammar constant g can easily take a value of 1000 or greaterthese factors make exact inference algorithms virtually intractable for training or decoding glms for full syntactic parsingas a result in spite of the potential advantages of these methods there has been very little previous work on applying glms for full parsing without the use of fairly severe restrictions or approximationsfor example the model in is trained on only sentences of 15 words or less reranking models restrict y to be a small set of parses from a firstpass parser see section 11 for discussion of other related workthe following ideas are central to our approachdescribe a novel tagbased parsing formalism that allows full constituentbased trees to be recovereda driving motivation for our approach comes from the flexibility of the featurevector representations f that can be used in the modelthe formalism that we describe allows the incorporation of basic pcfgstyle features the use of features that are sensitive to bigram dependencies between pairs of words and features that are sensitive to trigram dependenciesany of these feature types can be combined with surface features of the sentence x in a similar way to the use of surface features in conditional random fields crucially in spite of these relatively rich representations the formalism can be parsed efficiently time using dynamicprogramming algorithms described by eisner use of a lowerorder model for pruningthe o running time of the tag parser is still too expensive for efficient training with the perceptronwe describe a method that leverages a simple firstorder dependency parser to restrict the search space of the 
tag parser in training and testingthe lowerorder parser runs in o time where h g experiments show that it is remarkably effective in pruning the search space of the full tag parserexperiments on the penn wsj treebank show that the model recovers constituent structures with higher accuracy than the approaches of and with a similar level of performance to the reranking parser of the model also recovers dependencies with significantly higher accuracy than stateoftheart dependency parsers such as previous work has made use of various restrictions or approximations that allow efficient training of glms for parsingthis section describes the relationship between our work and this previous workin reranking approaches a firstpass parser is used to enumerate a small set of candidate parses for an input sentence the reranking model which is a glm is used to select between these parses a crucial advantage of our approach is that it considers a very large set of alternatives in y and can thereby avoid search errors that may be made in the firstpass parser1 another approach that allows efficient training of glms is to use simpler syntactic representations in particular dependency structures dependency parsing can be implemented in o time using the algorithms of eisner in this case there is no grammar constant and parsing is therefore efficienta disadvantage of these approaches is that they do not recover full constituentbased syntactic structures the increased linguistic detail in full syntactic structures may be useful in nlp applications or may improve dependency parsing accuracy as is the case in our experiments2 there has been some previous work on glm approaches for full syntactic parsing that make use of dynamic programmingtaskar et al describe a maxmargin approach however in this work training sentences were limited to be of 15 words or lessclark and curran describe a loglinear glm for ccg parsing trained on the penn treebankthis method makes use of parallelization across an 18 node cluster together with up to 25gb of memory used for storage of dynamic programming structures for training dataclark and curran describe a perceptronbased approach for ccg parsing which is considerably more efficient and makes use of a supertagging model to prune the search space of the full parsing modelrecent work describes loglinear glms applied to pcfg representations but does not make use of dependency featuresthis section describes the idea of derivations in our parsing formalismas in contextfree grammars or tags a derivation in our approach is a data structure that specifies the sequence of operations used in combining basic structures in a grammar to form a full parse treethe parsing formalism we use is related to the tree adjoining grammar formalisms described in however an important difference of our work from this previous work is that our formalism is defined to be splittable allowing use of the efficient parsing algorithms of eisner a derivation in our model is a pair where e is a set of spines and d is a set of dependencies specifying how the spines are combined to form a parse treethe spines are similar to elementary trees in tagsome examples are as follows these structures do not have substitution nodes as is common in tags3 instead the spines consist of a lexical anchor together with a series of unary projections which usually correspond to different xbar levels associated with the anchorthe operations used to combine spines are similar to the tag operations of adjunction and sister adjunctionwe will 
call these operations regular adjunction and sister adjunction as one example the cake spine shown above can be sadjoined into the vp node of the ate spine to form the tree shown in figure 1in contrast if we use the radjunction operation to adjoin the cake tree into the vp node we get a different structure which has an additional vp level created by the radjunction operation the resulting tree is shown in figure 1the radjunction operation is similar to the usual adjunction operation in tags but has some differences that allow our grammars to be splittable see section 23 for more discussionwe now give formal definitions of the sets e and d take x to be a sentence consisting of n 1 words x0 xn where x0 is a special root symbol which we will denote as a derivation for the input sentence x consists of a pair where the label l is a tuple ηh and ηm are the head and modifier spines that are being combinedpos specifies which node in ηh is being adjoined intoa is a binary flag specifying whether the combination operation being used is sadjunction or radjunctionl is a binary flag specifying whether or not any previous modifier has been radjoined into the position pos in ηhby a previous modifier we mean a modifier m that was adjoined from the same direction as m it would be sufficient to define l to be the pair the inclusion of ηh ηm and l adds redundant information that can be recovered from the set e and other dependencies in dbut it will be convenient to include this information in the labelin particular it is important that given this definition of l it is possible to define a function grm that maps a label l to a triple of nonterminals that represents the grammatical relation between m and h in the dependency structurefor example in the tree shown in figure 1 the grammatical relation between cake and ate is the triple grm in the tree shown in figure 1 the grammatical relation between cake and ate is the triple grm the conditions under which a pair forms a valid derivation for a sentence x are similar to those in conventional ltagseach e e must be such that η is an elementary tree whose anchor is the word xithe dependencies d must form a directed projective tree spanning words 0 n with at the root of this tree as is also the case in previous work on discriminative approches to dependency parsing we allow any modifier tree ηm to adjoin into any position in any head tree ηh but the dependencies d must nevertheless be coherentfor example they must be consistent with the spines in e and they must be nested correctly4 we will allow multiple modifier spines to sadjoin or radjoin into the same node in a head spine see section 23 for more detailsthe model used for parsing with this approach is a global linear modelfor a given sentence x we define y to be the set of valid derivations for x where each y e y is a pair as described in the previous sectiona function f maps pairs to featurevectors f e rdthe parameter vector w is also a vector in rdgiven these definitions the optimal derivation for an input sentence x is y arg maxyey w fwe now come to how the featurevector f is defined in our approacha simple firstorder model would define d figure 3 an example tree formed by a combinaed tion of the two structures in figure 2here we use e and d to respectively refer to the set of spines and dependencies in ythe function e maps a sentence x paired with a spine to a feature vectorthe function d maps dependencies within y to feature vectorsthis decomposition is similar to the firstorder model of mcdonald et al but 
with the addition of the e featureswe will extend our model to include higherorder features in particular features based on sibling dependencies and grandparent dependencies as in if y is a derivation then the first modifier to the left of the spine for m the featurevector definition then becomes where s g and q are feature vectors corresponding to the new higherorder elements5 as in tag approaches there is a mapping from derivations to parse trees in our case we map a spine and its dependencies to a constituent structure by first handling the dependencies on each side separately and then combining the left and right sidesfirst it is straightforward to build the constituent structure resulting from multiple adjunctions on the same side of a spineas one example the structure in figure 2 is formed by first sadjoining the spine with anchor cake into the vp node of the spine for ate then radjoining spines anchored by today and quickly into the same node where all three modifier words are to the right of the head wordnotice that each radjunction operation creates a new vp level in the tree whereas sadjunctions do not create a new levelnow consider a tree formed by first radjoining a spine for luckily into the vp node for ate followed by sadjoining the spine for john into the s node in both cases where the modifiers are to the left of the headin this case the structure that would be formed is shown in figure 2next consider combining the left and right structures of a spinethe main issue is how to handle multiple radjunctions or sadjunctions on both sides of a node in a spine because our derivations do not specify how adjunctions from different sides embed with each otherin our approach the combination operation preserves the height of the different modifiers from the left and right directionsto illustrate this figure 3 shows the result of combining the two structures in figure 2the combination of the left and right modifier structures has led to flat structures for example the rule vp advp vp np in the above treenote that our radjunction operation is different from the usual adjunction operation in tags in that wrapping adjunctions are not possible and radjunctions from the left and right directions are independent from each other because of this our grammars are splittablethis section describes the algorithm for finding y arg maxyy w f where f is defined through either the firstorder model or the secondorder model for the firstorder model the methods described in can be used for the parsing algorithmin eisners algorithms for dependency parsing each word in the input has left and right finitestate automata which generate the left and right modifiers of the word in questionwe make use of this idea of automata and also make direct use of the method described in section 42 of that allows a set of possible senses for each word in the input stringin our use of the algorithm each possible sense for a word corresponds to a different possible spine that can be associated with that wordthe left and right automata are used to keep track of the last position in the spine that was adjoined into on the leftright of the head respectivelywe can make use of separate left and right automataie the grammar is splittablebecause left and right modifiers are adjoined independently of each other in the treethe extension of eisners algorithm to the secondorder model is similar to the algorithm described in but again with explicit use of word senses and leftright automatathe resulting algorithms run in o and o time for 
the firstorder and secondorder models respectively where g and h are grammar constantsthe efficiency of the parsing algorithm is important in applying the parsing model to test sentences and also when training the model using discriminative methodsthe grammar constants g and h introduced in the previous section are polynomial in factors such as the number of possible spines in the model and the number of possible states in the finitestate automata implicit in the parsing algorithmthese constants are large making exhaustive parsing very expensiveto deal with this problem we use a simple initial model to prune the search space of the more complex modelthe firststage model we use is a firstorder dependency model with labeled dependencies as described in as described shortly we will use this model to compute marginal scores for dependencies in both training and test sentencesa marginal score µ is a value between 0 and 1 that reflects the plausibility of a dependency for sentence x with headword xh modifier word xm and label l in the firststage pruning model the labels l are triples of nonterminals representing grammatical relations as described in section 21 of this paperfor example one possible label would be and in general any triple of nonterminals is possiblegiven a sentence x and an index m of a word in that sentence we define dmax to be the highest scoring dependency with m as a modifier dmax max µ hl for a sentence x we then define the set of allowable dependencies to be to the number of nonterminals in the grammar which is far more manageablewe use the algorithm described in to train the conditional loglinear model this method was found to converge to a good model after 10 iterations over the training data where α is a constant dictating the beam size that is used the set 7r is used to restrict the set of possible parses under the full tagbased modelin section 21 we described how the tag model has dependency labels of the form and that there is a function grm that maps labels of this form to triples of nonterminalsthe basic idea of the pruned search is to only allow dependencies of the form if the tuple is a member of 7r thus reducing the search space for the parserwe now turn to how the marginals µ are defined and computeda simple approach would be to use a conditional loglinear model with features as defined by mcdonald et al to define a distribution p where the parse structures y are dependency structures with labels that are triples of nonterminalsin this case we could define which can be computed with insideoutside style algorithms applied to the data structures from the complexity of training and applying such a model is again o where g is the number of possible labels and the number of possible labels is around g 1000 in the case of treebank parsing this value for g is still too large for the method to be efficientinstead we train three separate models µ1 µ2 and µ3 for the three different positions in the nonterminal tripleswe then take µ to be a product of these three models for example we would calculate µ µ1 x µ2 xµ3 training the three models and calculating the marginals now has a grammar constant equal section 22 described the use of feature vectors associated with spines used in a derivation together with firstorder sibling and grandparent dependenciesthe dependency features used in our experiments are closely related to the features described in which are an extension of the mcdonald and pereira features to cover grandparent dependencies in addition to firstorder and 
sibling dependenciesthe features take into account the identity of the labels l used in the derivationsthe features could potentially look at any information in the labels which are of the form but in our experiments we map labels to a pair athus the label features are sensitive only to the triple of nonterminals corresponding to the grammatical relation involved in an adjunction and a binary flag specifiying whether the operation is sadjunction or radjunctionfor the spine features e we use feature templates that are sensitive to the identity of the spine η together with contextual features of the string xthese features consider the identity of the words and partofspeech tags in a window that is centered on xi and spans the range x xin the experiments in this paper the following threestep process was used derivations were extracted from a training set drawn from the penn wsj treebank and then used to train a parsing model the test data was parsed using the resulting model giving a derivation for each test data sentence the resulting testdata derivations were mapped back to penntreebank style trees using the method described in section 21to achieve step we first apply a set of headfinding rules which are similar to those described in once the headfinding rules have been applied it is straightforward to extractsections 23 and 24 of the treebank using the method of to extract dependencies from parse trees from our modelkcc08 unlabeled is from a model that has previously been shown to have higher accuracy than kcc08 labeled is the labeled dependency parser from here we only evaluate the unlabeled accuracy derivations from the penn treebank treesnote that the mapping from parse trees to derivations is manytoone for example the example trees in section 23 have structures that are as flat as is possible given the set d that is involvedother similar trees but with more vp levels will give the same set d however this issue appears to be benign in the penn wsj treebankfor example on section 22 of the treebank if derivations are first extracted using the method described in this section then mapped back to parse trees using the method described in section 23 the resulting parse trees score 100 precision and 9981 recall in labeled constituent accuracy indicating that very little information is lost in this processsentences in training test and development data are assumed to have partofspeech tagspos tags are used for two purposes in the features described above and to limit the set of allowable spines for each word during parsingspecifically for each pos tag we create a separate performance of the parser on the development set in each case α refers to the beam size used in both training and testing the modelactive percentage of dependencies that remain in the beam out of the total number of labeled dependencies coverage percentage of correct dependencies in the beam out of the total number of correct dependenciesoracle f1 maximum achievable score of constituents given the beamspeed parsing time in minsec for the tagbased model f1 score of predicted constituents dictionary listing the spines that have been seen with this pos tag in training data during parsing we only allow spines that are compatible with this dictionaryfuture work should consider incorporating the tagging step within the model it is not challenging to extend the model in this waysections 221 of the penn wall street journal treebank were used as training data in our experiments and section 22 was used as a development setsections 23 
and 24 were used as test setsthe model was trained for 20 epochs with the averaged perceptron algorithm with the development data performance being used to choose the best epochtable 1 shows the results for the methodour experiments show an improvement in performance over the results in we would argue that the collins method is considerably more complex than ours requiring a firststage generative model together with a reranking approachthe charniak model is also arguably more complex again using a carefully constructed generative modelthe accuracy of our approach also shows some improvement over results in this work makes use of a pcfg with latent variables that is trained using a splitmerge procedure together with the them algorithmthis work is in many ways complementary to oursfor example it does not make use of glms dependency features or of representations that go beyond pcfg productionsand some combination of the two methods may give further gainscharniak and johnson and huang describe approaches that make use of nonlocal features in conjunction with the charniak model future work may consider extending our approach to include nonlocal featuresfinally other recent work has had a similar goal of scaling glms to full syntactic parsingthese models make use of pcfg representations but do not explicitly model bigram or trigram dependenciesthe results in this work are lower than our f1 score of 911 this is evidence of the benefits of the richer representations enabled by our approachtable 2 shows the accuracy of the model in recovering unlabeled dependenciesthe method shows improvements over the method described in which is a stateoftheart secondorder dependency parser similar to that of suggesting that the incorporation of constituent structure can improve dependency accuracytable 3 shows the effect of the beamsize on the accuracy and speed of the parser on the development setwith the beam setting used in our experiments only 034 of possible dependencies are considered by the tagbased model but 99 of all correct dependencies are includedat this beam size the best possible f1 constituent score is 985tighter beams lead to faster parsing times with slight drops in accuracywe have described an efficient and accurate parser for constituent parsinga key to the approach has been to use a splittable grammar that allows efficient dynamic programming algorithms in combination with pruning using a lowerorder modelthe method allows relatively easy incorporation of features future work should leverage this in producing more accurate parsers and in applying the parser to different languages or domainsacknowledgments x carreras was supported by the catalan ministry of innovation universities and enterprise by the gale program of darpa contract nohr001106c0022 and by a grant from ntt agmtdtd6211998t koo was funded by nsf grant iis0415030m collins was funded by nsf grant iis0347631 and darpa contract nohr001106c0022thanks to jenny rose finkel for suggesting that we evaluate dependency parsing accuracies
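The pruning step this paper describes, keeping only dependencies whose first-order marginal is within a beam factor alpha of the best marginal for each modifier, is simple enough to sketch directly. The code below is a minimal illustration of that filter, assuming the marginals mu(h, m, l) have already been computed by the lower-order labeled dependency model; the label strings and the numbers in the toy example are invented.

```python
from collections import defaultdict
from typing import Dict, Set, Tuple

Arc = Tuple[int, int, str]  # (head index, modifier index, label)

def prune_dependencies(marginals: Dict[Arc, float], alpha: float) -> Set[Arc]:
    """Keep (h, m, l) iff mu(h, m, l) >= alpha * best marginal for modifier m."""
    d_max = defaultdict(float)          # best-scoring dependency per modifier m
    for (h, m, l), mu in marginals.items():
        d_max[m] = max(d_max[m], mu)
    return {(h, m, l) for (h, m, l), mu in marginals.items()
            if mu >= alpha * d_max[m]}

# toy example: modifier 1 keeps only its strong attachment at alpha = 0.2
marginals = {(2, 1, "S/NP/VP"): 0.80,
             (3, 1, "VP/NP/VB"): 0.10,
             (0, 2, "ROOT/S/VP"): 0.90}
print(prune_dependencies(marginals, alpha=0.2))
```

In the paper the marginal itself is factored as a product of three smaller models, one per nonterminal position in the label triple, to keep the grammar constant manageable; that factorization would simply change how the values in `marginals` are produced, not the filter above.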
W08-2102
tag, dynamic programming and the perceptron for efficient feature-rich parsing. we describe a parsing approach that makes use of the perceptron algorithm in conjunction with dynamic programming methods to recover full constituent-based parse trees. the formalism allows a rich set of parse-tree features, including pcfg-based features, bigram and trigram dependency features, and surface features. a severe challenge in applying such an approach to full syntactic parsing is the efficiency of the parsing algorithms involved. we show that efficient training is feasible using a tree adjoining grammar based parsing formalism. a lower-order dependency parsing model is used to restrict the search space of the full model, thereby making it efficient. experiments on the penn wsj treebank show that the model achieves state-of-the-art performance for both constituent and dependency accuracy. many edges can be ruled out beforehand based on the marginals computed from a simpler parsing model.
the conll 2008 shared task on joint parsing of syntactic and semantic dependencies the conference on computational natural language learning is accompanied every year by a shared task whose purpose is to promote natural language processing applications and evaluate them in a standard setting in 2008 the shared task was dedicated to the joint parsing of syntactic and semantic dependencies this shared task not only unifies the shared tasks of the previous four years under a unique dependencybased formalism but also extends them significantly this years syntactic dependencies include more information such as namedentity boundaries the semantic dependencies model roles of both verbal and nominal predicates in this paper we define the shared task and describe how the data sets were created furthermore we report and analyze the results and describe the approaches of the participating systems in 2004 and 2005 the shared tasks of the conference on computational natural language learning were dedicated to semantic role labeling in a monolingual setting in 2006 and 2007 the shared tasks were devoted to the parsing of syntactic dependencies using corpora from up to 13 languagesthe conll2008 shared task1 proposes a unified dependencybased formalism which models both syntactic dependencies and semantic rolesusing this formalism this shared task merges both the task of syntactic dependency parsing and the task of identifying semantic arguments and labeling them with semantic rolesconceptually the 2008 shared task can be divided into three subtasks parsing of syntactic dependencies identification and disambiguation of semantic predicates and identification of arguments and assignment of semantic roles for each predicateseveral objectives were addressed in this shared task given the complexity of this shared task we limited the evaluation to a monolingual englishonly settingthe evaluation is separated into two different challenges a closed challenge where systems have to be trained strictly with information contained in the given training corpus and an open challenge where systems can be developed making use of any kind of external tools and resourcesthe participants could submit results in either one or both challengesthis paper is organized as followssection 2 defines the task including the format of the data the evaluation metrics and the two challengessection 3 introduces the corpora used and our constituenttodependency conversion proceduresection 4 summarizes the results of the submitted systemssection 5 discusses the approaches implemented by participantssection 6 analyzes the results using additional nonofficial evaluation measuressection 7 concludes the paperin this section we provide the definition of the shared task starting with the format of the shared task data followed by a description of the evaluation metrics used and a discussion of the two shared task challenges ie closed and openthe data format used in this shared task was highly influenced by the formats used in the 20042007 shared tasksthe data follows these general rules table 1 describes the fields stored for each token in the closedtrack data setscolumns 13 and 58 are available at both training and test timecolumn 4 which contains goldstandard partofspeech tags is not given at test timethe same holds for columns 9 and above which contain the syntactic and semantic dependency structures that the systems should predictthe ppos and pposs fields were automatically predicted using the svmtool pos tagger to predict the tags in the 
training set a 5fold crossvalidation procedure was usedthe lemma and split lemma fields were predicted using the builtin lemmatizer in wordnet based on the most frequent sense for the form and partofspeech tagsince nombank uses a subword analysis in some hyphenated words the data format represents the parts in hyphenated words as separate tokens however the format also represents how the parts originally fit together before splitting padding characters are used in columns 25 to ensure the same number of rows for all columns corresponding to one sentenceall syntactic and semantic dependencies are annotated relative to the split word forms table 2 shows the columns available to the systems participating in the open challenge namedentity labels as in the conll2003 shared task and from the bbn wall street journal entity corpus2 wordnet supersense tags and the output of an offtheshelf dependency parser columns 13 were predicted using the tagger of ciaramita and altun because the bbn corpus shares lexical content with the penn treebank we generated the bbn tags using a 2fold crossvalidation procedurewe separate the evaluation measures into two groups official measures which were used for the ranking of participating systems and additional unofficial measures which provide further insight into the performance of the participating systemsthe official evaluation measures consist of three different scores syntactic dependencies are scored using the labeled attachment score semantic dependencies are evaluated using a labeled fi score and the overall task is scored with a macro average of the two previous scoreswe describe all these scoring measures nextthe las score is defined similarly as in the previous two shared tasks as the percentage of tokens for which a system has predicted the correct head and deprel columns same as before our scorer also computes the unlabeled attachment score ie the percentage of tokens with correct head and label accuracy ie the percentage of tokens with correct deprelthe semantic propositions are evaluated by converting them to semantic dependencies ie we create a semantic dependency from every predicate to all its individual argumentsthese dependencies are labeled with the labels of the corresponding argumentsadditionally we create a semantic dependency from each predicate to a virtual root nodethe latter dependencies are labeled with the predicate sensesthis approach guarantees that the semantic dependency structure conceptually forms a singlerooted connected graphmore importantly this scoring strategy implies that if a system assigns the incorrect predicate sense it still receives some points for the arguments correctly assignedfor example for the correct proposition verb01 arg0 arg1 argmtmp the system that generates the following output for the same argument tokens verb02 arg0 arg1 argmloc receives a labeled precision score of 24 because two out of four semantic dependencies are incorrect the dependency to root is labeled 02 instead of 01 and the dependency to the argmtmp is incorrectly labeled argmlocusing this strategy we compute precision recall and fi scores for both labeled and unlabeled semantic dependenciesfinally we combine the syntactic and semantic measures into one global measure using macro averagingwe compute macro precision and recall scores by averaging the labeled precision and recall for semantic dependencies with the las for syntactic dependencies3 where lmp is the labeled macro precision and lpsem is the labeled precision for semantic 
dependenciessimilarly lmr is the labeled macro recall and lrsem is the labeled recall for semantic dependencieswsem is the weight assigned to the semantic task4 the macro labeled fi score which was used for the ranking of the participating systems is computed as the harmonic mean of lmp and lmrwe used several additional evaluation measures to further analyze the performance of the participating systemsthe first additional measure used is exact match which reports the percentage of sentences that are completely correct ie all the generated syntactic dependencies are correct and all the semantic propositions are present and correctwhile this score is significantly lower than any of the official scores it will award systems that performed joint learning or optimization for all subtasksin the same spirit but focusing on the semantic subtasks we report the perfect proposition f1 score where we score entire semantic frames or propositionsthis measure is similar to the pprops accuracy score from the 2005 shared task with the caveat that this year this score is implemented as an f1 measure because predicates are not provided in the test datahence propositions may be over or under generated at prediction timelastly we analyze systems based on the ratio between labeled f1 score for semantic dependencies and the las for syntactic dependenciesin other words this measure normalizes the semantic scores relative to the performance of the parsing componentthis measure estimates the true overall performance of the semantic subtasks independent of the syntactic parser5 for example this score addresses the situations where the semantic labeled f1 score of one system is artificially low because the corresponding syntactic component does not perform wellsimilarly to the conll2005 shared task this shared task evaluation is separated into two challenges closed challenge systems have to be built strictly with information contained in the given training corpus and tuned with the development sectionin addition the propbank and nombank lexical frames can be usedthese restrictions mean that constituentbased parsers or srl systems can not be used in this challenge because the constituentbased annotations are not provided in our training setthe aim of this challenge is to compare the performance of the participating systems in a fair environmentopen challenge systems can be developed making use of any kind of external tools and resourcesthe only condition is that such tools or resources must not have been developed with the annotations of the test set both for the input and output annotations of the datain this challenge we are interested in learning methods which make use of any tools or resources that might improve the performancefor example we encourage the use of semantic information as provided by ne recognition or wordsense disambiguation systems also in this challenge participants are encouraged to use constituentbased parsers and srl systems as long as these systems were trained only with the sections of penn treebank used in the shared task training corpusto encourage the participation of the groups that are only interested in srl the organizers provide also the output of a stateoftheart dependency parser as input in this challengethe comparison of different systems in this setting may not be fair and thus ranking of systems is not necessarily importantthe corpora used in the shared task evaluation were generated through a process that merges several input corpora and converts them from the constituentbased 
formalism to dependenciesthis section starts with an introduction of the input corpora used followed by a description of the constituenttodependency conversion processthe section concludes with an overview of the shared task corporainput to our merging procedures includes the penn treebank bbns named entity corpus propbank and nombankin this section we will provide brief descriptions of these annotations in terms of both form and contentall annotations are currently being distributed by the linguistic data consortium with the exception of nombank which is freely downloadable6 the penn treebank 3 corpus consists of handcoded parses of the wall street journal and a small subset of the brown corpus these hand parses are notated inline and sometimes involve changing the strings of the input datafor example in file wsj 0309 the token fearlast in the text corresponds to the two tokens fear and last in the annotated datain a similar way cannot is regularly split to can and notit is significant that the other annotations assume the tokenization of the penn treebank as this makes it easier for us to merge the annotationthe penn treebank syntactic annotation includes phrases parts of speech empty category representations of various fillergap constructions and other phenomena based on a theoretical perspective similar to that of government and binding theory bbns ne annotation of the wall street journal corpus takes the form of sgml inline markup of text tokenized to be completely compatible with the penn treebank annotation eg fearlast and cannot are split in the same waysnamed entity categories include person organization location gpe facility money percent time and date based on the definitions of these categories in muc and ace7 taskssubcategories are included as wellnote however that from this corpus we only use ne boundaries to derive name dependencies between ne tokens eg we create a name dependency from mary to smith given the ne mention mary smiththe propbank annotation classifies the arguments of all the main verbs in the penn treebank corpus other than bearguments are numbered based on lexical entries or frame filesdifferent sets of arguments are assumed for different rolesetsdependent constituents that fall into categories independent of the lexical entries are classified as various types of argm 8 rather than using propbank directly we used the version created for the conll2005 shared task propbanks pointers to subtrees are converted into the list of leaves of those subtrees minus the empty categorieson occasion arguments of verbs end up being two nonadjacent substringsfor example the argument of claims in the following sentence is indicated in bold this sentence mary claims is selfreferentialthe conll2005 format handles this by marking both strings a1 but adding a c prefix to the argument tag on the second argumentanother difference between the propbank annotation and the conll2005 version of it is their treatments of filler gap constructions involving empty categoriespropbank annotation includes the whole chain of empty categories as well as the antecedent of the empty category in contrast the conll2005 version only includes the filler of the gap and if there is no filler the argument is omitted eg no arg0 for leave would be included in i said to leave because the subject of leave is unspecifiednombank annotation uses essentially the same framework as propbank to annotate arguments of nounsdifferences between propbank and nombank stem from differences between noun and verb argument 
structure differences in treatment of nouns and verbs in the penn treebank and differences in the sophistication of previous research about noun and verb argument structureonly the subset of nouns that take arguments are annotated in nombank and only a subset of the nonargument siblings of nouns are marked as argmthese limitations were necessary to make the nombank task consistent and tractablein addition long distance dependencies of nouns eg the relation between mary and walk in mary took dozens of walks is handled as follows mary is marked as the arg0 of walk and took dozens of is marked as a support chain in nombankin contrast verbal long distance dependencies can be handled by means of empty categories in the penn treebank eg the relation between john and walked in john seemed t to walksupport chains are needed because nominal long distance dependencies are not captured under the penn treebanks system of empty categoriesthere exists no largescale dependency treebank for english and we thus had to construct a dependencyannotated corpus automatically from the penn treebank since dependency syntax represents grammatical structure by means of labeled binary headdependent relations rather than phrases the task of the conversion procedure is to identify and label the headdependent pairsthe idea underpinning constituenttodependency conversion algorithms is that headdependent pairs are created from constituents by selecting one word in each phrase as the head and setting all other as its dependentsthe dependency labels are then inferred from the phrasesubphrase or phraseword relationsour conversion procedure differs from this basic approach by exploiting the rich structure of the constituent format used in penn treebank 3 of the grammatical function tags available in the treebank we removed the hln nom tpc and ttl tags since they represent structural properties of single phrases rather than binary relationsfor compatibility between the wsj and brown corpora we removed the etc unf and imp tags from brown and the clr tag from wsjalgorithms 1 and 2 show the constituenttodependency conversion algorithm and function labeling respectivelythe first steps apply structural transformations to the constituent treesnext a head word is assigned to each constituentafter this grammatical functions are inferred allowing a dependency tree to be createdto find head children a system of rules is used or if h and d are inside a split word set the function tag of d to l in t if h and d are part of a larger constituent procedure reattachtraces for each empty category t in t set the function tag of s to obj set the function tag of s to oprd else e index of last child of n find head child h between 1 and e according to head rules nhead hhead procedure iscoordinated if n has the label ucp return true if n has a cc or conjp child which is not leftmost return true if n has a or child c and c is not leftmost or rightmost or crossed by an apposition link return true else return false procedure createdependencytree the first column in the table indicates the phrase type the second is the search direction and the third is a priority list of phrase types to look forfor instance to find the head of an s phrase we look from right to left for a vpif no vp is found look for anything with a prd function tag and so onmoreover since the grammatical structure inset the function tag of c to l procedure inferfunction let c be the head of c p the parent of c and p the head of p side noun phrases is underspecified in the penn treebank 
we imported dependencies inside nps and hyphenated words from a version of the penn treebank mapped into glarf the grammatical and logical argument representation framework the parts of glarfs np analysis that are most relevant to this task include identifying apposites the identification of name boundaries taken from bbns ne annotation eg identifying that smith depends on mary which depends on appointment in the mary smith appointment identifying title and posthon dependencies eg determining that ms and iii depend on mary in ms mary smith iiithese identifications were carried out by handcoded rules that have been fine tuned as part of glarf over the past several yearsfor example identifying apposition constructions requires identifying that both the head and the apposite can stand alone proper nouns plural nouns and singular common nouns with determiners are standalone cases whereas singular nouns without determiners do not qualifywe split treebank tokens at a hyphen or a forward slash if the segments on either side of these delimiters are a word in a dictionary part of a markable named entity9 or a prefix from the list co pre post un anti ante ex extra fore non over pro re super sub tri bi uni ultrafor example yorkbased was split into 3 segments york and basedwhen encoding the semantic dependencies it was necessary to convert the underlying constituent analysis of propbank and nombank into a dependency analysisbecause semantic predicates are already assigned to individual tokens in both propbank and nombank constituenttodependency conversion is thus necessary only for semantic argumentsconceptually this conversion can be handled using similar heuristics as described in section 321however in order to avoid replicating this effort and to ensure compatibility between syntactic and semantic dependencies we decided to generate semantic dependencies using only argument boundaries and the syntactic dependencies generated in section 321 ie ignoring syntactic constituentsgiven this input we identify the head of a semantic argument using the following heuristic the head of a semantic argument is assigned to the token inside the argument boundaries whose head is a token outside the argument boundariesthis heuristic works remarkably well over 99 of the propbank arguments in the training corpus have a single token whose head is located outside of the argument boundariesas a simple example consider the following annotated text soldpred 1214 carsarg1 in the yousargmlocusing the above heuristic the head of the arg1 argument is set to cars because it has an obj dependency to sold and the head of the argmloc argument is set to in because it modifies sold through a loc dependencywhile this heuristic processes the vast majority of arguments there are several cases that require special treatmentwe discuss these situations in the remainder of this sectionarguments with several syntactic heads for 07 of the semantic arguments the above heuristic detects several syntactic heads for the given boundaryfor example in the text itarg0 expectspred its yous sales to remain steady at about 1200 carsarg1 the above heuristic assigns two syntactic heads to arg1 sales which modifies expects through an obj dependency and to which modifies expects through a prd dependencythese situations are caused by the constituenttodependency conversion process described in section 321 which in some cases interprets syntax differently than the original treebank annotation eg the raising phenomenon for the prd dependency in the above 
examplein such cases we split the original argument into a sequence of discontinuous arguments eg the arg1 in the above example becomes its yous salesarg1 to remain steady at about 1200 carscarg1while in the above case we split arguments there are situations where we can merge arguments that were initially discontinuous in propbank or nombankthis typically happens when the propbanknombank predicate is infixed inside one of its argumentsfor example in the text milliondollar conferencesarg1 were heldpred to chew on subjects such as carg1 propbank lists multiple constituents as aggregately filling the arg1 slot of heldthese cases are detected automatically because the least common ancestor of the argument pieces is actually one of the argument segmentsin the above example to chew on subjects such as depends on milliondollar conferences because to modifies conferences through a nmod dependencyin these situations we treat the least common ancestor eg conferences in the above text as the true argumentthis heuristic allowed us to merge 1665 arguments that were initially discontinuous in the propbank training corpusempty categories propbank and nombank both encode chains of empty categoriesas with the 2005 shared task we used the head of the antecedent of empty categories as arguments rather than empty categoriesfurthermore empty category arguments with no antecedents were ignored10 for example given the man wanted t to make a speech we assume that the a0 of make and speech is man rather than the chain consisting of the empty category represented as t and mannombank and penn treebank annotators sometimes disagree about constituent structurenombank annotators are in effect assuming that the constituents provided form a phrasein this case the constituents are adjacent to each otherfor example consider the np the human rights discussionin this case the penn treebank would treat each of the four words the human rights discussion as daughters of a single np nodehowever nombank would treat human rights as a single arg1 of discussionsince noun noun modification constructions are head final we can easily determine that rights is the markable dependent of discussionfinally nombanks encoding of support chains is handled as chains of dependencies in the data for example given mary took dozens of walks where mary is the arg0 of walks the support chain took dozens of is represented as a sequence of dependencies of depends on mary dozens depends on of and took depends on dozenseach of these dependencies is labeled suthe syntactic dependency types are divided into atomic types that consist of a single label and nonatomic types consisting of more than one labelthere are 38 atomic and 70 nonatomic labels in the corpusthere are three types of nonatomic labels those consisting of a prd or oprd concatenated with an adverbial label such as loc or tmp gapping labels such as gapsbj and combined adverbial tags such as loctmptable 4 shows statistics for the atomic syntactic dependencies label type the frequency of the label in the complete corpus and a description of the labeltable 5 shows the corresponding statistics for nonatomic dependencies excluding gapping dependenciesthe nonatomic labels are rare which made it difficult to learn these relations effectivelytable 6 shows the table for nonatomic labels containing a gapping labela dependency link wi wj is said to be projective if all words occurring between wi and wj in the surface word order are dominated by wi nonprojective links are impossible to handle for the 
search procedures in many types of dependency parsersit has been previously observed that the majority of dependencies in all languages are projective and this is particularly true for english in the complete corpus only 4118 links are nonprojective3312 sentences or 76 contain at least one nonprojective linkeven to make love he says you need experience split noun phrases such as hold a hearing tomorrow on the topic and all other types of nonprojective linkslastly tables 8 and 9 summarizes statistics for semantic predicates and rolestable 8 shows the number of nonsupport predicates with a given pos tag in the whole corpus the last line shows the number of predicates with a pos tag that does not start with nn or vbthis last table entry is generated by pos tagger mistakes when producing the pposs tags or by errors in our nombankpropbank conversion software11 nevertheless the overall picture given by the table indicates that predicates are almost perfectly distributed between nouns and verbs there are 98525 nominal and 98553 verbal predicatestable 9 shows the number of arguments with a given role labelfor brevity we list only labels that are instantiated at least 10 times in the whole corpusthe total number of arguments labeled with a role label with frequency lower than 10 is listed in the last line in the tablethe table indicates that while the top three most common role labels are core labels modifier arguments account for approximately 20 of the total number of argumentson the other hand discontinuous arguments are not common only 07 of the total number of arguments have a continuation label nineteen groups submitted test runs in the closed challenge and five groups participated in the open challengethree of the latter groups participated only in the open challenge and two of these submitted results only for the semantic subtaskthese results are summarized in tables 10 and 11table 10 summarizes the official results ie results at evaluation deadline for the closed challengenote that several teams corrected bugs andor improved their systems and they submitted postevaluation scores the table indicates that most of the top results cluster together three systems had a labeled macro f1 score on the wsjbrown corpus around 82 points five systems scored around 79 labeled macro f1 points remarkably the topscoring system is in a class of its own with scores 23 points higher than the next systemthis is most likely caused by the fact that johansson and nugues implemented a thorough system that addressed all facets of the task with stateoftheart methods secondorder parsing model argument identificationclassification models separately tuned for propbank and nombank reranking inference for the srl task and finally joint optimization of the complete task using metalearning table 11 lists the official results in the open challengethe results in this challenge are lower than in the closed challenge but this was somewhat to be expected considering that there were fewer participants in this challenge and none of the top five groups in the closed challenge submitted results in the open challengeonly one of the systems that participated in both challenges improved the results submitted in the closed challengezhang et al achieved this by extracting features for their semantic subtask models both from the parser used in the closed challenge and a secondary parser that was trained on a different corpusthe improvements measured were relatively small for the indomain wsj corpus but larger for the outofdomain brown 
corpus tables 10 and 11 indicate that in both challenges the results on the outofdomain corpus are much lower than the results measured indomain the difference is around 78 las points for the syntactic subtask and 1214 labeled f1 points for semantic dependenciesoverall this yields a drop of approximately 10 labeled macro f1 points for most systemsthis performance decrease on outofdomain corpora is consistent with the results reported in conll2005 on srl these results indicate that domain adaptation is a problem that is far from being solved for both syntactic and semantic analysis of textfurthermore as the scores on the syntactic and semantic subtasks indicate domain adaptation becomes even harder as the task to be solved gets more complexwe describe the participating systems in the next sectionthen in section 6 we revert to result analysis using different evaluation measures and different views of the data task websiteteams are denoted by the last name of the first author of the corresponding paper in the proceedings or the last name of the person who registered the team if no paper was submitteditalics indicate that there is no corresponding paper in the proceedingsresults are sorted in descending order of the labeled macro f1 score on the wsjbrown corpusthe number in parentheses next to the wsjbrown scores indicates the system rank in the corresponding task proceedings or the last name of the person who registered the team if no paper was submitteditalics indicate that there is no corresponding paper in the proceedingsresults are sorted in descending order of the labeled f1 score for semantic dependencies on the wsjbrown corpusthe number in parentheses next to the wsjbrown scores indicates the system rank in the corresponding tasktable 5 summarizes the properties of the systems that participated in the closed the open challengesthe second column of the table highlights the overall architectureswe used to indicate that the components are sequentially connectedthe lack of a sign indicates that the corresponding tasks are performed jointlyfor example riedel and mezaruiz perform predicate and argument identification and classification jointly whereas ciaramita et al implemented a pipeline architecture of three componentswe use the to indicate that several different architectures that span multiple subtasks were deployed in parallelthis summary of system architectures indicates that it is common that systems combine several components in the semantic or syntactic subtasks eg nine systems jointly performed predicateargument identification and classification but only four systems combined components between the syntactic and semantic subtasks henderson et al who implemented a generative historybased model where syntactic and semantic structures are separately generated but using a synchronized derivation samuelsson et al who within an ensemblebased architecture implemented a joint syntacticsemantic model using maltparser with labels enriched with semantic information lluis and marquez who used a modified version of the eisner algorithm to jointly predict syntactic and semantic dependencies and finally sun et al who integrated dependency label classification and argument identification using a maximumentropy markov modeladditionally johansson and nugues who had the highest ranked system in the closed challenge integrate syntactic and semantic analysis in a final reranking step which maximizes the joint syntacticsemantic score in the top k solutionsin the same spirit chen et al search in the top 
k solutions for the one that maximizes a global measure in this case the joint probability of the complete problemthese joint learning strategies are summarized in the joint learningopt column in the tablethe system of riedel and mezaruiz deserves a special mention even though riedel and mezaruiz did not implement a syntactic parser they are the only group that performed the complete srl subtask ie predicate identification and classification argument identification and classification jointly simultaneously for all the predicates in a sentencethey implemented a joint srl model using markov logic networks and they selected the overall best solution using inference based on the cuttingplane algorithmalthough some of the systems that implemented joint approaches obtained good results the top five systems in the closed challenge are essentially systems with pipeline architecturesfurthermore johansson and nugues and riedel and mezaruiz showed that joint learningoptimization improves the overall results but the improvement is not largethese initial efforts indicate at least that the joint modeling of this problem is not a trivial taskthe d arch and d inference columns summarize the parsing architectures and the corresponding inference strategiessimilar to last years shared task the vast majority of parsing models fall in two classes transitionbased or graphbased modelsby and large transitionbased models use a greedy inference strategy whereas graphbased models used different maximum spanning tree algorithms carreras mstc eisner mste or chuliuedmonds mstclemore interestingly most of the best systems used some strategy to mitigate parsing errorsin the top three systems in the closed challenge two used parser combination through voting andor stacking of different models samuelsson et al perform a mst inference with the bag of all dependencies output by the individual malt parser variantsjohansson and nugues use a single parsing model but this model is extended with secondorder featuresthe pa arch and pa inference columns summarize the architectures and inference strategies used for the identification and classification of predicates and argumentsthe columns indicate that most systems modeled the srl problem as a tokenbytoken classification problem with a corresponding greedy inference strategysome systems incorporate srl within parsing in which case we report the corresponding parsing architecture and inference approachvickrey and koller simplify the sentences to be labeled using a set of handcrafted rules before deploying a classification model on top of a constituentbased representationunlike in the case of parsing few systems combine several pa models and the combination is limited to simple voting strategies finally the ml methods column lists the machine learning methods usedthe column indicates that maximum entropy was the most popular method support vector machines and the perceptron algorithm were also popular ml methodssection 4 summarized the results in the closed and open challenges using the official evaluation measuresin this section we analyze the submitted runs using different evaluation measures eg exact match or perfect proposition f1 scores and different views of the data eg only nonprojective dependencies or nombank versus propbank framestable 12 summary of system architectures that participated in the closed and open challengesthe closedchallenge systems are sorted by macro labeled fl score on the wsjbrown corpusbecause some openchallenge systems did not implement syntactic 
parsing these systems are sorted by labeled fl score of the semantic dependencies on the wsjbrown corpusonly the systems that have a corresponding paper in the proceedings are includedsystems that participated in both challenges are listed only in the closed challengeacronyms used d syntactic dependencies p predicate a argument i identification c classificationoverall arch stands for the complete system architecture d arch stands for the architecture of the syntactic parser d comb indicates if the final parser output was generated using parser combination d inference stands for the type of inference used for syntactic parsing pa arch stands the type of architecture used for paic pa comb indicates if the pa output was generated through system combination pa inference stands for the the type of inference used for paic joint learningopt indicates if some form of joint learning or optimization was implemented for the syntactic semantic global task ml methods lists the ml methods used throughout the complete system challengesthe closedchallenge systems are sorted in descending order of exact match scores on the wsjbrown corpusopenchallenge submissions are sorted in descending order of the perfect proposition f1 scorethe number in parentheses next to the wsjbrown scores indicates the system rank according to the corresponding scoring measuretable 13 lists the exact match and perfect proposition f1 scores for test runs submitted in both challengesboth these scores measure the capacity of a system to correctly parse structures with granularity much larger than a simple dependency ie entire sentences for exact match and complete propositions for perfect proposition f1 the table indicates that these values are much smaller than the scores previously reported eg labeled macro f1this is to be expected the probability of an incorrectly parsed unit is much larger given its granularityhowever the main purpose of this analysis is to investigate if systems that focused on joint learning or optimization performed better than others with respect to these global measuresthis indeed seems to be the case for at least two systemsthe system of johansson and nugues which jointly optimizes the labeled f1 score and then the labeled macro f1 score increases its distance from the next ranked system its perfect proposition f1 score is over 6 points higher than the score of the second system in table 13the system of henderson et al which was designed for joint learning of the complete task improves its rank from eighth to fifth compared to the official results table 14 shows the unlabeled f1 scores for prediction of nonprojective syntactic dependenciessince nonprojectivity is quite rare many teams chose to ignore this issuethe table shows only those systems that submitted wellformed dependency trees and whose output contained at least one nonprojective linkthe small number of nonprojective links in the training set makes it hard to learn to predict such links and this is also reflected in the figuresin general the figures for nonprojective whmovements and split clauses are higher and they are also the most common typesalso they are detectable by fairly simple patterns such as the presence of a whword or a pair of commastable 63 lists the scores for the semantic subtask measured as the ratio of the labeled f1 score and lasas previously mentioned this score estimates the performance of the srl component independent of the performance of the syntactic parserthis analysis is not a substitute for the actual experiment where 
the srl components are evaluated using correct syntactic information but nevertheless it indicates several interesting factsfirst the ranking of the top three systems in table 10 changes the system of che et al is now ranked first and the system of johansson and nugues is secondthis shows that che et al have a relatively stronger srl component whereas johansson and nugues developed a better parsersecond several other systems improved their ranking compared to table 10 eg chen from position thirteenth to ninth and choi from sixteenth to eighththis indicates that these systems were penalized in the official ranking mainly due to the relative poor performance of their parsersnote that this experiment is relevant only for systems that implemented pipeline architectures where the semantic components are in fact separated from the syntactic ones this excludes the systems that blended syntax with srl henderson sun and lluisfurthermore systems that had significantly lower scores in syntax will receive an unreasonable boost in ranking according to this measurefortunately there was only one such outlier in this evaluation shown in gray in the tabletable 16 lists the labeled f1 scores for semantic dependencies for two different views of the testing data sets for propositions centered around verbal predicates ie from propbank and for propositions centered around nominal predicates ie from nombank tic dependencies and las for syntactic dependenciessystems are sorted in descending order of this ratio score on the wsjbrown corpuswe only show systems that participated in both the syntactic and semantic subtasksthe table indicates that generally systems performed much worse on nominal predicates than on verbal predicatesthis is to be expected considering that there is significant body of previous work that analyzes the srl problem on propbank but minimal work for nombankon average the difference between the labeled f1 scores for verbal predicates and nominal predicates on the wsjbrown corpus is 784 pointsfurthermore the average difference between labeled f1 scores on the brown corpus alone is 1236 pointsthis indicates that the problem of srl for nominal predicates is more sensitive to domain changes than the equivalent problem for verbal predicatesour conjecture is that because there is very little syntactic structure between nominal predicates and their arguments srl models for nominal predicates select mainly lexical features which are more brittle than syntactic or other nonlexicalized featuresremarkably there is one system which performed better on the wsjbrown for nominal predicates than verbal predicatesunfortunately this group did not submit a systemdescription paper so it is not clear what was their approachvickrey 7841 7975 6957 7186 7329 5325 riedel 7713 7872 6675 7025 7103 6017 zhang 7500 7662 6444 6676 6779 5376 li 7374 7557 6205 6124 6238 4636 wang 6750 7034 4972 6653 6983 2896 table 16 labeled fi scores for frames centered around verbal and nominal predicatesthe number in parentheses next to the wsjbrown scores indicates the system rank in the corresponding data setsystems can mitigate the inherent differences between verbal and nominal predicates with different models for the two subproblemsthis was indeed the approach taken by two out of the top three systems johansson and nugues developed different models for verbal and nominal predicates and implemented separate feature selection processes for each modelche et al followed the same method but they also implemented separate domain 
constraints for inference for the two modelsthe previous four conll shared tasks popularized and without a doubt boosted research in semantic role labeling and dependency parsingthis years shared task introduces a new task that essentially unifies the problems addressed in the past four years under a unique dependencybased formalismthis novel task is attractive both from a research perspective and an applicationoriented perspective we believe that the proposed dependencybased representation is a better fit for many applications where it is often sufficient to identify the dependency between the predicate and the head of the argument constituent rather than extracting the complete argument constituent it was shown that the extraction of syntactic and semantic dependencies can be performed with stateoftheart performance in linear time this can give a significant boost to the adoption of this technology in realworld applications we hope that this shared task will motivate several important research directionsfor example is the dependencybased representation better for srl than the constituentbased formalismdoes joint learning improve syntactic and semantic analysis surface linguistic features can often be detected with greater reliability than deep featuresin contrast deep features can cover more ground because they regularize across differences in surface stringsmachine learning systems can be more effective by using evidence from both deep and surface features jointly even though this shared task was more complex than the previous shared tasks 22 different teams submitted results in at least one of the challengesbuilding on this success we hope to expand this effort in the future with evaluations on multiple languages and on larger outofdomain corporawe want to thank the following people who helped us with the generation of the data sets jesus gimenez for generating the predicted pos tags with his svmtool pos tagger and massimiliano ciaramita for generating columns 1 2 and 3 in the openchallenge corpus with his semantic taggerwe also thank the following people who helped us with the organization of the shared task paola merlo and james henderson for the idea and the implementation of the exact match measure sebastian riedel for his dependency visualization software12 hai zhao for the the idea of the f1 ratio score and carlos castillo for help with the shared task websitelast but not least we thank the organizers of the previous four shared tasks sabine buchholz xavier carreras ryan mcdonald amit dubey johan hall yuval krymolowski sandra kubler erwin marsi jens nilsson sebastian riedel and deniz yuretthis shared task would not have been possible without their previous effortmihai surdeanu is a research fellow in the ramon y cajal program of the spanish ministry of science and technologyrichard johansson was funded by the swedish national graduate school of language technology adam meyers participation was supported by the national science foundation award cns0551615 and iis0534700 lluis marquezs participation was supported by the spanish ministry of education and science through research projects trangram and openmt
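As an illustration of the argument-head heuristic described above (the head of a semantic argument is the token inside the argument boundaries whose syntactic head lies outside them), the following is a minimal sketch; the representation (a list of 1-based head indices, 0 for the root) and the function name are assumptions made for the example, not the organizers' actual conversion code.

```python
def semantic_argument_head(heads, start, end):
    """Candidate head positions (1-based) for the argument spanning
    tokens start..end, inclusive.

    heads[k-1] is the 1-based position of the syntactic head of token k
    (0 for the artificial root).  A token is a candidate head of the
    argument if its own syntactic head falls outside the argument span.
    """
    candidates = []
    for pos in range(start, end + 1):
        head = heads[pos - 1]
        if head < start or head > end:
            candidates.append(pos)
    # In the training data, over 99% of PropBank arguments yield exactly one
    # candidate; several candidates correspond to the split-argument cases
    # discussed in the text.
    return candidates


if __name__ == "__main__":
    # "sold 1,214 cars in the U.S." with assumed dependencies:
    # cars -> sold, 1,214 -> cars, in -> sold, the -> U.S., U.S. -> in
    heads = [0, 3, 1, 1, 6, 4]
    print(semantic_argument_head(heads, 2, 3))  # ARG1 "1,214 cars"      -> [3] (cars)
    print(semantic_argument_head(heads, 4, 6))  # ARGM-LOC "in the U.S." -> [4] (in)
```

The toy sentence mirrors the example in the text: "cars" is selected as the head of ARG1 because its head ("sold") is outside the argument, and "in" is selected for ARGM-LOC for the same reason.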
W08-2121
The CoNLL 2008 shared task on joint parsing of syntactic and semantic dependencies. The Conference on Computational Natural Language Learning is accompanied every year by a shared task whose purpose is to promote natural language processing applications and evaluate them in a standard setting. In 2008, the shared task was dedicated to the joint parsing of syntactic and semantic dependencies. This shared task not only unifies the shared tasks of the previous four years under a unique dependency-based formalism, but also extends them significantly: this year's syntactic dependencies include more information, such as named-entity boundaries, and the semantic dependencies model roles of both verbal and nominal predicates. In this paper we define the shared task and describe how the data sets were created. Furthermore, we report and analyze the results and describe the approaches of the participating systems. We first introduce the predicate classification task, which can be regarded as predicate sense disambiguation. The complete merging process and the conversion from the constituent representation to dependencies are detailed in this work.
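The non-projectivity statistics reported above rest on the definition that a link from w_i to w_j is projective if every word between them in the surface order is dominated by w_i. A minimal sketch of that check is given below; the head-array representation and the function names are assumptions for illustration, not part of the shared-task software.

```python
def is_projective_link(heads, head, dep):
    """heads[k-1] is the 1-based head of token k (0 for the artificial root).
    The link head -> dep is projective if every token strictly between them
    in surface order is dominated by head, i.e. head is reachable from it by
    repeatedly following head links."""
    lo, hi = min(head, dep), max(head, dep)
    for k in range(lo + 1, hi):
        node = k
        while node not in (0, head):
            node = heads[node - 1]
        if node != head:  # reached the root without passing through `head`
            return False
    return True


def nonprojective_dependents(heads):
    """Dependents whose incoming link is non-projective (root links skipped)."""
    return [dep for dep in range(1, len(heads) + 1)
            if heads[dep - 1] != 0
            and not is_projective_link(heads, heads[dep - 1], dep)]
```

This straightforward quadratic check assumes a well-formed tree and is more than adequate for computing corpus statistics of the size reported above (roughly 4,100 non-projective links in the whole corpus).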
findings of the 2009 workshop on statistical machine translation j schroeder ed ac uk abstract this paper presents the results of the wmt09 shared tasks which included a translation task a system combination task and an evaluation task we conducted a largescale manual evaluation of 87 machine translation systems and 22 system combination entries we used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for more than 20 metrics we present a new evaluation technique whereby system output is edited and judged for correctness this paper presents the results of the shared tasks of the 2009 eacl workshop on statistical machine translation which builds on three previous workshops there were three shared tasks this year a translation task between english and five other european languages a task to combine the output of multiple machine translation systems and a task to predict human judgments of translation quality using automatic evaluation metricsthe performance on each of these shared task was determined after a comprehensive human evaluationthere were a number of differences between this years workshop and last years workshop beyond ranking the output of translation systems we evaluated translation quality by having people edit the output of systemslater we asked annotators to judge whether those edited translations were correct when shown the source and reference translationthe primary objectives of this workshop are to evaluate the state of the art in machine translation to disseminate common test sets and public training data with published performance numbers and to refine evaluation methodologies for machine translationall of the data translations and human judgments produced for our workshop are publicly available1 we hope they form a valuable resource for research into statistical machine translation system combination and automatic evaluation of translation qualitythe workshop examined translation between english and five other languages german spanish french czech and hungarianwe created a test set for each language pair by translating newspaper articleswe additionally provided training data and a baseline systemthe test data for this years task was created by hiring people to translate news articles that were drawn from a variety of sources during the period from the end of september to midoctober of 2008a total of 136 articles were selected in roughly equal amounts from a variety of czech english french german hungarian italian and spanish news sites2 hungarian hvghu napi mno nepszabadsag czech ihnedcz idnescz lidovkycz aktualnˇecz novinky french dernieresnouvelles le figaro les echos liberation le devoir spanish abces el mundo english bbc new york times times of london german suddeutsche zeitung frankfurter allgemeine zeitung spiegel welt italian adn kronos affari italiani asca corriere della sera il sole 24 ore il quotidiano la republica note that italian translation was not one of this years official translation tasksthe translations were created by the members of euromatrix consortium who hired a mix of professional and nonprofessional translatorsall translators were fluent or native speakers of both languagesalthough we made efforts to proofread all translations many sentences still contain minor errors and disfluenciesall of the translations were done directly and not via an intermediate languagefor instance each of the 20 hungarian articles were translated directly into czech english french german 
italian and spanishthe total cost of creating the test sets consisting of roughly 80000 words across 3027 sentences in seven languages was approximately 31700 euros previous evaluations additionally used test sets drawn from the europarl corpusour rationale behind discontinuing the use of europarl as a test set was that it overly biases towards statistical systems that were trained on this particular domain and that european parliament proceedings were less of general interest than news storieswe focus on a single task since the use of multiple test sets in the past spread our resources too thin especially in the manual evaluationas in past years we provided parallel corpora to train translation models monolingual corpora to train language models and development sets to tune parameterssome statistics about the training materials are given in figure 1109 word parallel corpus to create the large frenchenglish parallel corpus we conducted a targeted web crawl of bilingual web sitesthese sites came from a variety of sources including the canadian government the european union the united nations and other international organizationsthe crawl yielded on the order of 40 million files consisting of more than 1tb of datapairs of translated documents were identified using a set of simple heuristics to transform french urls into english urls documents that matched were assumed to be translations of each otherall html and pdf documents were converted into plain text which yielded 2 million french files paired with their english equivalentstext files were split so that they contained one sentence per line and had markers between paragraphsthey were sentencealigned in batches of 10000 document pairs using a sentence aligner that incorporates ibm model 1 probabilities in addition to sentence lengths the documentaligned corpus contained 220 million segments with 29 billion words on the french side and 215 million segments with 25 billion words on the english sideafter sentence alignment there were 177 million sentence pairs with 25 billion french words and 22 billion english wordsthe sentencealigned corpus was cleaned to remove sentence pairs which consisted only of numbers or paragraph markers or where the french and english sentences were identicalthe later step helped eliminate documents that were not actually translated which was necessary because we did not perform language identificationafter cleaning the parallel corpus contained 105 million sentence pairs with 2 billion french words and 18 billion english wordsin addition to cleaning the sentencealigned parallel corpus we also deduplicated the corpus removing all sentence pairs that occured more than once in the parallel corpusmany of the documents gathered in our web crawl were duplicates or near duplicates and a lot of the text is repeated as with web site navigationwe further eliminated sentence pairs that varied from previous sentences by only numbers which helped eliminate template web pages such as expense reportswe used a bloom filter to do deduplication so it may have discarded more sentence pairs than strictly necessaryafter deduplication the parallel corpus contained 28 million sentence pairs with 08 billion french words and 07 billion english wordswe have crawled the news sources that were the basis of our test sets since august 2007this allowed us to assemble large corpora in the target domain to be mainly used as training data for language modelingwe collected texts from the beginning of our data collection period to one month before 
the test set period segmented these into sentences and randomized the order of the sentences to obviate copyright concernsto lower the barrier of entry for newcomers to the field we provided moses an open source toolkit for phrasebased statistical translation the performance of this baseline system is similar to the best submissions in last years shared tasktwelve participating groups used the moses toolkit for the development of their systemwe received submissions from 22 groups from 20 institutions as listed in table 1 a similar turnout to last years shared taskof the 20 groups that participated with regular system submissions in last years shared task 12 groups returned this yeara major hurdle for many was a darpagale evaluation that occurred at the same time as this shared taskwe also evaluated 7 commercial rulebased mt systems and googles online statistical machine translation systemwe note that google did not submit an entry itselfits entry was created by the wmt09 organizers using googles online system3 in personal correspondence franz och clarified that the online system is different from googles research system in that it runs at faster speeds at the expense of somewhat lower translation qualityon the other hand the training data used by google is unconstrained which means that it may have an advantage compared to the research systems evaluated in this workshop since they were trained using only the provided materialsin total we received 87 primary system submissions along with 42 secondary submissionsthese were made available to participants in the system combination shared taskbased on feedback that we received on last years system combination task we provided two additional resources to participants in addition to soliciting system combination entries for each of the language pairs we treated system combination as a way of doing multisource translation following schroeder et al for the multisource system combination task we provided all 46 primary system submissions from any language into english along with an additional 32 secondary systemstable 2 lists the six participants in the system combination taskas with past workshops we placed greater emphasis on the human evaluation than on the automatic evaluation metric scoresit is our contention that automatic measures are an imperfect substitute for human assessment of translation qualitytherefore we define the manual evaluation to be primary and use the human judgments to validate automatic metricsmanual evaluation is time consuming and it requires a large effort to conduct it on the scale of our workshopwe distributed the workload across a number of people including sharedtask participants interested volunteers and a small number of paid annotatorsmore than 160 people participated in the manual evaluation with 100 people putting in more than an hours worth of effort and 30 putting in more than four hoursa collective total of 479 hours of labor was investedwe asked people to evaluate the systems output in two different ways the total number of judgments collected for the different modes of annotation is given in table 3in all cases the output of the various translation outputs were judged on equal footing the output of system combinations was judged alongside that of the individual system and the constrained and unconstrained systems were judged togetherranking translations relative to each other is a reasonably intuitive taskwe therefore kept the instructions simple rank translations from best to worst relative to the other 
choices.

In our manual evaluation, annotators were shown at most five translations at a time. For most language pairs there were more than 5 system submissions. We did not attempt to get a complete ordering over the systems, and instead relied on random selection and a reasonably large sample size to make the comparisons fair. Relative ranking is our official evaluation metric: individual systems and system combinations are ranked based on how frequently they were judged to be better than or equal to any other system. The results of this are reported in Section 4. Appendix A provides detailed tables that contain pairwise comparisons between systems.

We experimented with a new type of evaluation this year, where we asked judges to edit the output of MT systems. We did not show judges the reference translation, which makes our edit-based evaluation different from the human-targeted translation error rate (HTER) measure used in the DARPA GALE program. Rather than asking people to make the minimum number of changes to the MT output in order to capture the same meaning as the reference, we asked them to edit the translation to be as fluent as possible, without seeing the reference. Our hope was that this would reflect people's understanding of the output. The instructions that we gave our judges were the following. Each translated sentence was shown in isolation, without any additional context; a screenshot is shown in Figure 2. Since we wanted to prevent judges from seeing the reference before editing the translations, we split the test set between the sentences used in the ranking task and the editing task. Moreover, annotators edited only a single system's output for one source sentence, to ensure that their understanding of it would not be influenced by another system's output.

Halfway through the manual evaluation period, we stopped collecting edited translations and instead asked annotators to do the following: indicate whether the edited translations represent fully fluent and meaning-equivalent alternatives to the reference sentence (the reference is shown with context; the actual sentence is bold). In addition to edited translations, unedited items that were either marked as acceptable or as incomprehensible were also shown. Judges gave a simple yes/no indication to each item; a screenshot is shown in Figure 3.

In order to measure intra-annotator agreement, 10% of the items were repeated and evaluated twice by each judge. In order to measure inter-annotator agreement, 40% of the items were randomly drawn from a common pool that was shared across all annotators, so that we would have items that were judged by multiple annotators. For the two types of manual evaluation, we measured pairwise agreement among annotators using the kappa coefficient (κ), which is defined as

κ = (P(A) − P(E)) / (1 − P(E))

where P(A) is the proportion of times that the annotators agree, and P(E) is the proportion of time that they would agree by chance. For inter-annotator agreement, we calculated P(A) for the yes/no judgments by examining all items that were annotated by two or more annotators, and calculating the proportion of time they assigned identical scores to the same items. For the ranking tasks, we calculated P(A) by examining all pairs of systems which had been judged by two or more judges, and calculated the proportion of time that they agreed that A > B, A = B, or A < B. Intra-annotator agreement was computed similarly, but we gathered items that were annotated on multiple occasions by a single annotator.

Table 4 gives κ values for inter-annotator and intra-annotator agreement. These give an indication of how often different judges agree, and how often
single judges are consistent for repeated judgments respectivelythe interpretation of kappa varies but according to landis and koch 0 2 is slight 2 4 is fair 4 6 is moderate 6 8 is substantial and the rest almost perfectbased on these interpretations the agreement for yesno judgments is moderate for interannotator agreement and substantial for intraannotator agreement but the interannotator agreement for sentence level ranking is only fairwe analyzed two possible strategies for improving interannotator agreement on the ranking task first we tried discarding initial judgments to give tors initial judgments up to the first 50 items tors initial judgments up to the first 50 items the lowest agreement disregarding up to 40 annotators tators annotators a chance to learn to how to perform the tasksecond we tried disregarding annotators who have very low agreement with others by throwing away judgments for the annotators with the lowest judgmentsfigures 4 and 5 show how the k values improve for intra and interannotator agreement under these two strategies and what percentage of the judgments are retained as more annotators are removed or as the initial learning period is made longerit seems that the strategy of removing the worst annotators is the best in terms of improving interannotator k while retaining most of the judgmentsif we remove the 33 judges with the worst agreement we increase the interannotator k from fair to moderate and still retain 60 of the datafor the results presented in the rest of the paper we retain all judgmentswe used the results of the manual evaluation to analyze the translation quality of the different systems that were submitted to the workshopin our analysis we aimed to address the following questions table 6 shows best individual systemswe define the best systems as those which had no other system that was statistically significantly better than them under the sign test at p g 014 multiple systems are listed for many language pairs because it was not possible to draw a statistically significant difference between the systemscommercial translation software did well in each of the language pairsresearch systems that utilized 4in one case this definition meant that the system that was ranked the highest overall was not considered to be one of the best systemsfor germanenglish translation rbmt5 was ranked highest overall but was statistically significantly worse than rbmt2 only the provided data did as well as commercial vendors in half of the language pairsthe table also lists the best systems among those which used only the provided materialsto determine this decision we excluded unconstrained systems which employed significant external resourcesspecifically we ruled out all of the commercial systems since google has access to significantly greater data sources for its statistical system and since the commercial rbmt systems utilize knowledge sources not available to other workshop participantsthe remaining systems were research systems that employ statistical modelswe were able to draw distinctions between half of these for each of the language pairsthere are some borderline cases for instance limsi only used additional monolingual training resources and liumsystran used additional translation dictionaries as well as additional monolingual resourcestable 5 summarizes the performance of the system combination entries by listing the best ranked combinations and by indicating whether they have a statistically significant difference with the best individual systemsin 
general system combinations performed as well as the best individual systems but not statistically significantly better than themmoreover it was hard to draw a distinction between the different system combination strategies themselvesthere are a number of possibilities as to why we failed to find significant differences the number of judgments that we collected were not sufficient to find a differencealthough we collected several thousand judgments for each language pair most pairs of systems were judged together fewer than 100 times generally difficult to draw a statistically significant differences between the two groups and between the combinations themselves there is significant overlap in the strategies that they employimproved system combination warrants further investigationwe would suggest collecting additional judgments and doing oracle experiments where the contributions of individual systems are weighted according to human judgments of their qualityunderstandability our hope is that judging the acceptability of edited output as discussed in section 3 gives some indication of how often a systems output was understandablefigure 6 gives the percentage of times that each systems edited output was judged to be acceptable the edited output of the best performing systems under this evaluation model were deemed acceptable around 50 of the time for frenchenglish englishfrench englishspanish germanenglish and englishgermanfor spanishenglish the edited output of the best system was acceptable around 40 of the time for englishczech it was 30 and for czechenglish and hungarianenglish it was around 20this style of manual evaluation is experimental and should not be taken to be authoritativesome caveats about this measure correlate with the more established method of ranking translations relative to each other for all the language pairs5 please also note that the number of corrected translations per system are very low for some language pairs as low as 23 corrected sentences per system for the language pair englishfrenchc indicates constrained condition meaning only using the supplied training data and possibly standard monolingual linguistic tools indicates a win in the category meaning that no other system is statistically significantly better at plevelg01 in pairwise comparison indicates a constrained win no other constrained system is statistically betterfor all pairwise comparisons between systems please check the appendixgiven these low numbers the numbers presented in figure 6 should not be read as comparisons between systems but rather viewed as indicating the state of machine translation for different language pairsin addition to allowing us to analyze the translation quality of different systems the data gathered during the manual evaluation is useful for validating the automatic evaluation metricslast year nist began running a similar metrics for machine translation challenge and presented their findings at a workshop at amta in this years shared task we evaluated a number of different automatic metrics for nonidentical itemsto find a maximum weight matching that matches each system item to at most one reference item the items are then modeled as nodes in a bipartite graph wcd6p4er a measure based on cder with wordbased substitution costsleusch and ney also submitted two contrastive metrics bleusp4114 a modified version of bleus with tuned ngram weights and bleusp with constant weights wcd6p4er is an error measure and bleusp is a quality score metricsthe set of metrics used include 
ROUGE, METEOR, and measures of overlap between constituent parses, dependency parses, semantic roles, and discourse representations; the ULC metric had the strongest correlation with human judgments in WMT08. wpF and wpBleu: these metrics are based on words and part-of-speech sequences. wpF is an n-gram-based F-measure which takes into account both word n-grams and part-of-speech n-grams; wpBleu is a combination of the normal BLEU score and a part-of-speech-based BLEU score.

We measured the correlation of the automatic metrics with the human judgments of translation quality at the system level using Spearman's rank correlation coefficient ρ. We converted the raw scores assigned to each system into ranks. We assigned a human ranking to the systems based on the percent of time that their translations were judged to be better than or equal to the translations of any other system in the manual evaluation. When there are no ties, ρ can be calculated using the simplified equation

ρ = 1 − 6 Σ d_i² / (n (n² − 1))

where d_i is the difference between the ranks for system i, and n is the number of systems. The possible values of ρ range between −1 and 1; thus an automatic evaluation metric with a higher absolute value for ρ is making predictions that are more similar to the human judgments than an automatic evaluation metric with a lower absolute ρ.

Because the sentence-level judgments collected in the manual evaluation are relative judgments rather than absolute judgments, it is not possible for us to measure correlation at the sentence level in the same way that previous work has done. Rather than calculating a correlation coefficient at the sentence level, we instead ascertained how consistent the automatic metrics were with the human judgments. The way that we calculated consistency was the following: for every pairwise comparison of two systems on a single sentence by a person, we counted the automatic metric as being consistent if its relative scores agreed with the human ranking. We divided this by the total number of pairwise comparisons to get a percentage. Because the systems generally assign real numbers as scores, we excluded pairs that the human annotators ranked as ties.

Table 7 shows the correlation of automatic metrics when they rank systems that are translating into English. Note that TERp, TER and wcd6p4er are error metrics, so a negative correlation is better for them. The strength of correlation varied for the different language pairs. The automatic metrics were able to rank the French-English systems reasonably well, with correlation coefficients in the range of 0.8 and 0.9. In comparison, metrics performed worse for Hungarian-English, where half of the systems had negative correlation. The ULC metric once again had the strongest correlation with human judgments of translation quality. This was followed closely by MaxSim and RTE, with METEOR and TERp doing respectably well in 4th and 5th place. Notably, BLEU and its variants were the worst performing metrics in this translation direction.

Table 8 shows correlation for metrics which operated on languages other than English. Most of the best performing metrics that operate on English do not work for foreign languages, because they perform some linguistic analysis or rely on a resource like WordNet. For translation into foreign languages, TERp was the best metric overall. The wpBleu and wpF metrics also did extremely well, performing the best in the language pairs that they were applied to; wpBleu and wpF were not applied to Czech because the authors of the metric did not have a Czech tagger. English-German proved to be the most problematic language pair to automatically
evaluate with all of the metrics having a negative correlation except wpbleu and tertable 9 gives detailed results for how well variations on a number of automatic metrics do for the task of ranking five englishczech systems6 these systems were submitted by kos and bojar and they investigate the effects of using prague dependency treebank annotations during automatic evaluationthey linearizing the czech trees and evaluated either the lemmatized forms of the czech read off the trees or the tectogrammatical form which retained only lemmatized content words the table also demonstrates sempos meteor and gtm perform better on czech than many other metricstables 10 and 11 show the percent of times that the metrics scores were consistent with human rankings of every pair of translated sentences7 since we eliminated sentence pairs that were judged to be equal the random baseline for this task is 50many metrics failed to reach the baseline this indicates that sentencelevel evaluation of machine translation quality is very difficultrte and ulc again do the best overall for the intoenglish directionthey are followed closely by wpf and wcd6p4er which considerably improve their performance over their systemlevel correlationswe tried a variant on measuring sentencelevel consistencyinstead of using the scores assigned to each individual sentence we used the systemlevel score and applied it to every sentence that was produced by that systemthese can be thought of as a metrics prior expectation about how a system should preform based on their performance on the whole data settables 12 and 13 show that using the systemlevel scores in place of the sentencelevel scores results in considerably higher consistency with human judgmentsthis suggests an interesting line of research for improving sentencelevel predictions by using the performance on a larger data set as a prioras in previous editions of this workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from european languages into english rics when their systemlevel ranks are treated as sentencelevel scoresoracle shows the consistency of using the systemlevel human ranks that are given in table 6 rics when their systemlevel ranks are treated as sentencelevel scoresoracle shows the consistency of using the systemlevel human ranks that are given in table 6 and vice versathe number of participants remained stable compared to last years wmt workshop with 22 groups from 20 institutions participating in wmt09this years evaluation also included 7 commercial rulebased mt systems and googles online statistical machine translation systemcompared to previous years we have simplified the evaluation conditions by removing the indomain vs outofdomain distinction focusing on news translations onlythe main reason for this was eliminating the advantage statistical systems have with respect to test data that are from the same domain as the training dataanalogously to previous years the main focus of comparing the quality of different approaches is on manual evaluationhere also we reduced the number of dimensions with respect to which the different systems are compared with sentencelevel ranking as the primary type of manual evaluationin addition to the direct quality judgments we also evaluated translation quality by having people edit the output of systems and have assessors judge the correctness of the edited outputthe degree to which users were able to edit the translations served as a measure of the 
overall comprehensibility of the translationalthough the interannotator agreement in the sentenceranking evaluation is only fair agreement can be improved by removing the first judgments of each assessor focusing on the judgments that were made once the assessors are more familiar with the taskinterannotator agreement with respect to correctness judgments of the edited translations were higher which is probably due to the simplified evaluation criterion interannotator agreement for both conditions can be increased further by removing the judges with the worst agreementintraannotator agreement on the other hand was considerably higher ranging between moderate and substantialin addition to the manual evaluation criteria we applied a large number of automated metrics to see how they correlate with the human judgmentsthere is considerably variation between the different metrics and the language pairs under considerationas in wmt08 the ulc metric had the highest overall correlation with human judgments when translating into english with maxsim and rte following closely behindterp and wpbleu were best when translating into other languagesautomatically predicting human judgments at the sentencelevel proved to be quite challenging with many of the systems performing around chancewe performed an analysis that showed that if metrics systemlevel scores are used in place of their scores for individual sentences that they do quite a lot betterthis suggests that prior probabilities ought to be integrated into sentencelevel scoringall data sets generated by this workshop including the human judgments system translations and automatic scores are publicly available for other researchers to analyze8this work was supported in parts by the euromatrix project funded by the european commission the gale program of the us defense advanced research projects agency contract nohr001106c0022 and the us national science foundation under grant iis0713448we are grateful to holger schwenk and preslav nakov for pointing out the potential bias in our method for ranking systems when selfjudgments are excludedwe analyzed the results and found that this did not holdwe would like to thank maja popovic for sharing thoughts about how to improve the manual evaluationthanks to cam fordyce for helping out with the manual evaluation again this yearan extremely big thanks to sebastian pado for helping us work through the logic of segmentlevel scoring of automatic evaluation metric
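The system-level correlation procedure described above (convert the metric scores and the human "better than or equal to" percentages into ranks, then apply the simplified Spearman formula rho = 1 - 6*sum(d_i^2)/(n*(n^2-1)) when there are no ties) can be sketched as follows. This is a minimal illustration, not the workshop's scoring scripts; the system names and scores are invented, and ties are assumed away as in the text.

```python
def to_ranks(scores):
    # higher score = better = rank 1; assumes no ties, as in the simplified formula
    order = sorted(scores, key=lambda s: -scores[s])
    return {system: rank for rank, system in enumerate(order, start=1)}

def spearman_rho(metric_scores, human_scores):
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)), d_i = rank difference for system i
    m_ranks, h_ranks = to_ranks(metric_scores), to_ranks(human_scores)
    n = len(metric_scores)
    d2 = sum((m_ranks[s] - h_ranks[s]) ** 2 for s in metric_scores)
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

# hypothetical data: metric scores vs. the percent of time each system's output
# was judged better than or equal to any other system's in the manual evaluation
metric = {"sysA": 0.31, "sysB": 0.27, "sysC": 0.35}
human = {"sysA": 0.58, "sysB": 0.49, "sysC": 0.63}
print(spearman_rho(metric, human))  # 1.0 here, since the two rankings agree
```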
W09-0401
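Before the paper's summary, here is a sketch of the segment-level consistency calculation from the findings above: for every human pairwise comparison (ties excluded), count the metric as consistent if its segment scores order the two systems the same way; the variant at the end replaces segment scores with system-level scores, as in the experiment that treats a metric's system-level score as a prior for every sentence. The data structures and names are mine, not the workshop's, and the toy numbers are invented.

```python
# judgments: (sentence_id, better_system, worse_system), ties already removed
# seg_scores[system][sentence_id]: the metric's segment score (higher = better;
# for an error metric such as TER, flip the comparisons so lower is better)
def consistency(judgments, seg_scores):
    hits = sum(seg_scores[b][s] > seg_scores[w][s] for s, b, w in judgments)
    return hits / len(judgments)

# variant: use the metric's system-level score for every sentence of that system
def consistency_with_system_prior(judgments, sys_scores):
    hits = sum(sys_scores[b] > sys_scores[w] for _s, b, w in judgments)
    return hits / len(judgments)

judgments = [(1, "sysA", "sysB"), (2, "sysB", "sysA"), (3, "sysA", "sysB")]
seg_scores = {"sysA": {1: 0.4, 2: 0.2, 3: 0.5}, "sysB": {1: 0.3, 2: 0.6, 3: 0.6}}
print(consistency(judgments, seg_scores))                                     # 2/3
print(consistency_with_system_prior(judgments, {"sysA": 0.37, "sysB": 0.5}))  # 1/3
```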
findings of the 2009 workshop on statistical machine translationthis paper presents the results of the wmt09 shared tasks which included a translation task a system combination task and an evaluation taskwe conducted a largescale manual evaluation of 87 machine translation systems and 22 system combination entrieswe used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for more than 20 metricswe present a new evaluation technique whereby system output is edited and judged for correctnessour fren 109 corpus aggregates huge numbers of parallel french english sentences from the webwe show that the performance of corpusbased statistical machine translation has come up to the traditional rulebased method
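The inter- and intra-annotator agreement levels quoted above (fair, moderate, substantial) are the conventional bands of the kappa coefficient, K = (P(A) - P(E)) / (1 - P(E)). Below is a minimal sketch under the assumption that judgments are pairwise ranking labels with three outcomes, so chance agreement P(E) is taken as 1/3; that assumption and the toy labels are mine, not details given in the excerpt.

```python
def kappa(label_pairs, p_e=1.0 / 3.0):
    # label_pairs: (label from annotator 1, label from annotator 2) for each item;
    # labels are one of ">", "=", "<" for a pairwise ranking judgment
    p_a = sum(a == b for a, b in label_pairs) / len(label_pairs)
    return (p_a - p_e) / (1.0 - p_e)

pairs = [(">", ">"), ("=", ">"), ("<", "<"), (">", "=")]
print(kappa(pairs))  # 0.25: in the 0.21-0.40 band conventionally read as "fair"
```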
joshua an open source toolkit for parsingbased machine translation describe an open source toolkit for statistical machine translation joshua implements all of the algorithms required for synchronous context grammars chartparsing gram language model integration beamcubepruning and extraction the toolkit also implements suffixarray grammar extraction and minimum error rate training it uses parallel and distributed computing techniques for scalability we demonstrate that the toolkit achieves state of the art translation performance on the wmt09 frenchenglish translation task large scale parsingbased statistical machine translation quirk et al galley et al and liu et al has made remarkable progress in the last few yearshowever most of the systems mentioned above employ tailormade dedicated software that is not open sourcethis results in a high barrier to entry for other researchers and makes experiments difficult to duplicate and comparein this paper we describe joshua a generalpurpose open source toolkit for parsingbased machine translation serving the same role as moses does for regular phrasebased machine translationour toolkit is written in java and implements all the essential algorithms described in chiang chartparsing ngram language model integration beam and cubepruning and kbest extractionthe toolkit also implements suffixarray grammar extraction and minimum error rate training additionally parallel and distributed computing techniques are exploited to make it scalable we have also made great effort to ensure that our toolkit is easy to use and to extendthe toolkit has been used to translate roughly a million sentences in a parallel corpus for largescale discriminative training experiments we hope the release of the toolkit will greatly contribute the progress of the syntaxbased machine translation researchwhen designing our toolkit we applied general principles of software engineering to achieve three major goals extensibility endtoend coherence and scalabilityextensibility the joshua code is organized into separate packages for each major aspect of functionalityin this way it is clear which files contribute to a given functionality and researchers can focus on a single package without worrying about the rest of the systemmoreover to minimize the problems of unintended interactions and unseen dependencies which is common hinderance to extensibility in large projects all extensible components are defined by java interfaceswhere there is a clear point of departure for research a basic implementation of each interface is provided as an abstract class to minimize the work necessary for new extensionsendtoend cohesion there are many components to a machine translation pipelineone of the great difficulties with current mt pipelines is that these diverse components are often designed by separate groups and have different file format and interaction requirementsthis leads to a large investment in scripts to convert formats and connect the different components and often leads to untenable and nonportable projects as well as hindering repeatability of experimentsto combat these issues the joshua toolkit integrates most critical components of the machine translation pipelinemoreover each component can be treated as a standalone tool and does not rely on the rest of the toolkit we providescalability our third design goal was to ensure that the decoder is scalable to large models and data setsthe parsing and pruning algorithms are carefully implemented with dynamic programming strategies and 
efficient data structures are used to minimize overheadother techniques contributing to scalability includes suffixarray grammar extraction parallel and distributed decoding and bloom filter language modelsbelow we give a short description about the main functions implemented in our joshua toolkitrather than inducing a grammar from the full parallel training data we made use of a method proposed by kishore papineni to select the subset of the training data consisting of sentences useful for inducing a grammar to translate a particular test setthis method works as follows for the development and test sets that will be translated every ngram is gathered into a map w and associated with an initial count of zeroproceeding in order through the training data for each sentence pair whose sourcetotarget length ratio is within one standard deviation of the average if any ngram found in the source sentence is also found in w with a count of less than k the sentence is selectedwhen a sentence is selected the count of every ngram in w that is found in the source sentence is incremented by the number of its occurrences in the source sentencefor our submission we used k 20 which resulted in 15 million sentence pairs being selected for use as training datathere were 30037600 english words and 30083927 french words in the subsampled training corpushierarchical phrasebased translation requires a translation grammar extracted from a parallel corpus where grammar rules include associated feature valuesin real translation tasks the grammars extracted from large training corpora are often far too large to fit into available memoryin such tasks feature calculation is also very expensive in terms of time required huge sets of extracted rules must be sorted in two directions for relative frequency calculation of such features as the translation probability p and reverse translation probability p since the extraction steps must be rerun if any change is made to the input training data the time required can be a major hindrance to researchers especially those investigating the effects of tokenization or word segmentationto alleviate these issues we extract only a subset of all available rulesspecifically we follow callisonburch et al and use a source language suffix array to extract only those rules which will actually be used in translating a particular set of test sentencesthis results in a vastly smaller rule set than techniques which extract all rules from the training setthe current code requires suffix array rule extraction to be run as a preprocessing step to extract the rules needed to translate a particular test sethowever we are currently extending the decoder to directly access the suffix arraythis will allow the decoder at runtime to efficiently extract exactly those rules needed to translate a particular sentence without the need for a rule extraction preprocessing stepgrammar formalism our decoder assumes a probabilistic synchronous contextfree grammar currently it only handles scfgs of the kind extracted by heiro but is easily extensible to more general scfgs and closely related formalisms like synchronous tree substitution grammars chart parsing given a source sentence to decode the decoder generates a onebest or kbest translations using a cky algorithmspecifically the decoding algorithm maintains a chart which contains an array of cellseach cell in turn maintains a list of proven itemsthe parsing process starts with the axioms and proceeds by applying the inference rules repeatedly to prove new 
items until proving a goal itemwhenever the parser proves a new item it adds the item to the appropriate chart cellthe item also maintains backpointers to antecedent items which are used for kbest extractionpruning severe pruning is needed in order to make the decoding computationally feasible for scfgs with large targetlanguage vocabulariesin our decoder we incorporate two pruning techniques beam and cube pruning hypergraphs and kbest extraction for each sourcelanguage sentence the chartparsing algorithm produces a hypergraph which represents an exponential set of likely derivation hypothesesusing the kbest extraction algorithm we extract the k most likely derivations from the hypergraphparallel and distributed decoding we also implement parallel decoding and a distributed language model by exploiting multicore and multiprocessor architectures and distributed computing techniquesmore details on these two features are provided by li and khudanpur in addition to the distributed lm mentioned above we implement three local ngram language modelsspecifically we first provide a straightforward implementation of the ngram scoring function in javathis java implementation is able to read the standard arpa backoff ngram models and thus the decoder can be used independently from the srilm toolkit3 we also provide a native code bridge that allows the decoder to use the srilm toolkit to read and score ngramsthis native implementation is more scalable than the basic java lm implementationwe have also implemented a bloom filter lm in joshua following talbot and osborne johsuas mert module optimizes parameter weights so as to maximize performance on a development set as measuered by an automatic evaluation metric such as bleuthe optimization consists of a series of lineoptimizations along the dimensions corresponding to the parametersthe search across a dimension uses the efficient method of och each iteration of our mert implementation consists of multiple weight updates each reflecting a greedy selection of the dimension giving the most gaineach iteration also optimizes several random intermediate initial points in addition to the one surviving from the previous iteration as an approximation to performing multiple random restartsmore details on the mert method and the implementation can be found in zaidan 4we assembled a very large frenchenglish training corpus by conducting a web crawl that targted bilingual web sites from the canadian government the european union and various international organizations like the amnesty international and the olympic committeethe crawl gathered approximately 40 million files consisting of over 1tb of datawe converted pdf doc html asp php etc files into text and preserved the directory structure of the web crawlwe wrote set of simple heuristics to transform french urls onto english urls and considered matching documents to be translations of each otherthis yielded 2 million french documents paired with their english equivalentswe split the sentences and paragraphs in these documents performed sentencealigned them using software that ibm model 1 probabilities into account we filtered and deduplcated the resulting parallel corpusafter discarding 630 thousand sentence pairs which had more than 100 words our final corpus had 219 million sentence pairs with 587867024 english words and 714137609 french wordswe distributed the corpus to the other wmt09 participants to use in addition to the europarl v4 frenchenglish parallel corpus which consists of approximately 14 million 
sentence pairs with 39 million english words and 44 million french wordsour translation model was trained on these corpora using the subsampling descried in section 21for language model training we used the monolingual news and blog data that was assembled by the university of edinburgh and distributed as part of wmt09this data consisted of 212 million english sentences with half a billion wordswe used srilm to train a 5gram language model using a vocabulary containing the 500000 most frequent words in this corpusnote that we did not use the english side of the parallel corpus as language model training datato tune the system parameters we used news test set from wmt08 which consists of 2051 sentence pairs with 43 thousand english words and 46 thousand french wordsthis is indomain data that was gathered from the same news sources as the wmt09 test setthe translation scores for four different systems are reported in table 15 baseline in this system we use the giza toolkit a suffixarray architecture the srilm toolkit and minimum error rate training to obtain wordalignments a translation model language models and the optimal weights for combining these models respectivelyminimum bayes risk rescoring in this system we reranked the nbest output of our baseline system using minimum bayes risk we rescore the top 300 translations to minimize expected loss under the bleu metricdeterministic annealing in this system instead of using the regular mert whose training objective is to minimize the onebest error we use the deterministic annealing training procedure described in smith and eisner whose objective is to minimize the expected error variational decoding statistical models in machine translation exhibit spurious ambiguitythat is the probability of an output string is split among many distinct derivations in principle the goodness of a string is measured by the total probability of its many derivationshowever finding the best string is then computationally intractabletherefore most systems use a simple viterbi approximation that measures the goodness of a string using only its most probable derivationinstead we develop a variational approximation which considers all the derivations but still allows tractable decodingmore details will be provided in li et al in this system we have used both deterministic annealing and variational decoding we have described a scalable toolkit for parsingbased machine translationit is written in java and implements all the essential algorithms described in chiang and li and khudanpur chartparsing ngram language model integration beam and cubepruning and kbest extractionthe toolkit also implements suffixarray grammar extraction and minimum error rate training additionally parallel and distributed computing techniques are exploited to make it scalablethe decoder achieves state of the art translation performancethis research was supported in part by the defense advanced research projects agencys gale program under contract nohr00110620001 and the national science foundation under grants no0713448 and 0840112the views and findings are the authors alone
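The training-data subsampling procedure described in the Joshua text above (gather every n-gram of the dev/test sets into a map W with counts of zero, walk through the training data in order, keep a sentence pair if some of its source n-grams are still seen fewer than k times in W, and update the counts by the number of occurrences) can be sketched as follows. Two details are simplified and should be read as my assumptions: the caller supplies the length-ratio bounds instead of computing mean plus or minus one standard deviation, and the maximum n-gram order of 2 is my choice, not the paper's.

```python
from collections import Counter

def ngrams(tokens, max_n=2):
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def subsample(test_sentences, training_pairs, k=20, max_n=2,
              ratio_lo=0.0, ratio_hi=float("inf")):
    # 1. gather every n-gram of the sentences to be translated, count = 0
    w = Counter()
    for sent in test_sentences:
        for g in ngrams(sent.split(), max_n):
            w[g] = 0
    selected = []
    # 2. proceed in order through the training data
    for src, tgt in training_pairs:
        src_toks, tgt_toks = src.split(), tgt.split()
        ratio = len(src_toks) / max(len(tgt_toks), 1)
        if not (ratio_lo <= ratio <= ratio_hi):      # length-ratio filter
            continue
        src_grams = [g for g in ngrams(src_toks, max_n) if g in w]
        # 3. select the pair if some needed n-gram is still under-represented
        if any(w[g] < k for g in src_grams):
            selected.append((src, tgt))
            for g in src_grams:                       # 4. update counts per occurrence
                w[g] += 1
    return selected
```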
W09-0424
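The system description above mentions Minimum Bayes Risk re-ranking of the 300-best list to minimize expected loss under BLEU. The sketch below shows the standard MBR decision rule on an invented n-best list; the add-one smoothed sentence-level BLEU and the way raw model log-scores are turned into a posterior are crude stand-ins for whatever the actual system used.

```python
import math
from collections import Counter

def sent_bleu(hyp, ref, max_n=4):
    # add-one smoothed sentence-level BLEU with a brevity penalty; a stand-in metric
    h, r = hyp.split(), ref.split()
    if not h:
        return 0.0
    logs = []
    for n in range(1, max_n + 1):
        hg = Counter(tuple(h[i:i + n]) for i in range(len(h) - n + 1))
        rg = Counter(tuple(r[i:i + n]) for i in range(len(r) - n + 1))
        match = sum(min(c, rg[g]) for g, c in hg.items())
        logs.append(math.log((match + 1.0) / (max(sum(hg.values()), 1) + 1.0)))
    bp = min(1.0, math.exp(1.0 - len(r) / len(h)))
    return bp * math.exp(sum(logs) / max_n)

def mbr_rerank(nbest):
    # nbest: (hypothesis, model log-score); choose the hypothesis whose expected
    # BLEU gain against the other candidates, weighted by the posterior, is highest
    m = max(s for _, s in nbest)
    post = [math.exp(s - m) for _, s in nbest]
    z = sum(post)
    post = [p / z for p in post]
    def expected_gain(hyp):
        return sum(p * sent_bleu(hyp, other) for (other, _), p in zip(nbest, post))
    return max((h for h, _ in nbest), key=expected_gain)

nbest = [("the cat sat on the mat", -1.2),
         ("the cat sat on a mat", -1.3),
         ("a cat is sitting on the mat", -2.0)]
print(mbr_rerank(nbest))
```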
joshua an open source toolkit for parsingbased machine translationwe describe joshua an open source toolkit for statistical machine translationjoshua implements all of the algorithms required for synchronous context free grammars chartparsing ngram language model integration beam and cubepruning and kbest extractionthe toolkit also implements suffixarray grammar extraction and minimum error rate trainingit uses parallel and distributed computing techniques for scalabilitywe demonstrate that the toolkit achieves state of the art translation performance on the wmt09 frenchenglish translation taskwe develop the syntaxbased mt system joshua which implements dynamic programming algorithms for secondorder expectation semi rings to efficiently compute the gradients needed for optimization
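Joshua's MERT module is described above as a series of line optimizations, with each iteration greedily applying the single-dimension update that gives the most gain and also trying several random intermediate starting points. The sketch below keeps only that outer control flow: Och's exact line search is replaced by a crude grid search and the BLEU-on-n-best objective by a toy function, so it illustrates the structure rather than the actual implementation.

```python
import random

def line_search(weights, dim, dev_score, steps=21, lo=-1.0, hi=1.0):
    # crude grid search along one dimension; a stand-in for Och's exact method
    best_w, best_s = weights[dim], dev_score(weights)
    for i in range(steps):
        cand = list(weights)
        cand[dim] = lo + i * (hi - lo) / (steps - 1)
        s = dev_score(cand)
        if s > best_s:
            best_w, best_s = cand[dim], s
    return best_w, best_s

def mert_iteration(weights, dev_score):
    # greedily apply the single-dimension update that gains the most, repeatedly;
    # the real module also restarts from several random intermediate points
    weights = list(weights)
    improved = True
    while improved:
        improved = False
        gains = [(s, dim, w) for dim in range(len(weights))
                 for w, s in [line_search(weights, dim, dev_score)]]
        best_s, best_dim, best_w = max(gains)
        if best_s > dev_score(weights) + 1e-6:
            weights[best_dim] = best_w
            improved = True
    return weights

# toy objective standing in for "BLEU of the re-scored n-best list"
dev_score = lambda w: -sum((wi - 0.3) ** 2 for wi in w)
print(mert_iteration([random.uniform(-1, 1) for _ in range(4)], dev_score))
```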
domain adaptation for statistical machine translation with monolingual resources domain adaptation has recently gained interest in statistical machine translation to cope with the performance drop observed when testing conditions deviate from training conditions the basic idea is that indomain training data can be exploited to adapt all components of an already developed system previous work showed small performance gains by adapting from limited indomain bilingual data here we aim instead at significant performance gains by exploiting large but cheap monolingual indomain data either in the source or in the target language we propose to synthesize a bilingual corpus by translating the monolingual adaptation data into the counterpart language investigations were conducted on a stateoftheart phrasebased system trained on the spanishenglish part of the un corpus and adapted on the corresponding europarl data translation reordering and language models were estimated after translating indomain texts with the baseline by optimizing the interpolation of these models on a development set the bleu score was improved from 2260 to 2810 on a test set a wellknown problem of statistical machine translation is that performance quickly degrades as soon as testing conditions deviate from training conditionsthe very simple reason is that the underlying statistical models always tend to closely approximate the empirical distributions of the training data which typically consist of bilingual texts and monolingual targetlanguage textsthe former provide a means to learn likely translations pairs the latter to form correct sentences with translated wordsbesides the general difficulties of language translation which we do not consider here there are two aspects that make machine learning of this task particularly hardfirst human language has intrinsically very sparse statistics at the surface level hence gaining complete knowledge on translation phrase pairs or target language ngrams is almost impracticalsecond language is highly variable with respect to several dimensions style genre domain topics etceven apparently small differences in domain might result in significant deviations in the underlying statistical modelswhile data sparseness corroborates the need of large language samples in smt linguistic variability would indeed suggest to consider many alternative data sources as wellby rephrasing a famous saying we could say that no data is better than more and assorted datathe availability of language resources for smt has dramatically increased over the last decade at least for a subset of relevant languages and especially for what concerns monolingual corporaunfortunately the increase in quantity has not gone in parallel with an increase in assortment especially for what concerns the most valuable resource that is bilingual corporalarge parallel data available to the research community are for the moment limited to texts produced by international organizations press agencies and technical manualsthe limited availability of parallel data poses challenging questions regarding the portability of smt across different application domains and language pairs and its adaptability with respect to language variability within the same application domainthis work focused on the second issue namely the adaptation of a spanishtoenglish phrasebased smt system across two apparently close domains the united nation corpus and the european parliament corpuscrossdomain adaptation is faced under the assumption that only 
monolingual texts are available either in the source language or in the target languagethe paper is organized as followssection 2 presents previous work on the problem of adaptation in smt section 3 introduces the exemplar task and research questions we addressed section 4 describes the smt system and the adaptation techniques that were investigated section 5 presents and discusses experimental results and section 6 provides conclusionsdomain adaptation in smt has been investigated only recentlyin adaptation is limited to the target language model the background lm is combined with one estimated on documents retrieved from the web by using the input sentence as query and applying crosslanguage information retrieval techniquesrefinements of this approach are described in in information retrieval techniques are applied to retrieve sentence pairs from the training corpus that are relevant to the test sentencesboth the language and the translation models are retrained on the extracted datain two basic settings are investigated crossdomain adaptation in which a small sample of parallel indomain text is assumed and dynamic adaptation in which only the current input source text is consideredadaptation relies on mixture models estimated on the training data through some unsupervised clustering methodgiven available adaptation data mixture weights are reestimated adhoca variation of this approach was also recently proposed in in mixture models are instead employed to adapt a word alignment model to indomain parallel datain crossdomain adaptation techniques were applied on a phrasebased smt trained on the europarl task in order to translate news commentaries from french to englishin particular a small portion of indomain bilingual data was exploited to adapt the europarl language model and translation models by means of linear interpolation techniquesueffing et al proposed several elaborate adaptation methods relying on additional bilingual data synthesized from the development or test setour work is mostly related to but explores different assumptions about available adaptation data ie only monolingual indomain texts are availablethe adaptation of the translation and reordering models is performed by generating synthetic bilingual data from monolingual texts similarly to what proposed in interpolation of multiple phrase tables is applied in a more principled way than in all entries are merged into one single table corresponding feature functions are concatenated and smoothing is applied when observations are missingthe approach proposed in this paper has many similarities with the simplest technique in but it is applied to a much larger monolingual corpusfinally with respect to previous work we also investigate the behavior of the minimum error training procedure to optimize the combination of feature functions on a small indomain bilingual samplethis paper addresses the issue of adapting an already developed phrasebased translation system in order to work properly on a different domain for which almost no parallel data are available but only monolingual texts1 the main components of the smt system are the translation model which aims at porting the content from the source to the target language and the language model which aims at building fluent sentences in the target languagewhile the former is trained with bilingual data the latter just needs monolingual target textsin this work a lexicalized reordering model is also exploited to control reordering of target wordsthis model is also learnable 
from parallel dataassuming some large monolingual indomain texts are available two basic adaptation approaches are pursued here generating synthetic bilingual data with an available smt system and use this data to adapt its translation and reordering models using synthetic or provided target texts to also or only adapt its language modelthe following research questions summarize our basic interest in this workthe investigation presented in this paper was carried out with the moses toolkit a stateoftheart opensource phrasebased smt systemwe trained moses in a standard configuration including a 4feature translation model a 7feature lexicalized reordering model one lm word and phrase penaltiesthe translation and the reordering model relied on growdiagfinal symmetrized wordtoword alignments built using giza and the training script of mosesa 5gram language model was trained on the target side of the training parallel corpus using the irstlm toolkit exploiting modified kneserney smoothing and quantizing both probabilities and backoff weightsdecoding was performed applying cubepruning with a poplimit of 6000 hypothesesloglinear interpolations of feature functions were estimated with the parallel version of minimum error rate training procedure distributed with mosesthe standard procedure of moses for the estimation of the translation and reordering models from a bilingual corpus consists in three main steps recently we enhanced moses decoder to also output the wordtoword alignment between the input sentence and its translation given that they have been added to the phrase table at training timenotice that the additional information introduces an overhead in disk usage of about 70 but practically no overhead at decoding timehowever when training translation and reordering models from synthetic data generated by the decoder this feature allows to completely skip the timeexpensive step 12 we tested the efficiency of this solution for training a translation model on a synthesized corpus of about 300k spanish sentences and 88m running words extracted from the europarl corpuswith respect to the standard procedure the total training time was reduced by almost 50 phrase extraction produced 10 more phrase pairs and the final translation system showed a loss in translation performance below 1 relativegiven this outcome we decided to apply the faster procedure in all experimentsonce monolingual adaptation data is automatically translated we can use the synthetic parallel corpus to estimate new language translation and reordering modelssuch models can either replace or be combined with the original models of the smt systemthere is another simple option which is to concatenate the synthetic parallel data with the original training data and rebuild the systemwe did not investigate this approach because it does not allow to properly balance the contribution of different data sources and also showed to underperform in preliminary workconcerning the combination of models in the following we explain how moses was extended to manage multiple translation models and multiple reordering models in moses a tm is provided as a phrase table which is a set s of phrase pairs associated with a given number of features values hin our configuration 5 features for the tm are taken into accountin the first phase of the decoding process moses generates translation options for all possible input phrases f through a lookup into s it simply extracts alternative phrase pairs for a specific f and optionally applies pruning to limit 
the number of such pairsin the second phase of decoding it creates translation hypotheses of the full input sentence by combining in all possible ways the prefetched translation optionsin this phase the hypotheses are scored according to all features functions ranked and possibly prunedwhen more tms sj are available moses can behave in two different ways in prefetching the translation optionsit searches a given f in all sets and keeps a phrase pair if it belongs to either i their intersection or ii their unionthe former method corresponds to building one new tm si whose set is the intersection of all given sets phrasebased and lexicalbased direct features are defined as follows here φ is the probability of ek given fh provided by the wordtoword lexicon computed on sjthe inverted features are defined similarlythe phrase penalty is trivially set to 1the same approach has been applied to build the union of reordering modelsin this case however the smoothing value is constant and set to 0001as concerns as the use of multiple lms moses has a very easy policy consisting of querying each of them to get the likelihood of a translation hypotheses and uses all these scores as featuresit is worth noting that the exploitation of multiple models increases the number of features of the whole system because each model adds its set of featuresfurthermore the first approach of moses for model combination shrinks the size of the phrase table while the second one enlarges itthe set of features of the new tm is the union of the features of all single tmsstraightforwardly all feature values are welldefinedthe second method corresponds to building one new tm su whose set is the union of all given sets su j e sj again the set of features of the new tm is the union of the features of all single tms but for a phrase pair belonging to su sj the feature values h are undefinedin these undefined situations moses provides a default value of 0 which is the highest available score as the feature values come from probabilistic distributions and are expressed as logarithmshenceforth a phrase pair belonging to all original sets is penalized with respect to phrase pairs belonging to few of them onlyto address this drawback we proposed a new method3 to compute a more reliable and smoothed score in the undefined case based on the ibm model 1 if e su sj for any j the in this work the background domain is given by the spanishenglish portion of the un parallel corpus4 composed by documents coming from the office of conference services at the un in new york spanning the period between 1988 and 1993the adaptation data come from the european parliament corpus as provided for the shared translation task of the 2008 workshop on statistical machine translation5 development and test sets for this task namely dev2006 and test2008 are supplied as well and belong to the european parliament domainwe use the symbol s to denote synthetic spanish dataspanishtoenglish and englishtospanish systems trained on un data were exploited to generate english and spanish synthetic portions of the original ep corpus respectivelyin this way we created two synthetic versions of the ep corpus named seep and seep respectivelyall presented translation systems were optimized on the dev2006 set with respect to the bleu score and tested on test2008table 1 reports statistics of original and synthetic parallel corpora as well of the employed development and evaluation data setsall the texts were just tokenized and mixed case was kepthence all systems were developed 
to produce casesensitive translationsthree spanishtoenglish baseline systems were trained by exploiting different parallel or monolingual corpora summarized in the first three lines in table 2for each system the table reports the perplexity and outofvocabulary percentage of their lm and its translation performance achieved on the test set in terms of bleu score nist score wer and per the distance in style genre jargon etc between the un and the ep corpora is made evident by the gap in perplexity and oov percentage between their english lms 286 vs 74 and 112 vs 015 respectivelyperformance of the system trained on the ep corpus can be taken as an upper bound for any adaptation strategy trying to exploit parts of the ep corpus while those of the first line clearly provide the corresponding lowerboundthe system in the second row can instead be consider as the lower bound when only monolingual english adaptation data are assumedthe synthesis of the seep corpus was performed with the system trained just on the un training data because we had assumed that the indomain data were only monolingual spanish and thus not useful for neither the tm rm nor target lm estimationsimilarly the system in the last row of table 2 was developed on the un corpus to translate the english part of the ep data to generate the synthetic seep corpusagain any indomain data were exploited to train this sytemof course this system cannot be compared with any other because of the different translation directionin order to compare reported performance with the stateoftheart table 2 also reports results of the best system published in the euromatrix project website6 and of the google online translation engine7 it is wellknown that tuning the smt system is fundamental to achieve good performancethe standard tuning procedure consists of a minimum error rate training which relies on the availability of a development data seton the other hand the most important assumption we make is that almost no parallel indomain data are available the tuning process and bleu score achieved on the test set by the uniform interpolation weights and by the optimal weights with different configurations of the tuning parametersin a preliminary phase we investigated different settings of the tuning process in order to understand how much development data is required to perform a reliable weight optimizationour models were trained on the seep parallel corpus and by using uniform interpolation weights the system achieved a bleu score of 2228 on the test set we assumed to dispose of either a regular indomain development set of 2000 sentences or a small portion of it of just 200 sentencesmoreover we tried to employ either 1000best or 200best translation candidates during the mert processfrom a theoretical point of view computational effort of the tuning process is proportional to the square of the number of translation alternatives generated at each iteration times the number of iterations until convergencefigure 1 reports incremental tuning time and translation performance on the test set at each iterationnotice that the four tuning configurations are ranked in order of complexitytable 3 summaries the final performance of each tuning process after convergence was reachednotice that decoding time is not included in this plot as moses allows to perform this step in parallel on a computer clusterhence to our view the real bottleneck of the tuning process is actually related to the strictly serial part of the mert implementation of mosesas already observed 
in previous literature first iterations of the tuning process produces very bad weights this exceptional performance drop is attributed to an overfitting on the candidate repositoryconfigurations exploiting the small development set show a slower and more unstable convergence however their final performance in table 3 result only slightly lower than that obtained with the standard dev sets due to the larger number of iterations they needed both configurations are indeed more time consuming than the intermediate configuration which seems the best onein conclusion we found that the size of the nbest list has essentially no effect on the quality of the final weights but it impacts significantly on the computational timemoreover using the regular development set with few translation alternatives ends up to be the most efficient configuration in terms of computational effort robustness and performanceour analysis suggests that it is important to dispose of a sufficiently large development set although reasonably good weights can be obtained even if such data are very fewa set of experiments was devoted to the adaptation of the lm onlywe trained three different lms on increasing portions of the ep and we employed them either alone or in combination with the background lm trained on the un corpuspercentage of monolingual english adaptation data systemsthe absolute gain with respect to the baseline is fairly high even with the smallest amount of adaptation data the benefit of using the background data together with indomain data is very small and rapidly vanishes as the amount of such data increasesif english synthetic texts are employed to adapt the lm component the increase in performance is significantly lower but still remarkable by employing all the available data the gain in bleu score was of 4 relative that is from 2260 to 2352 opment set as usualresults of these experiments are reported in figure 3results suggest that regardless of the used bilingual corpora the indomain tms and rms work better alone than combined with the original modelswe think that this behavior can be explained by a limited disciminative power of the resulting combined modelthe background translation model could contain phrases which either do or do not fit the adaptation domainas the weights are optimized to balance the contribution of all phrases the system is not able to well separate the positive examples from the negative onesin addition to it system tuning is much more complex because the number of features increases from 14 to 26finally tms and rms estimated from synthetic data show to provide smaller but consistent contributions than the corresponding lmswhen english indomain data is provided bleu score increases from 2260 to 2810 tm and rm contribute by about 5 relative by covering the gap from 2783 to 2810when spanish indomain data is provided bleu score increases from 2260 to 2368 tm and rm contribute by about 15 relative by covering the gap from 2352 to 2368 summarizing the most important role in the domain adaptation is played by the lm nevertheless the adaptation of the tm and rm gives a small further improvement another set of experiments relates to the adaptation of the tm and the rmindomain tms and rms were estimated on three different versions of the full parallel ep corpus namely ep seep and seepindomain lms were trained on the corresponding english sideall indomain models were either used alone or combined with the baseline models according to multiplemodel paradigm explained in section 43tuning of 
the interpolation weights was performed on the standard develthis paper investigated crossdomain adaptation of a stateoftheart smt system by exploiting large but cheap monolingual datawe proposed to generate synthetic parallel data by translating monolingual adaptation data with a background system and to train statistical models from the synthetic corpuswe found that the largest gain is achieved when indomain data are available for the target languagea smaller performance improvement is still observed if source adaptation data are availablewe also observed that the most important role is played by the lm adaptation while the adaptation of the tm and rm gives consistent but small improvementwe also showed that a very tiny development set of only 200 parallel sentences is adequate enough to get comparable performance as a 2000sentence setfinally we described how to reduce the time for training models from a synthetic corpus generated through moses by 50 at least by exploiting wordalignment information provided during decoding
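For the phrase-table union described above, the text notes that feature values are undefined for phrase pairs missing from one of the tables, that Moses' default of 0 (the best possible log score) ends up penalizing pairs observed in every table, and that the proposed fix is a smoothed score based on IBM model 1 computed from that table's word-to-word lexicon. The sketch below uses the standard model 1 form with a toy lexicon; the paper's exact formula and constants are not given in this excerpt, so the details here are assumptions for illustration only.

```python
import math

def model1_log_score(tgt_phrase, src_phrase, lexicon):
    # log P(e_phrase | f_phrase) ~ sum_k log( (1/(J+1)) * sum_h phi(e_k | f_h) ),
    # where phi is the word lexicon and h ranges over the source words plus NULL
    src = ["<NULL>"] + src_phrase.split()
    score = 0.0
    for e in tgt_phrase.split():
        p = sum(lexicon.get((e, f), 1e-9) for f in src) / len(src)
        score += math.log(p)
    return score

def smoothed_feature(phrase_pair, table, lexicon):
    src, tgt = phrase_pair
    if phrase_pair in table:                        # observed: use the stored feature
        return table[phrase_pair]
    return model1_log_score(tgt, src, lexicon)      # unobserved: back off to model 1

# toy lexicon and table, invented for the example
lexicon = {("house", "casa"): 0.7, ("home", "casa"): 0.2, ("the", "la"): 0.6}
table = {("la casa", "the house"): math.log(0.5)}
print(smoothed_feature(("la casa", "the home"), table, lexicon))
```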
W09-0432
domain adaptation for statistical machine translation with monolingual resourcesdomain adaptation has recently gained interest in statistical machine translation to cope with the performance drop observed when testing conditions deviate from training conditionsthe basic idea is that indomain training data can be exploited to adapt all components of an already developed systemprevious work showed small performance gains by adapting from limited indomain bilingual datahere we aim instead at significant performance gains by exploiting large but cheap monolingual indomain data either in the source or in the target languagewe propose to synthesize a bilingual corpus by translating the monolingual adaptation data into the counterpart languageinvestigations were conducted on a stateoftheart phrasebased system trained on the spanishenglish part of the un corpus and adapted on the corresponding europarl datatranslation reordering and language models were estimated after translating indomain texts with the baselineby optimizing the interpolation of these models on a development set the bleu score was improved from 2260 to 2810 on a test setin order to use sourceside monolingual data we employ the transductive learning to first translate the sourceside monolingual data using the best configuration and obtain 1best translation for each sourceside sentencewe adapt an smt system with automatic translations and trained the translation and reordering models on the word alignment used by moses
fluency adequacy or hter exploring different human judgments with a tunable mt metric automatic machine translation evaluation metrics have traditionally been evaluated by the correlation of the scores they assign to mt output with human judgments of translation performance different types of human judgments such as fluency adequacy and hter measure varying aspects of mt performance that can be captured by automatic mt metrics we explore these differences through the use of a new tunable mt metric terplus which extends the translation edit rate evaluation metric with tunable parameters and the incorporation of morphology synonymy and paraphrases terplus was shown to be one of the top metrics in nists metrics matr 2008 challenge having the highest average rank in terms of pearson and spearman correlation optimizing terplus to different types of human judgments yields significantly improved correlations and meaningful changes in the weight of different types of edits demonstrating significant differences between the types of human judgments since the introduction of the bleu metric statistical mt systems have moved away from human evaluation of their performance and towards rapid evaluation using automatic metricsthese automatic metrics are themselves evaluated by their ability to generate scores for mt output that correlate well with human judgments of translation qualitynumerous methods of judging mt output by humans have been used including fluency adequacy and more recently humanmediated translation edit rate fluency measures whether a translation is fluent regardless of the correct meaning while adequacy measures whether the translation conveys the correct meaning even if the translation is not fully fluentfluency and adequacy are frequently measured together on a discrete 5 or 7 point scale with their average being used as a single score of translation qualityhter is a more complex and semiautomatic measure in which humans do not score translations directly but rather generate a new reference translation that is closer to the mt output but retains the fluency and meaning of the original referencethis new targeted reference is then used as the reference translation when scoring the mt output using translation edit rate or when used with other automatic metrics such as bleu or meteor one of the difficulties in the creation of targeted references is a further requirement that the annotator attempt to minimize the number of edits as measured by ter between the mt output and the targeted reference creating the reference that is as close as possible to the mt output while still being adequate and fluentin this way only true errors in the mt output are countedwhile hter has been shown to be more consistent and finer grained than individual human annotators of fluency and adequacy it is much more time consuming and taxing on human annotators than other types of human judgments making it difficult and expensive to usein addition because hter treats all edits equally no distinction is made between serious errors and minor edits different types of translation errors vary in importance depending on the type of human judgment being used to evaluate the translationfor example errors in tense might barely affect the adequacy of a translation but might cause the translation be scored as less fluenton the other hand deletion of content words might not lower the fluency of a translation but the adequacy would sufferin this paper we examine these differences by taking an automatic evaluation metric and 
tuning it to these these human judgments and examining the resulting differences in the parameterization of the metricto study this we introduce a new evaluation metric terplus 1 that improves over the existing translation edit rate metric incorporating morphology synonymy and paraphrases as well as tunable costs for different types of errors that allow for easy interpretation of the differences between human judgmentssection 2 summarizes the ter metric and discusses how terp improves on itcorrelation results with human judgments including independent results from the 2008 nist metrics matr evaluation where terp was consistently one of the top metrics are presented in section 3 to show the utility of terp as an evaluation metricthe generation of paraphrases as well as the effect of varying the source of paraphrases is discussed in section 4section 5 discusses the results of tuning terp to fluency adequacy and hter and how this affects the weights of various edit typesboth ter and terp are automatic evaluation metrics for machine translation that score a translation the hypothesis of a foreign language text the source against a translation of the source text that was created by a human translator called a reference translationthe set of possible correct translations is very largepossibly infinite and any single reference translation is just a single point in that spaceusually multiple reference translations typically 4 are provided to give broader sampling of the space of correct translationsautomatic mt evaluation metrics compare the hypothesis against this set of reference translations and assign a score to the similarity higher scores are given to hypotheses that are more similar to the referencesin addition to assigning a score to a hypothesis the ter metric also provides an alignment between the hypothesis and the reference enabling it to be useful beyond general translation evaluationwhile ter has been shown to correlate well with human judgments of translation quality it has several flaws including the use of only a single reference translation and the measuring of similarity only by exact word matches between the hypothesis and the referencethe handicap of using a single reference can be addressed by the construction of a lattice of reference translationssuch a technique has been used with ter to combine the output of multiple translation systems terp does not utilize this methodology2 and instead focuses on addressing the exact matching flaw of tera brief description of ter is presented in section 21 followed by a discussion of how terp differs from ter in section 22one of the first automatic metrics used to evaluate automatic machine translation systems was word error rate which is the standard evaluation metric for automatic speech recognitionwer is computed as the levenshtein distance between the words of the system output and the words of the reference translation divided by the length of the reference translationunlike speech recognition there are many correct translations for any given foreign sentencethese correct translations differ not only in their word choice but also in the order in which the words occurwer is generally seen as inadequate for evaluation for machine translation as it fails to combine knowledge from multiple reference translations and also fails to model the reordering of words and phrases in translationter addresses the latter failing of wer by allowing block movement of words called shifts within the hypothesisshifting a phrase has the same edit cost as 
inserting deleting or substituting a word regardless of the number of words being shiftedwhile a general solution to wer with block movement is npcomplete ter addresses this by using a greedy search to select the words to be shifted as well as further constraints on the words to be shiftedthese constraints are intended to simulate the way in which a human editor might choose the words to shiftfor exact details on these constraints see snover et al there are other automatic metrics that follow the general formulation as ter but address the complexity of shifting in different ways such as the cder evaluation metric when ter is used with multiple references it does not combine the referencesinstead it scores the hypothesis against each reference individuallythe reference against which the hypothesis has the fewest number of edits is deemed the closet reference and that number of edits is used as the numerator for calculating the ter scorefor the denominator ter uses the average number of words across all the referencesterplus is an extension of ter that aligns words in the hypothesis and reference not only when they are exact matches but also when the words share a stem or are synonymsin addition it uses probabilistic phrasal substitutions to align phrases in the hypothesis and referencethese phrases are generated by considering possible paraphrases of the reference wordsmatching using stems and synonyms and using paraphrases have previously been shown to be beneficial for automatic mt evaluationparaphrases have also been shown to be useful in expanding the number of references used for parameter tuning although they are not used directly in this fashion within terpwhile all edit costs in ter are constant all edit costs in terp are optimized to maximize correlation with human judgmentsthis is because while a set of constant weights might prove adequate for the purpose of measuring translation qualityas evidenced by correlation with human judgments both for ter and hterthey may not be ideal for maximizing correlationterp uses all the edit operations of ter matches insertions deletions substitutions and shiftsas well as three new edit operations stem matches synonym matches and phrase substitutionsterp identifies words in the hypothesis and reference that share the same stem using the porter stemming algorithm two words are determined to be synonyms if they share the same synonym set according to wordnet sequences of words in the reference are considered to be paraphrases of a sequence of words in the hypothesis if that phrase pair occurs in the terp phrase tablethe terp phrase table is discussed in more detail in section 4with the exception of the phrase substitutions the cost for all other edit operations is the same regardless of what the words in question arethat is once the edit cost of an operation is determined via optimization that operation costs the same no matter what words are under considerationthe cost of a phrase substitution on the other hand is a function of the probability of the paraphrase and the number of edits needed to align the two phrases according to terpin effect the probability of the paraphrase is used to determine how much to discount the alignment of the two phrasesspecifically the cost of a phrase substitution between the reference phrase p1 and the hypothesis phrase p2 is where w1 w2 w3 and w4 are the 4 free parameters of the edit cost edit is the edit cost according to terp of aligning p1 to p2 and pr is the probability of paraphrasing p1 as p2 obtained from 
the terp phrase tablethe w parameters of the phrase substitution cost may be negative while still resulting in a positive phrase substitution cost as w2 is multiplied by the log probability which is always a negative numberin practice this term will dominate the phrase substitution edit costthis edit cost for phrasal substitutions is therefore specified by four parameters w1 w2 w3 and w4only paraphrases specified in the terp phrase table are considered for phrase substitutionsin addition the cost for a phrasal substitution is limited to values greater than or equal to 0 ie the substitution cost cannot be negativein addition the shifting constraints of terp are also relaxed to allow shifting of paraphrases stems and synonymsin total terp uses 11 parameters out of which four represent the cost of phrasal substitutionsthe match cost is held fixed at 0 so that only the 10 other parameters can vary during optimizationall edit costs except for the phrasal substitution parameters are also restricted to be positivea simple hillclimbing search is used to optimize the edit costs by maximizing the correlation of human judgments with the terp scorethese correlations are measured at the sentence or segment levelalthough it was done for the experiments described in this paper optimization could also be performed to maximize document level correlation such an optimization would give decreased weight to shorter segments as compared to the segment level optimizationthe optimization of the terp edit costs and comparisons against several standard automatic evaluation metrics using human judgments of adequacy is first described in section 31we then summarize in section 32 results of the nist metrics matr workshop where terp was evaluated as one of 39 automatic metrics using many test conditions and types of human judgmentsas part of the 2008 nist metrics matr workshop a development subset of translations from eight arabictoenglish mt systems submitted to nists mteval 2006 was released that had been annotated for adequacywe divided this development set into an optimization set and a test set which we then used to optimize the edit costs of terp and compare it against other evaluation metricsterp was optimized to maximize the segment level pearson correlation with adequacy on the optimization setthe edit costs determined by this optimization are shown in table 1we can compare terp with other metrics by comparing their pearson and spearman correlations with adequacy at the segment document and system leveldocument level adequacy scores are determined by taking the length weighted average of the segment level scoressystem level scores are determined by taking the weighted average of the document level scores in the same mannerwe compare terp with bleu meteor and ter the ibm version of bleu was used in case insensitive mode with an ngramsize of 4 to calculate the bleu scorescase insensitivity was used with bleu as it was found to have much higher correlation with adequacyin addition we also examined bleu using an ngramsize of 2 instead of the default ngramsize of 4 as it often has a higher correlation with human judgmentswhen using meteor the exact matching porter stemming matching and wordnet synonym matching modules were usedter was also used in case insensitive modewe show the pearson and spearman correlation numbers of terp and the other automatic metrics on the optimization set and the test set in tables 2 and 3correlation numbers that are statistically indistinguishable from the highest correlation using a 
95 confidence interval are shown in bold and numbers that are actually not statistically significant correlations are marked with a terp has the highest pearson correlation in all conditions although not all differences are statistically significantwhen examining the spearman correlation terp has the highest correlation on the segment and system levels but performs worse than meteor on the document level spearman correlatonsterp was one of 39 automatic metrics evaluated in the 2008 nist metrics matr challengein order to evaluate the state of automatic mt evaluation nist tested metrics across a number of conditions across 8 test setsthese conditions included segment document and system level correlations with human judgments of preference fluency adequacy and hterthe test sets included translations from arabictoenglish chinesetoenglish farsitoenglish arabictofrench and englishtofrench mt systems involved in nists mteval 2008 the gale phase 2 and phrase 25 program transtac january and july 2007 and cesta run 1 and run 2 covering multiple genresthe version of terp submitted to this workshop was optimized as described in section 31the development data upon which terp was optimized was not part of the test sets evaluated in the challengedue to the wealth of testing conditions a simple overall view of the official matr08 results released by nist is difficultto facilitate this analysis we examined the average rank of each metric across all conditions where the rank was determined by their pearson and spearman correlation with human judgmentsto incorporate statistical significance we calculated the 95 confidence interval for each correlation coefficient and found the highest and lowest rank from which the correlation coefficient was statistically indistinguishable resulting in lower and upper bounds of the rank for each metric in each conditionthe average lower bound actual and upper bound ranks of the top metrics as well as bleu and ter are shown in table 4 sorted by the average upper bound pearson correlationfull descriptions of the other metrics3 the evaluation results and the test set composition are available from nist this analysis shows that terp was consistently one of the top metrics across test conditions and had the highest average rank both in terms of pearson and spearman correlationswhile this analysis is not comprehensive it does give a general idea of the performance of all metrics by synthesizing the results into a single tablethere are striking differences between the spearman and pearson correlations for other metrics in particular the cder metric had the second highest rank in spearman correlations but was the sixth ranked metric according to the pearson correlationin several cases terp was not the best metric although it performed well on averagein particular terp did significantly better than the ter metric indicating the benefit of the enhancements made to terterp uses probabilistic phrasal substitutions to align phrases in the hypothesis with phrases in the referenceit does so by looking upin a precomputed phrase tableparaphrases of phrases in the reference and using its associated edit cost as the cost of performing a match against the hypothesisthe paraphrases used in terp were extracted using the pivotbased method as described in with several additional filtering mechanisms to increase the precisionthe pivotbased method utilizes the inherent monolingual semantic knowledge from bilingual corpora we first identify englishtof phrasal correspondences then map from english to 
english by following translation units from english to f and backfor example if the two english phrases e1 and e2 both correspond to the same foreign phrase f then they may be considered to be paraphrases of each other with the following probability if there are several pivot phrases that link the two english phrases then they are all used in computing the probability the corpus used for extraction was an arabicenglish newswire bitext containing a million sentencesa few examples of the extracted paraphrase pairs that were actually used in a run of terp on the metrics matr 2008 development set are shown below a discussion of paraphrase quality is presented in section 41 followed by a brief analysis of the effect of varying the pivot corpus used by the automatic paraphrase generation upon the correlation performance of the terp metric in section 42we analyzed the utility of the paraphrase probability and found that it was not always a very reliable estimate of the degree to which the pair was semantically relatedfor example we looked at all paraphrase pairs that had probabilities greater than 09 a set that should ideally contain pairs that are paraphrastic to a large degreein our analysis we found the following five kinds of paraphrases in this set pairs only differ in the morphological form for one of the wordsas the examples show any knowledge that these pairs may provide is already available to terp via stemming given this distribution of the pivotbased paraphrases we experimented with a variant of terp that did not use the paraphrase probability at all but instead only used the actual edit distance between the two phrases to determine the final cost of a phrase substitutionthe results for this experiment are shown in the second row of table 5we can see that this variant works as well as the full version of terp that utilizes paraphrase probabilitiesthis confirms our intuition that the probability computed via the pivotmethod is not a very useful predictor of semantic equivalence for use in terpto determine the effect that the pivot language might have on the quality and utility of the extracted paraphrases in terp we used paraphrase pairsmade available by callisonburch these paraphrase pairs were extracted from europarl data using each of 10 european languages as a pivot language separately and then combining the extracted paraphrase pairscallisonburch also extracted and made available syntactically constrained paraphrase pairs from the same data that are more likely to be semantically relatedwe used both sets of paraphrases in terp as alternatives to the paraphrase pairs that we extracted from the arabic newswire bitextthe results are shown in the last four rows of table 5 and show that using a pivot language other than the one that the mt system is actually translating yields results that are almost as goodit also shows that the syntactic constraints imposed by callisonburch on the pivotbased paraphrase extraction process are useful and yield improved results over the baseline pivotmethodthe results further support our claim that the pivot paraphrase probability is not a very useful indicator of semantic relatednessto evaluate the differences between human judgment types we first align the hypothesis to the references using a fixed set of edit costs identical to the weights in table 1 and then optimize the edit costs to maximize the correlation without realigningthe separation of the edit costs used for alignment from those used for scoring allows us to remove the confusion of edit 
costs selected for alignment purposes from those selected to increase correlationfor adequacy and fluency judgments the mteval 2002 human judgement set4 was usedthis set consists of the output of ten mt systems 3 arabictoenglish systems and 7 chinesetoenglish systems consisting of a total across all systems and both language pairs of 7452 segments across 900 documentsto evaluate hter the gale 2007 hter scores were usedthis set consists of the output of 6 mt systems 3 arabictoenglish systems and 3 chinesetoenglish systems although each of the systems in question is the product of system combinationthe hter data consisted of a total across all systems and language pairs of 16267 segments across a total of 1568 documentsbecause hter annotation is especially expensive and difficult it is rarely performed and the only source to the authors knowledge of available hter annotations is on gale evaluation data for which no fluency and adequacy judgments have been made publicly availablethe edit costs learned for each of these human judgments along with the alignment edit costs are shown in table 6while all three types of human judgements differ from the alignment costs used in alignment the hter edit costs differ most significantlyunlike adequacy and fluency which have a low edit cost for insertions and a very high cost for deletions hter has a balanced cost for the two edit typesinserted words are strongly penalized against in hter as opposed to in adequacy and fluency where such errors are largely forgivenstem and synonym edits are also penalized against while these are considered equivalent to a match for both adequacy and fluencythis penalty against stem matches can be attributed to fluency requirements in hter that specifically penalize against incorrect morphologythe cost of shifts is also increased in hter strongly penalizing the movement of phrases within the hypothesis while adequacy and fluency give a much lower cost to such errorssome of the differences between hter and both fluency and adequacy can be attributed to the different systems usedthe mt systems evaluated with hter are all highly performing state of the art systems while the systems used for adequacy and fluency are older mt systemsthe differences between adequacy and fluency are smaller but there are still significant differencesin particular the cost of shifts is over twice as high for the fluency optimized system than the adequacy optimized system indicating that the movement of phrases as expected is only slightly penalized when judging meaning but can be much more harmful to the fluency of a translationfluency however favors paraphrases more strongly than the edit costs optimized for adequacythis might indicate that paraphrases are used to generate a more fluent translation although at the potential loss of meaningwe introduced a new evaluation metric terplus and showed that it is competitive with stateoftheart evaluation metrics when its predictions are correlated with human judgmentsthe inclusion of stem synonym and paraphrase edits allows terp to overcome some of the weaknesses of the ter metric and better align hypothesized translations with reference translationsthese new edit costs can then be optimized to allow better correlation with human judgmentsin addition we have examined the use of other paraphrasing techniques and shown that the paraphrase probabilities estimated by the pivotmethod may not be fully adequate for judgments of whether a paraphrase in a translation indicates a correct translationthis line of 
research holds promise as an external evaluation method of various paraphrasing methods. However promising correlation results for an evaluation metric may be, the evaluation of the final output of an MT system is only a portion of the utility of an automatic translation metric. Optimization of the parameters of an MT system is now done using automatic metrics, primarily BLEU. It is likely that some features that make an evaluation metric good for evaluating the final output of a system would make it a poor metric for use in system tuning. In particular, a metric may have difficulty distinguishing between outputs of an MT system that has been optimized for that same metric. BLEU, the metric most frequently used to optimize systems, might therefore perform poorly in evaluation tasks compared to recall-oriented metrics such as METEOR and TERp. Future research into the use of TERp and other metrics as optimization metrics is needed to better understand these metrics and the interaction with parameter optimization. Finally, we explored the difference between three types of human judgments that are often used to evaluate both MT systems and automatic metrics, by optimizing TERp to these human judgments and examining the resulting edit costs. While this can make no judgment as to the preference of one type of human judgment over another, it indicates differences between these human judgment types, and in particular the difference between HTER and adequacy and fluency. This exploration is limited by the lack of a large amount of diverse data annotated for all human judgment types, as well as the small number of edit types used by TERp. The inclusion of additional, more specific edit types could lead to a more detailed understanding of which translation phenomena and translation errors are most emphasized or ignored by which types of human judgments. This work was supported in part by BBN Technologies under the GALE program, DARPA/IPTO contract No. HR0011-06-C-0022, and in part by the Human Language Technology Center of Excellence. TERp is available on the web for download at http://www.umiacs.umd.edu/~snover/terp
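As a concrete illustration of the multi-reference scoring rule described for TER above (the closest reference supplies the numerator and the average reference length the denominator), the following sketch assumes the per-reference edit counts have already been produced by a shift-aware TER alignment, which is not shown here.

```python
def ter_multi_reference(edits_per_reference, reference_lengths):
    """Combine per-reference TER alignments into one score.

    edits_per_reference: number of edits against each reference, as produced
        by the (not shown) shift-aware TER alignment.
    reference_lengths: token length of each reference.
    The reference requiring the fewest edits is the "closest" reference and
    supplies the numerator; the denominator is the average reference length.
    """
    numerator = min(edits_per_reference)
    denominator = sum(reference_lengths) / len(reference_lengths)
    return numerator / denominator

# e.g. ter_multi_reference([4, 6, 5], [20, 22, 21]) == 4 / 21.0
```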
W09-0441
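The pivot-based paraphrase extraction used to build the TERp phrase table in the paper above is conventionally formalized as marginalizing over the shared foreign phrases. The equation itself is not reproduced in the text, so the sketch below follows the standard pivot formulation Pr(e2 | e1) = sum over f of Pr(e2 | f) * Pr(f | e1); treat it as an assumption rather than the authors' exact implementation.

```python
from collections import defaultdict

def pivot_paraphrase_probabilities(p_e_given_f, p_f_given_e):
    """Estimate Pr(e2 | e1) by pivoting through foreign phrases f.

    p_e_given_f: dict mapping f -> {e: Pr(e | f)}
    p_f_given_e: dict mapping e -> {f: Pr(f | e)}
    Every pivot phrase linking e1 and e2 contributes to the sum, mirroring
    the description of the method above.
    """
    paraphrases = defaultdict(dict)
    for e1, pivots in p_f_given_e.items():
        for f, p_f_e1 in pivots.items():
            for e2, p_e2_f in p_e_given_f.get(f, {}).items():
                if e2 == e1:
                    continue
                paraphrases[e1][e2] = paraphrases[e1].get(e2, 0.0) + p_e2_f * p_f_e1
    return paraphrases
```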
Fluency, Adequacy, or HTER? Exploring Different Human Judgments with a Tunable MT Metric. Automatic machine translation evaluation metrics have traditionally been evaluated by the correlation of the scores they assign to MT output with human judgments of translation performance. Different types of human judgments, such as fluency, adequacy, and HTER, measure varying aspects of MT performance that can be captured by automatic MT metrics. We explore these differences through the use of a new tunable MT metric, TER-Plus, which extends the Translation Edit Rate evaluation metric with tunable parameters and the incorporation of morphology, synonymy, and paraphrases. TER-Plus was shown to be one of the top metrics in NIST Metrics MATR 2008 Challenge, having the highest average rank in terms of Pearson and Spearman correlation. Optimizing TER-Plus to different types of human judgments yields significantly improved correlations and meaningful changes in the weight of different types of edits, demonstrating significant differences between the types of human judgments. We extend the TER algorithm in a similar fashion to produce a new evaluation metric, TER-Plus, which allows tuning of the edit costs in order to maximize correlation with human judgment.
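The paper above tunes TERp's edit costs with a simple hill-climbing search that maximizes segment-level Pearson correlation with adequacy. A minimal sketch of such a coordinate search is given below; `score_segments` is a hypothetical callable that rescores the segments under a given set of edit costs, and the step size, iteration cap, and sign convention (metric scores negated so that lower TERp tracks higher adequacy) are assumptions of this sketch.

```python
from scipy.stats import pearsonr

def hill_climb_edit_costs(initial_costs, human_judgments, score_segments,
                          step=0.05, max_iters=100):
    """Greedy coordinate search over edit-cost parameters.

    initial_costs: dict of edit-cost parameters (the match cost stays fixed
        at 0 and is simply not included here).
    human_judgments: segment-level adequacy scores.
    score_segments(costs) -> list of segment-level metric scores.
    Note: the paper additionally restricts most edit costs to be positive;
    that constraint is omitted in this sketch.
    """
    def correlation(costs):
        metric = score_segments(costs)
        return pearsonr([-m for m in metric], human_judgments)[0]

    best_costs, best_corr = dict(initial_costs), correlation(initial_costs)
    for _ in range(max_iters):
        improved = False
        for name in best_costs:
            for delta in (step, -step):
                trial = dict(best_costs)
                trial[name] += delta
                corr = correlation(trial)
                if corr > best_corr:
                    best_costs, best_corr, improved = trial, corr, True
        if not improved:
            break
    return best_costs, best_corr
```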
a metalearning approach to processing the scope of negation finding negation signals and their scope in text is an important subtask in information extraction in this paper we present a machine learning system that finds the scope of negation in biomedical texts the system combines several classifiers and works in two phases to investigate the robustness of the approach the system is tested on the three subcorpora of the bioscope corpus representing different text types it achieves the best results to date for this task with an error reduction of 3207 compared to current state of the art results in this paper we present a machine learning system that finds the scope of negation in biomedical textsthe system works in two phases in the first phase negation signals are identified and in the second phase the full scope of these negation signals is determinedalthough the system was developed and tested on biomedical text the same approach can also be used for text from other domainsfinding the scope of a negation signal means determining at sentence level the sequence of words in the sentence that is affected by the negationthis task is different from determining whether a word is negated or notfor a sentence like the one in example taken from the bioscope corpus the system detects that lack neither and nor are negation signals that lack has as its scope lack of cd5 expression and that the discontinuous negation signal neither nor has as its scope neither to segregation of human autosome 11 on which the cd5 gene has been mapped nor to deletion of the cd5 structural genepredicting the scope of negation is relevant for text mining and information extraction purposesas vincze et al put it extracted information that falls in the scope of negation signals cannot be presented as factual informationit should be discarded or presented separatelyszarvas et al report that 1345 of the sentences in the abstracts section of the bioscope corpus and 1270 of the sentences in the full papers section contain negationsa system that does not deal with negation would treat the facts in these cases incorrectly as positivesadditionally information about the scope of negation is useful for entailment recognition purposesthe approach to the treatment of negation in nlp presented in this paper was introduced in morante et al this system achieved a 5005 percentage of correct scopes but had a number of important shortcomingsthe system presented here uses a different architecture and different classification task definitions it can deal with multiword negation signals and it is tested on three subcorpora of the bioscope corpusit achieves an error reduction of 3207 compared to the previous systemthe paper is organised as followsin section 2 we summarise related workin section 3 we describe the corpus on which the system has been developedin section 4 we introduce the task to be performed by the system which is described in section 5results are presented and discussed in section 6finally section 7 puts forward some conclusionsnegation has been a neglected area in opendomain natural language processingmost research has been performed in the biomedical domain and has focused on detecting whether a medical term is negated or not whereas in our approach we focus on detecting the full scope of negation signalschapman et al developed negex a regular expression based algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absentthe reported results are 9451 precision and 7784 
recallmutalik et al developed negfinder a rulebased system that recognises negated patterns in medical documentsit consists of two tools a lexical scanner that uses regular expressions to generate a finite state machine and a parserthe reported results are 9570 recall and 9180 precisionsanchezgraillet and poesio present an analysis of negated interactions in 50 biomedical articles and a heuristicsbased system that extracts such informationthe preliminary results reported range from 5432 fscore to 7668 depending on the method appliedelkin et al describe a rulebased system that assigns to concepts a level of certainty as part of the generation of a dyadic parse tree in two phases first a preprocessor breaks each sentence into text and operatorsthen a rule based system is used to decide if a concept has been positively negatively or uncertainly assertedthe system achieves 9720 recall and 9880 precisionthe systems mentioned above are essentially based on lexical informationhuang and lowe propose a classification scheme of negations based on syntactic categories and patterns in order to locate negated concepts regardless of their distance from the negation signaltheir hybrid system that combines regular expression matching with grammatical parsing achieves 9260 recall and 9980 precisionadditionally boytcheva et al incorporate the treatment of negation in a system mehr that extracts from electronic health records all the information required to generate automatically patient chroniclesthey report 57 of negations correctly recognisedthe abovementioned research applies rulebased algorithms to negation findingmachine learning techniques have been used in some casesaverbuch et al developed an algorithm that uses information gain to learn negative context patternsgolding and chapman experiment with naive bayes and decision trees to distinguish whether a medical observation is negated by the word not in a corpus of hospital reportsthey report a maximum of 90 fscoregoryachev et al compare the performance of four different methods of negation detection two regular expressionbased methods and two classificationbased methods trained on 1745 discharge reportsthey show that the regular expressionbased methods show better agreement with humans and better accuracy than the classification methodslike in most of the work mentioned the task consists in determining whether a medical term is negatedrokach et al present a new patternbased algorithm for identifying context in freetext medical narrativesthe originality of the algorithm lies in that it automatically learns patterns similar to the manually written patterns for negation detectionwe are not aware of any research that has focused on learning the full scope of negation signals outside biomedical natural language processingthe system has been developed using the bioscope corpus 1 a freely available resource that consists of medical and biological textsin the corpus every sentence is annotated with information about negation and speculationthe annotation indicates the boundaries of the scope and the keywords as shown in abovein the annotation scopes are extended to the biggest syntactic unit possible so that scopes have the maximal length and the negation signal is always included in the scopethe annotation guidelines and the interannotator agreement information can be found on the web pagethe bioscope corpus consists of three parts clinical freetexts biological full papers and biological paper abstracts from the genia corpus table 1 shows statistics about 
the corporanegation signals are represented by one or more tokensonly one negation signal that occurs in the papers subcorpus does not occur in the abstracts subcorpus and six negation signals accounts for 1174 of the negation signals in the abstracts subcorpus 1288 in the papers subcorpus and 7665 in the clinical subcorpusthe negation signal not accounts for 5889 of the negation signals in the abstracts subcorpus 5322 in the papers subcorpus and 672 in the clinical subcorpusthe texts have been processed with the genia tagger a bidirectional inference based tagger that analyzes english sentences and outputs the base forms partofspeech tags chunk tags and named entity tags in a tabseparated formatadditionally we converted the annotation about scope of negation into a tokenpertoken representation following the standard format of the 2006 conll shared task where sentences are separated by a blank line and fields are separated by a single tab charactera sentence consists of a sequence of tokens each one starting on a new linewe model the scope finding task as two consecutive classification tasks a first one that consists of classifying the tokens of a sentence as being at the beginning of a negation signal inside or outsidethis allows the system to find multiword negation signalsthe second classification task consists of classifying the tokens of a sentence as being the first element of the scope the last or neitherthis happens as many times as there are negation signals in the sentencewe have chosen this classification model after experimenting with two additional models that produced worse results in one case we classifed tokens as being inside or outside of the scopein another case we classified chunks instead of tokens as being inside or outside of the scopethe two classification tasks are implemented using supervised machine learning methods trained on part of the annotated corpusin this phase a classifier predicts whether a token is the first token of a negation signal inside a negation signal or outside of itwe use igtree as implemented in timbl timbl2 is a software package that contains implementations of memorybased learning algorithms like ib1 and igtreewe also experimented with ib1 but it produced lower resultsthe classifier was parameterised by using gain ratio for feature weightingthe instances represent all tokens in the corpus and they have features of the token and of the token context word form pos and chunk iob tag3 of one token to the left and to the right word form of the second token to the left and to the rightaccording to the gain ratio scores the most informative feature is the lemma of the token followed by the chunk iob tag of the token to the right and the features relative to the token to the leftthe test file is preprocessed using a list of negation signals extracted from the training corpus that are unambiguous in the training corpusthe list comprises the following negation signals absence absent fail failure impossible lack loss miss negative neither never no none nor not unable withoutinstances with this negation signals are directly assigned their classthe classifier predicts the class of the rest of tokensin this phase three classifiers predict whether a token is the first token in the scope sequence the last or neithera fourth classifier is a metalearner that uses the predictions of the three classifiers to predict the scope classesthe three object classifiers that provide input to the metalearner were trained using the following machine learning methods in 
crf051 crfs define a conditional probability distribution over label sequences given a particular observation sequence rather than a joint distribution over label and observation sequences and are reported to avoid the label bias problem of hmms and other learning approachesthe memorybased learning algorithm was parameterised by using overlap as the similarity metric gain ratio for feature weighting using 7 knearest neighbors and weighting the class vote of neighbors as a function of their inverse linear distancethe svm was parameterised in the learning phase for classification cost factor of 1 and biased hyperplane and it used a linear kernel functionthe crfs classifier used regularization algorithm l2 for training the hyperparameter and the cutoff threshold of features were set to 1an instance represents a pair of a negation signal and a token from the sentencethis means that all tokens in a sentence are paired with all negation signals that occur in the sentencenegation signals are those that have been classified as such in the previous phaseonly sentences that have negation signals are selected for this phasewe started with a larger extensive pool of 131 features which encoded information about the negation signal the paired token their contexts and the tokens in betweenfeature selection experiments were carried out with the memorybased learning classifierfeatures were selected based on their gain ratio starting with all the features and eliminating the least informative featureswe also performed experiments applying the feature selection process reported in tjong kim sang et al a bidirectional hill climbing processhowever experiments with this method did not produce a better selection of featuresthe features of the first three classifers are rightthe fourth classifier a metalearner is also a crf as implemented in crfthe features of this classifier are negation signals in the bioscope corpus always have one consecutive block of scope tokens including the signal token itselfhowever the classifiers only predict the first and last element of the scopewe need to process the output of the classifers in order to build the complete sequence of tokens that constitute the scopewe apply the following postprocessing if one token has been predicted as first and none has been predicted as last the sequence is formed by the token predicted as first if one token has been predicted as last and none as first the sequence will start at the negation signal and it will finish at the token predicted as last if one token has been predicted as first and more than one as last the sequence will end with the first token predicted as last after the token predicted as first if there is one if one token has been predicted as last and more than one as first the sequence will start at the negation signal if no token has been predicted as first and more than one as last the sequence will start at the negation signal and will end at the first token predicted as last after the negation signalthe results provided for the abstracts part of the corpus have been obtained by performing 10fold cross validation experiments whereas the results provided for papers and clinical reports have been obtained by training on the full abstracts subcorpus and testing on the papers and clinical reports subcorpusthe latter experiment is therefore a test of the robustness of the system when applied to different text types within the same domainthe evaluation is made using the precision and recall measures and their harmonic mean fscorein 
the negation finding task a negation token is correctly classified if it has been classified as being at the beginning or inside the negation signalwe also evaluate the percentage of negation signals that have been correctly identifiedin the scope finding task a token is correctly classified if it has been correctly classified as being inside or outside of the scope of all the negation signals that there are in the sentencethis means that when there is more than one negation signal in the sentence the token has to be correctly assigned a class for as many negation signals as there areadditionally we evaluate the percentage of correct scopes a scope is correct if all the tokens in the sentence have been assigned the correct scope class for a specific negation signalthe evaluation in terms of precision and recall measures takes as unit a token whereas the evaluation in terms of pcs takes as unit a scopean informed baseline system has been created by tagging as negation signals the tokens with the words absence absent fail failure impossible instead of lack loss miss negative neither never no none nor not rather than unable with the exception of withoutthe list has been extracted from the training corpusbaseline results and interannotator agreement scores are shown in table 2table 3 shows the results of the system which are significantly higher than the results of the baseline systemwith a more comprehensive list of negation signals it would be possible to identify all of them in a textthe lower result of the papers subcorpus is caused by the high frequency of the negation signal not in this corpus that is correct in 9368 of the casesthe same negation signal is also frequent in the abstracts subcorpus but in this case it is correct in 9825 of the casesin the clinical subcorpus not has low frequency which means that the performance of the classifer for this negation signal does not affect so much the global results of the classifiermost errors in the classification of not are caused by the system predicting it as a negation signal in cases not marked as such in the corpusthe following sentences are some examples however programs for trna identification do not necessarily perform well on unknown onesthe evaluation of this ratio is difficult because not all true interactions are knownhowever the disorder module does not contribute significantly to the predictionan informed baseline system has been created by calculating the average length of the scope to the right of the negation signal in each corpus and tagging that number of tokens as scope tokenswe take the scope to the right for the baseline because it is much more frequent than the scope to the left as is shown by the statistics contained in table 1 of section 3baseline results and interannotator agreement scores are presented in table 4the percentage of correct scopes has been measured in two ways pcs measures the proportion of correctly classified tokens in the scope sequence whereas pcs2 measures the proportion of nouns and verbs that are correctly classifed in the scope sequencethis less strict way of computing correctness is motivated by the fact that being able to determine the concepts and relations that are negated is the most important use of the negation scope finderthe low pcs for the three subcorpora indicates that finding the scope of negations is not a trivial taskthe higher pcs for the clinical subcorpus follows a trend that applies also to the results of the systemthe fact that despite a very low pcs precision recall and f1 
are relatively high indicates that these measures are in themselves not reliable to evaluate the performance of the systemthe upperbound results of the metalearner system assuming gold standard identification of negation signals are shown in table 5the results of the metalearner system are presented in table 6results with goldstandard negation signals are especially better for the clinical subcorpus because except for lack negative and not all negation signals score a pcs higher than 90 thus in the clinical subcorpus if the negation signals are identified their scope will be correctly foundthis does not apply to the abstracts and papers subcorpusin terms of pcs results are considerably higher than baseline results whereas in terms of precision recall and f1 results are slightly highercompared to state of the art results for the abstracts subcorpus the system achieves an error reduction of 3207 which shows that the system architecture presented in this paper leads to more accurate resultsevaluating the system in terms of a more relaxed measure does not reflect a significant increase in its performancethis suggests that when a scope is incorrectly predicted main content tokens are also incorrectly left out of the scope or addedan alternative to the pcs2 measure would be to mark in the corpus the relevant negated content words and evaluate if they are under the scoperesults also show that the system is portable to different types of documents although performance varies depending on the characteristics of the corpusclinical reports are easier to process than papers and abstracts which can be explained by several factorsone factor is the length of sentences 7585 of the sentences in the clinical reports have 10 or less words whereas this rate is 317 for abstracts and 1127 for papersthe average length of a sentence for clinical reports is 773 tokens whereas for abstracts it is 2643 and for papers 2624shorter sentences imply shorter scopesin the scope finding phase when we process the output of the classifiers to build the complete sequence of tokens that constitute the scope we give preference to short scopes by choosing as last the token classified as last that is the closest to the negation signala way to make the system better portable to texts with longer sentences would be to optimise the choice of the last token in the scopeanother factor that causes a higher performance on the clinical subcorpus is the frequency of the negation signal no which has also a high pcs in abstracts as shown in table 7typical example sentences with this negation signal are shown in its main characteristics are that the scope is very short and that it scopes to the right over a noun phrase no findings to account for symptomsno signs of tuberculosisthe lower performance of the system on the papers subcorpus compared to the abstracts subcorpus is due to the high proportion of the negation signal not which scores a low pcs as shown in table 7table 7 also shows that except for can not all negation signals score a lower pcs on the papers subcorpusthis difference can not be caused by the sentence length since the average sentence length in the abstracts subcorpus is similar to the average sentence length in the papers subcorpus the difference may be related to the difference in the length of the scopes and their directionfor example the average length of the scope of not is 885 in the abstracts subcorpus and 645 in the papers subcorpusthe scopes to the left for not amount to 2328 in the papers subcorpus and to 1641 in 
the abstracts subcorpus and the average scope to the left is 56 tokens in the papers subcorpus and 882 in the abstracts subcorpusas for the results per negation signal on the abstracts corpus the negation signals that score higher pcs have a low or null percentage of scopes to the leftan exception is not with a high score and 1641 of scopes to the leftthe negation signals with lower pcs have a higher percentage of scopes to the left a typical error for the negation signal unable is exemplified by the sentence vdr dnabinding mutants were unable to either bind to this element in vitro or repress in vivo in which the gold scope starts at the beginning of the sentence where the predicted scopes starts at the negation signal621 results of the metalearner versus results of the first three classifiers the choice of a metalearner approach has been motivated by the significantly higher results that the metalearner produces compared to the results of the first three classifiersthe results of each of the classifiers independently are presented in table 8pcs results show that the metalearner system performs significantly better than the three classifiers for the abstracts and papers subcorpora but not for the clinical subcorpus in which case timbl and svm produce higher scores although only the svm results are significantly better with a difference of 117 pcsan analysis in detail of the svm scores per negation signal shows that the main difference between the scores of the metalearner and svm is that the svm is good at predicting the scopes of the negation signal no when it occurs as the first token in the sentence like in abovewhen no occurs in other positions svm scores 117 pcs betterwe plan to perform experiments with the three classifiers using the features of the metalearner that are not related to the predictions in order to check if the three classifiers would perform betterin this paper we have presented a metalearning approach to processing the scope of negation signalsits performance is evaluated in terms of percentage of correct scopes on three test setswith 6607 pcs on the abstracts corpus the system achieves 3207 of error reduction over current state of the art resultsthe architecture of the system is new for this problem with three classifiers and a metalearner that takes as input the output of the first classifiersthe classification task definition is also originalwe have shown that the system is portable to different corpora although performance fluctuates depending on the characteristics of the corporathe results per corpus are determined to a certain extent by the scores of the negation signals no and not that are very frequent and difficult to process in some text typesshorter scopes are easier to learn as reflected in the results of the clinical corpus where no is the most frequent negation signalwe have also shown that the metalearner performs better than the three first classifiers except for the negation signal no in clinical reports for which the svm classifier produces the highest scoresfuture research will deal with a more detailed analysis of the errors by each of the three initial classifiers compared to the errors of the metalearner in order to better understand why the results of the metalearner are higherwe also would like to perform feature analysis and test the system on general domain corporaour work was made possible through financial support from the university of antwerp we are grateful to four anonymous reviewers for their valuable comments and suggestions
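The post-processing rules that turn the first/last token predictions into a single contiguous scope (listed in the system description above) translate almost directly into code. In the sketch below, token positions are 0-based indices within the sentence, `signal_pos` is the position of the negation signal, and the behaviour for combinations not covered by the stated rules (for instance exactly one first and one last) is filled in with the obvious choice, which is an assumption.

```python
def build_scope(first_preds, last_preds, signal_pos):
    """Return the (start, end) token span of one negation scope, inclusive.

    first_preds / last_preds: token indices predicted as the first / last
    element of the scope for this negation signal.
    """
    firsts, lasts = sorted(first_preds), sorted(last_preds)

    if len(firsts) == 1 and not lasts:          # only a "first" predicted
        return firsts[0], firsts[0]
    if len(lasts) == 1 and not firsts:          # only a "last": start at the signal
        return signal_pos, lasts[0]
    if len(firsts) == 1 and len(lasts) > 1:     # end at first "last" after the "first"
        later = [p for p in lasts if p > firsts[0]]
        return firsts[0], (later[0] if later else lasts[0])    # fallback is an assumption
    if len(lasts) == 1 and len(firsts) > 1:     # ambiguous "first": start at the signal
        return signal_pos, lasts[0]
    if not firsts and len(lasts) > 1:           # end at first "last" after the signal
        later = [p for p in lasts if p > signal_pos]
        return signal_pos, (later[0] if later else lasts[-1])  # fallback is an assumption
    if len(firsts) == 1 and len(lasts) == 1:    # the straightforward case
        return firsts[0], lasts[0]
    return None                                  # combinations not covered by the rules
```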
W09-1105
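The percentage-of-correct-scopes (PCS) measure used throughout the paper above is stricter than token-level precision and recall: a scope only counts as correct when every token in the sentence receives the right scope class for that particular negation signal. A small sketch, assuming gold and predicted scopes are represented as sets of in-scope token indices keyed by (sentence, signal):

```python
def percentage_correct_scopes(gold_scopes, predicted_scopes):
    """Percentage of correct scopes (PCS).

    gold_scopes / predicted_scopes: dicts keyed by (sentence_id, signal_id)
    whose values are sets of token indices inside the scope.
    A scope is counted as correct only when the predicted in-scope set equals
    the gold set exactly, i.e. every token in the sentence is classified
    correctly for that negation signal.  (The PCS-2 variant in the paper
    would first filter both sets down to nouns and verbs.)
    """
    correct = sum(1 for key, gold in gold_scopes.items()
                  if predicted_scopes.get(key, set()) == gold)
    return 100.0 * correct / len(gold_scopes)
```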
A Metalearning Approach to Processing the Scope of Negation. Finding negation signals and their scope in text is an important subtask in information extraction. In this paper we present a machine learning system that finds the scope of negation in biomedical texts. The system combines several classifiers and works in two phases. To investigate the robustness of the approach, the system is tested on the three subcorpora of the BioScope corpus, representing different text types. It achieves the best results to date for this task, with an error reduction of 32.07% compared to current state-of-the-art results. We describe a method for improving resolution of the scope of negation by combining IGTree, CRF, and support vector machines. We pioneered the research on negation scope finding by formulating it as a chunking problem which classifies the words of a sentence as being inside or outside the scope of a negation signal.
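The first phase described above tags each token as beginning, inside, or outside a negation signal, with a shortcut for signals that are unambiguous in the training data. The sketch below shows only the feature extraction and the shortcut: `classify` is a placeholder for the trained IGTree model, the feature template paraphrases the description in the paper (word form, POS and chunk tag of the token and its immediate neighbours, word forms two tokens away), and directly assigning B to every listed word is a simplification of the paper's "directly assigned their class".

```python
UNAMBIGUOUS_SIGNALS = {"absence", "absent", "fail", "failure", "impossible",
                       "lack", "loss", "miss", "negative", "neither", "never",
                       "no", "none", "nor", "not", "unable", "without"}

def signal_features(words, pos_tags, chunk_tags, i):
    """Context features for token i, following the description above."""
    def get(seq, j):
        return seq[j] if 0 <= j < len(seq) else "<PAD>"
    return {
        "w0": words[i], "p0": pos_tags[i], "c0": chunk_tags[i],
        "w-1": get(words, i - 1), "p-1": get(pos_tags, i - 1), "c-1": get(chunk_tags, i - 1),
        "w+1": get(words, i + 1), "p+1": get(pos_tags, i + 1), "c+1": get(chunk_tags, i + 1),
        "w-2": get(words, i - 2), "w+2": get(words, i + 2),
    }

def tag_negation_signals(words, pos_tags, chunk_tags, classify):
    """Phase 1: B/I/O tagging of negation signals."""
    tags = []
    for i, w in enumerate(words):
        if w.lower() in UNAMBIGUOUS_SIGNALS:
            tags.append("B-NEG")   # unambiguous in training: assigned directly (simplified)
        else:
            tags.append(classify(signal_features(words, pos_tags, chunk_tags, i)))
    return tags
```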
design challenges and misconceptions in named entity recognition we analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust ner system in particular we address issues such as the representation of text chunks the inference approach needed to combine local ner decisions the sources of prior knowledge and how to use them within an ner system in the process of comparing several solutions to these challenges we reach some surprising conclusions as well as develop an system that achieves 908 on the conll2003 ner shared task the best reported result for this dataset natural language processing applications are characterized by making complex interdependent decisions that require large amounts of prior knowledgein this paper we investigate one such application named entity recognition figure 1 illustrates the necessity of using prior knowledge and nonlocal decisions in nerin the absence of mixed case information it is difficult to understand that blinker is a personlikewise it is not obvious that the last mention of wednesday is an organization an ner system could take advantage of the fact that blinker is also mentioned later in the text as the easily identifiable reggie blinkerit is also useful to know that udinese is a soccer club and the expression both wednesday and udinese implies that wednesday and udinese should be assigned the same labelthe above discussion focuses on the need for external knowledge resources and the need for nonlocal features to leverage the multiple occurrences of named entities in the textwhile these two needs have motivated some of the research in ner in the last decade several other fundamental decisions must be madethese include what model to use for sequential inference how to represent text chunks and what inference algorithm to usedespite the recent progress in ner the effort has been dispersed in several directions and there are no published attempts to compare or combine the recent advances leading to some design misconceptions and less than optimal performancein this paper we analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust ner systemwe find that bilou representation of text chunks significantly outperforms the widely adopted biosurprisingly naive greedy inference performs comparably to beamsearch or viterbi while being considerably more computationally efficientwe analyze several approaches for modeling nonlocal dependencies proposed in the literature and find that none of them clearly outperforms the others across several datasetshowever as we show these contributions are to a large extent independent and as we show the approaches can be used together to yield better resultsour experiments corroborate recently published results indicating that word class models learned on unlabeled text can significantly improve the performance of the system and can be an alternative to the traditional semisupervised learning paradigmcombining recent advances we develop a publicly available ner system that achieves 908 f1 score on the conll2003 ner shared task the best reported result for this datasetour system is robust it consistently outperforms all publicly available ner systems on all three datasetsner system should be robust across multiple domains as it is expected to be applied on a diverse set of documents historical texts news articles patent applications webpages etctherefore we have considered three datasets conll03 
shared task data muc7 data and a set of webpages we have annotated manuallyin the experiments throughout the paper we test the ability of the tagger to adapt to new test domainsthroughout this work we train on the conll03 data and test on the other datasets without retrainingthe differences in annotation schemes across datasets created evaluation challengeswe discuss the datasets and the evaluation methods belowthe conll03 shared task data is a subset of reuters 1996 news corpus annotated with 4 entity types perorg loc miscit is important to notice that both the training and the development datasets are news feeds from august 1996 while the test set contains news feeds from december 1996the named entities mentioned in the test dataset are considerably different from those that appear in the training or the development setas a result the test dataset is considerably harder than the development setevaluation following the convention we report phraselevel f1 scorethe muc7 dataset is a subset of the north american news text corpora annotated with a wide variety of entities including people locations organizations temporal events monetary units and so onsince there was no direct mapping from temporal events monetary units and other entities from muc7 and the misc label in the conll03 dataset we measure performance only on perorg and locevaluation there are several sources of inconsistency in annotation between muc7 and conll03for example since the muc7 dataset does not contain the misc label in the sentence balloon called the virgin global challenger the expression virgin global challenger should be labeled as misc according to conll03 guidelineshowever the gold annotation in muc7 is balloon called the org virgin global challengerthese and other annotation inconsistencies have prompted us to relax the requirements of finding the exact phrase boundaries and measure performance using tokenlevel f1webpages we have assembled and manually annotated a collection of 20 webpages including personal academic and computerscience conference homepagesthe dataset contains 783 entities evaluation the named entities in the webpages were highly ambiguous and very different from the named entities seen in the training datafor example the data included sentences such as hear o israel the lord our god the lord is one we could not agree on whether o israel should be labeled as org loc or persimilarly we could not agree on whether god and lord is an org or perthese issues led us to report tokenlevel entityidentification f1 score for this datasetthat is if a named entity token was identified as such we counted it as a correct prediction ignoring the named entity typein this section we introduce the baseline ner system and raise the fundamental questions underlying robust and efficient designthese questions define the outline of this paperner is typically viewed as a sequential prediction problem the typical models include hmm crf and sequential application of perceptron or winnow that is let x be an input sequence and y be the output sequencethe sequential prediction problem is to estimate the probabilities p where k l and m are small numbers to allow tractable inference and avoid overfittingthis conditional probability distribution is estimated in ner using the following baseline set of features previous two predictions yi1 and yi2 current word xi xi word type prefixes and suffixes of xi tokens in the window c capitalization pattern in the window c conjunction of c and yi1most ner systems use additional features 
such as pos tags shallow parsing information and gazetteerswe discuss additional features in the following sectionswe note that we normalize dates and numbers that is 1232008 becomes date 1980 becomes dddd and 2123254751 becomes ddddddddddthis allows a degree of abstraction to years phone numbers etcour baseline ner system uses a regularized averaged perceptron systems based on perceptron have been shown to be competitive in ner and text chunking we specify the model and the features with the lbj modeling languagewe now state the four fundamental design decisions in ner system which define the structure of this paperkey design decisions in an ner systemin this section we compare the performance of several inference algorithms greedy lefttoright decoding viterbi and beamsearchit may appear that beamsearch or viterbi will perform much better than naive greedy lefttoright decoding which can be seen as beamsearch of size onethe viterbi algorithm has the limitation that it does not allow incorporating some of the nonlocal features which will be discussed later therefore we cannot use it in our end systemhowever it has the appealing quality of finding the most likely assignment to a secondorder model and since the baseline features only have second order dependencies we have tested it on the baseline configurationtable 1 compares between the greedy decoding beamsearch with varying beam size and viterbi both for the system with baseline features and for the end system surprisingly the greedy policy performs well this phenmenon was also observed in the pos tagging task the implications are subtlefirst due to the secondorder of the model the greedy decoding is over 100 times faster than viterbithe reason is that with the bilou encoding of four ne types each token can take 21 states to tag a token the greedy policy requires 21 comparisons while the viterbi requires 213 and this analysis carries over to the number of classifier invocationsfurthermore both beamsearch and viterbi require transforming the predictions of the classifiers to probabilities as discussed in incurring additional time overheadsecond this result reinforces the intuition that global inference over the secondorder hmm features does not capture the nonlocal properties of the taskthe reason is that the nes tend to be short chunks separated by multiple outside tokensthis separation breaks the viterbi decision process to independent maximization of assignment over short chunks where the greedy policy performs wellon the other hand dependencies between isolated named entity chunks have longerrange dependencies and are not captured by secondorder transition features therefore requiring separate mechanisms which we discuss in section 5another important question that has been studied extensively in the context of shallow parsing and was somewhat overlooked in the ner literature is the representation of text segments related works include voting between several representation schemes lexicalizing the schemes and automatically searching for best encoding however we are not aware of similar work in the ner settingsdue to space limitations we do not discuss all the representation schemes and combining predictions by votingwe focus instead on two most popular schemes bio and bilouthe bio scheme suggests to learn classifiers that identify the beginning the inside and the outside of the text segmentsthe bilou scheme suggests to learn classifiers that identify the beginning the inside and the last tokens of multitoken chunks as well as unitlength 
chunksthe bilou scheme allows to learn a more expressive model with only a small increase in the number of parameters to be learnedtable 2 compares the end systems performance with bio and bilouexamining the results we reach two conclusions choice of encoding scheme has a big impact on the system performance and the less used bilou formalism significantly outperforms the widely adopted bio tagging schemewe use the bilou scheme throughout the paperthe key intuition behind nonlocal features in ner has been that identical tokens should have identical label assignmentsthe sample text discussed in the introduction shows one such example where all occurrences of blinker are assigned the per labelhowever in general this is not always the case for example we might see in the same document the word sequences australia and the bank of australiathe first instance should be labeled as loc and the second as orgwe consider three approaches proposed in the literature in the following sectionsbefore continuing the discussion we note that we found that adjacent documents in the conll03 and the muc7 datasets often discuss the same entitiestherefore we ignore document boundaries and analyze global dependencies in 200 and 1000 token windowsthese constants were selected by hand after trying a small number of valueswe believe that this approach will also make our system more robust in cases when the document boundaries are not given used features that aggregate for each document the context tokens appear insample features are the longest capitilized sequence of words in the document which contains the current token and the token appears before a company marker such as ltd elsewhere in textin this work we call this type of features context aggregation featuresmanually designed context aggregation features clearly have low coverage therefore we used the following approachrecall that for each token instance xi we use as features the tokens in the window of size two around it ci when the same token type t appears in several locations in the text say xi1 xi2 xin for each instance xis in addition to the context features cis we also aggregate the context across all instances within 200 tokens context aggregation as done above can lead to excessive number of features used the intuition that some instances of a token appear in easilyidentifiable contextstherefore they apply a baseline ner system and use the resulting predictions as features in a second level of inferencewe call the technique twostage prediction aggregationwe implemented the tokenmajority and the entitymajority features discussed in however instead of document and corpus majority tags we used relative frequency of the tags in a 1000 token windowboth context aggregation and twostage prediction aggregation treat all tokens in the text similarlyhowever we observed that the named entities in the beginning of the documents tended to be more easily identifiable and matched gazetteers more oftenthis is due to the fact that when a named entity is introduced for the first time in text a canonical name is used while in the following discussion abbreviated mentions pronouns and other references are usedto break the symmetry when using beamsearch or greedy lefttoright decoding we use the fact that when we are making a prediction for token instance xi we have already made predictions y1 yi1 for token instances x1 xi1when making the prediction for token instance xi we record the label assignment distribution for all token instances for the same token type in the 
previous 1000 wordsthat is if the token instance is australia and in the previous 1000 tokens the token type australia was twice assigned the label lorg and three times the label youloc then the prediction history feature will be table 3 summarizes the resultssurprisingly no single technique outperformed the others on all datasetsthe extended prediction history method was the best on conll03 data and muc7 test setcontext aggregation was the best method for muc7 development set and twostage prediction was the best for webpagesnonlocal features proved less effective for muc7 test set and the webpagessince the named entities in webpages have less context this result is expected for the webpageshowever we are unsure why muc7 test set benefits from nonlocal features much less than muc7 development setour key conclusion is that no single approach is better than the rest and that the approaches are complimentary their combination is the most stable and best performingas we have illustrated in the introduction ner is a knowledgeintensive taskin this section we discuss two important knowledge resources gazetteers and unlabeled textrecent successful semisupervised systems have illustrated that unlabeled text can be used to improve the performance of ner systemsin this work we analyze a simple technique of using word clusters generated from unlabeled text which has been shown to improve performance of dependency parsing chinese word segmentation and ner the technique is based on word class models pioneered by which hierarchically the approach is related but not identical to distributional similarity and for example since the words friday and tuesday appear in similar contexts the brown algorithm will assign them to the same clustersuccessful abstraction of both as a day of the week addresses the data sparsity problem common in nlp tasksin this work we use the implementation and the clusters obtained in from running the algorithm on the reuters 1996 dataset a superset of the conll03 ner datasetwithin the binary tree produced by the algorithm each word can be uniquely identified by its path from the root and this path can be compactly represented with a bit stringpaths of different depths along the path from the root to the word provide different levels of word abstractionfor example paths at depth 4 closely correspond to pos tagssince word class models use large amounts of unlabeled data they are essentially a semisupervised technique which we use to considerably improve the performance of our systemin this work we used path prefixes of length 4610 and 20when brown clusters are used as features in the following sections it implies that all features in the system which contain a word form will be duplicated and a new set of features containing the paths of varying length will be introducedfor example if the system contains the feature concatenation of the current token and the system prediction on the previous word four new features will be introduced which are concatenations of the previous prediction and the 461020 length pathrepresentations of the current wordan important question at the inception of the ner task was whether machine learning techniques are necessary at all and whether simple dictionary lookup would be sufficient for good performanceindeed the baseline for the conll03 shared task was essentially a dictionary lookup of the entities which appeared in the training data and it achieves 7191 f1 score on the test set it turns out that while problems of coverage and ambiguity prevent 
straightforward lookup injection of gazetteer matches as features in machinelearning based approaches is critical for good performance given these findings several approaches have been proposed to automatically extract comprehensive gazetteers from the web and from large collections of unlabeled text with limited impact on nerrecently have successfully constructed high quality and high coverage gazetteers from wikipediain this work we use a collection of 14 highprecision lowrecall lists extracted from the web that cover common names countries monetary units temporal expressions etcwhile these gazetteers have excellent accuracy they do not provide sufficient coverageto further improve the coverage we have extracted 16 gazetteers from wikipedia which collectively contain over 15m entitiesoverall we have 30 gazetteers and matches against each one are weighted as a separate feature in the system we also note that we have developed a technique for injecting nonexact string matching to gazetteers which has marginally improved the performance but is not covered in the paper due to space limitationsin the rest of this section we discuss the construction of gazetteers from wikipediawikipedia is an open collaborative encyclopedia with several attractive properties it is kept updated manually by its collaborators hence new entities are constantly added to it wikipedia contains redirection pages mapping several variations of spelling of the same name to one canonical entryfor example suker is redirected to an entry about davor šuker the croatian footballer the entries in wikipedia are manually tagged with categoriesfor example the entry about microsoft in wikipedia has the following categories companies listed on nasdaq cloud computing vendors etcboth and used the freetext description of the wikipedia entity to reason about the entity typewe use a simpler method to extract high coverage and high quality gazetteers from wikipediaby inspection of the conll03 shared task annotation guidelines and of the training set we manually aggregated several categories into a higherlevel concept when a wikipedia entry was tagged by one of the categories in the table it was added to the corresponding gazetteertable 4 summarizes the results of the techniques for injecting external knowledgeit is important to note that although the word class model was learned on the superset of conll03 data and although the wikipedia gazetteers were constructed based on conll03 annotation guidelines these features proved extremely good on all datasetsword class models discussed in section 61 are computed offline are available online1 and provide an alternative to traditional semisupervised learningit is important to note that the word class models and the gazetteers are independent and accumulativefurthermore despite the number and the gigantic size of the extracted gazetteers the gazetteers alone are not sufficient for adequate performancewhen we modified the conll03 baseline to include gazetteer matches the performance went up from 7191 to 823 on the conll03 test set below our baseline systems result of 8365when we have injected the gazetteers into our system the performance went up to 8722word class model and nonlocal features further improve the performance to 9057 by more than 3 f1 pointsas a final experiment we have trained our system both on the training and on the development set which gave us our best f1 score of 908 on the conll03 data yet it failed to improve the performance on other datasetstable 5 summarizes the
performance of the systemnext we have compared the performance of our system to that of the stanford ner tagger across the datasets discussed abovewe have chosen to compare against the stanford tagger because to the best of our knowledge it is the best publicly available system which is trained on the same datawe have downloaded the stanford ner tagger and used the strongest provided model trained on the conll03 data with distributional similarity featuresthe results we obtained on the conll03 test set were consistent with what was reported in our goal was to compare the performance of the taggers across several datasetsfor the most realistic comparison we have presented each system with a raw text and relied on the systems sentence splitter and tokenizerwhen evaluating the systems we matched against the gold tokenization ignoring punctuation markstable 6 summarizes the resultsnote that due to differences in sentence splitting tokenization and evaluation these results are not identical to those reported in table 5also note that in this experiment we have used tokenlevel accuracy on the conll dataset as wellfinally to complete the comparison to other systems in table 7 we summarize the best results reported for the conll03 dataset in literaturewe have presented a simple model for ner that uses expressive features to achieve new state of the art performance on the named entity recognition taskwe explored four fundamental design decisions text chunks representation inference algorithm using nonlocal features and external knowledgewe showed that the bilou encoding scheme significantly outperforms bio and that surprisingly a conditional model that does not take into account interactions at the output level performs comparably to beamsearch or viterbi while being considerably more efficient computationallywe analyzed several approaches for modeling nonlocal dependencies and found that none of them clearly outperforms the others across several datasetsour experiments corroborate recently published results indicating that word class models learned on unlabeled text can be an alternative to the traditional semisupervised learning paradigmner proves to be a knowledgeintensive task and it was reassuring to observe that knowledgedriven techniques adapt well across several domainswe observed consistent performance gains across several domains most interestingly in webpages where the named entities had less context and were different in nature from the named entities in the training setour system significantly outperforms the current state of the art and is available to download under a research license appendix wikipedia gazetteers categories 1 people people births deaths extracts 494699 wikipedia titles and 382336 redirect links 2 organizations cooperatives federations teams clubs departments organizations organisations banks legislatures record labels constructors manufacturers ministries ministers military units military formations universities radio stations newspapers broadcasters political parties television networks companies businesses agencies extracts 124403 titles and 130588 redirects 3 locations airports districts regions countries areas lakes seas oceans towns villages parks bays bases cities landmarks rivers valleys deserts locations places neighborhoods extracts 211872 titles and 194049 redirects 4 named objects aircraft spacecraft tanks rifles weapons ships firearms automobiles computers boats extracts 28739 titles and 31389 redirects 5 art work novels books paintings operas plays extracts 39800 titles and
34037 redirects 6 films films telenovelas shows musicals extracts 50454 titles and 49252 redirects 7 songs songs singles albums extracts 109645 titles and 67473 redirects 8 events playoffs championships races competitions battles extracts 20176 titles and 15182 redirects
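The BILOU text-chunk representation discussed in the NER text above can be illustrated with a short sketch. This is our own illustration, not code from the paper; the function name and the example label sequence are invented.

```python
# Minimal sketch (ours, not the authors' code): convert BIO chunk labels to
# BILOU. B-/I- mark the beginning/inside of a multi-token chunk, L- its last
# token, U- a unit-length (single-token) chunk, and O any token outside chunks.
def bio_to_bilou(labels):
    bilou = []
    for i, label in enumerate(labels):
        if label == "O":
            bilou.append("O")
            continue
        prefix, etype = label.split("-", 1)
        continues = i + 1 < len(labels) and labels[i + 1] == "I-" + etype
        if prefix == "B":
            bilou.append(("B-" if continues else "U-") + etype)
        else:  # an I- label
            bilou.append(("I-" if continues else "L-") + etype)
    return bilou

# "Australia" alone becomes U-LOC; "Bank of Australia" becomes B-ORG I-ORG L-ORG
print(bio_to_bilou(["B-LOC", "O", "B-ORG", "I-ORG", "I-ORG"]))
# -> ['U-LOC', 'O', 'B-ORG', 'I-ORG', 'L-ORG']
```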
W09-1119
design challenges and misconceptions in named entity recognitionwe analyze some of the fundamental design challenges and misconceptions that underlie the development of an efficient and robust ner systemin particular we address issues such as the representation of text chunks the inference approach needed to combine local ner decisions the sources of prior knowledge and how to use them within an ner systemin the process of comparing several solutions to these challenges we reach some surprising conclusions as well as develop an ner system that achieves 908 f1 score on the conll2003 ner shared task the best reported result for this datasetwe use the iobes notation to represent ne mentions with label sequences thereby ner is formalized as a multi class classification problem in which a given token is classified into iobes labelswe investigate design challenges for named entity recognition and showed that other design choices such as the representation of output labels and using features built on external knowledge are more important than the learning model itselfwe have shown for the conll2003 shared task that greedy decoding is competitive to the widely used viterbi algorithm while being over 100 times faster at the same time
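As a rough illustration of the word class (Brown cluster) features described in the NER text above, the sketch below derives path-prefix features of length 4, 6, 10 and 20 from a word-to-bit-string mapping. Only the prefix lengths come from the paper; the bit strings and the feature names are invented for the example.

```python
# Toy sketch of Brown-cluster path-prefix features (our illustration).
# Each word maps to its path in the cluster binary tree, written as a bit
# string; prefixes of that path give coarser word classes.
brown_paths = {          # invented bit strings for the example
    "friday": "01100111010",
    "tuesday": "01100111011",
}

def cluster_features(word, prefix_lengths=(4, 6, 10, 20)):
    path = brown_paths.get(word.lower())
    if path is None:
        return {}
    # if the path is shorter than the requested prefix, the whole path is used
    return {"brown-prefix-%d" % n: path[:n] for n in prefix_lengths}

# friday and tuesday share every prefix up to length 10 here, so features
# built on these prefixes generalize across both words
print(cluster_features("Friday"))
print(cluster_features("Tuesday"))
```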
learning the scope of hedge cues in biomedical texts identifying hedged information in biomedical literature is an important subtask in information extraction because it would be misleading to extract speculative information as factual information in this paper we present a machine learning system that finds the scope of hedge cues in biomedical texts the system is based on a similar system that finds the scope of negation cues we show that the same scope finding approach can be applied to both negation and hedging to investigate the robustness of the approach the system is tested on the three subcorpora of the bioscope corpus that represent different text types research on information extraction of biomedical texts has grown in the recent yearsmost work concentrates on finding relations between biological entities like genes and proteins determining which information has been hedged in biomedical literature is an important subtask of information extraction because extracted information that falls in the scope of hedge cues cannot be presented as factual informationit should be discarded or presented separately with lower confidencethe amount of hedged information present in texts cannot be understimatedvincze et al report that 1770 of the sentences in the abstracts section of the bioscope corpus and 1944 of the sentences in the full papers section contain hedge cueslight et al estimate that 11 of sentences in medline abstracts contain speculative fragmentsszarvas reports that 3241 of gene names mentioned in the hedge classification dataset described in medlock and briscoe appears in a speculative sentencein this paper we present a machine learning system that finds the scope of hedge cues in biomedical textsfinding the scope of a hedge cue means determining at sentence level which words in the sentence are affected by the hedge cuethe system combines several classifiers and works in two phases in the first phase hedge cues are identified and in the second phase the full scope of these hedge cues is foundthis means that for a sentence like the one in example taken from the bioscope corpus the system performs two actions first it detects that suggest might and or are hedge signals second it detects that suggest has as its scope expression of cjun jun b and jun d genes might be involved in terminal granulocyte differentiation or in regulating granulocyte functionality that might has as its scope be involved in terminal granulocyte differentiation or in regulating granulocyte functionality and that or has as its scope in regulating granulocyte functionality these results suggest that expression of cjun jun b and jun d genes might be involved in terminal granulocyte differentiation or in regulating granulocyte functionality contrary to current practice to only detect modality our system also determines the part of the sentence that is hedgedwe are not aware of other systems that perform this taskthe system is based on a similar system that finds the scope of negation cues we show that the system performs well for this task and that the same scope finding approach can be applied to both negation and hedgingto investigate the robustness of the approach the system is tested on three subcorpora of the bioscope corpus that represent different text typesalthough the system was developed and tested on biomedical text the same approach can also be applied to text from other domainsthe paper is organised as followsin section 2 we summarise related workin section 3 we describe the corpus on which the 
system has been developedin section 4 we introduce the task to be performed by the system which is described in section 5results are presented and discussed in section 6finally section 7 puts forward some conclusionshedging has been broadly treated from a theoretical perspectivethe term hedging is originally due to lakoff who introduces it in relation to prototype theorypalmer defines a term related to hedging epistemic modality which expresses the speakers degree of commitment to the truth of a propositionsaurı et al research the modality of events which expresses the speakers degree of of commitment to the events being referred to in a textthey treat a wide spectrum of modal types and present the codification of modality information with the specification language timeml which allows to mark modality cues at a lexical level and at a syntactic levelas for research that focuses specifically on scientific texts with descriptive purposes hyland describes hedging in scientific research articles proposing a pragmatic classification of hedge expressions based on an exhaustive analysis of a corpusthe catalogue of hedging cues includes modal auxiliaries epistemic lexical verbs epistemic adjectives adverbs and nounsadditionally it includes also a variety of nonlexical cueslight et al analyse the use of speculative language in medline abstactsthey studied the expression of levels of belief and annotated a corpus of abstracts in order to check if the distinction between high speculative low speculative and definite sentences could be made reliablythey found that the speculative vs definite distinction was reliable but the distinction between low and high speculative was notthompson et al report on a list of words and phrases that express modality in biomedical texts and put forward a categorisation schemethe list and the scheme are validated by annotating 202 medline abstractssome nlp applications incorporate modality informationfriedman et al develop a medical text processor that translates clinical information in patient documents into controlled vocabulary termsthe system uses a semantic grammar that consists of rules that specify wellformed semantic patternsthe extracted findings are assigned one of five types of modality information no low certainty moderate certainty high certainty and cannot evaluatedi marco and mercer use hedging information to classify citationsthey observe that citations appear to occur in sentences marked with hedging cueswork on hedging in the machine learning field has as a goal to classify sentences into speculative or definite medlock and briscoe provide a definition of what they consider to be hedge instances and define hedge classification as a weakly supervised machine learning taskthe method they use to derive a learning model from a seed corpus is based on iteratively predicting labels for unlabeled training samplesthey report experiments with svms on a dataset that they make publicly available1the experiments achieve a recallprecision break even point of 076they apply a bagofwords approach to sample representationmedlock presents an extension of this work by experimenting with more features lemmas and bigramsexperiments show that the pos representation does not yield significant improvement over the results in medlock and briscoe whereas with a lemma representation the system achieves a peak performance of 08 bep and with bigrams of 082 bepszarvas follows medlock and briscoe in classifying sentences as being speculative or nonspeculativeszarvas develops a maxent 
system that incorporates bigrams and trigrams in the feature representation and performs a complex feature selection procedure in order to reduce the number of keyword candidatesit achieves up to 085 bep and 8508 f1 by using an external dictionarykilicoglu and bergler apply a linguistically motivated approach to the same clasification task by using knowledge from existing lexical resources and incorporating syntactic patternsadditionally hedge cues are weighted by automatically assigning an information gain measure and by assigning weights semi automatically depending on their types and centrality to hedgingthe system achieves results of 085 bepas mentioned earlier we are not aware of research that has focused on learning the scope of hedge signals inside or outside of the biomedical domain which makes a direct comparison with the approaches described here impossiblethe system has been developed using the bioscope corpus 2 a freely available resource that consists of medical and biological textsin the corpus every sentence is annotated with information about negation and speculationthe annotation indicates the boundaries of the scope and the keywords as shown in abovein the annotation scopes are extended to the biggest syntactic unit possible so that scopes have the maximal length and the speculation cue is always included in the scopethe bioscope corpus consists of three parts clinical freetexts biological full papers and biological paper abstracts from the genia corpus table 1 shows statistics about the corporahedge cues are represented by one or more tokens as shows where the hedge cues that appear in the three corpora are listedthe complete list of all hedge cues comprises 176 cuesin the same corpora the number of negation cues is lower 38 apparent apparently appear assume can consider consistent with could either indicate likely may no evidence not or perhaps possible possibly presumably probable probably should suggestion support think unclear whether would 35 hedge cues that occur in the clinical reports subcorpus do not occur in the abstracts subcorpus and 34 hedge cues that appear in the papers subcorpus do not appear in the abstracts subcorpusonly 1590 of the total of hedge cues appear in the three subcorporathe most frequent hedge cues in the abstracts subcorpus are may appear and or in the papers subcorpus suggest may and might and in the clinical subcorpus or suggest and evaluate for the texts have been processed with the genia tagger a bidirectional inference based tagger that analyzes english sentences and outputs the base forms partofspeech tags chunk tags and named entity tags in a tabseparated formatadditionally we converted the annotation about scope of negation into a tokenpertoken representation following the standard format of the 2006 conll shared task where sentences are separated by a blank line and fields are separated by a single tab charactera sentence consists of a sequence of tokens each one starting on a new linewe model this task in the same way that we modelled the task for finding the scope of negation ie as two consecutive classification tasks a first one that consists of classifying the tokens of a sentence as being at the beginning of a hedge signal inside or outsidethis allows the system to find multiword hedge cuesthe second classification task consists of classifying the tokens of a sentence as being the first element of the scope the last or neitherthis happens as many times as there are hedge cues in the sentencethe two classification tasks are 
implemented using supervised machine learning methods trained on part of the annotated corpusin this phase a classifier predicts for all tokens in a sentence whether a token is the first token of a hedge cue inside a hedge cue or outside of it for sentence the system assigns the bcue class to indicate the icue class to that and the ocue class to the rest of tokens these results indicate that a component or components of nfat have the potential to reconstitute nf the instances represent all tokens in the corpus and they have features about the token lemma word partofspeech and iob3 chunk tag and features about the token context word pos and iob chunk tag of 3 tokens to the right and 3 to the leftwe use igtree as implemented in timbl we also experimented with ib1 but it produced lower resultsthe classifier was parameterised by using gain ratio for feature weightingaccording to the gain ratio scores the most informative features are the lemma and word of the token in focus followed by the word of the token to the right and of the token to the leftwe performed two experimentsin one the test file is preprocessed using a list of hedge cues extracted from the training corpusthe list comprises the following hedge cues listed in instances with these hedge cues are directly assigned their classthe classifier predicts the class of the rest of tokensin the other experiment we do not preprocess the test file appear apparent apparently believe either estimate hypothesis hypothesize if imply likely may might or perhaps possible possibly postulate potential potentially presumably probably propose putative should seem speculate suggest support suppose suspect think uncertain unclear unkwown unlikely whether would in this phase three classifiers predict for all tokens in the sentence whether a token is the first token in the scope sequence the last or neither for the sentence in 3 the classifiers assign the class fscope to indicate lscope to nf and none to the rest of tokensa fourth classifier is a metalearner that uses the predictions of the three classifiers to predict the scope classesan instance represents a pair of a hedge cue and a token from the sentencethis means that all tokens in a sentence are paired with all hedge cues that occur in the sentencehedge cues are those that have been classified as such in the previous phaseonly sentences that have hedge cues are selected for this phasethe three object classifiers that provide input to the metalearner were trained using the following machine learning methods conditional random fileds as implemented in crf051 crfs define a conditional probability distribution over label sequences given a particular observation sequence rather than a joint distribution over label and observation sequences and are reported to avoid the label bias problem of hmms and other learning approachesthe memorybased learning algorithm was parameterised in this case by using overlap as the similarity metric gain ratio for feature weighting using 7 knearest neighbors and weighting the class vote of neighbors as a function of their inverse linear distancethe svm was parameterised in the learning phase for classification cost factor of 1 and biased hyperplane and it used a linear kernel functionthe crfs classifier used regularization algorithm l2 for training the hyperparameter and the cutoff threshold of features were set to 1we have used the same features used for the system that finds the scope of negationthe features of the first three classifers are the fourth classifier a 
metalearner is also a crfs as implemented in crfthe features of this classifier are hedge cues in the bioscope corpus always scope over a consecutive block of tokens including the cue token itselfhowever the classifiers only predict the first and last element of the scopewe need to process the output of the classifers in order to build the complete sequence of tokens that constitute the scopewe apply the following postprocessing if one token has been predicted as first and one as last the sequence is formed by the tokens between first and last if one token has been predicted as first and none has been predicted as last the sequence is formed by the token predicted as first if one token has been predicted as last and none as first the sequence will start at the hedge cue and it will finish at the token predicted as last if one token has been predicted as first and more than one as last the sequence will end with the first token predicted as last after the token predicted as first if there is one if one token has been predicted as last and more than one as first the sequence will start at the hedge signal if no token has been predicted as first and more than one as last the sequence will start at the hedge cue and will end at the first token predicted as last after the hedge signalthe results provided for the abstracts part of the corpus have been obtained by performing 10fold cross validation experiments whereas the results provided for papers and clinical reports have been obtained by training on the full abstracts subcorpus and testing on the papers and clinical reports subcorpusthe latter experiment is therefore a test of the robustness of the system when applied to different text types within the same domainthe evaluation is made using the precision and recall measures and their harmonic mean fscorewe report micro f1in the hedge finding task a hedge token is correctly classified if it has been classified as being at the beginning or inside the hedge signalwe also evaluate the percentage of hedge cues that have been correctly identifiedin the scope finding task a token is correctly classified if it has been correctly classified as being inside or outside of the scope of all the hedge cues that there are in the sentencethis means that when there is more than one hedge cue in the sentence the token has to be correctly assigned a class for as many hedge signals as there areadditionally we evaluate the percentage of correct scopes a scope is correct if all the tokens in the sentence have been assigned the correct scope class for a specific hedge signalthe evaluation in terms of precision and recall measures takes as unit a token whereas the evaluation in terms of pcs takes as unit a scopean informed baseline system has been created by tagging as hedge cues the tokens with the words listed in abovethe list has been extracted from the training corpusthe results are shown in table 2the fact that the results are lower for the papers and clinical subcorpora can be explained by the fact that the list of cues has been extracted from the training corpustable 3 shows the results of the systemthe results of the system for abstracts and papers are higher than baseline but for clinical they are lowerthis is due to the fact that in the baseline system the hedge cue or that accounts for 2453 of the hedge cues is 100 correct whereas the system achieves only 072 of correct predictionsthe score obtained by or is also the reason why the system produces lower results for the clinical subcorpusin the abstracts 
subcorpus the hedge cue that has the biggest proportion of false positives is or of the 1062 occurrences of or in 8832 of the cases or is not a hedge cuethe system that uses preprocessing produces 938 false positives and 4 false negatives whereas the other system produces 21 false positives and 108 false negativesin the papers subcorpus the hedge cues if or can indicate and estimate cause 6738 of the false positivesin the clinical subcorpus the hedge cues evidence evidence of no and appear cause 8827 of the false positivesin contrast with the abstracts subcorpus the hedge cue or has only 5 false positives and scores an f1 of 9910so in the clinical corpus or is not ambiguous whereas in the abstracts subcorpus it is very ambiguousan example of or as hedge cue in the clinical subcorpus is shown in an example of or as hedge cue in the abstracts subcorpus is shown in and as a non cue in macrophages induction of nfkb occurred only if the cells were previously infected with hiv1compared to negation cues hedge cues are more varied and more ambiguousboth the system without and with preprocessing for negation finding performed better than the hedge finding systeman informed baseline system has been created by calculating the average length of the scope to the right of the hedge cue in each corpus and tagging that number of tokens as scope tokenswe take the scope to the right for the baseline because it is much more frequent than the scope to the left as is shown by the statistics contained in table 1 of section 3baseline results are presented in table 5the low pcs for the three subcorpora indicates that finding the scope of hedge cues is not a trivial taskthe fact that despite a very low pcs precision recall and f1 are relatively high indicates that these measures are in themselves not reliable to evaluate the performance of the systemthe upperbound results of the metalearner system assuming gold standard identification of hedge cues are shown in table 6the percentage of correct scopes has been measured in two ways pcs measures the proportion of correctly classified tokens in the scope sequence whereas pcs2 measures the proportion of nouns and verbs that are correctly classified in the scope sequencethis less strict way of computing correctness is motivated by the fact that being able to determine the concepts and relations that are speculated is the most important use of the hedge scope finderresults show that the system achieves a high percentage of fully correct scopes and that although performance is lower for the papers and clinical corpora the system is portabletable 7 shows the results of the negation scope finding system also with gold standard negation cuesthe comparison of results shows that for abstracts and papers the scores are higher for the hedge system which means that the system can be used for finding both types of scopethe results of the hedge system with predicted hedge cues are presented in table 8the hedge cues have been predicted by the system without the preprocessing step presented in subsection 61in terms of pcs which is a scope based measure results are considerably higher than baseline results whereas in terms of precision recall and f1 which are token based measures results are lowerevaluating the system in terms of a more relaxed measure does not reflect a significant increase in its performancethis suggests that when a scope is incorrectly predicted main content tokens are also incorrectly left out of the scope or addedresults also show that the system based on
predicted hedge cues performs lower for all corpora which is also a trend observed for the negation scope finding systemthe difference in performance for abstracts and papers follows the same trends as in the negation system whereas the drop in performance for the clinical subcorpus is biggerthis can be explained by the results obtained in the cues finding phase where the clinical subcorpus obtained only 4192 f1however gold standard results show that if the hedge cues are identified then the system is portabletable 9 shows the pcs results per hedge cuethe cues that get better scores in the clinical and papers subcorpora are cues that appear in the abstracts subcorpus and get a good scorecues that occur in the clinical subcorpus and do not occur in the abstracts subcorpus get 000 score or close to 000 whereas cues that appear in both subcorpora tend to get a similar or better score in the clinical subcorpusthis is a trend that we also observed in the negation scope finding systemas with that system we also observed that the papers subcorpus tends to get lower scores than the abstracts subcorpusthe results of the system based on gold standard hedge cues showed that the system can be applied to negation scope finding and hedge scope finding but these results show that the results of the second phase of the system depend on the results of the first phase of the system and that finding hedge cues is a domain dependent taskthe cues that are not present in the training data cannot be learned in the test data and the same applies to their scopethis observation is consistent with the observation that the portability of hedge classifiers is limited made by szarvas in this paper we have presented a metalearning approach to processing the scope of hedge cues based on a system that finds the scope of negation cueswe have shown that the same system can find both the scope of negation and hedge cuesthe performance of the system is evaluated in terms of percentage of correct scopes on three text typesin the hedge finding phase the system achieves an f1 of 8477 in the abstracts subcorpusexisting systems that classify sentences as speculative or not reach an 8500 bepalthough the tasks are different we consider that the results of our system are competitivein the scope finding phase the system that uses predicted hedge cues achieves 6555 pcs in the abstracts corpus which is very similar to the result obtained by the negation scope finding system with predicted negation cues however the results for the papers and clinical subcorpora are considerably lower than the results for the abstracts subcorpus in the two phasesin the case of the negation scope finding system the evaluation on the clinical subcorpus yielded a 423 pcs higher result whereas in the case of the hedge scope finding system the results are almost 3000 pcs lower confirming the observation that the portability of hedge classifers is limitedfuture research will focus on trying to improve the first phase of the system and anlysing errors in depth in order to get insights into how to get a better performanceour work was made possible through financial support from the university of antwerp we are thankful to three anonymous reviewers for their valuable comments
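The scope postprocessing rules listed in the hedge-scope text above can be summarized in a small sketch. This is our reconstruction under simplifying assumptions (token positions as sentence indices, inclusive spans); the final fallback branch is ours and is not described in the paper.

```python
# Simplified sketch (our reconstruction, not the authors' code) of building a
# hedge scope from the F-scope / L-scope predictions for one hedge cue.
# cue: index of the cue token; firsts/lasts: indices predicted as first/last
# scope token; returns an inclusive (start, end) span of token indices.
def build_scope(cue, firsts, lasts):
    if len(firsts) == 1 and len(lasts) == 1:
        return (firsts[0], lasts[0])           # tokens between first and last
    if len(firsts) == 1 and not lasts:
        return (firsts[0], firsts[0])          # only the token predicted as first
    if not firsts and len(lasts) == 1:
        return (cue, lasts[0])                 # start at the hedge cue
    if len(firsts) == 1 and len(lasts) > 1:
        later = [p for p in lasts if p > firsts[0]]
        return (firsts[0], later[0] if later else firsts[0])
    if len(firsts) > 1 and len(lasts) == 1:
        return (cue, lasts[0])                 # start at the hedge signal
    if not firsts and len(lasts) > 1:
        later = [p for p in lasts if p > cue]
        return (cue, later[0] if later else lasts[0])
    return (cue, cue)                          # fallback case, our assumption

# e.g. cue "suggest" at position 2, one F-scope prediction at 3, one L-scope at 10:
print(build_scope(2, [3], [10]))   # (3, 10)
```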
W09-1304
learning the scope of hedge cues in biomedical textsidentifying hedged information in biomedical literature is an important subtask in information extraction because it would be misleading to extract speculative information as factual informationin this paper we present a machine learning system that finds the scope of hedge cues in biomedical textsthe system is based on a similar system that finds the scope of negation cueswe show that the same scope finding approach can be applied to both negation and hedgingto investigate the robustness of the approach the system is tested on the three subcorpora of the bioscope corpus that represent different text typeswe develop a scope detector following a supervised sequence labeling approachwe present a metalearning system that finds the scope of hedge cues in biomedical textswe use shallow syntactic features
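The percentage-of-correct-scopes (PCS) measure used in the hedge-scope evaluation above can be sketched as follows; the data layout (one set of in-scope token indices per cue) is our own simplification, not the authors' implementation.

```python
# Rough sketch of the PCS measure as described above (our reading): a scope is
# counted as correct only when every token in the sentence gets the right
# in-scope / out-of-scope decision for that particular hedge cue.
def pcs(gold_scopes, pred_scopes):
    # gold_scopes, pred_scopes: aligned lists with one set of in-scope token
    # indices per (sentence, hedge cue) pair
    if not gold_scopes:
        return 0.0
    correct = sum(1 for g, p in zip(gold_scopes, pred_scopes) if g == p)
    return 100.0 * correct / len(gold_scopes)

gold = [{3, 4, 5, 6}, {10, 11}]
pred = [{3, 4, 5, 6}, {10, 11, 12}]   # second scope has one extra token
print(pcs(gold, pred))                # 50.0
```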
overview of bionlprsquo09 shared task on event extraction the paper presents the design and implementation of the bionlp09 shared task and reports the final results with analysis the shared task consists of three subtasks each of which addresses biomolecular event extraction at a different level of specificity the data was developed based on the genia event corpus the shared task was run over 12 weeks drawing initial interest from 42 teams of these teams 24 submitted final results the evaluation results are encouraging indicating that stateoftheart performance is approaching a practically applicable level and revealing some remaining challenges the history of text mining shows that shared tasks based on carefully curated resources such as those organized in the muc trec and ace events have significantly contributed to the progress of their respective fieldsthis has also been the case in biotmexamples include the trec genomics track jnlpba lll and biocreative while the first two addressed bioir and bioner respectively the last two focused on bioie seeking relations between biomoleculeswith the emergence of ner systems with performance capable of supporting practical applications the recent interest of the biotm community is shifting toward iesimilarly to lll and biocreative the bionlp09 shared task also addresses bioie but takes a definitive step further toward finergrained iewhile lll and biocreative focus on a rather simple representation of relations of biomolecules ie proteinprotein interactions the bionlp task concerns the detailed behavior of biomolecules characterized as biomolecular events the difference in focus is motivated in part by different applications envisioned as being supported by the ie methodsfor example biocreative aims to support curation of ppi databases such as mint for a long time one of the primary tasks of bioinformaticsthe bionlp task aims to support the development of more detailed and structured databases eg pathway or gene ontology annotation databases which are gaining increasing interest in bioinformatics research in response to recent advances in molecular biologyas the first shared task of its type the bionlp task aimed to define a bounded welldefined bioevent extraction task considering both the actual needs and the state of the art in biotm technology and to pursue it as a communitywide effortthe key challenge was in finding a good balance between the utility and the feasibility of the task which was also limited by the resources availablespecial consideration was given to providing evaluation at diverse levels and aspects so that the results can drive continuous efforts in relevant directionsthe paper discusses the design and implementation of the bionlp task and reports the results with analysisto focus efforts on the novel aspects of the event extraction task is was assumed that named entity recognition has already been performed and the task was begun with a given set of gold protein annotationthis is the only feature of the task setting that notably detracts from its realismhowever given that stateoftheart protein annotation methods show a practically applicable level of performance ie88 fscore we believe the choice is reasonable and has several advantages including focus on event extraction and effective evaluation and analysistable 1 shows the event types addressed in the bionlp taskthe event types were selected from the genia ontology with consideration given to their importance and the number of annotated instances in the genia corpusthe 
selected event types all concern protein biology implying that they take proteins as their themethe first three types concern protein metabolism ie protein production and breakdownphosphorylation is a representative protein modification event and localization and binding are representative fundamental molecular eventsregulation represents regulatory events and causal relationsthe last five are universal but frequently occur on proteinsfor the biological interpretation of the event types readers are referred to gene ontology and the genia ontologythe failure of p65 translocation to the nucleus as shown in table 1 the theme or themes of all events are considered primary arguments that is arguments that are critical to identifying the eventfor regulation events the entity or event stated as the cause of the regulation is also regarded as a primary argumentfor some event types other arguments detailing of the events are also defined from a computational point of view the event types represent different levels of complexitywhen only primary arguments are considered the first five event types require only unary arguments and the task can be cast as relation extraction between a predicate and an argument the binding type is more complex in requiring the detection of an arbitrary number of argumentsregulation events always take a theme argument and when expressed also a cause argumentnote that a regulation event may take another event as its theme or cause a unique feature of the bionlp task compared to other event extraction tasks eg ace in the bionlp task events are expressed using three different types of entitiestextbound entities are represented as text spans with associated class informationthe tentities include event triggers protein references and references to other entities a tentity is represented by a pair and assigned an id with the prefix t egt1t3 in figure 1an event is expressed as an ntuple of typed tentities and has an id with prefix e ege1an event modification is expressed by a pair and has an id with prefix m egm1the bionlp task targets semantically rich event extraction involving the extraction of several different classes of informationto facilitate evaluation on different aspects of the overall task the task is divided into three subtasks addressing event extraction at different levels of specificitytask 1core event detection detection of typed textbound events and assignment of given proteins as their primary argumentstask 2event enrichment recognition of secondary arguments that further specify the events extracted in task 1task 3negationspeculation detection detection of negations and speculation statements concerning extracted eventstask 1 serves as the backbone of the shared task and is mandatory for all participantstask 2 involves the recognition of entity type tentities and assignment of those as secondary event argumentstask 3 addresses the recognition of negated or speculatively expressed events without specific binding to textan example is given in fig1the bionlp task data were prepared based on the genia event corpusthe data for the training and development sets were derived from the publicly available event corpus and the data for the test set from an unpublished portion of the corpustable 2 shows statistics of the data setsfor data preparation in addition to filtering out irrelevant annotations from the original genia corpus some new types of annotation were added to make the event annotation more appropriate for the purposes of the shared taskthe following
sections describe the key changes to the corpusthe named entity annotation of the genia corpus has been somewhat controversial due to differences in annotation principles compared to other biomedical ne corporafor instance the ne annotation in the widely applied genetag corpus does not differentiate proteins from genes while genia annotation doessuch differences have caused significant inconsistency in methods and resources following different annotation schemesto remove or reduce the inconsistency genetagstyle ne annotation which we term geneorgeneproduct annotation has been added to the genia corpus with appropriate revision of the original annotationfor details we refer to the ne annotation used in the bionlp task data is based on this annotationthe genia event annotation was made based on the genia event ontology which uses a loose typing system for the arguments of each event classfor example in figure 2 it is expressed that the binding event involves two proteins traf2 and cd40 and that in the case of cd40 its cytoplasmic domain takes part in the bindingwithout constraints on the type of theme arguments the following two annotations are both legitimate the two can be seen as specifying the same event at different levels of specificity1although both alternatives are reasonable the need to have consistent training and evaluation data requires a consistent choice to be made for the shared taskthus we fix the types of all nonevent primary arguments to be proteins for genia event annotations involving themes other than proteins additional argument types were introduced for example as follows note that the protein cd40 and its domain cytoplasmic domain are associated by argument numberingto resolve issues related to the mapping between proteins and related entities systematically we introduced partial static relation annotation for relations such as partwhole drawing in part on similar annotation of the bioinfer corpus for details of this part of the revision process we refer to figure 2 shows some challenging casesin the site gata motifs is not identified as an argument of the binding event because the protein containing it is not statedin among the two sites of the gene gmcsf only the more specific one pebp2 is annotatedalternative names for the same object are frequently introduced in biomedical texts typically through appositionthis is illustrated in figure 3 where the two expressions b cell transcription factor and bsap are in apposition and refer to the same proteinconsequently in this case the following two annotations represent the same event in the genia event corpus only one of these is annotated with preference given to shorter names over longer descriptive onesthus of the above example events the latter would be annotatedhowever as both express the same event in the shared task evaluation either alternative was accepted as correct extraction of the eventin order to implement this aspect of the evaluation expressions of equivalent entities were annotated as follows eq the equivalent entity annotation in the revised genia corpus covers also cases other than simple apposition illustrated in figure 3a frequent case in biomedical literature involves use of the slash symbol to state synonymsthe slash symbol is ambiguous as it is used also to indicate dimerized proteinsin the case of p50p50 the two p50 are annotated as equivalent because they represent the same proteins at the same statenote that although rare also explicitly introduced aliases are annotated as in figure 3for the 
evaluation the participants were given the test data with gold annotation only for proteinsthe evaluation was then carried out by comparing the annotation predicted by each participant to the gold annotationfor the comparison equality of annotations is defined as described in section 41the evaluation results are reported using the standard recallprecisionfscore metrics under different criteria defined through the equalitiesequality of events is defined as follows event equality equality holds between any two events when the event types are the same the event triggers are the same and the arguments are fully matcheda full matching of arguments between two events means there is a perfect 1to1 mapping between the two sets of argumentsequality of individual arguments is defined as follows argument equality equality holds between any two arguments when the role types are the same and both are tentities and equality holds between them or both are events and equality holds between themdue to the condition event equality is defined recursively for events referring to eventsequality of tentities is defined as follows tentity equality equality holds between any two tentities when the entity types are the same and the spans are the sameany two text spans (beg1, end1) and (beg2, end2) are the same iff beg1 = beg2 and end1 = end2 note that the event triggers are also tentities thus their equality is defined by the tentity equalityvarious evaluation modes can be defined by varying equivalence criteriain the following we describe three fundamental variants applied in the evaluationstrict matching the strict matching mode requires exact equality as defined in section 41as some of its requirements may be viewed as unnecessarily precise practically motivated relaxed variants described in the following are also appliedapproximate span matching the approximate span matching mode is defined by relaxing the requirement for text span matching for tentitiesspecifically a given span is equivalent to a gold span if it is entirely contained within an extension of the gold span by one word both to the left and to the right that is beg1 >= ebeg2 and end1 <= eend2 where (beg1, end1) is the given span and (ebeg2, eend2) is the extended gold spanapproximate recursive matching in strict matching for a regulation event to be correct the events it refers to as theme or cause must also be strictly correctthe approximate recursive matching mode is defined by relaxing the requirement for recursive event matching so that an event can match even if the events it refers to are only partially correctspecifically for partial matching only theme arguments are considered events can match even if referred events differ in nontheme argumentsthe bionlp task was held for 12 weeks from the sample data release to the final submissionit included 5 weeks of system design period with sample data 6 weeks of system development period with training and development data and a 1 week test periodthe system development period was originally planned for 5 weeks but extended by 1 week due to the delay of the training data release and the revisiontable 3 shows key dates of the scheduleto allow participants to focus development efforts on novel aspects of event extraction we prepared publicly available bionlp resources readily available for the shared taskseveral fundamental bionlp tools were provided through u-compare 2 which included tools for tokenization sentence segmentation partofspeech tagging chunking and syntactic parsingparticipants were also provided with the syntactic analyses created by a selection
of parserswe applied two mainstream penn treebank phrase structure parsers the bikel parser3 implementing collins parsing model and trained on ptb and the reranking parser of with the selftrained biomedical parsing model of 4we also applied the gdep5 native dependency parser trained on the genia treebank and a version of the cc ccg deep parser6 adapted to biomedical text the text of all documents was segmented and tokenized using the genia sentence splitter and the genia tagger provided by youcomparethe same segmentation was enforced for all parsers which were run using default settingsboth the native output of each parser and a representation in the popular stanford dependency format were providedthe sd representation was created using the stanford tools7 to convert from the ptb scheme the custom conversion introduced by for the cc ccg parser and a simple formatonly conversion for gdepin total 42 teams showed interest in the shared task and registered for participation and 24 teams submitted final resultsall 24 teams participated in the obligatory task 1 six in each of tasks 2 and 3 and two teams completed all the three taskstable 4 shows a profile of the 22 final teams excepting two who wished to remain anonymousa brief examination on the team organization shows a computer science background to be most frequent among participants with less frequent participation from bioinformaticians biologists and liguists this may be attributed in part to the fact that the event extraction task required complex computational modelingthe role of computer scientists may be emphasized in part due to the fact that the task was novel to most participants requiring particular efforts in framework design and implementation and computational resourcesthis also suggests there is room for improvement from more input from biologiststhe final evaluation results of task 1 are shown in table 5the results on the five event types involving only a single primary theme argument are shown in one merged class simple eventthe broad performance range indicates even the extraction of simple events is not a trivial taskhowever the topranked systems show encouraging performance achieving or approaching 70 fscorethe performance ranges for binding and regulation events show their extraction to be clearly more challengingit is interesting that while most systems show better performance for binding over regulation events the systems concordu and utdbcls are better for regulation showing somewhat reduced performance for binding eventsthis is in particular contrast to the following two systems vibghent and utokyo which show far better performance for binding than regulation eventsas one possible explanation we find that the latter two differentiate binding events by their number of themes while the former two give no specific treatment to multitheme binding eventssuch observations and comparisons are a clear benefit of a communitywide shared tasktable 6 shows the evaluation results for the teams who participated in task 2the all column shows the overall performance of the systems for task 2 while the all second args column shows the performance of finding only the secondary argumentsthe evaluation results show considerable differences between the criteriafor example the system team 24 shows performance comparable to the top ranked system in finding secondary arguments although its overall performance for task 2 is more limitedtable 6 also shows the three systems utdbcls team 24 and cnbmadrid show performance at a practical level in 
particular in finding specific sites of phosphorylationas shown in table 7 the performance range for task 3 is very low although the representation of the task is as simple as the simple eventswe attribute the reason to the fact that task 3 is the only task of which the annotation is not bound to textual clue thus no textbound annotation was providedfigure 4 shows a scatter plot of the performance of the participating systems during the system development periodthe performance evaluation comes from the log of the online evaluation system on the development datait shows the best performance and the average performance of the participating systems were trending upwards up until the deadline of final submission which indicates there is still much potential for improvementtable 8 shows experimental results of a system ensemble using the final submissionsfor the experiments the top 310 systems were chosen and the output of each system treated as a weighted vote8three weighting schemes were used equal weights each vote equally averaged weights each vote by the overall fscore of the system event type weights each vote by the fscore of the system for the specific event typethe best score 5596 was obtained by the event type weighting scheme showing a 4 unit improvement over the best individual systemwhile using the final scores for weighting uses data that would not be available in practice similar weighting could likely be obtained eg using performance on the development datathe experiment demonstrates that an fscore better than 55 can be achieved simply by combining the strengths of the systemsmeeting with the communitywide participation the bionlp shared task was successful in introducing finegrained event extraction to the domainthe evaluation results of the final submissions from the participants are both promising and encouraging for the future of this approach to ieit has been revealed that stateoftheart performance in event extraction is approaching a practically applicable level for simple events and also that there are many remaining challenges in the extraction of complex eventsa brief analysis suggests that the submitted data together with the system descriptions are rich resources for finding directions for improvementsfinally the experience of the shared task participants provides an invaluable basis for cooperation in facing further challengesthis work was partially supported by grantinaid for specially promoted research and genome network project
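The system-ensemble experiment described above treats each submission as a weighted vote; the sketch below shows one plausible reading of that setup. The majority-style threshold and the invented system outputs and weights are our assumptions, since the exact voting rule is not spelled out here.

```python
# Toy sketch of weighted voting over system outputs (our illustration).
# Each system contributes its weight to every event it predicted; an event is
# kept if its accumulated weight exceeds half of the total weight (threshold
# chosen by us for the example).
def ensemble(system_outputs, weights, threshold=0.5):
    total = sum(weights)
    votes = {}
    for events, w in zip(system_outputs, weights):
        for ev in events:
            votes[ev] = votes.get(ev, 0.0) + w
    return {ev for ev, v in votes.items() if v > threshold * total}

# events as (type, trigger id, theme id) tuples; weights could be equal,
# the overall f-score of each system, or its f-score for the event type
sys_a = {("Phosphorylation", "T1", "T2"), ("Binding", "T3", "T4")}
sys_b = {("Phosphorylation", "T1", "T2")}
sys_c = {("Phosphorylation", "T1", "T2"), ("Gene_expression", "T5", "T6")}
print(ensemble([sys_a, sys_b, sys_c], [0.52, 0.49, 0.45]))
# only the phosphorylation event clears the weighted-vote threshold
```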
W09-1401
overview of bionlp09 shared task on event extractionthe paper presents the design and implementation of the bionlp09 shared task and reports the final results with analysis the shared task consists of three subtasks each of which addresses biomolecular event extraction at a different level of specificity the data was developed based on the genia event corpus the shared task was run over 12 weeks drawing initial interest from 42 teams of these teams 24 submitted final results the evaluation results are encouraging indicating that stateoftheart performance is approaching a practically applicable level and revealing some remaining challengesthe bionlp 09 shared task on event extraction the first large scale evaluation of biomedical event extraction systems drew the participation of 24 groups and established a standard event representation scheme and datasetsthe bionlp 09 shared task is the first shared task that provided a consistent data set and evaluation tools for extraction of such biological relations
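The approximate span matching criterion defined in the shared-task evaluation above reduces to a simple containment check; the sketch below assumes character offsets and that the gold span has already been widened by one word on each side (the widening step itself is not shown).

```python
# Sketch of the approximate span matching criterion described above.
# pred_span is the predicted (beg1, end1); extended_gold_span is the gold span
# extended by one word to the left and to the right, i.e. (ebeg2, eend2).
def approx_span_match(pred_span, extended_gold_span):
    beg1, end1 = pred_span
    ebeg2, eend2 = extended_gold_span
    return ebeg2 <= beg1 and end1 <= eend2

# a predicted trigger that starts exactly at the gold trigger but runs one
# word further right still matches under the relaxed criterion
print(approx_span_match((20, 36), (15, 36)))   # True
print(approx_span_match((10, 36), (15, 36)))   # False, starts too early
```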
emotions evoked by common words and phrases using mechanical turk to create an emotion lexicon even though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons research in emotion analysis has had to rely on limited and small emotion lexicons in this paper we show how we create a highquality moderatesized emotion lexicon using mechanical turk in addition to questions about emotions evoked by terms we show how the inclusion of a word choice question can discourage malicious data entry help identify instances where the annotator may not be familiar with the target term and help obtain annotations at sense level we perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speech we identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in hand when analyzing text automatically detecting emotions such as joy sadness fear anger and surprise is useful for a number of purposes including identifying blogs that express specific emotions towards the topic of interest identifying what emotion a newspaper headline is trying to evoke and devising automatic dialogue systems that respond appropriately to different emotional states of the useroften different emotions are expressed through different wordsfor example delightful and yummy indicate the emotion of joy gloomy and cry are indicative of sadness 26 shout and boiling are indicative of anger and so ontherefore an emotion lexicona list of emotions and words that are indicative of each emotionis likely to be useful in identifying emotions in textwords may evoke different emotions in different contexts and the emotion evoked by a phrase or a sentence is not simply the sum of emotions conveyed by the words in it but the emotion lexicon will be a useful component for any sophisticated emotion detecting algorithmthe lexicon will also be useful for evaluating automatic methods that identify the emotions evoked by a wordsuch algorithms may then be used to automatically generate emotion lexicons in languages where no such lexicons existas of now highquality highcoverage emotion lexicons do not exist for any language although there are a few limitedcoverage lexicons for a handful of languages for example the wordnet affect lexicon for six basic emotions and the general inquirer which categorizes words into a number of categories including positive and negative semantic orientationamazon has an online service called mechanical turk that can be used to obtain a large amount of human annotation in an efficient and inexpensive manner 1 however one must define the task carefully to obtain annotations of high qualityseveral checks must be placed to ensure that random and erroneous annotations are discouraged rejected and reannotatedin this paper we show how we compiled a moderatesized english emotion lexicon by manual annotation through amazons mechanical turk servicethis dataset which we will call emolez is many times as large as the only other known emotion lexicon wordnet affect lexiconmore importantly the terms in this lexicon are carefully chosen to include some of the most frequent nouns verbs adjectives and adverbsbeyond unigrams it has a large number of commonly used bigramswe also include some words from the general inquirer and some from wordnet affect lexicon to allow comparison of annotations between the various resourceswe perform an extensive 
analysis of the annotations to answer several questions that have not been properly addressed so farfor instance how hard is it for humans to annotate words with the emotions they evokewhat percentage of commonly used terms in each part of speech evoke an emotionare emotions more commonly evoked by nouns verbs adjectives or adverbsis there a correlation between the semantic orientation of a word and the emotion it evokeswhich emotions tend to go together that is which emotions are evoked simultaneously by the same termthis work is intended to be a pilot study before we create a much larger emotion lexicon with tens of thousands of termswe focus on the emotions of joy sadness anger fear trust disgust surprise and anticipation argued by many to be the basic and prototypical emotions complex emotions can be viewed as combinations of these basic emotionswordnet affect lexicon has a few hundred words annotated with the emotions they evoke2 it was created by manually identifying the emotions of a few seed words and then marking all their wordnet synonyms as having the same emotionthe general inquirer has 11788 words labeled with 182 categories of word tags including positive and negative semantic orientation3 it also has certain other affect categories such as pleasure arousal feeling and pain but these have not been exploited to a significant degree by the natural language processing communitywork in emotion detection can be roughly classified into that which looks for specific emotion denoting words that which determines tendency of terms to cooccur with seed words whose emotions are known that which uses handcoded rules and that which uses machine learning and a number of emotion features including emotion denoting words much of this recent work focuses on six emotions studied by ekman these emotions joy sadness anger fear disgust and surprise are a subset of the eight proposed in plutchik we focus on the plutchik emotions because the emotions can be naturally paired into oppositesjoy sadness angerfear trustdisgust and anticipationsurprisenatural symmetry apart we believe that prior work on automatically computing wordpair antonymy can now be leveraged in automatic emotion detectionin the subsections below we present the challenges in obtaining highquality emotion annotation how we address those challenges how we select the target terms and the questionnaire we created for the annotatorswords used in different senses can evoke different emotionsfor example the word shout evokes a different emotion when used in the context of admonishment than when used in give me a shout if you need any help getting human annotations on word senses is made complicated by decisions about which senseinventory to use and what level of granularity the senses must haveon the one hand we do not want to choose a finegrained senseinventory because then the number of wordsense combinations will become too large and difficult to easily distinguish and on the other hand we do not want to work only at the word level because when used in different senses a word may evoke different emotionsyet another challenge is how best to convey a word sense to the annotatorlong definitions will take time to read and limit the number of annotations we can obtain for the same amount of resourcesfurther we do not want to bias the annotator towards an emotion through the definitionwe want the users to annotate a word only if they are already familiar with it and know its meaningsand lastly we must ensure that malicious and erroneous 
annotations are rejectedin order to overcome the challenges described above before asking the annotators questions about what emotions are evoked by a target term we first present them with a word choice problem pertaining to the targetthey are provided with four different words and asked which word is closest in meaning to the targetthis single question serves many purposesthrough this question we convey the word sense for which annotations are to be provided without actually providing annotators with long definitionsif an annotator is not familiar with the target word and still attempts to answer questions pertaining to the target or is randomly clicking options in our questionnaire then there is a 75 chance that they will get the answer to this question wrong and we can discard all responses pertaining to this target term by the annotator we generated these word choice problems automatically using the macquarie thesaurus published thesauri such as rogets and macquarie divide the vocabulary into about a thousand categories which may be interpreted as coarse sensesif a word has more than one sense then it can be found in more than one thesaurus categoryeach category also has a head word which best captures the meaning of the categorymost of the target terms chosen for annotation are restricted to those that are listed in exactly one thesaurus categorythe word choice question for a target term is automatically generated by selecting the following four alternatives the head word of the thesaurus category pertaining to the target term and three other head words of randomly selected categories the four alternatives are presented to the annotator in random orderonly a small number of the words in the wordnet affect lexicon are listed in exactly one thesaurus category and so we included target terms that occurred in two thesaurus categories as wellfor these questions we listed head words from both the senses as two of the alternatives depending on the alternative chosen we can thus determine the sense for which the subsequent emotion responses are provided by the annotatorin order to generate an emotion lexicon we first identify a list of words and phrases for which we want human annotationswe chose the macquarie thesaurus as our source pool for unigrams and bigramsany other published dictionary would have worked well toohowever apart from over 57000 commonly used english word types the macquarie thesaurus also has entries for more than 40000 commonly used phrasesfrom this list of unigrams and bigrams we chose those that occur frequently in the google ngram corpus specifically we chose the 200 most frequent ngrams in the following categories noun unigrams noun bigrams verb unigrams verb bigrams adverb unigrams adverb bigrams adjective unigrams adjective bigrams words in the general inquirer that are marked as having a negative semantic orientation words in general inquirer that are marked as having a positive semantic orientationwhen selecting these sets we ignored terms that occurred in more than one macquarie thesaurus categorylastly we chose all words from each of the six emotion categories in the wordnet affect lexicon that had at most two senses in the thesaurus the first and second column of table 1 list the various sets of target terms as well as the number of terms in each set for which annotations were requestedemolexuni stands for all the unigrams taken from the thesaurusemolexbi refers to all the bigramsemolexgi are all the words taken from the general inquireremolexwal are all the 
words taken from the wordnet affect lexiconan entity submitting a task to mechanical turk is called the requestera requester first breaks the task into small independently solvable units called hits and uploads them on the mechanical turk websitethe requester specifies the compensation that will be paid for solving each hitthe people who provide responses to these hits are called turkersthe requester also specifies the number of different turkers that are to annotate each hitthe annotation provided by a turker for a hit is called an assignmentwe created mechanical turk hits for each of the terms specified in table 1each hit has a set of questions all of which are to be answered by the same personwe requested five different assignments for each hit different hits may be attempted by different turkers and a turker may attempt as many hits as they wishbelow is an example hit for the target word startletitle emotions evoked by words reward per hit 004 questions 5 to 11 are similar to 4 except that joy is replaced with one of the other seven emotions sadness fear anger trust disgust surprise anticipation before going live the survey was approved by the ethics committee at the national research council canadathe first set of emotion annotations on mechanical turk were completed in about nine daysthe turkers spent a minute on average to answer the questions in a hitthis resulted in an hourly pay of slightly more than 2once the assignments were collected we used automatic scripts to validate the annotationssome assignments were discarded because they failed certain tests a subset of the discarded assignments were officially rejected because instructions were not followedabout 500 of the 10880 assignments included at least one unanswered questionthese assignments were discarded and rejectedmore than 85 of the remaining assignments had the correct answer for the word choice questionthis was a welcome result showing that largely the annotations were done in a responsible mannerwe discarded all assignments that had the wrong answer for the word choice questionif an annotator obtained an overall score that is less than 6667 on the word choice questions then we assumed that contrary to instructions hits for words not familiar to the annotator were attemptedwe discarded and rejected all assignments by such annotators hits pertaining to all the discarded assignments were uploaded for a second time on mechanical turk and the validation process was repeatedafter the second round we had three or more valid assignments for 2081 of the 2176 target termswe will refer to this set of assignments as the master setwe create the emotion lexicon from this master set containing 9892 assignments from about 1000 turkers who attempted 1 to 450 assignments eachabout 100 of them provided 20 or more assignments each the master set has on average about 475 assignments for each of the 2081 target termsthe different emotion annotations for a target term were consolidated by determining the majority class of emotion intensitiesfor a given term emotion pair the majority class is that intensity level that is chosen most often by the turkers to represent the degree of emotion evoked by the wordties are broken by choosing the stronger intensity leveltable 2 lists the percent of 2081 target terms assigned a majority class of no weak moderate and strong emotionfor example it tells us that 76 of the target terms strongly evoke joythe table also presents an average of the numbers in each column observe that the percentages for 
individual emotions do not vary greatly from the averagethe last row lists the percent of target terms that evoke some emotion at the various intensity levelswe calculated this using the intensity level of the strongest emotion expressed by each targetobserve that 301 of the target terms strongly evoke at least one of the eight basic emotionseven though we asked turkers to annotate emotions at four levels of intensity practical nlp applications often require only two levelsevoking particular emotion or not for each target termemotion pair we convert the fourlevel annotations into twolevel annotations by placing all no and weakintensity assignments in the nonevocative bin all moderate and strongintensity assignments in the evocative bin and then choosing the bin with the majority assignmentstable 3 gives percent of target terms considered to be evocativethe last row in the table gives the percentage of terms evocative of some emotion table 4 shows how many terms in each category are evocative of the different emotionstable 4 shows that a sizable percent of nouns verbs adjectives and adverbs are evocativeadverbs and adjectives are some of the most emotion inspiring terms and this is not surprising considering that they are used to qualify a noun or a verbanticipation trust and joy come through as the most common emotions evoked by terms of all four parts of speechthe emolexwal rows are particularly interesting because they serve to determine how much the turker annotations match annotations in the wordnet affect lexicon the most common turkerdetermined emotion for each of these rows is marked in boldobserve that wal anger terms are mostly marked as anger evocative joy terms as joy evocative and so onthe emolexwal rows also indicate which emotions get confused for which or which emotions tend to be evoked simultaneously by a termobserve that anger terms tend also to be evocative of disgustsimilarly fear and sadness go together as do joy trust and anticipationthe emolexgi rows rightly show that words marked as negative in the general inquirer mostly evoke negative emotions observe that the percentages for trust and joy are much loweron the other hand positive words evoke anticipation joy and trustin order to analyze how often the annotators agreed with each other for each termemotion pair we calculated the percentage of times the majority class has size 5 size 4 size 3 and size 2observe that for more than 50 of the terms at least four annotators agree with each othertable 5 presents these agreement valuessince many nlp systems may rely only on two intensity values we also calculate agreement at that level observe that for more than 50 of the terms all five annotators agree with each other and for more than 80 of the terms at least four annotators agreethis shows a high degree of agreement on emotion annotations despite no real control over the educational background and qualifications of the annotatorswe consolidate the semantic orientation annotations in a manner identical to the process for emotion annotationstable 7 lists the percent of 2081 target terms assigned a majority class of no weak moderate and strong semantic orientationfor example it tells us that 16 of the target terms are strongly negativethe last row in the table lists the percent of target terms that have some semantic orientation at the various intensity levelsobserve that 35 of the target terms are strongly evaluative just as in the case for emotions practical nlp applications often require only two levels of semantic 
orientationhaving particular semantic orientation or not or not for each target termemotion pair we convert the fourlevel semantic orientation annotations into twolevel ones just as we did for the emotionstable 8 gives percent of target terms considered to be evaluativethe last row in the table gives the percentage of terms evaluative with respect to some semantic orientation table 9 shows how many terms in each category are positively and negatively evaluativeobserve in table 9 that across the board a sizable number of terms are evaluative with respect to some semantic orientationinterestingly unigram nouns have a markedly lower proportion of negative terms and a much higher proportion of positive termsit may be argued that the default semantic orientation of noun concepts is positive and that usually it takes a negative adjective to make the phrase negativethe emolexgi rows in the two tables show that words marked as having a negative semantic orientation in the general inquirer are mostly marked as negative by the turkersand similarly the positives in gi are annotated as positiveagain this is confirmation that the quality of annotation obtained is highthe emolexwal rows show that anger disgust fear and sadness terms tend not to have a positive semantic orientation and are mostly negativein contrast and expectedly the joy terms are positivethe surprise terms are more than twice as likely to be positive than negativein order to analyze how often the annotators agreed with each other for each termemotion pair we calculated the percentage of times the majority class has size 5 size 4 size 3 and size 2table 10 presents these agreement valuesobserve that for more than 50 of the terms at least four annotators agree with each othertable 11 gives agreement values at the twointensity levelobserve that for more than 50 of the terms all five annotators agree with each other and for more than 80 of the terms at least four annotators agreewe showed how mechanical turk can be used to create a highquality moderatesized emotion lexicon for a very small cost notably we used automatically generated word choice questions to detect and reject erroneous annotations and to reject all annotations by unqualified turkers and those who indulge in malicious data entrywe compared a subset of our lexicon with existing gold standard data to show that the annotations obtained are indeed of high qualitya detailed analysis of the percent of 2081 terms for which the majority class size was 3 4 and 5 lexicon revealed insights into how prevalent emotion bearing terms are among common unigrams and bigramswe also identified which emotions tend to be evoked simultaneously by the same termthe lexicon is available for free download4 since this pilot experiment with about 2000 target terms was successful we will now obtain emotion annotations for tens of thousands of english termswe will use the emotion lexicon to identify emotional tone of larger units of text such as newspaper headlines and blog postswe will also use it to evaluate automatically generated lexicons such as the polarity lexicons by turney and littman and mohammad et al we will explore the variance in emotion evoked by nearsynonyms and also how common it is for words with many meanings to evoke different emotions in different sensesthis research was funded by the national research council canada thanks to diana inkpen and diman ghazi for early discussions on emotionthanks to joel martin for encouragement and supportthanks to norm vinson and the ethics committee 
at nrc for examining guiding and approving the survey and last but not least thanks to the more than 1000 anonymous people who answered the emotion survey with diligence and care
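A minimal sketch of the consolidation step described above, assuming each term–emotion pair comes with the list of intensity labels from its valid assignments; the data layout and function names are illustrative, not the authors' implementation.

```python
from collections import Counter

# Intensity scale used in the annotation scheme, ordered from weakest to strongest.
INTENSITIES = ["no", "weak", "moderate", "strong"]
RANK = {label: i for i, label in enumerate(INTENSITIES)}

def majority_intensity(labels):
    """Majority intensity class for one term-emotion pair; ties are broken
    in favour of the stronger intensity level, as described above."""
    counts = Counter(labels)
    label, _ = max(counts.items(), key=lambda kv: (kv[1], RANK[kv[0]]))
    return label

def is_evocative(labels):
    """Two-level decision: no/weak votes go to the non-evocative bin,
    moderate/strong votes to the evocative bin; the majority bin wins."""
    evocative = sum(1 for l in labels if RANK[l] >= RANK["moderate"])
    return evocative > len(labels) - evocative

# Five hypothetical Turker judgments for the pair (startle, fear).
votes = ["strong", "moderate", "weak", "strong", "moderate"]
print(majority_intensity(votes))  # -> "strong" (2-2 tie with "moderate" broken upward)
print(is_evocative(votes))        # -> True (4 of 5 votes are moderate or strong)
```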
W10-0204
emotions evoked by common words and phrases using mechanical turk to create an emotion lexiconeven though considerable attention has been given to semantic orientation of words and the creation of large polarity lexicons research in emotion analysis has had to rely on limited and small emotion lexiconsin this paper we show how we create a highquality moderatesized emotion lexicon using mechanical turkin addition to questions about emotions evoked by terms we show how the inclusion of a word choice question can discourage malicious data entry help identify instances where the annotator may not be familiar with the target term and help obtain annotations at sense level we perform an extensive analysis of the annotations to better understand the distribution of emotions evoked by terms of different parts of speechwe identify which emotions tend to be evoked simultaneously by the same term and show that certain emotions indeed go hand in handwe focus on emotion evoked by common words and phraseswe explore the use of mechanical turk to build the lexicon based on human judgmentwe create a crowd sourced term emotion association lexicon consisting of associations of over 10000 wordsense pairs with eight emotions joy sadness anger fear trust disgust surprise and anticipation argued to be the basic and prototypical emotions
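As a rough illustration of how the automatically generated word choice questions mentioned above could be built from a thesaurus, the sketch below selects the head word of the target's category plus three randomly chosen head words from other categories and shuffles them; the thesaurus data structure, the toy entries, and the function name are all assumptions, since the paper works from the Macquarie Thesaurus itself.

```python
import random

def word_choice_question(target, thesaurus, rng=None):
    """Build a four-way word choice question for `target`.

    `thesaurus` is assumed to map category names to (head_word, members) pairs,
    mirroring the thesaurus-category description above; the format is
    illustrative, not the actual Macquarie data."""
    rng = rng or random.Random(0)
    containing = [c for c, (head, members) in thesaurus.items() if target in members]
    correct = thesaurus[containing[0]][0]                # head word of the target's category
    other_heads = [thesaurus[c][0] for c in thesaurus if c not in containing]
    options = [correct] + rng.sample(other_heads, 3)     # three randomly chosen distractor heads
    rng.shuffle(options)                                 # alternatives shown in random order
    return {"prompt": f"Which word is closest in meaning to {target}?",
            "options": options, "answer": correct}

# Toy thesaurus for illustration only.
toy = {
    "fear":     ("fear",     {"startle", "alarm", "fright"}),
    "joy":      ("joy",      {"delight", "glee"}),
    "movement": ("movement", {"run", "walk"}),
    "sound":    ("sound",    {"shout", "noise"}),
    "light":    ("light",    {"glow", "shine"}),
}
print(word_choice_question("startle", toy))
```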
creating speech and language data with amazons mechanical turk in this paper we give an introduction to using amazons mechanical turk crowdsourcing platform for the purpose of collecting data for human language technologies we survey the papers published in the naacl 2010 workshop 24 researchers participated in the workshops shared task to create data for speech and language applications with 100 this paper gives an overview of the naacl 2010 workshop on creating speech and language data with amazons mechanical turk a number of recent papers have evaluated the effectiveness of using mechanical turk to create annotated data for natural language processing applications the lowcost scalable workforce available through mechanical turk and other crowdsourcing sites opens new possibilities for annotating speech and text and has the potential to dramatically change how we create data for human language technologies open questions include what kind of research is possible when the cost of creating annotated training data is dramatically reduced what new tasks should we try to solve if we do not limit ourselves to reusing existing training and test sets can complex annotation be done by untrained annotators how can we ensure high quality annotations from crowdsourced contributors to begin addressing these questions we organized an openended 100 shared task researchers were given 100 of credit on amazon mechanical turk to spend on an annotation task of their choosing they were required to write a short paper describing their experience and to distribute the data that they created they were encouraged to address the following questions how did you convey the task in terms that were simple enough for nonexperts to understand were nonexperts as good as experts what did you do to ensure quality how quickly did the data get annotated what is the cost per label researchers submitted a 1 page proposal to the workshop organizers that described their intended experiments and expected outcomes the organizers selected proposals based on merit and awarded 100 credits that were generously provided by amazon mechanical turk in total 35 credits were awarded to researchers shared task participants were given 10 days to run experiments between the distribution of the credit and the initial submission deadline 30 papers were submitted to the shared task track of which 24 were accepted 14 papers were submitted to the general track of which 10 were accepted giving a 77 acceptance rate and a total of 34 papers shared task participants were required to provide the data collected as part of their experiments all of the shared task data is available on the workshop website [figure 1 time spent hits completed and amount earned from a survey of 1000 turkers by ipeirotis] amazons mechanical turk 1 httpwwwmturkcom is an online marketplace for work amazons tag line for mechanical turk is artificial artificial intelligence and the name refers to a historical hoax from the 18th century where a chessplaying automaton appeared to be able to beat human opponents using a mechanism but was in fact controlled by a person hiding inside
the machinethese hint at the the primary focus of the web service which is to get people to per form tasks that are simple for humans but difficult for computersthe basic unit of work on mturk is even called a human intelligence task amazons web service provides an easy way to pay people small amounts of money to perform hitsanyone with an amazon account can either submit hits or work on hits that were submitted by othersworkers are referred to as turkersand people designing the hits are called requestersrequesters set the amount that they will pay for each item that is completedpayments are frequently as low as 001turkers are free to select whichever hits interest them and to disregard hits that they find uninteresting or which they deem pay too littlebecause of its focus on tasks requiring human in telligence mechanical turk is obviously applicable to the field of natural language processingsnow et al used mechanical turk to inexpensively collect labels for several nlp tasks including wordsense disambiguation word similarity textual en tailment and temporal ordering of eventssnow et al had two exciting findingsfirst they showed that a strong correlation between nonexpert and expertannotators can be achieved by combining the judgments of multiple nonexperts for instance by voting on each label using 10 different turkerscor relation and accuracy of labeling could be furtherimproved by weighting each turkers vote by cal ibrating them on a small amount of gold standarddata created by expert annotatorssecond they col lected a staggering number of labels for a very small amount of moneythey collected 21000 labels for just over 25turkers put in over 140 hours worth why do you complete tasks in mturkus india to spend free time fruitfully and get cash 70 60 for primaryincome purposes 15 27 for secondaryincome purposes pocket change 60 37 to kill time 33 5 the tasks are fun 40 20 currently unemployed or part time work 30 27 table 1 motivations for participating on mechanical turk from a survey of 1000 turkers by ipeirotis of human effort to generate the labelsthe amount of participation is surprisingly high given the small paymentturker demographics given the amount of work that can get done for so little it is natural to ask who would contribute so much work for so little pay and whythe answers to these questions are often mysterious becauseamazon does not provide any personal informa tion about turkers ipeirotis elucidates some of the reasons by presenting a demographic analysis of turkershe built a profile of 1000 turkers by posting a survey tomturk and paying 010 for people to answer questions about their reasons for participating on me chanical turk the amount that they earn each weekand how much time they spend as well as demo graphic information like country of origin gender age education level and household incomeone suspicion that people often have when theyfirst hear about mturk is that it is some sort of dig ital sweatshop that exploits workers in third world countrieshowever ipeirotis reports that nearly half 2 of the turkers who answered his survey were from the united states with the next largest group coming from india and the remaining 19 spread between 66 other countriestable 1 gives the survey results for questions relating to why people participate on mechanicalturkit shows that most usbased workers use me chanical turk for secondary income purposes but that the overwhelming majority of them use it to spend their time more fruitfully the economic downturn mayhave increased 
participation with 30 of the us based turkers reporting that they are unemployedor underemployedthe public radio show mar ketplace recently interviewed unemployed turkersit reports that they earn a little in come but that they do not earn enough to make a livingfigure 1 confirms this giving a break down of how much time people spend on mechanical turk each week how many hits they complete and how much money they earnmost turkers spend less than 8 hours per week on mechanical turk and earn less than 10 per week through the siteipeirotis reports that just over half of turkers have a college educationdespite being reasonably well educated it is important to keep in mind that turkers do not have training in specialized subjects like nlpbecause the turkers are nonexperts and because the payments are generally so low quality control is an important consideration when creating data with mturkamazon provides three mechanisms to help en sure quality requesters have the option of rejecting the work of individual turkers in which case they are not paid2 turkers can also be blocked from doing future work for a requester2since the results are downloadable even if they are rejected this could allow unscrupulous requesters to abuse turkers by rejecting all of their work even if it was done wellturkers have message boards at httpwwwturkernationcomwhere they discuss requestersthey even have a firefox plu gin called turkopticon that let us them see ratings of how goodthe requesters are in terms of communicating with turkers be ing generous and fair and paying promptlyrequesters can specify that each hit should be redundantly completed by several different turkersthis allows higher quality labels to be selected for instance by taking the majority labelrequesters can require that all workers meeta particular set of qualifications such as suffi cient accuracy on a small test set or a minimum percentage of previously accepted submissionsamazon provides two qualifications that a requester can use by defaultthese are past hit approval rate and locationthe location qualifica tion allows the requester to have hits done only byresidents of a certain country additionally requesters can design custom qualification tests that turkers must complete before working on a particular hitthese can be created through the mturk api and can either be graded manually or automaticallyan important qualification that is not among amazons default qualifications is language skillsone might design a qualification test to determine a turkers ability to speak arabic or farsi before allowing them to do part of speech tagging in those languages for instancethere are several reasons that poor quality data might be generatedthe task may be too complex orthe instructions might not be clear enough for turk ers to followthe financial incentives may be too low for turkers to act conscientiously and certain hit designs may allow them to simply randomly click instead of thinking about the taskmason and watts present a study of financial incentives on mechanical turk and find counterintuitively thatincreasing the amount of compensation for a partic ular task does not tend to improve the quality of theresultsanecdotally we have observed that some times there is an inverse relationship between the amount of payment and the quality of work because it is more tempting to cheat on highpaying hits ifyou do not have the skills to complete themfor ex ample a number of turkers tried to cheat on an urdu to english translation hit by cuttingandpastingthe urdu 
text into an online machine translation sys tem because we were paying the comparatively high amount of 13 31 designing hits for quality controlwe suggest designing your hits in a way that will deter cheating or that will make cheating obvioushit design is part of the art of using mturkit cannot be easily quantified but it has a large impact on the outcomefor instance we reduced cheating on our translation hit by changing the design so that we displayed images of the urdu sentences instead of text which made it impossible to copyandpaste into an mt system for anyone who could not type in arabic scriptanother suggestion is to include information within the data that you upload to mturk that will not be displayed to the turkers but will be useful to you when reviewing the hitsfor example we include machine translation output along with the source sentencesalthough this is not displayed to turkers when we review the turkerstranslations we compare them to the mt outputthis allows us to reject translations that are identical to the mt or which are just random sentences that are unrelated to the original urduwe also use a javascript3 to gather the ip addresses of the turkers and do geolocation to look up their locationturkers in pakistan require less careful scrutiny since they are more likely to be bilingual urdu speakers than those in romania for instancecrowdflower4 provides an interface for design ing hits that includes a phase for the requester toinput gold standard data with known labelsinsert ing items with known labels alongside items which need labels allows a requester to see which turkers are correctly replicating the gold standard labels andwhich are notthis is an excellent ideaif it is possi ble to include positive and negative controls in your hits then do soturkers who fail the controls can be blocked and their labels can be excluded from the final data setcrowdflowergenerated hits even display a score to the turkers to give them feedback on how well they are doingthis provides training for turkers and discourages cheating3httpwikigithubcomcallisonburch mechanical_turk_workshopgeolocation 4httpcrowdflowercom 32 iterative improvements on mturkanother class of quality control on mechanical turk is through iterative hits that build on the output of previous hitsthis could be used to have turkersjudge whether the results from a previous hit con formed to the instructions and whether it is of high qualityalternately the second set of turkers couldbe used to improve the quality of what the first turkers createdfor instance in a translation task a sec ond set of usbased turkers could edit the english produced by nonnative speakerscastingwords5 a transcription company that usesturker labor employs this strategy by having a first pass transcription graded and iteratively improvedin subsequent passeslittle et al even de signed an api specifically for running iterative tasks on mturk6although it is hard to define a set of best practicesthat applies to all hits or even to all nlp hits we recommend the following guidelines to requestersfirst and foremost it is critical to convey instruc tions appropriately for nonexpertsthe instructions should be clear and conciseto calibrate whetherthe hit is doable you should first try the task your self and then have a friend from outside the field try itthis will help to ensure that the instructions are clear and to calibrate how long each hit will take if possible you should insert positive and nega tive controls so that you can quickly screen out bad 
turkersthis is especially important for hits thatonly require clicking buttons to completeif pos sible you should include a small amount of gold standard data in each hitthis will allow you todetermine which turkers are good but will also al low you weight the turkers if you are combiningthe judgments of multiple turkersif you are having turkers evaluate the output of systems then ran domize the order that the systems are shown inwhen publishing papers that use mechanical turk as a source of training data or to evaluate the outputof an nlp system report how you ensured the qual ity of your datayou can do this by measuring the 5httpcastingwordscom 6httpgroupscsailmiteduuidturkit 4interannotator agreement of the turkers against ex perts on small amounts of gold standard data or by stating what controls you used and what criteria youused to block bad turkersfinally whenever possi ble you should publish the data that you generate on mechanical turk alongside your paper so that other people can verify itin the past two years several papers have published about applying mechanical turk to a diverse set ofnatural language processing tasks including cre ating questionanswer sentence pairs evaluating machine translation qual ity and crowdsouring translations paraphrasing nounnoun compouds for se meval human evaluation oftopic models and speech tran scription others haveused mturk for novel research directions like nonsimulated active learning for nlp tasks such as sen timent classification or doingquixotic things like doing humanintheloop min imum error rate training for machine translation some projects have demonstrated the super scalability of crowdsourced effortsdeng et al used mturk to construct imagenet an anno tated image database containing 32 million that arehierarchically categorized using the wordnet ontol ogy because mechanical turkallows researchers to experiment with crowdsourc ing by providing small incentives to turkers other successful crowdsourcing efforts like wikipedia or games with a purpose also share something in common with mturkthe workshop included a shared task in which participants were provided with 100 to spend on me chanical turk experimentsparticipants submitted a 1 page proposal in advance describing their intended use of the fundsselected proposals were provided 100 seed money to which many participants added their own fundsas part of their participation each team submitted a workshop paper describing theirexperiments as well as the data collected and de scribed in the paperdata for the shared papers is available at the workshop website7this section describes the variety of data types ex plored and collected in the shared taskof the 24 participating teams most did not exceed the 100 that they were awarded by a significant amounttherefore the variety and extent of data described inthis section is the result of a minimal 2400 invest mentthis achievement demonstrates the potential for mturks impact on the creation and curation of speech and language corpora61 traditional nlp tasksan established core set of computational linguistictasks have received considerable attention in the nat ural language processing communitythese include knowledge extraction textual entailment and word sense disambiguationeach of these tasks requires a large and carefully curated annotated corpus to train and evaluate statistical modelsmany of the shared task teams attempted to create new corpora for these tasks at substantially reduced costs using mturkparent and eskenazi produce new corpora 
for the task of word sense disambiguationthe study used mturk to create unique word definitions for 50 words which turkers then also mapped onto existing definitionssentences containing these 50words were then assigned to unique definitions ac cording to word sensemadnani and boydgraber measured theconcept of transitivity of verbs in the style of hop per and thompson a theory that goes beyond simple grammatical transitivity whether verbs take objects or not to capture the amount of action indicated by a sentencevideos that portrayedverbs were shown to turkers who described the ac tions shown in the videoadditionally sentencescontaining the verbs were rated for aspect affirma tion benefit harm kinesis punctuality and volitionthe authors investigated several approaches for elic iting descriptions of transitivity from turkerstwo teams explored textual entailment taskswang and callisonburch created data for 7httpsitesgooglecomsite amtworkshop2010 5recognizing textual entailment they submit ted 600 text segments and asked turkers to identifyfacts and counterfacts given the provided textthe resulting collection includes 790 facts and 203 counterfactsnegri and mehdad created a bilingual entailment corpus using english and spanish entail ment pairs where the hypothesis and text come from different languagesthe authors took a publicly available english rte data set and created an englishspanish equivalent by having turkers translating the hypotheses into spanishthe authors include a timeline of their progress complete with total cost over the 10 days that they ran the experimentsin the area of natural language generation heil man and smith explored the potential of mturk for ranking of computer generated questions about provided textsthese questions can be used to test reading comprehension and understanding60 wikipedia articles were selected for each of which20 questions were generatedturkers provided 5 ratings for each of the 1200 questions creating a sig nificant corpus of scored questionsfinally gordon et al relied on mturk to evaluate the quality and accuracy of automatically extracted common sense knowledge fromnews and wikipedia articlesfactoids were pro vided by the knext knowledge extraction system62 speech and visionwhile mturk naturally lends itself to text tasks several teams explored annotation and collection ofspeech and image datawe note that one of the pa pers in the main track described tools for collecting such data two teams used mturk to collect text annotations on speech datamarge et al identified easy and hard sections of meeting speech to transcribe and focused data collection on difficult segmentstranscripts were collected on 48 audio clips from4 different speakers as well as other types of an notationskunath and weinberger collectedratings of accented english speech in which nonnative speakers were rated as either arabic mandarin or russian native speakersthe authors ob tained multiple annotations for each speech sample and tracked the native language of each annotator allowing for an analysis of rating accuracy between native english and nonnative english annotatorsnovotney and callisonburch used mturk to elicit new speech samplesas part of aneffort to increase the accessibility of public knowl edge such as wikipedia the team prompted turkers to narrate wikipedia articlesthis required turkers to record audio files and upload theman additionalhit was used to evaluate the quality of the narra tionsa particularly creative data collection approach asked turkers to create 
handwriting samples and then to submit images of their writing turkers were asked to submit handwrittenshopping lists or weather descrip tions in either arabic or spanishsubsequent turkers provided a transcription and atranslationthe team collected 18 images per lan guage 2 transcripts per image and 1 translation per transcript63 sentiment polarity and biastwo papers investigated the topics of sentiment po larity and biasmellebeek et al used severalmethods to obtain polarity scores for spanish sen tences expressing opinions about automative topicsthey evaluated three hits for collecting such data and compared results for quality and expressivenessyano et al evaluated the political bias of blogpostsannotators labeled 1000 sentences to deter mine biased phrases in political blogs from the 2008 election seasonknowledge of the annotators own biases allowed the authors to study how bias differs on the different ends of the political spectrum64 information retrievallarge scale evaluations requiring significant humanlabor for evaluation have a long history in the in formation retrieval community grady and lease study four factors that influence turker performance on a document relevance search taskthe authors present some negative results on how these factors influence data collectionfor further work on mturk and information retrieval readers are encouraged to see the sigir 2010 workshop on crowdsourcing for search evaluation8 8httpwwwischoolutexaseducse2010 callhtm 6 65 information extractioninformation extraction seeks to identify specific types of information in natural languagesthe ie papers in the shared tasks focused on new domains and genres as well as new relation typesthe goal of relation extraction is to identify rela tions between entities or terms in a sentence such asborn in or religiongormley et al automat ically generate potential relation pairs in sentences by finding relation pairs appearing in news articles as given by a knowledge basethey ask turkers ifa sentence supports a relation does not support a re lation or whether the relation makes sensethey collected close to 2500 annotations for 17 different person relation typesthe other ie papers explored new genres and domainsfinin et al obtained named entity an notations for several hundred twitter messagesthey conducted experiments using both mturk and crowd floweryetisgenyildiz et al explored medical named entity recognitionthey selected100 clinical trial announcements from clinicaltrialsgov 4 annotators for each of the 100 announce ments identified 3 types of medical entities medical conditions medications and laboratory test66 machine translationthe most popular shared task topic was machinetranslation mt is a data hungry task that re lies on huge corpora of parallel texts between two languagesperformance of mt systems dependson the size of training corpora so there is a con stant search for new and larger data setssuch data sets are traditionally expensive to produce requiring skilled translatorsone of the advantages to mturk is the diversity of the turker population making it an especially attractive source of mt datashared task papers in mt explored the full range of mt tasks including alignments parallel corpus creation paraphrases and bilingual lexiconsgao and vogel create alignments in a 300 sentence chineseenglish corpus both ambati and vogel and bloodgood and callisonburch explore thepotential of mturk in the creation of mt paral lel corpora for evaluation and trainingbloodgoodand callisonburch replicate the nist 2009 
urdu english test set of 1792 sentences paying only 010 a sentence a substantially reduced price than the typical annotator costthe result is a data set that isstill effective for comparing mt systems in an eval uationambati and vogel create corpora with 100 sentences and 3 translations per sentence for all the language pairs between english spanish urdu and teluguthis demonstrates the feasibility of creating cheap corpora for high and low resource languagestwo papers focused on the creation and evaluation of paraphrasesdenkowski et al generated and evaluated 728 paraphrases for arabic english translationmturk was used to identify correct and fix incorrect paraphrasesover 1200 high quality paraphrases were createdbuzek et al evaluated error driven paraphrases formtin this setting paraphrases are used to sim plify potentially difficult to translate segments of textturkers identified 1780 error regions in 1006 englishchinese sentencesturkers provided 4821 paraphrases for these regionsexternal resources can be an important part of an mt systemirvine and klementiev created lexicons for low resource languagesthey evaluated translation candidates for 100 english words in 32 languages and solicited translations for 10 additional languageshiggins et al expanded namelists in arabic by soliciting common arabic nicknamesthe 332 collected nicknames were primar ily provided by turkers in arab speaking countries india and the united states finally zaidan and ganitkevitch explored how mturk could be used to directly improve an mtgrammareach rule in an urdu to english translation system was characterized by 12 featuresturkers were provided examples for which their feedback was used to rescore grammar productions di rectlythis approach shows the potential of fine tuning an mt system with targeted feedback from annotatorslooking ahead we cannot help but wonder what i am pact mturk and crowdsourcing will have on thespeech and language research communitykeeping in mind niels bohrs famous exhortation pre 7 diction is very difficult especially if its about the futurewe attempt to draw some conclusions and predict future directions and impact on the fieldsome have predicted that access to low cost highly scalable methods for creating language andspeech annotations means the end of work on unsupervised learningmany a researcher has advocated his or her unsupervised learning approach because of annotation costshowever if 100 exam ples for any task are obtainable for less than 100why spend the time and effort developing often infe rior unsupervised methodssuch a radical change is highly debatable in fact one of this papers authors is a strong advocate of such a position while the other disagrees perhaps because he himself works on unsupervised methodscertainly we can agree that the potential exists for a change in focus in a number of waysin natural language processing data drives re searchthe introduction of new large and widelyaccessible data sets creates whole new areas of re searchthere are many examples of such impact the most famous of which is the penn treebank which has 2910 citations in google scholar and is the single most cited paper on the acl anthology network other examples include the conll named entity corpus with 348 citationson google scholar the imdb movie reviews senti ment data with 894 citations and the amazon sentiment multidomain data with 109 citations mturk means that creating similar data sets is now much cheaper and easier than ever beforeit is highly likely that new mturk produced data 
sets will achieve prominenceand have significant impactadditionally the cre ation of shared data means more comparison and evaluation against previous workprogress is madewhen it can be demonstrated against previous ap proaches on the same datathe reduction of data cost and the rise of independent corpus producers likely means more accessible datamore than a new source for cheap data mturk isa source for new types of dataseveral of the pa pers in this workshop collected information about the annotators in addition to their annotationsthiscreates potential for studying how different user demographics understand language and allow for targeting specific demographics in data creationbe yond efficiencies in cost mturk provides access to a global user population far more diverse than those provided by more professional annotation settingsthis will have a significant impact on low resource languages as corpora can be cheaply built for a much wider array of languagesas one example irvineand klementiev collected data for 42 languages without worrying about how to find speakers of such a wide variety of languagesaddition ally the collection of arabic nicknames requires a diverse and numerous arabic speaking population in addition to extending into new languages mturk also allows for the creation of evaluation sets in new genres and domains which was the focus of two papers in this workshop we ex pect to see new research emphasis on low resource languages and new domains and genresanother factor is the change of data type and itsimpact on machine learning algorithmswith pro fessional annotators great time and care are paid to annotation guidelines and annotator trainingthese are difficult tasks with mturk which favors simple intuitive annotations and little trainingmany papersapplied creative methods of using simpler annota tion tasks to create more complex data setsthis process can impact machine learning in a numberof waysrather than a single gold standard annotations are now available for many userslearning across multiple annotations may improve sys tems additionally even withefforts to clean up mturk annotations we can ex pect an increase in noisy examples in datathis will push for new more robust learning algorithms that are less sensitive to noiseif we increase the size of the data tenfold but also increase the noise can learning still be successfulanother learning area of great interest is active learning which has long relied on simulated user experimentsnew workevaluated active learning methods with real users us ing mturk finally the composition ofcomplex data set annotations from simple user in puts can transform the method by which we learncomplex outputscurrent approaches expect exam ples of labels that exactly match the expectation ofthe systemcan we instead provide lower level sim 8 pler user annotations and teach systems how to learn from these to construct complex outputthis would open more complex annotation tasks to mturka general trend in research is that good ideascome from unexpected placesmajor transformations in the field have come from creative new ap proachesconsider the penn treebank an ambitious and difficult project of unknown potentialsuch large changes can be uncommon since they are often associated with high cost as was the penn treebankhowever mturk greatly reduces these costs en couraging researchers to try creative new tasksforexample in this workshop tong et al col lected handwriting samples in multiple languagestheir creative data collection may or may not have 
a significant impact but it is unlikely that it would have been tried had the cost been very highfinally while obtaining new data annotationsfrom mturk is cheap it is not trivialworkshop par ticipants struggled with how to attract turkers howto price hits hit design instructions cheating de tection etc no doubt that as work progresses so will a communal knowledge and experience of how to use mturkthere can be great benefit in new toolkits for collecting language data using mturk and indeed some of these have already started to emerge 9acknowledgementsthanks to sharon chiarella of amazons mechan ical turk for providing 100 credits for the shared task and to crowdflower for allowing free use of their tool to workshop participantsresearch funding was provided by the nsf under grant iis0713448 by the european commis sion through the euromatrixplus project and by the darpa gale program under contract nohr00110620001the views and findings are the authorsalone
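The quality-control practices surveyed above, embedding items with known labels, screening out Turkers who fail them, and weighting redundant judgments by each Turker's calibration accuracy, can be sketched roughly as follows; the tuple format, the 0.5 threshold, and the function names are illustrative assumptions rather than any particular team's pipeline.

```python
from collections import defaultdict

def turker_accuracy(assignments, gold):
    """Per-Turker accuracy on embedded gold-standard control items.
    `assignments` is a list of (turker_id, item_id, label) tuples and
    `gold` maps control item_ids to their known labels."""
    hits, total = defaultdict(int), defaultdict(int)
    for turker, item, label in assignments:
        if item in gold:
            total[turker] += 1
            hits[turker] += int(label == gold[item])
    return {t: hits[t] / total[t] for t in total}

def weighted_majority(assignments, gold, min_accuracy=0.5):
    """Combine redundant labels for the non-control items, dropping Turkers
    below `min_accuracy` on the controls and weighting the rest by accuracy."""
    acc = turker_accuracy(assignments, gold)
    scores = defaultdict(lambda: defaultdict(float))
    for turker, item, label in assignments:
        weight = acc.get(turker, 0.0)
        if item in gold or weight < min_accuracy:
            continue  # skip control items and unreliable annotators
        scores[item][label] += weight
    return {item: max(votes, key=votes.get) for item, votes in scores.items()}

# Tiny example: i0 is a control item with known label "x"; i1 is a real item.
data = [("A", "i0", "x"), ("A", "i1", "y"),
        ("B", "i0", "z"), ("B", "i1", "z"),
        ("C", "i0", "x"), ("C", "i1", "y")]
print(weighted_majority(data, gold={"i0": "x"}))  # -> {'i1': 'y'} (B failed the control)
```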
W10-0701
creating speech and language data with amazons mechanical turk in this paper we give an introduction to using amazons mechanical turk crowdsourcing platform for the purpose of collecting data for human language technologies we survey the papers published in the naacl 2010 workshop 24 researchers participated in the workshops shared task to create data for speech and language applications with 100 we experiment with the use of amazon mechanical turk to create and evaluate human language data we provide an overview of various tasks for which mturk has been used and offer a set of best practices for ensuring highquality data
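The inter- and intra-annotator agreement figures reported for the ranking task in the evaluation below are based on the kappa coefficient; its standard definition, with P(A) for observed agreement and P(E) for chance agreement, is

```latex
\kappa = \frac{P(A) - P(E)}{1 - P(E)}
```

For the three-way ranking comparison (A better than B, A equal to B, A worse than B), chance agreement is taken to be P(E) = 1/3, which matches the 0.333 figure quoted in that discussion.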
findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation this paper presents the results of the wmt10 and metricsmatr10 shared which included a translation task a system combination task and an evaluation task we conducted a largescale manual evaluation of 104 machine translation systems and 41 system combination entries we used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics this year we also investigated increasing the number of human judgments by hiring nonexpert annotators through amazons this paper presents the results of the shared tasks of the joint workshop on statistical machine translation and metrics for machine translation which was held at acl 2010this builds on four previous wmt workshops and one previous metricsmatr meeting there were three shared tasks this year a translation task between english and four other european languages a task to combine the output of multiple machine translation systems and a task to predict human judgments of translation quality using automatic evaluation metricsthe performance on each of these shared task was determined after a comprehensive human evaluationthere were a number of differences between this years workshop and last years workshop crease the statistical significance of our findingswe discuss the feasibility of using nonexperts evaluators by analyzing the cost volume and quality of nonexpert annotations clearer results for system combination this year we excluded google translations from the systems used in system combinationin last years evaluation the large margin between google and many of the other systems meant that it was hard to improve on when combining systemsthis year the system combinations perform better than their component systems more often than last yearthe primary objectives of this workshop are to evaluate the state of the art in machine translation to disseminate common test sets and public training data with published performance numbers and to refine evaluation methodologies for machine translationas with past years all of the data translations and human judgments produced for our workshop are publicly available2 we hope they form a valuable resource for research into statistical machine translation system combination and automatic evaluation of translation qualitythe workshop examined translation between english and four other languages german spanish french and czechwe created a test set for each language pair by translating newspaper articleswe additionally provided training data and two baseline systemsthe test data for this years task was created by hiring people to translate news articles that were drawn from a variety of sources from middecember 2009a total of 119 articles were selected in roughly equal amounts from a variety of czech english french german and spanish news sites3 czech idnescz ihnedcz lidovky french les echos spanish el mundo abces cinco dias english bbc economist washington post times of london german frankfurter rundschau spiegel the translations were created by the professional translation agency ceet4all of the translations were done directly and not via an intermediate languageas in past years we provided parallel corpora to train translation models monolingual corpora to train language models and development sets to tune parameterssome statistics about the training materials are given in figure 1to lower the barrier of entry for newcomers to 
the field we provided two open source toolkits for phrasebased and parsingbased statistical machine translation we received submissions from 33 groups from 29 institutions as listed in table 1 a 50 increase over last years shared taskwe also evaluated 2 commercial off the shelf mt systems and two online statistical machine translation systemswe note that these companies did not submit entries themselvesthe entries for the online systems were done by translating the test data via their web interfacesthe data used to train the online systems is unconstrainedit is possible that part of the reference translations that were taken from online news sites could have been included in the online systems language modelsin total we received 153 primary system submissions along with 28 secondary submissionsthese were made available to participants in the system combination shared taskbased on feedback that we received on last years system combination task we provided two additional resources to participants to use as a dev set for system combinationthese were translated by all participating sites and distributed to system combination participants along with reference translationstable 2 lists the 9 participants in the system combination taskas with past workshops we placed greater emphasis on the human evaluation than on the automatic evaluation metric scoresit is our contention is defined to be a rank label in the ranking task an edited sentence in the editing task and a yesno judgment in the judgment task that automatic measures are an imperfect substitute for human assessment of translation qualitytherefore we define the manual evaluation to be primary and use the human judgments to validate automatic metricsmanual evaluation is time consuming and it requires a large effort to conduct it on the scale of our workshopwe distributed the workload across a number of people including sharedtask participants interested volunteers and a small number of paid annotatorsmore than 120 people participated in the manual evaluation5 with 89 people putting in more than an hours worth of effort and 29 putting in more than four hoursa collective total of 337 hours of labor was invested6 we asked people to evaluate the systems output in two different ways the total number of judgments collected for the different modes of annotation is given in table 3in all cases the output of the various translation systems were judged on equal footing the output of system combinations was judged alongside that of the individual system and the constrained and unconstrained systems were judged togetherranking translations relative to each other is a reasonably intuitive taskwe therefore kept the instructions simple rank translations from best to worst relative to the other choices 5we excluded data from three errant annotators identified as followswe considered annotators completing at least 3 screens whose p with others is less than 033out of seven such annotators four were affiliated with shared task teamsthe other three had no apparent affiliation and so we discarded their data less than 5 of the total data6whenever an annotator appears to have spent more than ten minutes on a single screen we assume they left their station and left the window open rather than actually needing more than ten minutesin those cases we assume the time spent to be ten minuteseach screen for this task involved judging translations of three consecutive source segmentsfor each source segment the annotator was shown the outputs of five submissionsfor each 
of the language pairs there were more than 5 submissionswe did not attempt to get a complete ordering over the systems and instead relied on random selection and a reasonably large sample size to make the comparisons fairrelative ranking is our official evaluation metricindividual systems and system combinations are ranked based on how frequently they were judged to be better than or equal to any other systemthe results of this are reported in section 4appendix a provides detailed tables that contain pairwise comparisons between systemswe were interested in determining the inter and intraannotator agreement for the ranking task since a reasonable degree of agreement must exist to support our process as a valid evaluation setupto ensure we had enough data to measure agreement we purposely designed the sampling of source segments shown to annotators so that items were likely to be repeated both within an annotators assigned tasks and across annotatorswe did so by assigning an annotator a batch of 20 screens that were to be completed in full before generating new screens for that annotatorwithin each batch the source segments for nine of the 20 screens were chosen from a small pool of 60 source segments instead of being sampled from the larger pool of 1000 source segments designated for the ranking task7 the larger pool was used to choose source segments for nine other screens as for the remaining two screens they were chosen randomly from the set of eighteen screens already chosenfurthermore in the two local repeat screens the system choices were also preservedheavily sampling from a small pool of source segments ensured we had enough data to measure interannotator agreement while purposely making 10 of each annotators screens repeats of previously seen sets in the same batch ensured wethe sentence ranking taskin this task p is 0333 had enough data to measure intraannotator agreementwe measured pairwise agreement among annotators using the kappa coefficient which is defined as where p is the proportion of times that the annotators agree and p is the proportion of time that they would agree by chancefor interannotator agreement for the ranking tasks we calculated p by examining all pairs of systems which had been judged by two or more judges and calculated the proportion of time that they agreed that a b a b or a b intraannotator agreement was computed similarly but we gathered items that were annotated on multiple occasions by a single annotatortable 4 gives k values for interannotator and intraannotator agreementthese give an indication of how often different judges agree and how often single judges are consistent for repeated judgments respectivelythe exact interpretation of the kappa coefficient is difficult but according to landis and koch 0 2 is slight 2 4 is fair 4 6 is moderate 6 8 is substantial and the rest is almost perfectbased on these interpretations the agreement for sentencelevel ranking is moderate for interannotator agreement and substantial for intraannotator agreementthese levels of agreement are higher than in previous years partially due to the fact that that year we randomly included the references along the system outputsin general judges tend to rank the reference as the best translation so people have stronger levels of agreement when it is includedthat said even when comparisons involving reference are excluded we still see an improvement in agreement levels over last yearin addition to simply ranking the output of systems we also had people edit the output of mt 
systemswe did not show them the reference translation which makes our editbased evaluation different from the humantargeted translation edit rate measure used in the darpa gale program rather than asking people to make the minimum number of changes to the mt output in order capture the same meaning as the reference we asked them to edit the translation to be as fluent as possible without seeing the referenceour hope was that this would reflect peoples understanding of the outputthe instructions given to our judges were as follows correct the translation displayed making it as fluent as possibleif no corrections are needed select no corrections needed if you cannot understand the sentence well enough to correct it select unable to correct a screenshot is shown in figure 2this year judges were shown the translations of 5 consecutive source sentences all produced by the same machine translation systemin last years wmt evaluation they were shown only one sentence at a time which made the task more difficult because the surrounding context could not be used as an aid to understandingsince we wanted to prevent judges from seeing the reference before editing the translations we split the test set between the sentences used in the ranking task and the editing task moreover annotators edited only a single systems output for one source sentence to ensure that their understanding of it would not be influenced by another systems outputhalfway through the manual evaluation period we stopped collecting edited translations and instead asked annotators to do the following instructions you are shown several machine translation outputsyour task is to edit each translation to make it as fluent as possibleit is possible that the translation is already fluentin that case select no corrections neededif you cannot understand the sentence well enough to correct it select unable to correctthe sentences are all from the same articleyou can use the earlier and later sentences to help understand a confusing sentenceyour edited translations the machine translations the shortage of snow in mountain the shortage of snow in mountain worries the hoteliers worries the hoteliers correct reset edited no corrections needed unable to the deserted tracks are not the deserted tracks are not putting down problem only at the exploitants of skilift putting down problem only at the exploitants of skilift correct reset edited no corrections needed unable to the lack of snow deters the people the lack of snow deters the people to reserving their stays at the ski in the hotels and pension to reserving their stays at the ski in the hotels and pension correct reset edited no corrections needed unable to thereby is always possible to thereby is always possible to track free bedrooms for all the dates in winter including christmas and nouvel an track free bedrooms for all the dates in winter including christmas and nouvel anindicate whether the edited translations represent fully fluent and meaningequivalent alternatives to the reference sentencethe reference is shown with context the actual sentence is boldin addition to edited translations unedited items that were either marked as acceptable or as incomprehensible were also shownjudges gave a simple yesno indication to each itemwe used the results of the manual evaluation to analyze the translation quality of the different systems that were submitted to the workshopin our analysis we aimed to address the following questions table 5 shows the best individual systemswe define the best 
systems as those which had no other system that was statistically significantly better than them under the sign test at p 01multiple systems are listed as the winners for many language pairs because it was not possible to draw a statistically significant difference between the systemsthere is no individual system clearly outperforming all other systems across the different language pairswith the exception of frenchenglish and englishfrench one can observe that topperforming constrained systems did as well as the unconstrained system onlinebtable 6 shows the best combination systemsfor all language directions except spanishenglish one can see that the system combination runs outperform the individual systems and that in most cases the differences are statistically significantwhile this is to be expected system combination is not guaranteed to improve performance as some of the lower ranked combination runs show which are outperformed by individual systemsalso note that except for czechenglish translation the online systems onlinea and onlineb where not included for the system combination runs understandability our hope is that judging the acceptability of edited output as discussed in section 3 gives some indication of how often a systems output was understandablefigure 3 gives the percentage of times that each systems edited output was judged to be acceptable this style of manual evaluation is experimental and should not be taken to be authoritativesome caveats about this measurein addition to allowing the analysis of subjective translation quality measures for different systems the judgments gathered during the manual evaluation may be used to evaluate how well the automatic evaluation metrics serve as a surrogate to the manual evaluation processesnist began running a metrics for machine translation challenge and presented their findings at a workshop at amta this year we conducted a joint metricsmatr and wmt workshop with nist running the shared evaluation task and analyzing the resultsin this years shared evaluation task 14 different research groups submitted a total of 26 different automatic metrics for evaluation indicates a constrained win no other constrained system is statistically betterfor all pairwise comparisons between systems please check the appendixsystem combinations are listed in the order of how often their translations were ranked higher than or equal to any other systemties are broken by direct comparisonwe show the best individual systems alongside the system combinations since the goal of combination is to produce better quality translation than the component systems indicates an individual system that none of the system combinations beat by a statistically significant margin at plevel01note onlinea and onlineb were not included among the systems being combined in the system combination shared tasks except in the czechenglish and englishczech conditions where onlineb was includedtable 6 official results for the wmt10 system combination task based on the human evaluation these numbers also include judgments of the systems output when it was marked either incomprehensible or acceptable and left uneditednote that the reference translation was edited alongside the system outputserror bars show one positive and one negative standard deviation for the systems in that language pair a shorter description when concatenated before compression than when concatenated after compressionmtncd does not require any language specific resourcesdue to a processing issue inherent to the 
metric the scores reported were generated excluding the first segment of each documentalso a separate issue was found for the mtmncd metric and according to the developer the scores reported here would like change with a correction of the issuebabblequest international8 der by various meansthe former is assessed by matching word forms at linguistic levels including surface form stem sense and semantic similarity and further by weighting the informativeness of both matched and unmatched wordsthe latter is quantified in term of the discordance of word position and word sequence between an mt output and its referencedue to a version discrepancy of the metric final scores for atecd21 differ from those reported here but only minimallycarnegie mellon university capturing the two most important elements of translation qualitythis simple combined metric only has one parameter which makes its scores easy to interpretit is also fast to run and languageindependentit uses kendalls tau permutationdue to installation issues the reported submitted scores for these two metrics have not been verified to produce identical scores at nistharbin institute of technology china the model resembles a conditional random field but performs regression instead of classificationit is trained on arabic chinese and urdu data from the mteval 2008 datasetdue to installation issues the reported scores for this metric have not been verified to produce identical scores at nistscores were not submitted along with this metric and due to installation issues were not produced at nist in time to be included in this reportuniversity politecnica de catalunyauniversity de barcelona lationsthe basic elements consist of content words and various combinations of syntacticallyrelated wordsa variety of transformations are performed to allow flexible matching so that words and syntactic constructions conveying similar content in different manners may be matchedthe transformations cover synonymy preposition vs noun compounding differences in tenses etcbewte was originally created for summarization evaluation and is englishspecificscores were not submitted for bewte the runtime required for this metric to process the wmt10 data set prohibited the production of scores in time for publicationthe results reported here are preliminary a final release of results will be published on the wmt10 website before july 15 2010metric developers submitted metrics for installation at nist they were also asked to submit metric scores on the wmt10 test set along with their metricsnot all developers submitted scores and not all metrics were verified to produce the same scores as submitted at nist in time for publicationany such caveats are reported with the description of the metrics abovethe results reported here are limited to a comparison of metric scores on the full wmt10 test set with human assessments on the humanassessed subsetan analysis comparing the human assessments with the automatic metrics run only on the humanassessed subset will follow at a later datethe wmt10 system output used to generate the reported metric scores was found to have improperly escaped characters for a small number of segmentswhile we plan to regenerate the metric scores with this issue resolved we do not expect this to significantly alter the results given the small number of segments affectedthe tables in appendix b list the metric scores for the language pairs processed by each metricthese first four tables present scores for translations out of english into czech french 
german and spanishin addition to the metric scores of the submitted metrics identified above we also present the ranking of the system as determined by the human assessments and the metrics scores for two popular baseline metrics bleu as calculated by nists mteval software12 and the nist scorefor each method of system measurement the absolute highest score is identified by being outlined in a boxsimilarly the remaining tables in appendix b list the metric scores for the submitted metrics and the two baseline metrics and the ranking based on the human assessments for translations into english from czech french german and spanishas some metrics employ languagespecific resources not all metrics produced scores for all language pairsit is noticeable that system combinations are often among those achieving the highest scoresto assess the performance of the automatic metrics we correlated the metrics scores with the human rankings at the system levelwe assigned a consolidated humanassessment rank to each system based on the number of times that the given systems translations were ranked higher than or equal to the translations of any other system in the manual evaluation of the given language pairwe then compared the ranking of systems by the human assessments to that provided by the automatic metric system level scores on the complete wmt10 test set for each language pair using spearmans p rank correlation coefficientthe correlations are shown in table 7 for translations to english and table 8 out of english with baseline metrics listed at the bottomthe highest correlation for each language pair and the highest overall average are boldedoverall correlations are higher for translations to english than compared to translations from englishfor all language pairs there are a number of new metrics that yield noticeably higher correlations with human assessments than either of the two included baseline metricsin particular bleu performed in the bottom half of the intoenglish and outofenglish directionsthe method employed to collect human judgments of rank preferences at the segment level produces a sparse matrix of decision pointsit is unclear whether attempts to normalize the segment level rankings to 0010 values representing the relative rank of a system per segment given the number of comparisons it is involved with is properan intuitive display of how well metrics mirror the human judgments may be shown via a confusion matrixwe compare the human ranks to the ranks as determined by a metricbelow we show an example of the confusion matrix for the svmrank metric which had the highest summed diagonal for all segments translated into englishthe numbers provided are percentages of the total countthe summed diagonal constitutes 3901 of all counts in this example matrixthe largest cell is the 11 ranking cell we included the reference translation as a system in this analysis which is likely to lead to a lot of agreement on the highest rank between humans and automatic metricsno allowances for ties were made in this analysisthat is if a human ranked two system translations the same this analysis expects the metrics to provide the same score in order to get them both correctfuture analysis could relax this constraintas not all human rankings start with the highest possible rank of 1 we set the highest automatic metric rank to the highest human rank and shifted the lower metric ranks down accordinglytable 9 shows the summed diagonal percentages of the total count of all datapoints for all metrics that wmt10 
scores were available for both combined for all languages to english and separately for each language into englishthe results are ordered by the highest percentage for the summed diagonal on all languages to english combinedthere are quite noticeable changes in ranking of the metrics for the separate language pairs further analysis into the reasons for this will be necessarywe plan to also analyze metric performance for translation into englishin this section we analyze the data that we collected data by posting the ranking task on amazons mechanical turk although we did not use this data when creating the official results our hope was that it may be useful in future workshops in two waysfirst if we find that it is possible to obtain a sufficient amount of data of good quality then we might be able to reduce the time commitment expected from the system developers in future evaluationssecond the additional collected labels might enable us to detect significant differences between systems that would otherwise be insignificantly different using only the data from the volunteers to that end we prepared 600 ranking sets for each of the eight language pairs with each set containing five mt outputs to be ranked using the same interface used by the volunteerswe posted the data to mturk and requested for each one five redundant assignments from different workershad all the 5 x 8 x 600 24000 assignments been completed we would have obtained 24000 x 5 120000 additional rank labels compared to the 37884 labels we collected from the volunteers in actuality we collected closer to 55000 rank labels as we discuss shortlyto minimize the amount of data that is of poor quality we placed two requirements that must be satisfied by any worker before completing any of our tasksfirst we required that a worker have an existing approval rating of at least 85second we required a worker to reside in a country where the target language of the task can be assumed to be the spoken languagefinally anticipating a large pool of workers located in the united states we felt it possible for us to add a third restriction for the toenglish language pairs which is that a worker must have had at least five tasks previously approved on mturk13 we organized the ranking sets in groups of 3 per screen with a monetary reward of 005 per screenwhen we created our tasks we had no expectation that all the assignments would be completed over the tasks lifetime of 30 daysthis was indeed the case especially for language pairs with a nonenglish target language due to workers being in short supply outside the usoverall we see that the amount of data collected from nonus workers is relatively small whereas the pool of usbased workers is much larger leading to much higher completion rates for language pairs with english as the target language this is in spite of the additional restriction we placed on us workers13we suspect that newly registered workers on mturk already start with an approval rating of 100 and so requiring a high approval rating alone might not guard against new workersit is not entirely clear if our suspicion is true but our past experiences with mturk usually involved a noticeably faster completion rate than what we experienced this time around indicating our suspicion might very well be correcttable 10 statistics for data collected on mturk for the ranking taskin total 55082 rank labels were collected across the eight language pairs each language pair had 600 sets and we requested each set completed by 5 different 
workerssince each set provides 5 labels we could have potentially obtained 600 x 5 x 5 15000 labels for each language pairthe label count row indicates to what extent that potential was met and the completed rows give a breakdown of redundancyfor instance the rightmost column indicates that in the czen group 20 of the 600 sets were completed by only one worker while 67 of the sets were completed by 5 workers with 100 of the sets completed at least oncethe total cost of this data collection effort was roughly 200with references 0539 0309 without references 0538 0307 table 11 inter and intraannotator agreement for the mturk workers on the sentence ranking task is 0333for comparison we repeat here the kappa coefficients of the experts taken from table 4it is encouraging to see that we can collect a large amount of rank labels from mturkthat said we still need to guard against data from bad workers who are either not being faithful and clicking randomly or who might simply not be competent enoughcase in point if we examine interand intraannotator agreement on the mturk data we see that the agreement rates are markedly lower than their expert counterpartsanother indication of the presence of bad workers is a low reference preference rate which we define as the proportion of time a reference translation wins a comparison when it appears in oneintuitively the rpr should be quite high since it is quite rare that an mt output ought to be judged better than the referencethis rate is 965 over the expert data but only 837 over the mturk datacompare this to a randomlyclicking rpr of 6667 also telling would be the rate at which mturk workers agree with expertsto ensure that we obtain enough overlapping data to calculate such a rate we purposely select onesixth14 of our ranking sets so that the fivesystem group is exactly one that has been judged by an expertthis way at least onesixth of the comparisons obtained from an mturk workers labels are comparisons for which we already have an expert judgmentwhen we calculate the rate of agreement on this data we find that mturk workers agree with the expert workers 532 of the time or k 0297 and when references are excluded the agreement rate is 500 or k 0249ideally we would want those values to be in the 0405 range since that is where the interannotator kappa coefficient lies for the expert annotatorswe can use the agreement rate with experts to identify mturk workers who are not performing the task as requiredfor each worker w of the 669 workers for whom we have such data we compute the workers agreement rate with the experts and from it a kappa coefficient kexp for that worker is 0333 kexp ranges between 05 and 10we sort the workers based on kexp in ascending order and examine properties of the mturk data as we remove the lowestranked workers one by one we first note that the amount of data we obtained from mturk is so large that we could afford to eliminate close to 30 of the labels and we would still have twice as much data than using the expert data alonewe also note that two workers in particular are likely responsible for the majority of the bad data since removing their data leads to noticeable jumps in the reference preference rate and the interannotator agreement rate indeed examining the data for those two workers we find that their rpr values are 557 and 519 which is a clear indication of random clicking15 looking again at those two curves shows degrading values as we continue to remove workers in large droves indicating a form of overfitting to 
agreement with experts it is therefore important if one were to filter out the mturk data by removing workers this way to choose a cutoff carefully so that no criterion is degraded dramaticallyin appendix a after reporting headtohead comparisons using only the expert data we also report headtohead comparisons using the expert 15in retrospect we should have performed this type of analysis as the data was being collected since such workers could have been identified early on and blocked data combined with the mturk data in order to be able to detect more significant differences between the systemswe choose the 300worker point as a reasonable cutoff point before combining the mturk data with the expert data based on the characteristics of the mturk data at that point a high reference preference rate high interannotator agreement and critically a kappa coefficient vs expert data of 0449 which is close to the expert interannotator kappa coefficient of 0439in the previous subsection we outlined an approach by which mturk data can be filtered out using expert datasince we were to combine the filtered mturk data with the expert data to obtain more significant differences it was reasonable to use agreement with experts to quantify the mturk workers competencyhowever we also would like to know whether it is feasible to use the mturk data aloneour aim here is not to boost the differences we see by examining expert data but to eliminate our reliance on obtaining expert data in the first placewe briefly examined some simple ways of filteringcombining the mturk data and measured the spearman rank correlations obtained from the mturk data as compared to the rankings obtained using the expert data and report them in table 12we first see that even when using the mturk data untouched we already obtain relatively high correlation with expert ranking this is especially true for the toenglish language pairs where we collected much more data than englishtoin fact the relationship between the amount of data and the correlation values is very strong and it is reasonable to expect the correlation numbers for englishto to catch up had more data been collectedwe also measure rank correlations when applying some simple methods of cleaningweighting mturk datathe first method is performing a simple vote whenever redundant comparisons are availablethe second method first removes labels from the 300 worst workers according to agreement with expertsthe third method first removes labels from the 62 worst workers according to their rprthe numbers 300 and 62 were chosen since those are the points at which the mturk data reaches the level of expert data in the interannotator agreement and rpr of the expertsthe fourth and fifth methods do not remove any data instead assigning weights to workers based on their agreement with experts and their rpr respectivelynamely for each worker the weight assigned by the fourth method is kep for that worker and the weight assigned by the fifth method is k for that workerexamining the correlation coefficients obtained from those methods we see mixed results and there is no clear winner among those methodsit is also difficult to draw any conclusion as to which method performs best whenhowever it is encouraging to see that the two rprbased methods perform wellthis is noteworthy since there is no need to use expert data to weight workers which means that it is possible to evaluate a worker using inherent builtin properties of that workers own data without resorting to making comparisons with 
other workers or with expertsas in previous editions of this workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from european languages into english and vice versathe number of participants grew substantially compared to previous editions of the wmt workshop with 33 groups from 29 institutions participating in wmt10most groups participated in the translation task only while the system combination task attracted a somewhat smaller number of participants unfortunately fewer rulebased systems participated in this years edition of wmt compared to previous editionswe hope to attract more rulebased systems in future editions as they increase the variation of translation output and for some language pairs such as germanenglish tend to outperform statistical machine translation systemsthis was the first time that the wmt workshop was held as a joint workshop with nists metricsmatr evaluation initiativethis joint effort was very productive as it allowed us to focus more on the two evaluation dimensions manual evaluation of mt performance and the correlation between manual metrics and automated metricsthis year was also the first time we have introduced quality assessments by nonexpertsin previous years all assessments were carried out through peer evaluation exclusively consisting of developers of machine translation systems and thereby people who are used to machine translation outputthis year we have facilitated amazons mechanical turk to investigate two aspects of manual evaluation how stable are manual assessments across different assessor profiles and how reliable are quality judgments of nonexpert userswhile the intra and interannotator agreements between nonexpert assessors are considerably lower than for their expert counterparts the overall rankings of translation systems exhibit a high degree of correlation between experts and nonexpertsthis correlation can be further increased by applying various filtering strategies reducing the impact of unreliable nonexpert annotatorsas in previous years all data sets generated by this workshop including the human judgments system translations and automatic scores are publicly available for other researchers to analyze16this work was supported in parts by the euromatrixplus project funded by the european commission the gale program of the us defense advanced research projects agency contract nohr001106c0022 and the us national science foundation under grant iis0713448
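the agreement analysis above scores pairwise ranking comparisons with the kappa coefficient k = (p(a) - p(e)) / (1 - p(e)), with p(e) = 1/3 because each comparison of two systems has three possible outcomes. the following minimal python sketch shows one way such a computation could look; the tuple layout for judgments and the function name are assumptions made here for illustration and are not the workshop's actual tooling.

from collections import defaultdict
from itertools import combinations

def ranking_kappa(judgments, p_chance=1.0 / 3.0):
    """inter-annotator kappa over pairwise ranking comparisons.

    judgments: iterable of (annotator, item_id, system_a, system_b, relation)
    tuples, where relation is one of '<', '=', '>' meaning system_a is
    worse than / equal to / better than system_b.  intra-annotator
    agreement is obtained the same way by pairing only repeated
    judgments from the same annotator.
    """
    by_comparison = defaultdict(list)
    for annotator, item, sys_a, sys_b, rel in judgments:
        by_comparison[(item, sys_a, sys_b)].append(rel)

    agree, total = 0, 0
    for rels in by_comparison.values():
        for r1, r2 in combinations(rels, 2):   # every pair of judges on this comparison
            total += 1
            agree += (r1 == r2)
    if total == 0:
        return float('nan')
    p_agree = agree / total
    return (p_agree - p_chance) / (1.0 - p_chance)

# toy example: agreement on one comparison, disagreement on another
judgments = [
    ('j1', 7, 'sysA', 'sysB', '>'), ('j2', 7, 'sysA', 'sysB', '>'),
    ('j1', 9, 'sysA', 'sysC', '='), ('j2', 9, 'sysA', 'sysC', '<'),
]
print(ranking_kappa(judgments))   # (0.5 - 1/3) / (1 - 1/3) = 0.25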
W10-1703
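the mechanical turk analysis reported in the text filters and weights nonexpert workers using two quantities: a reference preference rate (how often the reference translation wins a comparison it appears in) and a per-worker kappa computed against expert judgments on overlapping comparisons. the sketch below shows how those quantities and the cutoff-style filtering might be computed; the data structures (booleans per reference comparison, dicts keyed by comparison id) are assumed for illustration only.

def reference_preference_rate(ref_outcomes):
    """proportion of comparisons involving the reference translation in
    which the reference is ranked better than or equal to the system
    output; ref_outcomes is a list of booleans, one per such comparison.
    random clicking gives roughly 2/3."""
    if not ref_outcomes:
        return float('nan')
    return sum(ref_outcomes) / len(ref_outcomes)

def kappa_vs_experts(worker_labels, expert_labels, p_chance=1.0 / 3.0):
    """kappa of one worker against expert decisions on the comparisons
    both have judged; both arguments are dicts mapping a comparison id
    to a rank relation ('<', '=', '>')."""
    shared = set(worker_labels) & set(expert_labels)
    if not shared:
        return float('nan')
    p_agree = sum(worker_labels[k] == expert_labels[k] for k in shared) / len(shared)
    return (p_agree - p_chance) / (1.0 - p_chance)

def keep_workers(per_worker_kappa, n_drop):
    """sort workers by their kappa against the experts (ascending) and
    drop the n_drop lowest-ranked ones, as in the 300-worker cutoff
    discussed above; returns the set of workers whose labels are kept."""
    ranked = sorted(per_worker_kappa, key=per_worker_kappa.get)
    return set(ranked[n_drop:])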
findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation this paper presents the results of the wmt10 and metricsmatr10 shared tasks which included a translation task a system combination task and an evaluation task we conducted a largescale manual evaluation of 104 machine translation systems and 41 system combination entries we used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics this year we also investigated increasing the number of human judgments by hiring nonexpert annotators through amazon mechanical turk we release the news test set in the 2010 workshop on statistical machine translation
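system-level metric evaluation in the paper correlates each metric's scores with the consolidated human ranking using spearman's rho. the snippet below is a small stand-in illustration using scipy's spearmanr; the system names and scores are invented placeholders, not wmt10 data.

from scipy.stats import spearmanr

human_rank   = {'sys1': 1, 'sys2': 2, 'sys3': 3, 'sys4': 4}       # 1 = best
metric_score = {'sys1': 0.31, 'sys2': 0.29, 'sys3': 0.30, 'sys4': 0.22}

systems = sorted(human_rank)
# negate the human rank so that "larger is better" on both sides;
# spearman's rho depends only on the orderings, so this just makes a
# perfectly agreeing metric come out at +1 rather than -1
rho, _ = spearmanr([-human_rank[s] for s in systems],
                   [metric_score[s] for s in systems])
print(rho)   # 0.8 for this toy data (one adjacent swap among four systems)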
a regression model of adjectivenoun compositionality in distributional semantics in this paper we explore the computational modelling of compositionality in distributional models of semantics in particular we model the semantic composition of pairs of adjacent english adjecand nouns from the national we build a vectorbased semantic space from a lemmatised version of the bnc where the most frequent an lemma pairs are treated as single tokens we then extrapolate three different models of compositionality a simple additive model a pointwisemultiplicative model and a partial least squares regression model we propose two evaluation methods for the implemented models our study leads to the conclusion that regressionbased models of compositionality generally outperform additive and multiplicative approaches and also show a number of advantages that make them very promising for future research wordspace vector models or distributional models of semantics are computational models that build contextual semantic representations for lexical items from corpus datadsms have been successfully used in the recent years for a number of different computational tasks involving semantic relations between words the theoretical foundation of dsms is to be found in the distributional hypothesis of meaning attributed to z harris which maintains that meaning is susceptible to distributional analysis and in particular that differences in meaning between words or morphemes in a language correlate with differences in their distribution while the vectorbased representation of word meaning has been used for a long time in computational linguistics the techniques that are currently used have not seen much development with regards to one of the main aspects of semantics in natural language compositionalityto be fair the study of semantic compositionality in dsms has seen a slight revival in the recent times cfwiddows mitchell lapata giesbrecht baroni lenci who propose various dsm approaches to represent argument structure subjectverb and verbobject coselectioncurrent approaches to compositionality in dsms are based on the application of a simple geometric operation on the basis of individual vectors which should in principle approximate the composition of any two given vectorson the contrary since the the very nature of compositionality depends on the semantic relation being instantiated in a syntactic structure we propose that the composition of vector representations must be modelled as a relationspecific phenomenonin particular we propose that the usual procedures from machine learning tasks must be implemented also in the search for semantic compositionality in dsmin this paper we present work in progress on the computational modelling of compositionality in a dataset of english adjectivenoun pairs extracted from the bncwe extrapolate three different models of compositionality a simple additive model a pointwisemultiplicative model and finally a multinomial multiple regression model by partial least squares regression previous work in the field has produced a small number of operations to represent the composition of vectorial representations of word meaningin particular given two independent vectors v1 and v2 the semantically compositional result v3 is modelled by in the dsm literature the additive model has become a de facto standard approach to approximate the composed meaning of a group of words as the sum of their vectors this has been successfully applied to documentbased applications such as the computation of 
document similarity in information retrievalmitchell lapata indicate that the various variations of the pointwisemultiplication model perform better than simple additive models in term similarity tasks widdows obtain results indicating that both the tensor product and the convolution product perform better than the simple additive modelfor the sake of simplifying the implementation of evaluation methods in this paper we will compare the first two approaches vector addition and vector pointwisemultiplication with regression modelling by partial least squareswe assume that the composition of meaning in dsms is a function mapping two or more independent vectors in a multidimensional space to a newly composed vector the same space and further we assume that semantic composition is dependent on the syntactic structure being instantiated in natural language1 assuming that each dimension in the starting vectors v1 and v2 is a candidate predictor and that each dimension in the composed vector v3 is a dependent variable vectorbased semantic compositionality can be formulated as a problem of multivariate multiple regressionthis is in principle a tractable problem that can be solved by standard machine learning techniques such as multilayer perceptrons or support vector machineshowever given that sequences of words tend to be of very low frequency suitable data sets will inevitably suffer the curse of dimensionality we will often have many more variables than observationspartial least squares regression is a multivariate regression technique that has been designed specifically to tackle such situations with high dimensionality and limited dataplsr is widely used in in unrelated fields such as spectroscopy medical chemistry brainimaging and marketing we use a generalpurpose vector space extracted from the british national corpuswe used the infomap software to collect cooccurrence statistics for lemmas within a rectangular 5l5r windowthe corpus was preprocessed to represent frequent adjectivenoun lemma pairs as a single token in the processed corpus it appears as a single entry nice_housethe corpus was also processed by stopword removalwe extracted a list of an candidate pairs with simple regexbased queries targeting adjacent sequences composed of detartan we filtered the candidate list by frequency obtaining 1380 different an pairsthe vector space was built with the 40000 most frequent tokens in the corpus the original dimensions were the 3000 most frequent content words in the bncthe vector space was reduced to the first 500 latent dimensions by svd as implemented by the infomap softwarethus the resulting space consists in a matrix with 40 000 x 500 dimensionswe then extracted the vector representation for each an candidate as well as for each independent constituent eg vectors for nice_house as well as for nice and house were savedthe resulting vector subspace was imported into the r statistical computing environment for the subsequent model building and evaluationin particular we produced our regression analysis with the pls package which implements plsr and a number of very useful functions for crossvalidation prediction error analysis etcby simply combining the vector representations of the independent adjectives and nouns in our dataset we built an additive prediction model and a simplified pointwise multiplicative prediction model for each candidate pairwe also fitted a plsr model using v1 and v2 as predictors and the corresponding observed pair v3 as dependent variablethe data were divided 
into a training set and a testing set the models parameters were estimated by performing 10fold crossvalidation during the training phasein what follows we briefly evaluate the three resulting models of compositionalityin order to evaluate the three models of compositionality that were built we devised two different procedures based on the euclidean measure of geometric distancethe first method draws a direct comparison of the different predicted vectors for each candidate an pair by computing the euclidean distance between the observed vector and the modelled predictionswe also inspect a general distance matrix for the whole compositionality subspace ie all the observed vectors and all the predicted vectorswe extract the 10 nearest neighbours for the 380 adjectivenoun pairs in the test set and look for the intended predicted vectors in each casethe idea here is that the best models should produce predictions that are as close as possible to the originally observed an vectorour second evaluation method uses the 10 nearest neighbours of each of the observed an pairs in the test set as goldstandard and compares them with the 10 nearest neighbours of each of the corresponding predictions as generated by the modelsthe aim is to assess if the predictions made by each model share any top10 neighbours with their corresponding goldstandardwe award 1 point for every shared neighbourwe calculated the euclidean distance between each observed an pair and the corresponding prediction made by each modelon general inspection it is clear that the approximation of an compositional vectors made by plsr is considerably closer than those produced by the additive and multiplicative models cftable 1we also computed in detail which of the three predicted composed vectors was closest to the corresponding observationto this effect we extracted the 10 nearest neighbours for each an pair in the test set using the whole compositionality subspace in 94 cases out of 380 the plsr intended prediction was the nearest neighbourcumulatively plsrs predictions were in the top10 nearest neighbour list in 219 out of 380 cases the other models performance in this test was negligible cftable 2overall 223 items in the test set had at least one predicted vector in the top10 list of these 219 were generated by plsr and the remaining 4 by the multiplicative modelsince the main use of dsms is to extract similar vectors from a multidimensional space we would like to test if the modelling of semantic compositionality is able to produce predictions that are as similar as possible to the originally observed dataa very desirable result would be if any predicted compositional an vector could be reliably used instead of the extracted bigramthis could only be achieved if a models predictions show a similar distributional behaviour with respect to the observed vectorto test this idea using our data we took the 10 nearest neighbours of each of the observed an pairs in the test set as gold standardthese gold neighbours were extracted from the observation testing subspace thus excluding any modelled predictionsthis is a very restrictive setting it means that the gold standard for each of the 380 test items is composed of the 10 nearest neighbours from the same 380 items we then extracted the 10 nearest neighbours for each of the three modelled predictions but this time the subspace included all predictions as well as all the original observations finally we tested if the predictions made by each model shared any top10 neighbours with their 
corresponding goldstandardwe awarded 1 point for every shared neighbourthe results obtained with these evaluation settings were very pooronly the additive model scored points although the performance was rather disappointing both the pointwise multiplicative model and the plsr model failed to retrieve any of the gold standard neighboursthis poor results can be attributed to the very restrictive nature of our gold standard and also to the asymmetrical composition of the compared data however given the that dsms are known for their ability to extract similar items from the same space we decided to relax our test settings by awarding points not only to shared neighbours but also to the same models predictions of those neighboursthus given a target neighbour such as good_deal in our second setting we awarded points not only to the gold standard good_deal but also to the predictions good_deal_add good_deal_mul and good_deal_plsr when evaluating each corresponding modelwith these settings the compared spaces become less asymmetrical the obtained results show a great improvement once again the additive model showed the best performance followed by plsrthe multiplicative models performance was negligiblewhile carrying out these experiments an unexpected fact became evidenteach of the models in turn produces predictions that are relatively close to each other regardless of the independent words that were used to calculate the compositional vectorsthis has the consequence that the nearest neighbour lists for each models predictions are by and large populated by items generated in the same model as shown in table 4neighbours of predictions from the multiplicative model are all multiplicativethe additive model has the most varied set of neighbours but the majority of them are additiveneighboursplsr shows a mixed behaviourhowever plsr produced neighbours that find their way into the neighbour sets of both the additive model and the observationsthese remarks point in the same direction every model is a simplified and specialised version of the original space somewhat more orderly than the observed data and may give different results depending on the task at stakeplsr is particularly efficient as generator of neighbours for real vectors a characteristic that could be applied to guess distributional synonyms of unseen an pairson the other hand the additive model is especially successful in attracting gold standard neighboursoverall even at this experimental stage plsr is clearly the model that produces the most consistent resultsthis paper proposed a novel method to model the compositionality of meaning in distributional models of semanticsthe method partial least squares regression is well known in other dataintensive fields of research but to our knowledge had never been put to work in computational distributional semanticsits main advantage is the fact that it is designed to approximate functions in problems of multivariate multiple regression where the number of observations is relatively small if compared to the number of variables we built a dsm targeting a type of semantic composition that has not been treated extensively in the literature before adjacent an pairsthe model built by plsr performed better than both a simple additive model and a multiplicative model in the first proposed evaluation methodour second evaluation test gave mixed results the best performance was obtained by the simple additive model with plsr coming in second placethis is work in progress but the results look very 
promisingfuture developments will certainly focus on the creation of better evaluation methods as well as on extending the experiments to other techniques another important issue that we still have not touched is the role played by lexical association in the prediction modelswe would like to make sure that we are not modelling the compositionality of noncompositional examplesa last word on the view of semantic compositionality suggested by our approachmodelling compositionality as a machine learning task implies that a great number of different types of composition may be learned from natural language samplesin principle any semantic relation instantiated by any syntactic structure could be learned if sufficient data is providedthis approach must be confronted with other linguistic phenomena also of greater complexity than just a set of bigramsfinally we might wonder if there is an upper limit to the number of compositionality functions that we need to learn in natural language or if there are types of functions that are more difficult or even impossible to learnthanks are due to marco baroni stefan evert roberto zamparelli and the three anonymous reviewers for their assistance and helpful comments
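the three composition models compared above (additive, pointwise multiplicative, plsr) can be sketched as follows. the original experiments use the pls package in r on a 500-dimensional svd space built with the infomap software; the scikit-learn PLSRegression call and the random toy vectors below are stand-ins intended only to show the shape of the multivariate regression problem and the euclidean-distance comparison, not a reimplementation of the authors' setup.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_pairs, dim = 1000, 50                      # toy sizes (paper: 1380 pairs, 500 dims)
adj  = rng.normal(size=(n_pairs, dim))       # vectors of the adjectives
noun = rng.normal(size=(n_pairs, dim))       # vectors of the nouns
# pretend the observed adjective-noun vectors are a noisy blend of the parts
observed = 0.6 * adj + 0.4 * noun + 0.1 * rng.normal(size=(n_pairs, dim))

X = np.hstack([adj, noun])                   # predictors: concatenated (a, n)
train, test = slice(0, 800), slice(800, None)

# baseline compositions
pred_add = adj[test] + noun[test]            # simple additive model
pred_mul = adj[test] * noun[test]            # pointwise multiplicative model

# partial least squares regression from (a, n) to the observed an vector
pls = PLSRegression(n_components=20)
pls.fit(X[train], observed[train])
pred_pls = pls.predict(X[test])

def mean_euclidean(pred, gold):
    return np.linalg.norm(pred - gold, axis=1).mean()

for name, pred in [('add', pred_add), ('mul', pred_mul), ('plsr', pred_pls)]:
    print(name, mean_euclidean(pred, observed[test]))

on synthetic data like this the regression model recovers the blend almost exactly, which is only meant to show the mechanics of the comparison, not to reproduce the paper's numbers.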
W10-2805
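the space construction described in the paper (treating frequent adjective-noun lemma pairs as single tokens, counting context words in a 5-left/5-right window, and reducing the matrix to latent dimensions by svd) could look roughly like the sketch below; the function names and the plain numpy svd are illustrative stand-ins for the infomap software that was actually used.

import numpy as np

def merge_an_pairs(tokens, an_pairs):
    """rewrite a lemmatised token list so that a frequent adjective-noun
    pair such as ('nice', 'house') becomes the single token 'nice_house'."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in an_pairs:
            out.append(tokens[i] + '_' + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

def cooccurrence_matrix(sentences, targets, contexts, window=5):
    """count context words within +/- window tokens of each target word."""
    t_index = {w: i for i, w in enumerate(targets)}
    c_index = {w: i for i, w in enumerate(contexts)}
    M = np.zeros((len(targets), len(contexts)))
    for sent in sentences:
        for i, w in enumerate(sent):
            if w not in t_index:
                continue
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i and sent[j] in c_index:
                    M[t_index[w], c_index[sent[j]]] += 1
    return M

def reduce_svd(M, k=500):
    """project the rows onto the first k latent dimensions."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    k = min(k, S.size)
    return U[:, :k] * S[:k]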
a regression model of adjectivenoun compositionality in distributional semantics in this paper we explore the computational modelling of compositionality in distributional models of semantics in particular we model the semantic composition of pairs of adjacent english adjectives and nouns from the british national corpus we build a vectorbased semantic space from a lemmatised version of the bnc where the most frequent an lemma pairs are treated as single tokens we then extrapolate three different models of compositionality a simple additive model a pointwisemultiplicative model and a partial least squares regression model we propose two evaluation methods for the implemented models our study leads to the conclusion that regressionbased models of compositionality generally outperform additive and multiplicative approaches and also show a number of advantages that make them very promising for future research our approach to the semantic composition of adjectives with nouns draws on the classical analysis of adjectives within the montagovian tradition of formal semantic theory on which they are treated as higher order predicates and model adjectives as matrices of weights that are applied to noun vectors our main innovation is to use the cooccurrence vectors of corpusobserved ans to train a supervised composition model we look at corpusharvested phrase vectors to learn composition functions that should derive such composite vectors automatically
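the second evaluation method described above (counting how many top-10 nearest neighbours an observed adjective-noun vector shares with the corresponding model prediction) can be illustrated with the simplified sketch below; unlike the paper, it draws both neighbour lists from the observed test vectors only rather than from a subspace that also contains all the model predictions.

import numpy as np

def nearest_k(query, space, k=10, exclude=None):
    """indices of the k rows of `space` closest to `query` by euclidean
    distance; `exclude` removes the query's own row when the query is
    itself part of the space."""
    dists = np.linalg.norm(space - query, axis=1)
    if exclude is not None:
        dists[exclude] = np.inf
    return set(np.argsort(dists)[:k].tolist())

def shared_neighbour_score(observed, predicted, k=10):
    """award one point for every top-k neighbour shared between an
    observed an vector and the corresponding model prediction, with both
    neighbour lists computed over the observed test vectors."""
    score = 0
    for i, pred in enumerate(predicted):
        gold_nn = nearest_k(observed[i], observed, k, exclude=i)
        pred_nn = nearest_k(pred, observed, k)
        score += len(gold_nn & pred_nn)
    return score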
driving semantic parsing from the worldrsquos response current approaches to semantic parsing the task of converting text to a formal meaning representation rely on annotated training data mapping sentences to logical forms providing this supervision is a major bottleneck in scaling semantic parsers this paper presents a new learning paradigm aimed at alleviating the supervision burden we develop two novel learning algorithms capable of predicting complex structures which only rely on a binary feedback signal based on the context of an external world in addition we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns thus allowing our parser to scale better using less supervision our results surprisingly show that without using any annotated meaning representations learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers semantic parsing the process of converting text into a formal meaning representation is one of the key challenges in natural language processingunlike shallow approaches for semantic interpretation which often result in an incomplete or ambiguous interpretation of the natural language input the output of a semantic parser is a complete meaning representation that can be executed directly by a computer programsemantic parsing has mainly been studied in the context of providing natural language interfaces to computer systemsin these settings the target meaning representation is defined by the semantics of the underlying taskfor example providing access to databases a question posed in natural language is converted into a formal database query that can be executed to retrieve informationexample 1 shows a nl input query and its corresponding meaning representationexample 1 geoquery input text and output mr what is the largest state that borders texas largest previous works employ machine learning techniques to construct a semantic parserthe learning algorithm is given a set of input sentences and their corresponding meaning representations and learns a statistical semantic parser a set of rules mapping lexical items and syntactic patterns to their meaning representation and a score associated with each rulegiven a sentence these rules are applied recursively to derive the most probable meaning representationsince semantic interpretation is limited to syntactic patterns identified in the training data the learning algorithm requires considerable amounts of annotated data to account for the syntactic variations associated with the meaning representationannotating sentences with their mr is a difficult time consuming task minimizing the supervision effort required for learning is a major challenge in scaling semantic parsersthis paper proposes a new model and learning paradigm for semantic parsing aimed to alleviate the supervision bottleneckfollowing the observation that the target meaning representation is to be executed by a computer program which in turn provides a response or outcome we propose a response driven learning framework capable of exploiting feedback based on the responsethe feedback can be viewed as a teacher judging whether the execution of the meaning representation produced the desired response for the input sentencethis type of supervision is very natural in many situations and requires no expertise thus can be supplied by any usercontinuing with example 1 the response generated by executing a database query would be used to provide feedbackthe 
feedback would be whether the generated response is the correct answer for the input question or not in this case new mexico is the desired responsein response driven semantic parsing the learner is provided with a set of natural language sentences and a feedback function that encapsulates the teacherthe feedback function informs the learner whether its interpretation of the input sentence produces the desired responsewe consider scenarios where the feedback is provided as a binary signal correct 1 or incorrect 1this weaker form of supervision poses a challenge to conventional learning methods semantic parsing is in essence a structured prediction problem requiring supervision for a set of interdependent decisions while the provided supervision is binary indicating the correctness of a generated meaning representationto bridge this difference we propose two novel learning algorithms suited to the response driven settingfurthermore to account for the many syntactic variations associated with the mr we propose a new model for semantic parsing that allows us to learn effectively and generalize bettercurrent semantic parsing approaches extract parsing rules mapping nl to their mr restricting possible interpretations to previously seen syntactic patternswe replace the rigid inference process induced by the learned parsing rules with a flexible frameworkwe model semantic interpretation as a sequence of interdependent decisions mapping text spans to predicates and use syntactic information to determine how the meaning of these logical fragments should be composedwe frame this process as an integer linear programming problem a powerful and flexible inference framework that allows us to inject relevant domain knowledge into the inference process such as specific domain semantics that restrict the space of possible interpretationswe evaluate our learning approach and model on the well studied geoquery domain a database consisting of yous geographical information and natural language questionsour experimental results show that our model with response driven learning can outperform existing models trained with annotated logical formsthe key contributions of this paper are response driven learning for semantic parsing we propose a new learning paradigm for learning semantic parsers without any annotated meaning representationsthe supervision for learning comes from a binary feedback signal based a response generated by executing a meaning representationthis type of supervision signal is natural to produce and can be acquired from nonexpert usersnovel training algorithms two novel training algorithms are developed within the response driven learning paradigmthe training algorithms are applicable beyond semantic parsing and can be used in situations where it is possible to obtain binary feedback for a structured learning problemflexible semantic interpretation process we propose a novel flexible semantic parsing model that can handle previously unseen syntactic variations of the meaning representationthe goal of semantic parsing is to produce a function f x z that maps from the space natural language input sentences x to the space of meaning representations zthis type of task is usually cast as a structured output prediction problem where the goal is to obtain a model that assigns the highest score to the correct meaning representation given an input sentencehowever in the task of semantic parsing this decision relies on identifying a hidden intermediate representation that captures the way in which 
fragments of the text correspond to the meaning representationtherefore we formulate the prediction function as follows where b is a feature function that describes the relationships between an input sentence x alignment y and meaning representation z w is the weight vector which contains the parameters of the modelwe refer to the arg max above as the inference problemthe feature function combined with the nature of the inference problem defines the semantic parsing modelthe key to producing a semantic parser involves defining a model and a learning algorithm to obtain w in order to exemplify these concepts we consider the geoquery domaingeoquery contains a query language for a database of yous geographical factsfigure 1 illustrates concrete examples of the terminology introducethe input sentences x are natural language queries about yous geographythe meaning representations z are logical forms which can be executed on the database to obtain a response which we denote with r the alignment y captures the associations between x and zbuilding a semantic parser involves defining the model and a learning strategy to obtain weights associated with the modelwe defer discussion of our model until section 4 and first focus on our learning strategyprevious approaches to semantic parsing have assumed a fully supervised setting where a training set is available consisting of either input sentences and logical forms nl1 or input sentences logical forms and a mapping between their constituents nl1 given such training examples a weight vector w can be learned using structured learning methodsobtaining through annotation or other means this form of training data is an expensive and difficult process which presents a major bottleneck for semantic parsingto reduce the burden of annotation we focus on a new learning paradigm which uses feedback from a teacherthe feedback signal is binary and informs the learner whether a predicted logical form zˆ when executed on the target domain produces the desired response or outcomethis is a very natural method for providing supervision in many situations and requires no expertisefor example a user can observe the response and provide a judgementthe general form of the teachers feedback is provided by a function for the geoquery domain this amounts to whether the logical form produces the correct response r for the input sentencegeoquery has the added benefit that the teacher can be automated if we have a dataset consisting of input sentences and response pairs nl1feedback evaluates whether a logical form produces a response matching r we are now ready to present our learning with feedback algorithms that operate in situations where input sentences xln l1 and a teacher feedback mechanism feedback are availablewe do not assume the availability of logical formsin general a weight vector can be considered good if when used in the inference problem it scores the correct logical form and alignment higher than all other logical forms and alignments for a given input sentencethe intuition behind the direct approach is that the feedback function can be used to subsample the space of possible structures for a given input xthe feedback mechanism indicates whether the structure is good or bad using this x y z r intuition we can cast the problem of learning a weight vector for equation as a binary classification problem where we directly consider structures the feedback assigns 1 as positive examples and those assigned 1 as negativewe represent the input to the binary classifier as 
Algorithm 1 outlines the approach in detail. The first stage of the algorithm iterates over all the training input sentences and computes the best logical form z and alignment y by solving the inference problem. The feedback function is queried, and a training example for the binary predictor is created, using the normalized feature vector from the triple containing the sentence, alignment, and logical form as input and the feedback as the label. This training example is added to the working set of training examples for this input sentence. All the feedback training examples are used to train a binary classifier, whose weight vector is used in the next iteration. The algorithm repeats until no new unique training examples are added to any of the working sets for any input sentence. Although the number of possible training examples is very large, in practice the algorithm is efficient and converges quickly. Note that this approach is capable of using a wide variety of linear classifiers as the base learner.

A policy is required to specify the nature of the working set of training examples used for training the base classifier; this is pertinent in line 6 of the algorithm. Possible policies include allowing duplicates in the working set, disallowing duplicates, or only allowing one example per input sentence. We adopt the first approach in this paper.

There is important implicit information which the direct approach ignores. It is implicit that when the teacher indicates that an input paired with an alignment and logical form is good, then in order to repeat this behavior, all other competing structures should be made suboptimal. To leverage this implicit information, we adopt a structured learning strategy in which we consider the prediction as the optimal structure and all others as suboptimal. This is in contrast to the direct approach, where only structures that have explicitly received negative feedback are considered suboptimal. When a structure is found with positive feedback, it is added to the training pool for a structured learner. We consider this approach aggressive, as the structured learner implicitly considers all other structures as being suboptimal. Negative feedback indicates that the structure should not be added to the training pool, as it would introduce noise into the learning process.

Algorithm 2 outlines the learning in more detail. As before, y and z are predicted using the current weight vector and feedback is received. When positive feedback is received, a new training instance for a structured learner is created from the input sentence and prediction; this training instance replaces any previous instance for the input sentence. When negative feedback is received, the training pool S is not updated. A weight vector is learned using a structured learner, where the training data S contains at most one example per input sentence. In the first iteration of the outer loop, the training data S will contain very few examples; in each subsequent iteration, the newly learned weight vector allows the algorithm to acquire new examples. This is repeated until no new examples are added or changed in S. Like the direct approach, this learning framework makes very few assumptions about the type of structured learner used as a base learner.
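As a rough illustration of Algorithm 1 (not the authors' implementation), the Python sketch below assumes a scikit-learn-style binary learner; solve_inference, phi, and feedback are placeholders for the ILP inference procedure, the feature function Φ, and the teacher signal, phi is assumed to return a NumPy vector, and sentences are assumed to be hashable strings.

def direct_learning(sentences, responses, w, binary_learner, feedback,
                    solve_inference, phi, max_iters=50):
    # Working set of (features, label) examples kept per input sentence.
    working_sets = {x: [] for x in sentences}
    for _ in range(max_iters):
        added_new = False
        for x, r in zip(sentences, responses):
            y_hat, z_hat = solve_inference(x, w)             # arg max_{y,z} w . phi(x, y, z)
            features = tuple(phi(x, y_hat, z_hat) / len(x))  # normalize by sentence size
            example = (features, feedback(z_hat, r))         # label is the +1/-1 teacher signal
            if example not in working_sets[x]:
                added_new = True
            working_sets[x].append(example)                  # duplicates allowed (our policy)
        X = [f for examples in working_sets.values() for f, _ in examples]
        Y = [l for examples in working_sets.values() for _, l in examples]
        binary_learner.fit(X, Y)                             # retrain the binary classifier
        w = binary_learner.coef_.ravel()                     # its weights drive the next inference round
        if not added_new:                                    # stop: no new unique examples anywhere
            break
    return w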
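Under the same assumptions, a comparable sketch of Algorithm 2 follows; here structured_learner.fit is assumed to take a pool of (sentence, alignment, logical form) triples and return a weight vector, an interface invented purely for illustration rather than the paper's actual structural SVM API.

def aggressive_learning(sentences, responses, w, structured_learner, feedback,
                        solve_inference, max_iters=50):
    pool = {}                                           # at most one (y, z) per input sentence
    for _ in range(max_iters):
        changed = False
        for x, r in zip(sentences, responses):
            y_hat, z_hat = solve_inference(x, w)        # predict with the current weights
            if feedback(z_hat, r) == 1:                 # keep only positively judged structures
                if pool.get(x) != (y_hat, z_hat):
                    pool[x] = (y_hat, z_hat)            # replace any earlier instance for x
                    changed = True
            # On negative feedback the pool is left untouched, to avoid adding noise.
        w = structured_learner.fit([(x, y, z) for x, (y, z) in pool.items()])
        if not changed:                                 # stop once the training pool stabilizes
            break
    return w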
Semantic parsing is the process of converting a natural language input into a formal logic representation. This process is performed by associating lexical items and syntactic patterns with logical fragments and composing them into a complete formula. Existing approaches rely on extracting a set of parsing rules, mapping text constituents to a logical representation, from annotated training data and applying them recursively to obtain the meaning representation. Adapting to new data is a major limitation of these approaches, as they cannot handle inputs containing syntactic patterns which were not observed in the training data. For example, assume the training data produced the set of typical parsing rules shown in Example 2. At test time, the parser is given the sentences in Example 3. Despite the lexical similarity in these examples, the semantic parser will correctly parse the first sentence but fail to parse the second, because the lexical items belong to a different syntactic category.

The ability to adapt to unseen inputs is one of the key challenges in semantic parsing. Several works have addressed this issue explicitly, by manually defining syntactic transformation rules that can help the learned parser generalize better. Unfortunately, these are only partial solutions, as a manually constructed rule set cannot cover the many syntactic variations. Given the previous example, we observe that it is enough to identify that the function capital and the constant const(·) appear in the target MR, since there is only a single way to compose these entities into a single formula, capital(const(·)). Motivated by this observation, we define our meaning derivation process over the rules of the MR language and use syntactic information as a way to bias the MR construction process. That is, our inference process considers the entire space of meaning representations, irrespective of the patterns observed in the training data. This is possible as the MRs are defined by a formal language and formal grammar. The syntactic information present in the natural language is used as soft evidence which guides the inference process to good meaning representations.

This formulation is a major shift from existing approaches that rely on extracting parsing rules from the training data. In existing approaches, the space of possible meaning representations is constrained by the patterns in the training data and the syntactic structure of the natural language input. Our formulation considers the entire space of meaning representations, allows the model to adapt to previously unseen data, and always produces a semantic interpretation by using the patterns observed in the input. We frame our semantic interpretation process as a constrained optimization process maximizing the objective function defined by Equation (1), which relies on extracting lexical and syntactic features instead of parsing rules. In the remainder of this section, we explain the components of our inference model.

Following previous work, we capture the semantics of the Geoquery domain using a subset of first-order logic consisting of typed constants and functions. There are two types: entities (e) in the domain and numeric values (n). Functions describe a functional relationship over types. A complete logical form is constructed through functional composition; in our formalism this is performed by the substitution operator. For example, given the function next_to(x) and a constant expression const(·), substitution replaces the occurrence of the free variable x with the expression, resulting in a new logical form next_to(const(·)). Due to space limitations, we refer the reader to prior descriptions of the Geoquery domain for further details.
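To make the substitution operator concrete, here is a toy Python illustration; the nested-tuple representation of logical forms and the placeholder constant name are our own choices for the example, not the formalism used in the system.

def substitute(form, free_var, expression):
    # Replace every occurrence of the free variable with the given expression.
    if form == free_var:
        return expression
    if isinstance(form, tuple):                        # (symbol, arg1, arg2, ...)
        return tuple(substitute(part, free_var, expression) for part in form)
    return form                                        # other symbols/constants unchanged

partial_form = ("next_to", "x")                        # next_to(x) with free variable x
constant = ("const", "some_state")                     # hypothetical constant expression
composed = substitute(partial_form, "x", constant)     # ("next_to", ("const", "some_state"))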
Recall that the goal of semantic parsing is to produce the prediction function in Equation (1). However, given that y and z are complex structures, it is necessary to decompose the structure into a set of smaller decisions to facilitate efficient inference. In order to define our decomposition, we introduce additional notation: c is a constituent in the input sentence x, and D is the set of all function and constant symbols in the domain. The alignment y is defined as a set of mappings between constituents and symbols in the domain, y = {(c, s)}, where s ∈ D. We decompose the construction of an alignment and logical form into two types of decisions: (1) first-order decisions, a mapping between constituents and logical symbols; (2) second-order decisions, expressing how logical symbols are composed into a complete logical interpretation, for example, whether next_to and state compose into next_to(state(·)) or state(next_to(·)). Note that for all possible logical forms and alignments there exists a one-to-one mapping to these decisions.

We frame the inference problem as an integer linear programming (ILP) problem, in which the first-order decisions are governed by α_{cs}, a binary decision variable indicating that constituent c is aligned with logical symbol s, and β_{cs,dt} captures the second-order decisions, indicating that the symbol t is an argument of the function s. In addition, we add constraints which leverage the typing information inherent in the domain to eliminate logical forms that are invalid in the Geoquery domain; for example, the function length only accepts river types as input. The flexibility of ILP has previously been advantageous in natural language processing tasks, as it allows us to easily incorporate such constraints.

The inference problem defined in Equation (1) uses two feature functions, Φ1 and Φ2.

First-order decision features Φ1. Determining if a logical symbol is aligned with a specific constituent depends mostly on lexical information. Following previous work, we create a small lexicon mapping logical symbols to surface forms. This lexicon is small and only used as a starting point. Existing approaches rely on annotated logical forms to extend the lexicon; however, in our setting we do not have access to annotated logical forms. Instead, we rely on external knowledge to supply further information. We add features which measure the lexical similarity between a constituent and a logical symbol's surface forms. Two metrics are used: stemmed word match and a similarity metric based on WordNet, which allows our model to account for words not in the lexicon. The WordNet metric measures similarity based on synonymy, hyponymy, and meronymy. In the case where the constituent is a preposition (prepositions are notorious for being ambiguous), we add a feature that considers the current lexical context in addition to word similarity.

Second-order decision features Φ2. Determining how to compose two logical symbols relies on syntactic information; in our model, we use the dependency tree of the input sentence. Given a second-order decision β_{cs,dt}, the dependency feature takes the normalized distance between the head words in the constituents c and d. A set of features also indicates which logical symbols are usually composed together, without considering their alignment to text.
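One possible shape for the first-order lexical-similarity features is sketched below using NLTK's stemmer and WordNet interface; the paper's actual similarity measure (based on synonymy, hyponymy, and meronymy) and lexicon handling may differ, so path similarity here is only a stand-in.

from nltk.corpus import wordnet as wn
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def stemmed_match(constituent_word, surface_form):
    # 1.0 if the stemmed constituent equals a stemmed surface form of the symbol.
    return float(stemmer.stem(constituent_word) == stemmer.stem(surface_form))

def wordnet_similarity(constituent_word, surface_form):
    # Best similarity over all synset pairs; covers words that are not in the lexicon.
    scores = [a.path_similarity(b) or 0.0
              for a in wn.synsets(constituent_word)
              for b in wn.synsets(surface_form)]
    return max(scores, default=0.0)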
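To show how the first- and second-order decisions and the typing constraints could be wired into an ILP, here is a simplified sketch using the PuLP modelling library purely for illustration; score1, score2, and the types.accepts helper are hypothetical stand-ins for the learned scores w·Φ1 and w·Φ2 and the domain's typing information, and constituents and symbols are assumed to be simple identifiers.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum

def build_ilp(constituents, symbols, score1, score2, types):
    prob = LpProblem("semantic_interpretation", LpMaximize)
    # First-order decisions: constituent c is aligned with logical symbol s.
    alpha = {(c, s): LpVariable(f"a_{c}_{s}", cat="Binary")
             for c in constituents for s in symbols}
    # Second-order decisions: symbol t (constituent d) is an argument of function s (constituent c).
    beta = {(c, s, d, t): LpVariable(f"b_{c}_{s}_{d}_{t}", cat="Binary")
            for c in constituents for s in symbols
            for d in constituents for t in symbols}
    # Objective: total score of the selected first- and second-order decisions.
    prob += (lpSum(score1[c][s] * alpha[c, s] for (c, s) in alpha) +
             lpSum(score2[c][s][d][t] * beta[c, s, d, t] for (c, s, d, t) in beta))
    for (c, s, d, t), var in beta.items():
        # A composition can only be active if both of its alignments are active.
        prob += var <= alpha[c, s]
        prob += var <= alpha[d, t]
        # Typing constraint: e.g. length(.) only accepts river-typed arguments.
        if not types.accepts(s, t):
            prob += var == 0
    return prob, alpha, beta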
In this section we describe our experimental setup, which includes the details of the domain, resources, and parameters. We evaluate our system on the Geoquery domain as described previously. The domain consists of a database and a Prolog query language for U.S. geographical facts. The corpus contains 880 natural language queries paired with Prolog logical form queries. We follow previous approaches and transform these queries into a functional representation. We randomly select 250 sentences for training and 250 sentences for testing. We refer to the training set as Response250 (R250), indicating that each example x in this data set has a corresponding desired database response r. We refer to the testing set as Query250 (Q250), where the examples only contain the natural language queries. Our inference problem is less constrained than previous approaches; thus we limit the training data to 250 examples due to scalability issues. We also prune the search space by limiting the number of logical symbol candidates per word.

Precision and recall are typically used as evaluation metrics in semantic parsing. However, as our model inherently has the ability to map any input sentence into the space of meaning representations, the trade-off between precision and recall does not exist. Thus we report accuracy, the percentage of meaning representations which produce the correct response; this is equivalent to recall in previous work.

Feedback. Recall that our learning framework does not require meaning representation annotations. However, we do require a feedback function that informs the learner whether a predicted meaning representation, when executed, produces the desired response for a given input sentence. We automatically generate a set of natural language query and response pairs by executing the annotated logical forms on the database. Using this data, we construct an automatic feedback function as described in Section 3.

Domain knowledge. Our learning approaches require an initial weight vector as input. In order to provide an initial starting point, we initialize the weight vector using a procedure similar to that used in previous work to set weights for three features and a bias term. The weights were developed on the training set, using the feedback function to guide our choices.

Underlying learning algorithms. In the direct approach, the base linear classifier we use is a linear-kernel support vector machine with squared-hinge loss. In the aggressive approach, we define our base structured learner to be a structural support vector machine with squared-hinge loss, and use the Hamming distance as the distance function. We use a custom implementation to optimize the objective function using the cutting-plane method; this allows us to parallelize the learning process by solving the inference problem for multiple training examples simultaneously.
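As a sketch of how the automated teacher and the accuracy metric can be realized, consider the Python fragment below; execute_on_database is a hypothetical stand-in for running a logical form against the Geoquery database, and the annotated logical forms are used only to precompute the desired responses, never as training labels.

def build_feedback(annotated_pairs, execute_on_database):
    # annotated_pairs: list of (sentence, gold logical form) pairs.
    gold_responses = {x: execute_on_database(z_gold) for x, z_gold in annotated_pairs}

    def feedback(x, z_predicted):
        return 1 if execute_on_database(z_predicted) == gold_responses[x] else -1

    return feedback

def accuracy(predictions, gold_responses, execute_on_database):
    # Percentage of predicted meaning representations producing the correct response.
    correct = sum(execute_on_database(z) == gold_responses[x] for x, z in predictions)
    return 100.0 * correct / len(predictions)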
Our experiments are designed to answer three questions. We first compare how well our model performs under four different learning regimes. NoLearn uses a manually initialized weight vector. Direct and Aggressive use the two response-driven learning approaches, where a feedback function but no logical forms are provided. As an upper bound, we train the model using a fully supervised approach, where the input sentences are paired with hand-annotated logical forms. Table 1 shows the accuracy of each setup. The model without learning gives a starting point with an accuracy of 22.2%. The response-driven learning methods perform substantially better than the starting point: the direct approach, which uses a binary learner, reaches an accuracy of 75.2% on the R250 data and 69.2% on the Q250 data, while the aggressive approach, which uses a structured learner, sees a bigger improvement, reaching 82.4% and 73.2% respectively. This is only about 7 points below the fully supervised upper bound of the model.

To answer the second question, we compare a supervised version of our model to existing semantic parsers. The results are in Table 2. Although the numbers are not directly comparable due to different splits in the data, we can see that, with a similar number of logical forms for training, our supervised approach outperforms existing systems, while the aggressive approach remains competitive without using any logical forms. Our supervised model is still very competitive with other approaches which used considerably more annotated logical forms in the training phase.

In order to answer the third question, we turn our attention to the differences between the two response-driven learning approaches. The direct and aggressive approaches use binary feedback to learn; however, they utilize the signal differently. Direct uses the signal directly, to learn a binary classifier capable of replicating the feedback, whereas Aggressive learns a structured predictor that can repeatedly obtain the logical forms for which positive feedback was received. Thus, although Aggressive outperforms the direct approach, the concepts each approach learns may be different. Analysis over the training data shows that on 66.8% of the examples both approaches predict a logical form that gives the correct answer, while Aggressive correctly answers an additional 16% which Direct gets incorrect. In the opposite direction, Direct correctly answers 8.8% that Aggressive does not, leaving only 8.4% of the examples on which both approaches predict incorrect logical forms. This suggests that an approach which combines Direct and Aggressive may be able to improve even further. Figure 2 shows the accuracy on the entire training data at each iteration of learning. We see that the aggressive approach learns to cover more of the training data, and at a faster rate, than Direct. Note that the performance of the direct approach drops at the first iteration; we hypothesize this is due to imbalances in the binary feedback dataset in the first iteration.

Learning to map sentences to a meaning representation has been studied extensively in the NLP community. Early works employed inductive logic programming approaches to learn a semantic parser; more recent works apply statistical learning methods to the problem. In some of these works, the input to the learner consists of complete syntactic derivations for the input sentences annotated with logical expressions. Other works try to alleviate the annotation effort by taking only sentence and logical form pairs to train the models; learning is then defined over hidden patterns in the training data that associate logical symbols with lexical and syntactic elements. In this work, we take an additional step towards alleviating the difficulty of training semantic parsers and present a world-response-based training protocol. Several recent works explore using an external world context as a supervision signal for semantic interpretation. These works operate in settings different from ours, as they rely on an external world state that is directly referenced by the input text. Although our framework can also be applied in these settings, we do not assume that the text can be grounded in a world state; in our experiments, the input text consists of generalized statements which describe some information need that does not correspond directly to a grounded world state. Our learning framework closely follows recent work on learning from indirect supervision: the direct approach resembles learning a binary classifier over a latent structure, while the aggressive approach has similarities with work that uses labeled structures and a binary signal indicating the existence of good structures to improve structured prediction.

In this paper, we tackle one of the key bottlenecks in semantic parsing: providing sufficient supervision to train a semantic parser. Our solution is twofold.
First, we present a new training paradigm for semantic parsing that relies on natural, human-level supervision. Second, we suggest a new model for semantic interpretation that does not rely on NL syntactic parsing rules, but rather uses the syntactic information to bias the interpretation process. This approach allows the model to generalize better and reduces the required amount of supervision. We demonstrate the effectiveness of our training paradigm and interpretation model on the Geoquery domain and show that our model can outperform fully supervised systems.

Acknowledgements. We are grateful to Rohit Kate and Raymond Mooney for their help with the Geoquery dataset. Thanks to Yee Seng Chan, Nick Rizzolo, Shankar Vembu, and the three anonymous reviewers for their insightful comments. This material is based upon work supported by the Air Force Research Laboratory under prime contract no. FA8750-09-C-0181 and by DARPA under the Bootstrap Learning Program. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the AFRL or DARPA.
W10-2903
Driving Semantic Parsing from the World's Response. Current approaches to semantic parsing, the task of converting text to a formal meaning representation, rely on annotated training data mapping sentences to logical forms. Providing this supervision is a major bottleneck in scaling semantic parsers. This paper presents a new learning paradigm aimed at alleviating the supervision burden. We develop two novel learning algorithms capable of predicting complex structures which rely only on a binary feedback signal based on the context of an external world. In addition, we reformulate the semantic parsing problem to reduce the dependency of the model on syntactic patterns, thus allowing our parser to scale better using less supervision. Our results surprisingly show that, without using any annotated meaning representations, learning with a weak feedback signal is capable of producing a parser that is competitive with fully supervised parsers. We train systems on question and answer pairs by automatically finding semantic interpretations of the questions that would generate the correct answers.