Columns: Text (string, 45 to 130k characters), Id (string, 8 characters), Summary (string, 55 to 2.67k characters)
tnt a statistical partofspeech tagger trigramsntags is an efficient statistical partofspeech tagger contrary to claims found elsewhere in the literature we argue that a tagger based on markov models performs at least as well as other current approaches including the maximum entropy framework a recent comparison has even shown that tnt performs significantly better for the tested corpora we describe the basic model of tnt the techniques used for smoothing and for handling unknown words furthermore we present evaluations on two corpora a large number of current language processing systems use a partofspeech tagger for preprocessingthe tagger assigns a partofspeech tag to each token in the input and passes its output to the next processing level usually a parserfurthermore there is a large interest in partofspeech tagging for corpus annotation projects who create valuable linguistic resources by a combination of automatic processing and human correctionfor both applications a tagger with the highest possible accuracy is requiredthe debate about which paradigm solves the partofspeech tagging problem best is not finishedrecent comparisons of approaches that can be trained on corpora have shown that in most cases statistical aproaches yield better results than finitestate rulebased or memorybased taggers they are only surpassed by combinations of different systems forming a quotvoting taggerquotamong the statistical approaches the maximum entropy framework has a very strong positionnevertheless a recent independent comparison of 7 taggers has shown that another approach even works better markov models combined with a good smoothing technique and with handling of unknown wordsthis tagger tnt not only yielded the highest accuracy it also was the fastest both in training and taggingthe tagger comparison was organized as a quotblackbox testquot set the same task to every tagger and compare the outcomesthis paper describes the models and techniques used by tnt together with the implementationthe reader will be surprised how simple the underlying model isthe result of the tagger comparison seems to support the maxime quotthe simplest is the bestquothowever in this paper we clarify a number of details that are omitted in major previous publications concerning tagging with markov modelsas two examples and give good overviews of the techniques and equations used for markov models and partofspeech tagging but they are not very explicit in the details that are needed for their applicationwe argue that it is not only the choice of the general model that determines the result of the tagger but also the various quotsmallquot decisions on alternativesthe aim of this paper is to give a detailed account of the techniques used in tntadditionally we present results of the tagger on the negra corpus and the penn treebank the penn treebank results reported here for the markov model approach are at least equivalent to those reported for the maximum entropy approach in for a comparison to other taggers the reader is referred to tnt uses second order markov models for partofspeech taggingthe states of the model represent tags outputs represent the wordstransition probabilities depend on the states thus pairs of tagsoutput probabilities only depend on the most recent categoryto be explicit we calculate for a given sequence of words w1 of length t t1 tr are elements of the tagset the additional tags t_1 to and t71 are beginningofsequence and endofsequence markersusing these additional tags even if they stem from 
rudimentary processing of punctuation marks slightly improves tagging resultsthis is different from formulas presented in other publications which just stop with a quotloose endquot at the last wordif sentence boundaries are not marked in the input tnt adds these tags if it encounters one of as a tokentransition and output probabilities are estimated from a tagged corpusas a first step we use the maximum likelihood probabilities p which are derived from the relative frequencies for all t1 t2 t3 in the tagset and w3 in the lexiconn is the total number of tokens in the training corpuswe define a maximum likelihood probability to be zero if the corresponding nominators and denominators are zeroas a second step contextual frequencies are smoothed and lexical frequences are completed by handling words that are not in the lexicon trigram probabilities generated from a corpus usually cannot directly be used because of the sparsedata problemthis means that there are not enough instances for each trigram to reliably estimate the probabilityfurthermore setting a probability to zero because the corresponding trigram never occured in the corpus has an undesired effectit causes the probability of a complete sequence to be set to zero if its use is necessary for a new text sequence thus makes it impossible to rank different sequences containing a zero probabilitythe smoothing paradigm that delivers the best results in tnt is linear interpolation of unigrams bigrams and trigramstherefore we estimate a trigram probability as follows p are maximum likelihood estimates of the probabilities and a1 a2 a3 1 so p again represent probability distributionswe use the contextindependent variant of linear interpolation ie the values of the as do not depend on the particular trigramcontrary to intuition this yields better results than the contextdependent variantdue to sparsedata problems one cannot estimate a different set of as for each trigramtherefore it is common practice to group trigrams by frequency and estimate tied sets of ashowever we are not aware of any publication that has investigated frequency groupings for linear interpolation in partofspeech taggingall groupings that we have tested yielded at most equivalent results to contextindependent linear interpolationsome groupings even yielded worse resultsthe tested groupings included a one set of as for each frequency value and b two classes on the two ends of the scale as well as several groupings in between and several settings for partitioning the classesthe values of a1 a2 and a3 are estimated by deleted interpolationthis technique successively removes each trigram from the training corpus and estimates best values for the as from all other ngrams in the corpusgiven the frequency counts for uni bi and trigrams the weights can be very efficiently determined with a processing time linear in the number of different trigramsthe algorithm is given in figure 1note that subtracting 1 means taking unseen data into accountwithout this subtraction the model would overfit the training data and would generally yield worse resultscurrently the method of handling unknown words that seems to work best for inflected languages is a suffix analysis as proposed in tag probabilities are set according to the word endingthe suffix is a strong predictor for word classes eg words in the wall street journal part of the penn treebank ending in able are adjectives in 98 of the cases the rest of 2 are nouns the probability distribution for a particular suffix is generated from 
all words in the training set that share the same suffix of some predefined maximum lengththe term suffix as used here means quotfinal sequence of characters of a wordquot which is not necessarily a linguistically meaningful suffixprobabilities are smoothed by successive abstractionthis calculates the probability of a tag t given the last m letters i of an n letter word pthe sequence of increasingly more general contexts omits more and more characters of the suffix such that p p are used for smoothingthe recursion formula is set a a2 a3 0 foreach trigram t1t2t3 with f 0 depending on the maximum of the following three values for i m 0 using the maximum likelihood estimates p from frequencies in the lexicon weights oi and the initialization for the markov model we need the inverse conditional probabilities p which are obtained by bayesian inversiona theoretical motivated argumentation uses the standard deviation of the maximum likelihood probabilities for the weights 0 this leaves room for interpretationwe use the longest suffix that we can find in the training set but at most 10 charactersthis is an empirically determined choice2 we use a contextindependent approach for 0 as we did for the contextual weights ait turned out to be a good choice to set all 0 to the standard deviation of the unconditioned maximum likelihood probabilities of the tags in the training corpus ie we set for all i 0 m 1 using a tagset of s tags and the average this usually yields values in the range 003 0103 we use different estimates for uppercase and lowercase words ie we maintain two different suffix tries depending on the capitalization of the wordthis information improves the tagging results4 another freedom concerns the choice of the words in the lexicon that should be used for suffix handlingshould we use all words or are some of them better suited than othersaccepting that unknown words are most probably infrequent one can argue that using suffixes of infrequent words in the lexicon is a better approximation for unknown words than using suffixes of frequent wordstherefore we restrict the procedure of suffix handling to words with a frequency smaller than or equal to some threshold valueempirically 10 turned out to be a good choice for this thresholdadditional information that turned out to be useful for the disambiguation process for several corpora and tagsets is capitalization informationtags are usually not informative about capitalization but probability distributions of tags around capitalized words are different from those not capitalizedthe effect is larger for english which only capitalizes proper names and smaller for german which capitalizes all nounswe use flags ci that are true if wi is a capitalized word and false otherwisethese flags are added to the contextual probability distributionsinstead of and equations to are updated accordinglythis is equivalent to doubling the size of the tagset and using different tags depending on capitalizationthe processing time of the viterbi algorithm can be reduced by introducing a beam searcheach state that receives a 6 value smaller than the largest 6 divided by some threshold value 0 is excluded from further processingwhile the viterbi algorithm is guaranteed to find the sequence of states with the highest probability this is no longer true when beam search is addednevertheless for practical purposes and the right choice of 0 there is virtually no difference between the algorithm with and without a beamempirically a value of 0 1000 turned out to approximately 
double the speed of the tagger without affecting the accuracythe tagger currently tags between 30000 and 60000 tokens per second on a pentium 500 running linuxthe speed mainly depends on the percentage of unknown words and on the average ambiguity ratewe evaluate the tagger performance under several aspectsfirst of all we determine the tagging accuracy averaged over ten iterationsthe overall accuracy as well as separate accuracies for known and unknown words are measuredsecond learning curves are presented that indicate the performance when using training corpora of different sizes starting with as few as 1000 tokens and ranging to the size of the entire corpus an important characteristic of statistical taggers is that they not only assign tags to words but also probabilities in order to rank different assignmentswe distinguish reliable from unreliable assignments by the quotient of the best and second best assignmentsl all assignments for which this quotient is larger than some threshold are regarded as reliable the others as unreliableas we will see below accuracies for reliable assignments are much higherthe tests are performed on partitions of the corpora that use 90 as training set and 10 as test set so that the test data is guaranteed to be unseen during trainingeach result is obtained by repeating the experiment 10 times with different partitions and averaging the single outcomesin all experiments contiguous test sets are usedthe alternative is a roundrobin procedure that puts every 10th sentence into the test setwe argue that contiguous test sets yield more realistic results because completely unseen articles are taggedusing the roundrobin procedure parts of an article are already seen which significantly reduces the percentage of unknown wordstherefore we expect even by definition this quotient is oo if there is only one possible tag for a given word higher results when testing on every 10th sentence instead of a contiguous set of 10in the following accuracy denotes the number of correctly assigned tags divided by the number of tokens in the corpus processedthe tagger is allowed to assign exactly one tag to each tokenwe distinguish the overall accuracy taking into account all tokens in the test corpus and separate accuracies for known and unknown tokensthe latter are interesting since usually unknown tokens are much more difficult to process than known tokens for which a list of valid tags can be found in the lexiconthe german negra corpus consists of 20000 sentences of newspaper texts that are annotated with partsofspeech and predicateargument structures it was developed at the saarland university in saarbriicken2part of it was tagged at the ims stuttgartthis evaluation only uses the partofspeech annotation and ignores structural annotationstagging accuracies for the negra corpus are shown in table 2figure 3 shows the learning curve of the tagger ie the accuracy depending on the amount of training datatraining length is the number of tokens used for trainingeach training length was tested ten times training and test sets were randomly chosen and disjoint results were averagedthe training length is given on a logarithmic scaleit is remarkable that tagging accuracy for known words is very high even for very small training corporathis means that we have a good chance of getting the right tag if a word is seen at least once during trainingaverage percentages of unknown tokens are shown in the bottom line of each diagramwe exploit the fact that the tagger not only determines tags but 
also assigns probabilitiesif there is an alternative that has a probability quotclose toquot that of the best assignment this alternative can be viewed as almost equally well suitedthe notion of quotclose toquot is expressed by the distance of probabilities and this in turn is expressed by the quotient of probabilitiesso the distance of the probabilities of a best tag tbest and an alternative tag tau is expressed by pp7 which is some value greater or equal to 1 since the best tag assignment has the highest probabilityfigure 4 shows the accuracy when separating assignments with quotients larger and smaller than the threshold as expected we find that accuracies for percentage known unknown overall unknowns acc acc acc a table 5 partofspeech tagging accuracy for the penn treebankthe table shows the percentage of unknown tokens separate accuracies and standard deviations for known and unknown tokens as well as the overall accuracy percentage known unknown overall unknowns acc acc acc reliable assignments are much higher than for unreliable assignmentsthis distinction is eg useful for annotation projects during the cleaning process or during preprocessing so the tagger can emit multiple tags if the best tag is classified as unreliablewe use the wall street journal as contained in the penn treebank for our experimentsthe annotation consists of four parts 1 a contextfree structure augmented with traces to mark movement and discontinuous constituents 2 phrasal categories that are annotated as node labels 3 a small set of grammatical functions that are annotated as extensions to the node labels and 4 partofspeech tags this evaluation only uses the partofspeech annotationthe wall street journal part of the penn treebank consists of approx50000 sentences tagging accuracies for the penn treebank are shown in table 5figure 6 shows the learning curve of the tagger ie the accuracy depending on the amount of training datatraining length is the number of tokens used for trainingeach training length was tested ten timestraining and test sets were disjoint results are averagedthe training length is given on a logarithmic scaleas for the negra corpus tagging accuracy is very high for known tokens even with small amounts of training datawe exploit the fact that the tagger not only determines tags but also assigns probabilitiesfigure 7 shows the accuracy when separating assignments with quotients larger and smaller than the threshold again we find that accuracies for reliable assignments are much higher than for unreliable assignmentsaverage partofspeech tagging accuracy is between 96 and 97 depending on language and tagset which is at least on a par with stateoftheart results found in the literature possibly betterfor the penn treebank reports an accuracy of 966 using the maximum entropy approach our much simpler and therefore faster hmm approach delivers 967this comparison needs to be reexamined since we use a tenfold crossvalidation and averaging of results while ratnaparkhi only makes one test runthe accuracy for known tokens is significantly higher than for unknown tokensfor the german newspaper data results are 87 better when the word was seen before and therefore is in the lexicon than when it was not seen before accuracy for known tokens is high even with very small amounts of training dataas few as 1000 tokens are sufficient to achieve 9596 accuracy for themit is important for the tagger to have seen a word at least once during trainingstochastic taggers assign probabilities to tagswe exploit the 
probabilities to determine reliability of assignmentsfor a subset that is determined during processing by the tagger we achieve accuracy rates of over 99the accuracy of the complement set is much lowerthis information can eg be exploited in an annotation project to give an additional treatment to the unreliable assignments or to pass selected ambiguities to a subsequent processing stepwe have shown that a tagger based on markov models yields stateoftheart results despite contrary claims found in the literaturefor example the markov model tagger used in the comparison of yielded worse results than all other taggersin our opinion a reason for the wrong claim is that the basic algorithms leave several decisions to the implementorthe rather large amount of freedom was not handled in detail in previous publications handling of start and endofsequence the exact smoothing technique how to determine the weights for context probabilities details on handling unknown words and how to determine the weights for unknown wordsnote that the decisions we made yield good results for both the german and the english corpusthey do so for several other corpora as wellthe architecture remains applicable to a large variety of languagesaccording to current tagger comparisons and according to a comparsion of the results presented here with those in the maximum entropy framework seems to be the only other approach yielding comparable results to the one presented hereit is a very interesting future research topic to determine the advantages of either of these approaches to find the reason for their high accuracies and to find a good combination of bothtnt is freely available to universities and related organizations for research purposes many thanks go to hans uszkoreit for his support during the development of tntmost of the work on tnt was carried out while the author received a grant of the deutsche forschungsgemeinschaft in the graduiertenkolleg kognitionswissenschaft saarbriickenlarge annotated corpora are the prerequisite for developing and testing partofspeech taggers and they enable the generation of highquality language modelstherefore i would like to thank all the people who took the effort to annotate the penn treebank the susanne corpus the stuttgarter referenzkorpus the negra corpus the verbmobil corpora and several othersand last but not least i would like to thank the users of tnt who provided me with bug reports and valuable suggestions for improvements
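The smoothing description above estimates the interpolation weights by deleted interpolation over the trigram counts, where subtracting 1 from each count simulates leaving the current trigram out. The following is a minimal Python sketch of that weight estimation, assuming tag uni-, bi- and trigram counts are available as Counter-like mappings; the tie-breaking order and the handling of zero denominators are implementation choices not fully specified in the text.

def deleted_interpolation(unigrams, bigrams, trigrams, n_tokens):
    # unigrams: counts keyed by tag, bigrams: counts keyed by (t1, t2),
    # trigrams: counts keyed by (t1, t2, t3); n_tokens: corpus size N
    l1 = l2 = l3 = 0.0
    for (t1, t2, t3), f in trigrams.items():
        # leave-one-out relative frequencies; subtracting 1 treats the current
        # trigram as unseen so the weights do not overfit the training data
        c3 = (f - 1) / (bigrams[(t1, t2)] - 1) if bigrams[(t1, t2)] > 1 else 0.0
        c2 = (bigrams[(t2, t3)] - 1) / (unigrams[t2] - 1) if unigrams[t2] > 1 else 0.0
        c1 = (unigrams[t3] - 1) / (n_tokens - 1) if n_tokens > 1 else 0.0
        best = max(c1, c2, c3)
        if best == c3:
            l3 += f
        elif best == c2:
            l2 += f
        else:
            l1 += f
    total = l1 + l2 + l3
    return l1 / total, l2 / total, l3 / total

The returned weights plug into the smoothed estimate P(t3 | t1, t2) = l1 * P(t3) + l2 * P(t3 | t2) + l3 * P(t3 | t1, t2), with maximum likelihood estimates on the right-hand side.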
A00-1031
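The speed figures quoted for the tagger rest on adding a beam to Viterbi decoding: any state whose probability falls below the current best divided by a threshold theta (1000 in the text) is pruned. The sketch below works in log space over (previous tag, current tag) state pairs; log_p_trans, log_p_emit, the "<s>" boundary marker and the omission of end-of-sequence handling are illustrative assumptions, not the TnT implementation.

import math

def viterbi_beam(words, tags, log_p_trans, log_p_emit, theta=1000.0):
    # log_p_trans(t3, t1, t2): log P(t3 | t1, t2); log_p_emit(w, t): log P(w | t)
    # states are (previous tag, current tag) pairs; "<s>" marks the boundary
    log_theta = math.log(theta)
    beam = {("<s>", "<s>"): (0.0, [])}
    for w in words:
        nxt = {}
        for (t1, t2), (score, hist) in beam.items():
            for t3 in tags:
                s = score + log_p_trans(t3, t1, t2) + log_p_emit(w, t3)
                if (t2, t3) not in nxt or s > nxt[(t2, t3)][0]:
                    nxt[(t2, t3)] = (s, hist + [t3])
        best = max(v[0] for v in nxt.values())
        # beam pruning: drop states whose probability is below best / theta
        beam = {k: v for k, v in nxt.items() if v[0] >= best - log_theta}
    return max(beam.values(), key=lambda v: v[0])[1]

With theta around 1000 the pruning is loose enough that, as reported above, accuracy is essentially unaffected while tagging speed roughly doubles.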
TnT: A Statistical Part-of-Speech Tagger. Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the maximum entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora. We achieve the automated tagging of a syntactic-structure-based set of grammatical function tags, including phrase-chunk and syntactic-role modifiers, trained in supervised mode from a treebank of German.
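The unknown-word handling summarised above builds tag distributions from word suffixes and smooths them by successive abstraction, with every theta set to the standard deviation of the unconditioned tag probabilities, a suffix length of at most 10, and only words of frequency 10 or less contributing. A minimal sketch under those settings; the lexicon and tag_counts structures and the single suffix table (the paper keeps separate tries for capitalised and uncapitalised words, omitted here) are assumptions for illustration.

import math
from collections import Counter, defaultdict

def build_suffix_model(lexicon, tag_counts, max_suffix=10, max_freq=10):
    # lexicon: word -> Counter of tags seen for that word in training
    # tag_counts: Counter of overall tag frequencies in the training corpus
    n = sum(tag_counts.values())

    # theta: standard deviation of the unconditioned tag probabilities
    probs = [c / n for c in tag_counts.values()]
    mean = sum(probs) / len(probs)
    theta = math.sqrt(sum((p - mean) ** 2 for p in probs) / (len(probs) - 1))

    suffix_tags = defaultdict(Counter)          # suffix string -> tag counts
    for word, tags in lexicon.items():
        if sum(tags.values()) > max_freq:       # only infrequent words
            continue
        for i in range(1, min(len(word), max_suffix) + 1):
            suffix_tags[word[-i:]].update(tags)

    def p_tag_given_suffix(tag, word):
        # successive abstraction: start from the unconditioned tag probability
        # and add one more suffix letter at each step
        p = tag_counts[tag] / n
        for i in range(1, min(len(word), max_suffix) + 1):
            counts = suffix_tags.get(word[-i:])
            if not counts:
                break
            ml = counts[tag] / sum(counts.values())
            p = (ml + theta * p) / (1 + theta)
        return p

    return p_tag_given_suffix

For use in the Markov model the conditional direction still has to be flipped: P(w | t) is obtained from the smoothed P(t | suffix) by Bayesian inversion, as noted in the text.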
sentence reduction for automatic text summarization figure 2 sample sentence and parse tree we have an input sentence t dg b4a because both the human and the program made decisions two out of the five decisions agree so the rate is 25 the rate defined as man and the program have made the same decision success rate the total of edges along which both the human and the progam have made decisions 313 note that the edges along which only the human or the program has made a decision are not considered in the computation of success rate since there is no agreement issue in such cases 32 evaluation result in the evaluation we used 400 sentences in the corpus to compute the probabilities that a phrase is removed reduced or unchanged we tested the program on the rest 100 sentences using fivefold validation the program achieved an average success rate of 813 if we consider the baseline as removing all the prepositional phrases clauses toinfinitives and gerunds the baseline performance is 432 we also computed the success rate of program decisions on particular types of phrases for the decisions on removing or keeping a clause the system has a success rate of 781 for the decisions on removing or keeping a toinfinitive the system has a success rate of 852 we found out that the system has a low success rate on removing adjectives of noun phrases or removing adverbs of a sentence or a verb phrase one reason for this is that our probability model can hardly capture the dependencies between a particular adjective and the head noun since the training corpus is not large enough while the other sources of information including grammar or context information provide little evidence on whether an adjective or an adverb should be removed given that whether or not an adjective or an adverb is removed does not affect the conciseness of the sentence significantly and the system lacks of reliability in making such decisions we decide not to remove adjectives and adverbs on average the system reduced the length of the 500 sentence by 327 while humans reduced it by 418 the probabilities we computed from the training corpus covered 58 of instances in the test corpus when the corpus probability is absent for a case the system makes decisions based on the other two sources of knowledge some of the errors made by the system result from the errors by the syntactic parser we randomly checked 50 sentences and found that 8 of the errors made by the system are due to parsing errors there are two main reasons responsible for this relative low percentage of errors resulted from mistakes in parsing one reason is that we have taken some special measures to avoid errors introduced by mistakes in parsing for example pp attachment is a difficult problem in parsing and it is not rare that a pp is wrongly attached therefore we take this into account when marking the obligatory components using subcategorization knowledge from the lexicon we not only look at the pps that are attached to a verb phrase but also pps that are next to the verb phrase but not attached in case it is part of the verb phrase we also wrote a preprocessor to deal with particular structures that the parser often has problems with such as appositions the other reason is that parsing errors do not always result in reduction errors for example given a sentence quotthe spokesperson of the university said that quot although thatclause in the sentence may have a complicated structure and the parser gets it wrong the reduction system is not necessarily affected since it 
may decide in this case to keep thatclause as it is as humans often do so the parsing errors will not matter in this example 4 discussion and related work the reduction algorithm we present assumes generic summarization that is we want to generate a summary that includes the most important information in an article we can tailor the reduction system to queriesbased summarization in that case the task of the reduction is not to remove phrases that are extraneous in terms of the main topic of an article but phrases that are not very relevant to users queries we extended our sentence reduction program to querybased summarization by adding another step in the algorithm to measure the relevance of users queries to phrases in the sentence in the last step of reduction when the system makes the final decision the relevance of a phrase to the query is taken into account together with syntactic context and corpus information ideally the sentence reduction module should interact with other modules in a summarization system it should be able to send feedback to the extraction module if it finds that a sentence selected by the extraction module may be inappropriate it should also be able to interact with the modules that run after it such as the sentence combination module so that it can revise reduction decisions according to the feedback from these modules some researchers suggested removing phrases or clauses from sentences for certain applications proposed to remove phrases in sentences to produce a telegraphic text that can be used to provide audio scanning service for the blind proposed to remove clauses in sentences before indexing documents for information retrieval both studies removed phrases based only on their syntactic categories while the focus of our system is on deciding when it is appropriate to remove a phrase researchers worked on the text simplifica 314 tion problem which usually involves in simplifying text but not removing any phrases for example discussed simplifying newspaper text by replacing uncommon words with common words or replacing complicated syntactic structures with simpler structures to assist people with reading disabilities discussed text simplification in general the difference between these studies on text simplification and our system is that a text simplification system usually not from an original sentence although it may change its structure or words but our system removes extraneous phrases from the extracted sentences 5 conclusions and future work we present a novel sentence reduction system which removes extraneous phrases from sentences that are extracted from an article in text summarization the deleted phrases can be prepositional phrases clauses toinfinitives or gerunds and multiple phrases can be removed form a single sentence the focus of this work is on determining for a sentence in a particular context which phrases in the sentence are less important and can be removed our system makes intelligent reduction decisions based on multiple sources of knowledge including syntactic knowledge context and probabilities computed from corpus analysis we also created a corpus consisting of 500 sentences and their reduced forms produced by human professionals and used this corpus for training and testing the system the evaluation shows that 813 of reduction decisions made by the system agreed with those of humans in the future we would like to integrate our sentence reduction system with extractionbased summarization systems other than the one we have developed 
improve the performance of the system further by introducing other sources of knowledge necessary for reduction and explore other interesting applications of the reduction system acknowledgment this material is based upon work supported by the national science foundation under grant no iri 9619124 and iri 9618797 any opinions findings and conclusions or recommendations expressed in this material are those of the authors and do not current automatic summarizers usually rely on sentence extraction to produce summarieshuman professionals also often reuse the input documents to generate summaries however rather than simply extracting sentences and stringing them together as most current summarizers do humans often quoteditquot the extracted sentences in some way so that the resulting summary is concise and coherentwe analyzed a set of articles and identified six major operations that can be used for editing the extracted sentences including removing extraneous phrases from an extracted sentence combining a reduced sentence with other sentences syntactic transformation substituting phrases in an extracted sentence with their paraphrases substituting phrases with more general or specific descriptions and reordering the extracted sentences we call the operation of removing extraneous phrases from an extracted sentence sentence reductionit is one of the most effective operations that can be used to edit the extracted sentencesreduction can remove material at any granularity a word a prepositional phrase a gerund a toinfinitive or a clausewe use the term quotphrasequot here to refer to any of the above components that can be removed in reductionthe following example shows an original sentence and its reduced form written by a human professional original sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to seereduced sentence by humans the vchip will give parents a device to block out programs they do not want their children to seewe implemented an automatic sentence reduction systeminput to the reduction system includes extracted sentences as well as the original documentoutput of reduction are reduced forms of the extracted sentences which can either be used to produce summaries directly or be merged with other sentencesthe reduction system uses multiple sources of knowledge to make reduction decisions including syntactic knowledge context and statistics computed from a training corpuswe evaluated the system against the output of human professionalsthe program achieved a success rate of 813 meaning that 813 of reduction decisions made by the system agreed with those of humanssentence reduction improves the conciseness of automatically generated summaries making it concise and on targetit can also improve the coherence of generated summaries since extraneous phrases that can potentially introduce incoherece are removedwe collected 500 sentences and their corresponding reduced forms written by humans and found that humans reduced the length of these 500 sentences by 442 on averagethis indicates that a good sentence reduction system can improve the conciseness of generated summaries significantlyin the next section we describe the sentence reduction algorithm in detailsin section 3 we introduce the evaluation scheme used to access the performance of the system and present evaluation resultsin section 4 we discuss other applications of sentence reduction the interaction 
between reduction and other modules in a summarization system and related work on sentence simplicationfinally we the goal of sentence reduction is to quotreduce without major lossquot that is we want to remove as many extraneous phrases as possible from an extracted sentence so that it can be concise but without detracting from the main idea the sentence conveysideally we want to remove a phrase from an extracted sentence only if it is irrelevant to the main topicto achieve this the system relies on multiple sources of knowledge to make reduction decisionswe first introduce the resources in the system and then describe the reduction algorithm the corpusone of the key features of the system is that it uses a corpus consisting of original sentences and their corresponding reduced forms written by humans for training and testing purposethis corpus was created using an automatic program we have developed to automatically analyze humanwritten abstractsthe program called the decomposition program matches phrases in a humanwritten summary sentence to phrases in the original document the humanwritten abstracts were collected from the free daily news service quotcommunicationsrelated headlinesquot provided by the benton foundation the articles in the corpus are news reports on telecommunication related issues but they cover a wide range of topics such as law labor and company mergers database to dateit provides lexical relations between words including synonymy antonymy meronymy entailment or causation these lexical links are used to identify the focus in the local context the syntactic parserwe use the english slot grammar parser developed at ibm to analyze the syntactic structure of an input sentence and produce a sentence parse treethe esg parser not only annotates the syntactic category of a phrase it also annotates the thematic role of a phrase there are five steps in the reduction program step 1 syntactic parsingwe first parse the input sentence using the esg parser and produce the sentence parse treethe operations in all other steps are performed based on this parse treeeach following step annotates each node in the parse tree with additional information such as syntactic or context importance which are used later to determine which phrases can be considered extraneous and thus removedstep 2 grammar checkingin this step we determine which components of a sentence must not be deleted to keep the sentence grammaticalto do this we traverse the parse tree produced in the first step in topdown order and mark for each node in the parse tree which of its children are grammatically obligatorywe use two sources of knowledge for this purposeone source includes simple linguisticbased rules that use the thematic role structure produced by the esg parserfor instance for a sentence the main verb the subject and the object are essential if they exist but a prepositional phrase is not for a noun phrase the head noun is essential but an adjective modifier of the head noun is notthe other source we rely on is the largescale lexicon we described earlierthe information in the lexicon is used to mark the obligatory arguments of verb phrasesfor example for the verb quotconvincequot the lexicon has the following entry this entry indicates that the verb quotconvincequot can be followed by a noun phrase and a prepositional phrase starting with the preposition quotof it can also be followed by a noun phrase and a toinfinitive phrase this information prevents the system from deleting the quotofquot prepositional 
phrase or the toinfinitive that is part of the verb phraseat the end of this step each node in the parse tree including both leaf nodes and intermediate nodes is annotated with a value indicating whether it is grammatically obligatorynote that whether a node is obligatory is relative to its parent node onlyfor example whether a determiner is obligatory is relative to the noun phrase it is in whether a prepositional phrase is obligatory is relative to the sentence or the phrase it is instep 3 context informationin this step the system decides which components in the sentence are most related to the main topic being discussedto measure the importance of a phrase in the local context the system relies on lexical links between wordsthe hypothesis is that the more connected a word is with other words in the local context the more likely it is to be the focus of the local contextwe link the words in the extracted sentence with words in its local context if they are repetitions morphologically related or linked in wordnet through one of the lexical relationsthe system then computes an importance score for each word in the extracted sentence based on the number of links it has with other words and the types of linksthe formula for computing the context importance score for a word w is as follows here i represents the different types of lexical relations the system considered including repetition inflectional relation derivational relation and the lexical relations from wordnetwe assigned a weight to each type of lexical relation represented by li in the formularelations such as repetition or inflectional relation are considered more important and are assigned higher weights while relations such as hypernym are considered less important and assigned lower weightsnu in the formula represents the number of a particular type of lexical links the word w has with words in the local contextafter an importance score is computed for each word each phrase in the entence gets a score by adding up the scores of its children nodes in the parse treethis score indicates how important the phrase is in the local contextstep 4 corpus evidencethe program uses a corpus consisting of sentences reduced by human professionals and their corresponding original sentences to compute how likely humans remove a certain phrasethe system first parsed the sentences in the corpus using esg parserit then marked which subtrees in these parse trees were removed by humansusing this corpus of marked parse trees we can compute how likely a subtree is removed from its parent nodefor example we can compute the probability that the quotwhenquot temporal clause is removed when the main verb is quotgivequot represented as prob or the probability that the toinfinitive modifier of the head noun quotdevicequot is removed represented as probthese probabilities are computed using bayes rulefor example the probability that the quotwhenquot temporal clause is removed when the main verb is quotgivequot prob is computed as the product of prob and prob divided by prob besides computing the probability that a phrase is removed we also compute two other types of probabilities the probability that a phrase is reduced and the probability that a phrase is unchanged at all these corpus probabilities help us capture human practicefor example for sentences like quotthe agency reported that quot quotthe other source says that quot quotthe new study suggests that quot the thatclause following the sayverb in each sentence is very rarely changed at all by 
professionalsthe system can capture this human practice since the probability that thatclause of the verb say or report being unchanged at all will be relatively high which will help the system to avoid removing components in the thatclausethese corpus probabilities are computed beforehand using a training corpusthey are then stored in a table and loaded at running timestep 5 final decisionthe final reduction decisions are based on the results from all the earlier stepsto decide which phrases to remove the system traverses the sentence parse tree which now have been annotated with different types of information from earlier steps in the topdown order and decides which subtrees should be removed reduced or unchangeda subtree is removed only if it is not grammatically obligatory not the focus of the local context and has a reasonable probability of being removed by humansfigure 1 shows sample output of the reduction programthe reduced sentences produced by humans are also provided for comparisonwe define a measure called success rate to evaluate the performance of our sentence reduction programoriginal sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to seethe success rate computes the percentage of system reduction decisions that agree with those of humanswe compute the success rate in the following waythe reduction process can be considered as a series of decisionmaking process along the edges of a sentence parse treeat each node of the parse tree both the human and the program make a decision whether to remove the node or to keep itif a node is removed the subtree with that node as the root is removed as a whole thus no decisions are needed for the descendants of the removed nodeif the node is kept we consider that node as the root and repeat this processsuppose we have an input sentence which has a parse tree shown in figure 2suppose a human reduces the sentence to which can be translated to a series of decisions made along edges in the sentence parse tree as shown in figure 3the symbol quotyquot along an edge means the node it points to will be kept and quotnquot means the node will be removedsuppose the program reduces the sentence to which can be translated similarly to the annotated tree shown in figure 4we can see that along five edges t dg b4a because both the human and the program made decisionstwo out of the five decisions agree so the success rate is 25 the success rate is defined as of edges along which the human and the program have made the same decision success rate the total of edges along which both the human and the progam have made decisions note that the edges along which only the human or the program has made a decision are not considered in the computation of success rate since there is no agreement issue in such casesin the evaluation we used 400 sentences in the corpus to compute the probabilities that a phrase is removed reduced or unchangedwe tested the program on the rest 100 sentencesusing fivefold validation the program achieved an average success rate of 813if we consider the baseline as removing all the prepositional phrases clauses toinfinitives and gerunds the baseline performance is 432we also computed the success rate of program decisions on particular types of phrasesfor the decisions on removing or keeping a clause the system has a success rate of 781 for the decisions on removing or keeping a toinfinitive the system has 
a success rate of 852we found out that the system has a low success rate on removing adjectives of noun phrases or removing adverbs of a sentence or a verb phraseone reason for this is that our probability model can hardly capture the dependencies between a particular adjective and the head noun since the training corpus is not large enough while the other sources of information including grammar or context information provide little evidence on whether an adjective or an adverb should be removedgiven that whether or not an adjective or an adverb is removed does not affect the conciseness of the sentence significantly and the system lacks of reliability in making such decisions we decide not to remove adjectives and adverbson average the system reduced the length of the 500 sentence by 327 while humans reduced it by 418the probabilities we computed from the training corpus covered 58 of instances in the test corpuswhen the corpus probability is absent for a case the system makes decisions based on the other two sources of knowledgesome of the errors made by the system result from the errors by the syntactic parserwe randomly checked 50 sentences and found that 8 of the errors made by the system are due to parsing errorsthere are two main reasons responsible for this relative low percentage of errors resulted from mistakes in parsingone reason is that we have taken some special measures to avoid errors introduced by mistakes in parsingfor example pp attachment is a difficult problem in parsing and it is not rare that a pp is wrongly attachedtherefore we take this into account when marking the obligatory components using subcategorization knowledge from the lexicon we not only look at the pps that are attached to a verb phrase but also pps that are next to the verb phrase but not attached in case it is part of the verb phrasewe also wrote a preprocessor to deal with particular structures that the parser often has problems with such as appositionsthe other reason is that parsing errors do not always result in reduction errorsfor example given a sentence quotthe spokesperson of the university said that quot although thatclause in the sentence may have a complicated structure and the parser gets it wrong the reduction system is not necessarily affected since it may decide in this case to keep thatclause as it is as humans often do so the parsing errors will not matter in this examplethe reduction algorithm we present assumes generic summarization that is we want to generate a summary that includes the most important information in an articlewe can tailor the reduction system to queriesbased summarizationin that case the task of the reduction is not to remove phrases that are extraneous in terms of the main topic of an article but phrases that are not very relevant to users querieswe extended our sentence reduction program to querybased summarization by adding another step in the algorithm to measure the relevance of users queries to phrases in the sentencein the last step of reduction when the system makes the final decision the relevance of a phrase to the query is taken into account together with syntactic context and corpus informationideally the sentence reduction module should interact with other modules in a summarization systemit should be able to send feedback to the extraction module if it finds that a sentence selected by the extraction module may be inappropriate it should also be able to interact with the modules that run after it such as the sentence combination module so that it 
can revise reduction decisions according to the feedback from these modulessome researchers suggested removing phrases or clauses from sentences for certain applications proposed to remove phrases in sentences to produce a telegraphic text that can be used to provide audio scanning service for the blind proposed to remove clauses in sentences before indexing documents for information retrievalboth studies removed phrases based only on their syntactic categories while the focus of our system is on deciding when it is appropriate to remove a phraseother researchers worked on the text simplification problem which usually involves in simplifying text but not removing any phrasesfor example discussed simplifying newspaper text by replacing uncommon words with common words or replacing complicated syntactic structures with simpler structures to assist people with reading disabilities discussed text simplification in generalthe difference between these studies on text simplification and our system is that a text simplification system usually does not remove anything from an original sentence although it may change its structure or words but our system removes extraneous phrases from the extracted sentenceswe present a novel sentence reduction system which removes extraneous phrases from sentences that are extracted from an article in text summarizationthe deleted phrases can be prepositional phrases clauses toinfinitives or gerunds and multiple phrases can be removed form a single sentencethe focus of this work is on determining for a sentence in a particular context which phrases in the sentence are less important and can be removedour system makes intelligent reduction decisions based on multiple sources of knowledge including syntactic knowledge context and probabilities computed from corpus analysiswe also created a corpus consisting of 500 sentences and their reduced forms produced by human professionals and used this corpus for training and testing the systemthe evaluation shows that 813 of reduction decisions made by the system agreed with those of humansin the future we would like to integrate our sentence reduction system with extractionbased summarization systems other than the one we have developed improve the performance of the system further by introducing other sources of knowledge necessary for reduction and explore other interesting applications of the reduction systemthis material is based upon work supported by the national science foundation under grant noiri 9619124 and iri 9618797any opinions findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation
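The evaluation above defines the success rate over decisions made along parse-tree edges, with no decisions counted for descendants of a removed node and only edges on which both the human and the program decided entering the denominator. A minimal sketch of that computation, assuming the parse tree is given as a mapping from node id to child ids and each reduced sentence as the set of retained node ids; the "ROOT" label is illustrative.

def decisions(tree, kept, root="ROOT"):
    # tree: node id -> list of child ids; kept: node ids retained in the
    # reduced sentence. Once a node is removed, none of its descendants
    # generate decisions, matching the evaluation scheme above.
    out = {}
    def visit(node):
        for child in tree.get(node, []):
            out[child] = child in kept
            if out[child]:
                visit(child)
    visit(root)
    return out

def success_rate(tree, human_kept, system_kept):
    human = decisions(tree, human_kept)
    system = decisions(tree, system_kept)
    shared = set(human) & set(system)           # edges on which both decided
    agree = sum(human[e] == system[e] for e in shared)
    return agree / len(shared) if shared else 0.0

In the worked example above, the human and the program decide along five shared edges and agree on two of them, giving a success rate of 2/5.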
A00-1043
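The corpus-evidence step of the reduction algorithm estimates, from parse trees marked with human reduction decisions, how likely a phrase is to be removed, reduced, or left unchanged in a given context. A hedged sketch of that table as simple relative frequencies; the extraction of (phrase label, head word, action) triples from the aligned corpus is assumed to happen elsewhere, and all names are illustrative rather than taken from the system.

from collections import Counter

def removal_probabilities(marked_examples):
    # marked_examples: iterable of (phrase_label, head_word, action) triples
    # extracted from parse trees whose subtrees humans marked as
    # "removed", "reduced", or "unchanged"
    joint = Counter()
    context = Counter()
    for label, head, action in marked_examples:
        joint[(label, head, action)] += 1
        context[(label, head)] += 1

    def prob(label, head, action):
        c = context[(label, head)]
        return joint[(label, head, action)] / c if c else 0.0

    return prob

A call such as prob('when-clause', 'give', 'removed') then approximates the probability, used in the final decision step, that a temporal "when" clause is dropped when the main verb is "give".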
Sentence Reduction for Automatic Text Summarization. We present a novel sentence reduction system for automatically removing extraneous phrases from sentences that are extracted from a document for summarization purposes. The system uses multiple sources of knowledge to decide which phrases in an extracted sentence can be removed, including syntactic knowledge, context information, and statistics computed from a corpus which consists of examples written by human professionals. Reduction can significantly improve the conciseness of automatic summaries. We study a new method to remove extraneous phrases from sentences by using multiple sources of knowledge to decide which phrases in the sentences can be removed. In our approach, decisions about which material to include or delete in the sentence summaries do not rely on relative frequency information on words, but rather on probability models of subtree deletions that are learned from a corpus of parses for sentences and their summaries.
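The context-importance step of the reduction algorithm scores each word by summing, over the lexical link types (repetition, inflection, derivation, WordNet relations), a per-type weight times the number of links of that type, and scores a phrase by summing the scores of its words. A minimal sketch; the numeric weights are illustrative placeholders, since the text only says that repetition and inflection are weighted more heavily than relations such as hypernymy.

LINK_WEIGHTS = {        # illustrative values only; the paper gives no numbers
    "repetition": 1.0,
    "inflection": 0.9,
    "derivation": 0.7,
    "hypernym": 0.3,
}

def word_importance(link_counts):
    # link_counts: mapping from lexical relation type to the number of links
    # the word has with words in the local context
    return sum(LINK_WEIGHTS.get(rel, 0.0) * n for rel, n in link_counts.items())

def phrase_importance(phrase_words, links_by_word):
    # a phrase's score is the sum of the scores of the words it dominates
    return sum(word_importance(links_by_word.get(w, {})) for w in phrase_words)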
advances in domain independent linear text segmentation this paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the stateoftheart intersentence similarity is replaced by rank in the local context boundary locations are discovered by divisive clustering even moderately long documents typically address several topics or different aspects of the same topicthe aim of linear text segmentation is to discover the topic boundariesthe uses of this procedure include information retrieval summarization text understanding anaphora resolution language modelling and improving document navigation for the visually disabled this paper focuses on domain independent methods for segmenting written textwe present a new algorithm that builds on previous work by reynar the primary distinction of our method is the use of a ranking scheme and the cosine similarity measure in formulating the similarity matrixwe propose that the similarity values of short text segments is statistically insignificantthus one can only rely on their order or rank for clusteringexisting work falls into one of two categories lexical cohesion methods and multisource methods the former stem from the work of halliday and hasan they proposed that text segments with similar vocabulary are likely to be part of a coherent topic segmentimplementations of this idea use word stem repetition context vectors entity repetition semantic similarity word distance model and word frequency model to detect cohesionmethods for finding the topic boundaries include sliding window lexical chains dynamic programming agglomerative clustering and divisive clustering lexical cohesion methods are typically used for segmenting written text in a collection to improve information retrieval multisource methods combine lexical cohesion with other indicators of topic shift such as cue phrases prosodic features reference syntax and lexical attraction using decision trees and probabilistic models work in this area is largely motivated by the topic detection and tracking initiative the focus is on the segmentation of transcribed spoken text and broadcast news stories where the presentation format and regular cues can be exploited to improve accuracyour segmentation algorithm takes a list of tokenized sentences as inputa tokenizer and a sentence boundary disambiguation algorithm or eagle may be used to convert a plain text document into the acceptable input formatpunctuation and uninformative words are removed from each sentence using a simple regular expression pattern matcher and a stopword lista stemming algorithm is then applied to the remaining tokens to obtain the word stemsa dictionary of word stem frequencies is constructed for each sentencethis is represented as a vector of frequency countslet fii denote the frequency of word j in sentence ithe similarity between a pair of sentences 1 y for short text segments the absolute value of sim is unreliablean additional occurrence of a common word causes a disproportionate increase in sim unless the denominator is largethus in the context of text segmentation where a segment has typically 100 informative tokens one can only use the metric to estimate the order of similarity between sentences eg a is more similar to b than c furthermore language usage varies throughout a documentfor instance the introduction section of a document is less cohesive than a section which is about a particular topicconsequently it is inappropriate to directly compare the similarity 
values from different regions of the similarity matrixin nonparametric statistical analysis one compares the rank of data sets when the qualitative behaviour is similar but the absolute quantities are unreliablewe present a ranking scheme which is an adaptation of that described in the contrast of the image has been adjusted to highlight the image featureseach value in the similarity matrix is replaced by its rank in the local regionthe rank is the number of neighbouring elements with a lower similarity valuefigure 2 shows an example of image ranking using a 3 x 3 rank mask with output range 0 8for segmentation we used a 11 x 11 rank maskthe output is expressed as a ratio are to circumvent normalisation problems of elements with a lower value to demonstrate the effect of image ranking the process was applied to the matrix shown in figure 1 to produce figure 32notice the contrast has been improved significantlyfigure 4 illustrates the more subtle effects of our ranking scheme r is the rank of which is a sine wave with decaying mean amplitude and frequency the final process determines the location of the topic boundariesthe method is based on reynar maximisation algorithm a text segment is defined by two sentences i j this is represented as a square region along the diagonal of the rank matrixlet sij denote the sum of the rank values in a segment and aij 2 be the inside areab b1 1974 is a list of in coherent text segments sk and ak refers to the sum of rank and area of segment k in bd is the inside density of b ak to initialise the process the entire document is placed in b as one coherent text segmenteach step of the process splits one of the segments in bthe split point is a potential boundary which maximises d figure 5 shows a working examplethe number of segments to generate in is determined automaticallyden is the inside density of n segments and sd den den1 is the gradientfor a document with b potential boundaries b steps of divisive clustering generates d d and bd od an unusually large reduction in 6d suggests the optiinal clustering has been obtained3 the running time of each step is dominated by the computation of skgiven sij is constant our algorithm precomputes all the values to improve speed performancethe procedure computes the values along diagonals starting from the main diagonal and works towards the cornerthe method has a complexity of order 1715n2let rij refer to the rank value in the rank matrix r and s to the sum of rank matrixgiven r of size n x 77 s is computed in three steps figure 8 shows the result of applying this procedure to the rank matrix in figure 5the definition of a topic segment ranges from complete stories to summaries given the quality of an algorithm is task dependent the following experiments focus on the relative performanceour evaluation strategy is a variant of that described in and the tdt segmentation task we assume a good algorithm is one that finds the most prominent topic boundariesan artificial test corpus of 700 samples is used to assess the accuracy and speed performance of segmentation algorithmsa sample is a concatenation of ten text segmentsa segment is the first n sentences of a randomly selected document from the brown corpusa sample is characterised by the range of n the corpus was generated by an automatic procedure5table 1 presents the corpus statistics p pp pp speed performance is measured by the average number of cpu seconds required to process a test sample6segmentation accuracy is measured by the error metric proposed in low error 
probability indicates high accuracyother performance measures include the popular precision and recall metric fuzzy pr and edit distance the problems associated with these metrics are discussed in five degenerate algorithms define the baseline for the experimentsb does not propose any boundariesb reports all potential boundaries as real boundariesb partitions the sample into regular segmentsb randomly selects any number of boundaries as real boundariesb randomly selects b boundaries as real boundariesthe accuracy of the last two algorithms are computed analyticallywe consider the status of in potential boundaries as a bit string the terms p awl p in equation 6 corresponds to p and p 1 pequation 7 8 and 9 gives the general form of p b and berm respectivelytable 2 presents the experimental resultsthe values in row two and three four and five are not actually the samehowever their differences are insignificant according to the kolmogorovsmirnov or kstest we compare three versions of the texttiling algorithm h94 is hearst c implementation with default parametersh94 uses the recommended parameters k 6 w 20h94 is my implementation of the algorithmexperimental result shows h94 and h94 are more accurate than h94we suspect this is due to the use of a different stopword list and stemming algorithmfive versions of reynar optimisation algorithm were evaluatedr98 and r98 are exact implementations of his maximisation and minimisation algorithmr98 is my version of the maximisation algorithm which uses the cosine coefficient instead of dot density for measuring similarityit incorporates the optimisations described in section 34r98 is the modularised version of r98 for experimenting with different similarity measuresr98 uses a variant of kozima semantic similarity measure to compute block similarityword similarity is a function of word cooccurrence statistics in the given documentwords that belong to the same sentence are considered to be relatedgiven the cooccurrence frequencies f the transition probability matrix t is computed by equation 10equation 11 defines our spread activation scheme s denotes the word similarity matrix x is the number of activation steps and norm converts a matrix y into a transition matrix x 5 was used in the experimentexperimental result shows the cosine coefficient and our spread activation method improved segmentation accuracythe speed optimisations significantly reduced the execution timewe compare three versions of segmenter k98 is the original perl implementation of the algorithm k98 is my implementation of the algorithmk98 is a version of k98 which uses a document specific chain breaking strategythe distribution of link distances are used to identify unusually long linksthe threshold is a function p c x vt of the mean p and variance we found c 1 works well in practicetable 5 summarises the experimental resultsk98 performed performed significantly better than k98this is due to the use of a different partofspeech tagger and shallow parserthe difference in speed is largely due to the programming languages and term clustering strategiesour chain breaking strategy improved accuracy with k98 two versions of our algorithm were developed c99 and c99the former is an exact implementation of the algorithm described in this paperthe latter is given the expected number of topic segments for fair comparison with r98both algorithms used a 11 x 11 ranking maskthe first experiment focuses on the impact of our automatic termination strategy on c99 c99 is marginally more accurate than c99this 
indicates our automatic termination strategy is effective but not optimalthe minor reduction in speed performance is acceptablethe second experiment investigates the effect of different ranking mask size on the performance of c99 execution time increases with mask sizea 1 x 1 ranking mask reduces all the elements in the rank matrix to zerointerestingly the increase in ranking mask size beyond 3 x 3 has insignificant effect on segmentation accuracythis suggests the use of extrema for clustering has a greater impact on accuracy than linearising the similarity scores experimental result shows our algorithm c99 is more accurate than existing algorithmsa twofold increase in accuracy and sevenfold increase in speed was achieved with r98if one disregards segmentation accuracy h94 has the best algorithmic performance c99 k98 and r98 are all polynomial time algorithmsthe significance of our results has been confirmed by both ttest and kstesta segmentation algorithm has two key elements a clustering strategy and a similarity measureour results show divisive clustering is more precise than sliding window and lexical chains for locating topic boundariesfour similarity measures were examinedthe cosine coefficient and dot density measure yield similar resultsour spread activation based semantic measure improved accuracythis confirms that although kozima approach is computationally expensive it does produce more precise segmentationthe most significant improvement was due to our ranking scheme which linearises the cosine coefficientour experiments demonstrate that given insufficient data the qualitative behaviour of the cosine measure is indeed more reliable than the actual valuesalthough our evaluation scheme is sufficient for this comparative study further research requires a large scale task independent benchmarkit would be interesting to compare c99 with the multisource method described in using the tdt corpuswe would also like to develop a linear time and multisource version of the algorithmthis paper has benefitted from the comments of mary mcgee wood and the anonymous reviewersthanks are due to my parents and department for making this work possible jeffrey reynar for discussions and guidance on the segmentation problem hideki kozima for help on the spread activation measure minyen kan and marti hearst for their segmentation algorithms daniel oram for references to image processing techniques magnus rattray and stephen marsland for help on statistics and mathematics
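The Kozima-style spread activation measure used by one of the R98 variants above is described only by reference to equations 10 and 11, which are not reproduced in the text. The following Python sketch gives one plausible reading, assuming that equation 10 row-normalises the sentence-level co-occurrence counts into a transition matrix T and that equation 11 accumulates x activation steps (x = 5 in the reported experiment) before re-normalising; both assumptions are mine rather than the author's.

    import numpy as np

    def spread_activation_similarity(cooc, steps=5):
        # cooc: square numpy matrix of word co-occurrence frequencies f(w_i, w_j),
        # counted over words that appear in the same sentence.
        # Assumed form of equation 10: row-normalise the counts into a
        # transition probability matrix T.
        row_sums = cooc.sum(axis=1, keepdims=True)
        T = cooc / np.where(row_sums == 0, 1, row_sums)
        # Assumed form of equation 11: let activation spread for `steps`
        # steps, accumulate the result, and re-normalise (the norm(.)
        # operation) so each row is again a probability distribution.
        S = np.zeros_like(T)
        Tx = np.eye(T.shape[0])
        for _ in range(steps):
            Tx = Tx @ T          # activation after one more step
            S += Tx
        s_sums = S.sum(axis=1, keepdims=True)
        return S / np.where(s_sums == 0, 1, s_sums)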
A00-2004
Advances in domain independent linear text segmentation. This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state-of-the-art. Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering. We design an artificial dataset built by concatenating short pieces of text that have been extracted from the Brown corpus.
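The summary credits much of the accuracy gain to replacing raw inter-sentence similarity values by their rank in the local context. A minimal sketch of that ranking step, assuming a cosine similarity matrix over sentences, is shown below; the 11 x 11 mask size follows the experiments reported above, while the treatment of cells near the matrix border is an assumption.

    import numpy as np

    def rank_transform(sim, mask_size=11):
        # Replace each similarity value by the proportion of neighbouring
        # cells (inside a mask_size x mask_size window) whose value is lower.
        n = sim.shape[0]
        r = mask_size // 2
        rank = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                lo_i, hi_i = max(0, i - r), min(n, i + r + 1)
                lo_j, hi_j = max(0, j - r), min(n, j + r + 1)
                window = sim[lo_i:hi_i, lo_j:hi_j]
                neighbours = window.size - 1      # exclude the centre cell
                if neighbours > 0:
                    rank[i, j] = (window < sim[i, j]).sum() / neighbours
        return rank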
a simple approach to building ensembles of naive bayesian classifiers for word sense disambiguation this paper presents a corpusbased approach to word sense disambiguation that builds an ensemble of naive bayesian classifiers each of which is based on lexical features that represent cooccurring words in varying sized windows of context despite the simplicity of this approach empirical results disamthe widely studied nouns show that such an ensemble achieves accuracy rivaling the best previously published results word sense disambiguation is often cast as a problem in supervised learning where a disambiguator is induced from a corpus of manually sensetagged text using methods from statistics or machine learningthese approaches typically represent the context in which each sensetagged instance of a word occurs with a set of linguistically motivated featuresa learning algorithm induces a representative model from these features which is employed as a classifier to perform disambiguationthis paper presents a corpusbased approach that results in high accuracy by combining a number of very simple classifiers into an ensemble that performs disambiguation via a majority votethis is motivated by the observation that enhancing the feature set or learning algorithm used in a corpusbased approach does not usually improve disambiguation accuracy beyond what can be attained with shallow lexical features and a simple supervised learning algorithmfor example a naive bayesian classifier is based on a blanket assumption about the interactions among features in a sensetagged corpus and does not learn a representative modeldespite making such an assumption this proves to be among the most accurate techniques in comparative studies of corpusbased word sense disambiguation methodologies these studies represent the context in which an ambiguous word occurs with a wide variety of featureshowever when the contribution of each type of feature to overall accuracy is analyzed shallow lexical features such as cooccurrences and collocations prove to be stronger contributors to accuracy than do deeper linguistically motivated features such as partofspeech and verbobject relationshipsit has also been shown that the combined accuracy of an ensemble of multiple classifiers is often significantly greater than that of any of the individual classifiers that make up the ensemble in natural language processing ensemble techniques have been successfully applied to part ofspeech tagging and parsing when combined with a history of disambiguation success using shallow lexical features and naive bayesian classifiers these findings suggest that word sense disambiguation might best be improved by combining the output of a number of such classifiers into an ensemblethis paper begins with an introduction to the naive bayesian classifierthe features used to represent the context in which ambiguous words occur are presented followed by the method for selecting the classifiers to include in the ensemblethen the line and interesi data is describedexperimental results disambiguating these words with an ensemble of naive bayesian classifiers are shown to rival previously published resultsthis paper closes with a discussion of the choices made in formulating this methodology and plans for future worka naive bayesian classifier assumes that all the feature variables representing a problem are conditionally independent given the value of a classification variablein word sense disambiguation the context in which an ambiguous word occurs is 
represented by the feature variables and the sense of the ambiguous word is represented by the classification variable in this paper all feature variables fi are binary and represent whether or not a particular word occurs within some number of words to the left or right of an ambiguous word ie a window of contextfor a naive bayesian classifier the joint probability of observing a certain combination of contextual features with a particular sense is expressed as the parameters of this model are p and fils the sufficient statistics ie the summaries of the data needed for parameter estimation are the frequency counts of the events described by the interdependent variables in this paper these counts are the number of sentences in the sensetagged text where the word represented by fi occurs within some specified window of context of the ambiguous word when it is used in sense s any parameter that has a value of zero indicates that the associated word never occurs with the specified sense valuethese zero values are smoothed by assigning them a very small default probabilityonce all the parameters have been estimated the model has been trained and can be used as a classifier to perform disambiguation by determining the most probable sense for an ambiguous word given the context in which it occursthe contextual features used in this paper are binary and indicate if a given word occurs within some number of words to the left or right of the ambiguous wordno additional positional information is contained in these features they simply indicate if the word occurs within some number of surrounding wordspunctuation and capitalization are removed from the windows of contextall other lexical items are included in their original form no stemming is performed and noncontent words remainthis representation of context is a variation on the bagofwords feature set where a single window of context includes words that occur to both the left and right of the ambiguous wordan early use of this representation is described in where word sense disambiguation is performed with a naive bayesian classifierthe work in this paper differs in that there are two windows of context one representing words that occur to the left of the ambiguous word and another for those to the rightthe left and right windows of context have nine different sizes 0 1 2 3 4 5 10 25 and 50 wordsthe first step in the ensemble approach is to train a separate naive bayesian classifier for each of the 81 possible combination of left and right window sizesnaive_bayes represents a classifier where the model parameters have been estimated based on frequency counts of shallow lexical features from two windows of context one including 1 words to the left of the ambiguous word and the other including r words to the rightnote that naive_bayes includes no words to the left or right this classifier acts as a majority classifier that assigns every instance of an ambiguous word to the most frequent sense in the training dataonce the individual classifiers are trained they are evaluated using previously heldout test datathe crucial step in building an ensemble is selecting the classifiers to include as membersthe approach here is to group the 81 naive bayesian classifiers into general categories representing the sizes of the windows of contextthere are three such ranges narrow corresponds to windows 0 1 and 2 words wide medium to windows 3 4 and 5 words wide and wide to windows 10 25 and 50 words widethere are nine possible range categories since there are 
separate left and right windowsfor example naive_bayes belongs to the range category since it is based on a one word window to the left and a three word window to the rightthe most accurate classifier in each of the nine range categories is selected for inclusion in the ensembleeach of the nine member classifiers votes for the most probable sense given the particular context represented by that classifier the ensemble disambiguates by assigning the sense that receives a majority of the votesthe line data was created by by tagging every occurrence of line in the acldci wall street journal corpus and the american printing house for the blind corpus with one of six possible wordnet sensesthese senses and their frequency distribution are shown in table 1this data has since been used in studies by and in that work as well as in this paper a subset of the corpus is utilized such that each sense is uniformly distributed this reduces the accuracy of the majority classifier to 17the uniform distribution is created by randomly sampling 349 sensetagged examples from each sense resulting in a training corpus of 2094 sensetagged sentencesthe interest data was created by by tagging all occurrences of interest in the acldci wall street journal corpus with senses from the longman dictionary of contemporary englishthis data set was subsequently used for word sense disambiguation experiments by and the previous studies and this paper use the entire 2368 sensetagged sentence corpus in their experimentsthe senses and their fresense count product 2218 written or spoken text 405 telephone connection 429 formation of people or things queue 349 an artificial division boundary 376 a thin flexible object cord 371 total 4148 table 1 distribution of senses for line the experiments in this paper and previous work use a uniformly distributed subset of this corpus where each sense occurs 349 times sense count money paid for the use of money 1252 a share in a company or business 500 readiness to give attention 361 advantage advancement or favor 178 activity that one gives attention to 66 causing attention to be given to 11 total 2368 table 2 distribution of senses for interest the experiments in this paper and previous work use the entire corpus where each sense occurs the number of times shown above quency distribution are shown in table 2unlike line the sense distribution is skewed the majority sense occurs in 53 of the sentences while the smallest minority sense occurs in less than 1eightyone naive bayesian classifiers were trained and tested with the line and interest datafivefold cross validation was employed all of the sensetagged examples for a word were randomly shuffled and divided into five equal foldsfour folds were used to train the naive bayesian classifier while the remaining fold was randomly divided into two equal sized test setsthe first devtest was used to evaluate the individual classifiers for inclusion in the ensemblethe second test was used to evaluate the accuracy of the ensemblethus the training data for each word consists of 80 of the available sensetagged text while each of the test sets contains 10this process is repeated five times so that each fold serves as the source of the test data oncethe average accuracy of the individual naive bayesian classifiers across the five folds is reported in tables 3 and 4the standard deviations were between 01 and 025 and are not shown given their relative consistencyeach classifier is based upon a distinct representation of context since each employs a 
different combination of right and left window sizesthe size and range of the left window of context is indicated along the horizontal margin in tables 3 and 4 while the right window size and range is shown along the vertical marginthus the boxes that subdivide each table correspond to a particular range categorythe classifier that achieves the highest accuracy in each range category is included as a member of the ensemblein case of a tie the classifier with the smallest total window of context is included in the ensemblethe most accurate single classifier for line is naive_bayes which attains accuracy of 84 the accuracy of the ensemble created from the most accurate classifier in each of the range categories is 88the single most accurate classifier for interest is naive_bayes which attains accuracy of 86 while the ensemble approach reaches 89the increase in accuracy achieved by both ensembles over the best individual classifier is statistically significant as judged by mcnemar test with p 01these experiments use the same sensetagged corpora for interest and line as previous studiessummaries of previous results in tables 5 and 6 show that the accuracy of the naive bayesian ensemble is comparable to that of any other approachhowever due to variations in experimental methodologies it can not be concluded that the differences among the most accurate methods are statistically significantfor example in this work fivefold cross validation is employed to assess accuracy while train and test using 100 randomly sampled sets of datasimilar differences in training and testing methodology exist among the other studiesstill the results in this paper are encouraging due to the simplicity of the approachthe interest data was first studied by they employ a representation of context that includes the partofspeech of the two words surrounding interest a morphological feature indicating whether or not interest is singular or plural and the three most statistically significant cooccurring words in the sentence with interest as determined by a test of independencethese features are abbreviated as pos morph and cooccur in table 5a decomposable probabilistic model is induced from the sensetagged corpora using a backward sequential search where candidate models are evaluated with the loglikelihood ratio testthe selected model was used as a probabilistic classifier on a heldout set of test data and achieved accuracy of 78the interest data was included in a study by who represent the context of an ambiguous word with the partofspeech of three words to the left and right of interest a morphological feature indicating if interest is singular or plural an unordered set of frequently occurring keywords that surround interest local collocations that include interest and verbobject syntactic relationshipsthese features are abbreviated pos morph cooccur collocates and verbobj in table 5a nearestneighbor classifier was employed and achieved an average accuracy of 87 over repeated trials using randomly drawn training and test sets and present studies that utilize the original bruce and wiebe feature set and include the interest datathe first compares a range of probabilistic model selection methodologies and finds that none outperform the naive bayesian classifier which attains accuracy of 74the second compares a range of machine learning algorithms and finds that a decision tree learner and a naive bayesian classifier are most accuratethe line data was first studied by they evaluate the disambiguation accuracy of a naive 
bayesian classifier a content vector and a neural networkthe context of an ambiguous word is represented by a bagofwords where the window of context is two sentences widethis feature set is abbreviated as 2 sentence bow in table 6when the naive bayesian classifier is evaluated words are not stemmed and capitalization remainshowever with the content vector and the neural network words are stemmed and words from a stoplist are removedthey report no significant differences in accuracy among the three approaches the naive bayesian classifier achieved 71 accuracy the content vector 72 and the neural network 76the line data was studied again by where seven different machine learning methodologies are comparedall learning algorithms represent the context of an ambiguous word using the bagofwords with a two sentence window of contextin these experiments words from a stoplist are removed capitalization is ignored and words are stemmedthe two most accurate methods in this study proved to be a naive bayesian classifier and a perceptron the line data was recently revisited by both and the former take an ensemble approach where the output from two neural networks is combined one network is based on a representation of local context while the other represents topical contextthe latter utilize a naive bayesian classifierin both cases context is represented by a set of topical and local featuresthe topical features correspond to the openclass words that occur in a two sentence window of contextthe local features occur within a window of context three words to the left and right of the ambiguous word and include cooccurrence features as well as the partofspeech of words in this windowthese features are represented as local topical bow and pos in table 6 report accuracy of 87 while report accuracy of 84the word sense disambiguation ensembles in this paper have the following characteristics each point is discussed belowthe naive bayesian classifier has emerged as a consistently strong performer in a wide range of comparative studies of machine learning methodologiesa recent survey of such results as well as possible explanations for its success is presented in a similar finding has emerged in word sense disambiguation where a number of comparative studies have all reported that no method achieves significantly greater accuracy than the naive bayesian classifier in many ensemble approaches the member classifiers are learned with different algorithms that are trained with the same datafor example an ensemble could consist of a decision tree a neural network and a nearest neighbor classifier all of which are learned from exactly the same set of training datathis paper takes a different approach where the learning algorithm is the same for all classifiers but the training data is differentthis is motivated by the belief that there is more to be gained by varying the representation of context than there is from using many different learning algorithms on the same datathis is especially true in this domain since the naive bayesian classifier has a history of success and since there is no generally agreed upon set of features that have been shown to be optimal for word sense disambiguationshallow lexical features such as cooccurrences and collocations are recognized as potent sources of disambiguation informationwhile many other contextual features are often employed it is not clear that they offer substantial advantagesfor example report that local collocations alone achieve 80 accuracy disambiguating interest 
while their full set of features result in 87preliminary experiments for this paper used feature sets that included collocates cooccurrences partof speech and grammatical information for surrounding wordshowever it was clear that no combination of features resulted in disambiguation accuracy significantly higher than that achieved with cooccurrence featuresthe most accurate classifier from each of nine possible category ranges is selected as a member of the ensemblethis is based on preliminary experiments that showed that member classifiers with similar sized windows of context often result in little or no overall improvement in disambiguation accuracythis was expected since slight differences in window sizes lead to roughly equivalent representations of context and classifiers that have little opportunity for collective improvementfor example an ensemble was created for interest using the nine classifiers in the range category the accuracy of this ensemble was 84 slightly less than the most accurate individual classifiers in that range which achieved accuracy of 86early experiments also revealed that an ensemble based on a majority vote of all 81 classifiers performed rather poorlythe accuracy for interest was approximately 81 and line was disambiguated with slightly less than 80 accuracythe lesson taken from these results was that an ensemble should consist of classifiers that represent as differently sized windows of context as possible this reduces the impact of redundant errors made by classifiers that represent very similarly sized windows of contextthe ultimate success of an ensemble depends on the ability to select classifiers that make complementary errorsthis is discussed in the context of combining partofspeech taggers in they provide a measure for assessing the complementarity of errors between two taggers that could be adapted for use with larger ensembles such as the one discussed here which has nine membersin this paper ensemble disambiguation is based on a simple majority vote of the nine member classifiersan alternative strategy is to weight each vote by the estimated joint probability found by the naive bayesian classifierhowever a preliminary study found that the accuracy of a naive bayesian ensemble using a weighted vote was poorfor interest it resulted in accuracy of 83 while for line it was 82the simple majority vote resulted in accuracy of 89 for interest and 88 for linea number of issues have arisen in the course of this work that merit further investigationthe simplicity of the contextual representation can lead to large numbers of parameters in the naive bayesian model when using wide windows of contextsome combination of stoplists and stemming could reduce the numbers of parameters and thus improve the overall quality of the parameter estimates made from the training datain addition to simple cooccurrence features the use of collocation features seems promisingthese are distinct from cooccurrences in that they are words that occur in close proximity to the ambiguous word and do so to a degree that is judged statistically significantone limitation of the majority vote in this paper is that there is no mechanism for dealing with outcomes where no sense gets a majority of the votesthis did not arise in this study but will certainly occur as naive bayesian ensembles are applied to larger sets of datafinally further experimentation with the size of the windows of context seems warrantedthe current formulation is based on a combination of intuition and empirical 
studyan algorithm to determine optimal windows sizes is currently under developmentthis paper shows that word sense disambiguation accuracy can be improved by combining a number of simple classifiers into an ensemblea methodology for formulating an ensemble of naive bayesian classifiers is presented where each member classifier is based on cooccurrence features extracted from a different sized window of contextthis approach was evaluated using the widely studied nouns line and interest which are disambiguated with accuracy of 88 and 89 which rivals the best previously published resultsthis work extends ideas that began in collaboration with rebecca bruce and janyce wiebeclaudia leacock and raymond mooney provided valuable assistance with the line datai am indebted to an anonymous reviewer who pointed out the importance of separate test and devtest data setsa preliminary version of this paper appears in
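As a concrete illustration of the ensemble just described, the sketch below extracts binary bag-of-words features from separate left and right windows around a target word and combines nine member classifiers by a simple majority vote. The `predict` callables stand in for trained naive Bayesian classifiers and are hypothetical; no fallback is provided for the case where no sense receives a majority, which the paper notes did not arise in its experiments.

    from collections import Counter

    def window_features(tokens, target_index, left, right):
        # Binary bag-of-words features: the set of words occurring within
        # `left` words to the left and `right` words to the right of the
        # target (no positional information, matching the paper).
        lo = max(0, target_index - left)
        hi = min(len(tokens), target_index + right + 1)
        return set(tokens[lo:target_index]) | set(tokens[target_index + 1:hi])

    def majority_vote(members, tokens, target_index):
        # Each member is a (left, right, predict) triple, where `predict`
        # maps a feature set to a sense label.
        votes = Counter(
            predict(window_features(tokens, target_index, left, right))
            for left, right, predict in members
        )
        return votes.most_common(1)[0][0]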
A00-2009
A simple approach to building ensembles of naive Bayesian classifiers for word sense disambiguation. This paper presents a corpus-based approach to word sense disambiguation that builds an ensemble of naive Bayesian classifiers, each of which is based on lexical features that represent co-occurring words in varying sized windows of context. Despite the simplicity of this approach, empirical results disambiguating the widely studied nouns line and interest show that such an ensemble achieves accuracy rivaling the best previously published results. We present an ensemble of eighty-one naive Bayesian classifiers based on varying sized windows of context to the left and right of the target word that define co-occurrence features.
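For completeness, here is a sketch of the scoring step a single member classifier might perform under the conditional-independence model described in the paper: the log probability of a sense is the log prior plus the sum of log conditional feature probabilities, with zero counts replaced by a small default value. The particular default of 1e-6 and the dictionary-based interface are assumptions made for illustration.

    import math

    def nb_log_score(sense_prior, cond_prob, features, default=1e-6):
        # cond_prob maps a feature to P(feature present | sense), estimated
        # from frequency counts in the sense-tagged training text; features
        # maps each feature to True/False for the sentence being classified.
        score = math.log(sense_prior)
        for f, present in features.items():
            p = cond_prob.get(f, default)
            score += math.log(p if present else max(1.0 - p, default))
        return score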
a maximumentropyinspired parser we present a new parser for parsing down to penn treebank style parse trees that achieves 901 average precisionrecall for sentences of 40 and less and for of length 100 and less when trained and tested on the previously established 59101517 quotstandardquot sections of the wall street journal treebank this represents a 13 decrease in error rate over the best singleparser results on this corpus 9 the major technical innovation is the use of a quotmaximumentropyinspiredquot model for conditioning and smoothing that let us successfully to test and combine many different conditioning events we also present some partial results showing the effects of different conditioning information including a surprising 2 improvement due to guessing the lexical head preterminal before guessing the lexical head we present a new parser for parsing down to penn treebank style parse trees 16 that achieves 901 average precisionrecall for sentences of length 40 and 895 for sentences of length 100 when trained and tested on the previously established 59101517 quotstandardquot sections of the wall street journal treebankthis represents a 13 decrease in error rate over the best singleparser results on this corpus 9following 510 our parser is based upon a probabilistic generative modelthat is for all sentences s and all parses 7r the parser assigns a probability p p the equality holding when we restrict consideration to 7r whose yield this research was supported in part by nsf grant lis sbr 9720368the author would like to thank mark johnson and all the rest of the brown laboratory for linguistic information processing is s then for any s the parser returns the parse ir that maximizes this probabilitythat is the parser implements the function arg maxrp arg maxirp arg maxrpwhat fundamentally distinguishes probabilistic generative parsers is how they compute p and it is to that topic we turn nextthe model assigns a probability to a parse by a topdown process of considering each constituent c in ir and for each c first guessing the preterminal of c t then the lexical head of c h and then the expansion of c into further constituents ethus the probability of a parse is given by the equation where 1 is the label of c verbphrase etc and h is the relevant history of c information outside c that our probability model deems important in determining the probability in questionmuch of the interesting work is determining what goes into h whenever it is clear to which constituent we are referring we omit the in eg hin this notation the above equation takes the following form next we describe how we assign a probability to the expansion e of a constituentin section 5 we present some results in which the possible expansions of a constituent are fixed in advanced by extracting a treebank grammar 3 from the training corpusthe method that gives the best results however uses a markov grammar a method for assigning probabilities to any possible expansion using statistics gathered from the training corpus 61015the method we use follows that of 10in this scheme a traditional probabilistic contextfree grammar rule can be thought of as consisting of a lefthand side with a label 1 drawn from the nonterminal symbols of our grammar and a righthand side that is a sequence of one or more such symbolsfor us the nonterminal symbols are those of the treebank augmented by the symbols aux and auxg which have been assigned deterministically to certain auxiliary verbs such as quothavequot or quothavingquotfor each 
expansion we distinguish one of the righthand side labels as the quotmiddlequot or quotheadquot symbol mm is the constituent from which the head lexical item h is obtained according to deterministic rules that pick the head of a constituent from among the heads of its childrento the left of m is a sequence of one or more left labels li including the special termination symbol a which indicates that there are no more symbols to the left and similarly for the labels to the right rithus an expansion e looks like the expansion is generated by guessing first m then in order l1 through l1 and similarly for ri through in a pure markov pcfg we are given the lefthand side label 1 and then probabilistically generate the righthand side conditioning on no information other than 1 and previously generated pieces of the righthand side itselfin the simplest of such models a zeroorder markov grammar each label on the righthand side is generated conditioned only on that is according to the distributions p p and pmore generally one can condition on the m previously generated labels thereby obtaining an mthorder markov grammarso for example in a secondorder markov pcfg l2 would be conditioned on l1 and m in our complete model of course the probability of each label in the expansions is also conditioned on other material as specified in equation 1 eg pthus we would use pnote that the as on both ends of the expansion in expression 2 are conditioned just like any other label in the expansionthe major problem confronting the author of a generative parser is what information to use to condition the probabilities required in the model and how to smooth the empirically obtained probabilities to take the sting out of the sparse data problems that are inevitable with even the most modest conditioningfor example in a secondorder markov grammar we conditioned the l2 label according to the distribution palso remember that h is a placeholder for any other information beyond the constituent c that may be useful in assigning c a probabilityin the past few years the maximum entropy or loglinear approach has recommended itself to probabilistic model builders for its flexibility and its novel approach to smoothing 117a complete review of loglinear models is beyond the scope of this paperrather we concentrate on the aspects of these models that most directly influenced the model presented hereto compute a probability in a loglinear model one first defines a set of quotfeaturesquot functions from the space of configurations over which one is trying to compute probabilities to integers that denote the number of times some pattern occurs in the inputin our work we assume that any feature can occur at most once so features are booleanvalued 0 if the pattern does not occur 1 if it doesin the parser we further assume that features are chosen from certain feature schemata and that every feature is a boolean conjunction of subfeaturesfor example in computing the probability of the head preterminal t we might want a feature schema f that returns 1 if the observed preterminal of c t and the label of c 1 and zero otherwisethis feature is obviously composed of two subfeatures one recognizing t the other 1if both return 1 then the feature returns 1now consider computing a conditional probability p with a set of features h that connect a to the history h in a loglinear model the probability function takes the following form here the ai are weights between negative and positive infinity that indicate the relative importance of a feature the 
more relevant the feature to the value of the probability the higher the absolute value of the associated athe function z called the partition function is a normalizing constant so the probabilities over all a sum to onenow for our purposes it is useful to rewrite this as a sequence of multiplicative functions gi for 0 i j here go 11z and gi eai fithe intuitive idea is that each factor gi is larger than one if the feature in question makes the probability more likely one if the feature has no effect and smaller than one if it makes the probability less likelymaximumentropy models have two benefits for a parser builderfirst as already implicit in our discussion factoring the probability computation into a sequence of values corresponding to various quotfeaturesquot suggests that the probability model should be easily changeable just change the set of features usedthis point is emphasized by ratnaparkhi in discussing his parser 17second and this is a point we have not yet mentioned the features used in these models need have no particular independence of one anotherthis is useful if one is using a loglinear model for smoothingthat is suppose we want to compute a conditional probability p but we are not sure that we have enough examples of the conditioning event b c in the training corpus to ensure that the empirically obtained probability p is accuratethe traditional way to handle this is also to compute p and perhaps p as well and take some combination of these values as one best estimate for pthis method is known as quotdeleted interpolationquot smoothingin maxentropy models one can simply include features for all three events f1 f2 and f3 and combine them in the model according to equation 3 or equivalently equation 4the fact that the features are very far from independent is not a concernnow let us note that we can get an equation of exactly the same form as equation 4 in the following fashion note that the first term of the equation gives a probability based upon little conditioning information and that each subsequent term is a number from zero to positive infinity that is greater or smaller than one if the new information being considered makes the probability greater or smaller than the previous estimateas it stands this last equation is pretty much contentfreebut let us look at how it works for a particular case in our parsing schemeconsider the probability distribution for choosing the preterminal for the head of a constituentin equation 1 we wrote this as pas we discuss in more detail in section 5 several different features in the context surrounding c are useful to include in h the label head preterminal and head of the parent of c the label of c left sibling and the label of the grandparent of c that is we wish to compute pwe can now rewrite this in the form of equation 5 as follows here we have sequentially conditioned on steadily increasing portions of c historyin many cases this is clearly warrantedfor example it does not seem to make much sense to condition on say hp without first conditioning on tiin other cases however we seem to be conditioning on apples and oranges so to speakfor example one can well imagine that one might want to condition on the parent lexical head without conditioning on the left sibling or the grandparent labelone way to do this is to modify the simple version shown in equation 6 to allow this note the changes to the last three terms in equation 7rather than conditioning each term on the previous ones they are now conditioned only on those aspects of 
the history that seem most relevantthe hope is that by doing this we will have less difficulty with the splitting of conditioning events and thus somewhat less difficulty with sparse datawe make one more point on the connection of equation 7 to a maximum entropy formulationsuppose we were in fact going to compute a true maximum entropy model based upon the features used in equation 7 ii f2 f3 this requires finding the appropriate ais for equation 3 which is accomplished using an algorithm such as iterative scaling ii in which values for the ai are initially quotguessedquot and then modified until they converge on stable valueswith no prior knowledge of values for the ai one traditionally starts with ai 0 this being a neutral assumption that the feature has neither a positive nor negative impact on the probability in questionwith some prior knowledge nonzero values can greatly speed up this process because fewer iterations are required for convergencewe comment on this because in our example we can substantially speed up the process by choosing values picked so that when the maximumentropy equation is expressed in the form of equation 4 the gi have as their initial values the values of the corresponding terms in equation 7now we observe that if we were to use a maximumentropy approach but run iterative scaling zero times we would in fact just have equation 7the major advantage of using equation 7 is that one can generally get away without computing the partition function zin the simple form it is clear that z 1in the more interesting version equation 7 this is not true in general but one would not expect it to differ much from one and we assume that as long as we are not publishing the raw probabilities the difference from one should be unimportantas partitionfunction calculation is typically the major online computational problem for maximumentropy models this simplifies the model significantlynaturally the distributions required by equation 7 cannot be used without smoothingin a pure maximumentropy model this is done by feature selection as in ratnaparkhi maximumentropy parser 17while we could have smoothed in the same fashion we choose instead to use standard deleted interpolationwe created a parser based upon the maximumentropyinspired model of the last section smoothed using standard deleted interpolationas the generative model is topdown and we use a standard bottomup bestfirst probabilistic chart parser 27 we use the chart parser as a first pass to generate candidate possible parses to be evaluated in the second pass by our probabilistic modelfor runs with the generative model based upon markov grammar statistics the first pass uses the same statistics but conditioned only on standard pcfg informationthis allows the second pass to see expansions not present in the training corpuswe use the gathered statistics for all observed words even those with very low counts though obviously our deleted interpolation smoothing gives less emphasis to observed probabilities for rare wordswe guess the preterminals of words that are not observed in the training data using statistics on capitalization hyphenation word endings and the probability that a given preterminal is realized using a previously unobserved wordas noted above the probability model uses five smoothed probability distributions one each for li mrit and h the equation for the conditional probability distribution for t is given in equation 7the other four equations can be found in a longer version of this paper available on the author 
website l and r are conditioned on three previous labels so we are using a thirdorder markov grammaralso the label of the parent constituent lp is conditioned upon even when it is not obviously related to the further conditioning eventsthis is due to the importance of this factor in parsing as noted in eg 14in keeping with the standard methodology 5 9101517 we used the penn wall street journal treebank 16 with sections 221 for training section 23 for testing and section 24 for development performance on the test corpus is measured using the standard measures from 591017in particular we measure labeled precision and recall average number of crossbrackets per sentence percentage of sentences with zero cross brackets and percentage of sentences with 2 cross brackets again as standard we take separate measurements for all sentences of length 40 and all sentences of length 100note that the definitions of labeled precision and recall are those given in 9 and used in all of the previous workas noted in 5 these definitions typically give results about 04 higher than the more obvious onesthe results for the new parser as well as for the previous topthree individual parsers on this corpus are given in figure 1as is typical all of the standard measures tell pretty much the same story with the new parser outperforming the other three parserslooking in particular at the precision and recall figures the new parser give us a 13 error reduction over the best of the previous work co1199 9in the previous sections we have concentrated on the relation of the parser to a maximumentropy approach the aspect of the parser that is most novelhowever we do not think this aspect is the sole or even the most important reason for its comparative successhere we list what we believe to be the most significant contributions and give some experimental results on how well the program behaves without themwe take as our starting point the parser labled char97 in figure 1 5 as that is the program from which our current parser derivesthat parser as stated in figure 1 achieves an average precisionrecall of 875as noted in 5 that system is based upon a quottreebank grammarquot a grammar read directly off the training corpusthis is as opposed to the quotmarkovgrammarquot approach used in the current parseralso the earlier parser uses two techniques not employed in the current parserfirst it uses a clustering scheme on words to give the system a quotsoftquot clustering of heads and subheadssecond char97 uses unsupervised learning in that the original system was run on about thirty million words of unparsed text the output was taken as quotcorrectquot and statistics were collected on the resulting parseswithout these enhancements char97 performs at the 866 level for sentences of length 40in this section we evaluate the effects of the various changes we have made by running various versions of our current programto avoid repeated evaluations based upon the testing corpus here our evaluation is based upon sentences of length 40 from the development corpuswe note here that this corpus is somewhat more difficult than the quotofficialquot test corpusfor example the final version of our system achieves an average precisionrecall of 901 on the test corpus but an average precisionrecall of only 897 on the development corpusthis is indicated in figure 2 where the model labeled quotbestquot has precision of 898 and recall of 896 for an average of 897 04 lower than the results on the official test corpusthis is in accord with our experience 
that developmentcorpus results are from 03 to 05 lower than those obtained on the test corpusthe model labeled quotoldquot attempts to recreate the char97 system using the current programit makes no use of special maximumentropyinspired features it does not guess the preterminal before guessing the lexical head and it uses a treebank grammar rather than a markov grammarthis parser achieves an average precisionrecall of 862this is consistent with the average precisionrecall of 866 for 5 mentioned above as the latter was on the test corpus and the former on the development corpusbetween the old model and the best model figure 2 gives precisionrecall measurements for several different versions of our parserone of the first and without doubt the most significant change we made in the current parser is to move from two stages of probabilistic decisions at each node to threeas already noted char97 first guesses the lexical head of a constituent and then given the head guesses the pcfg rule used to expand the constituent in questionin contrast the current parser first guesses the head preterminal then the head and then the expansionit turns out that usefulness of this process had already been discovered by collins 10 who in turn notes that it was previously used by eisner 12however collins in 10 does not stress the decision to guess the head preterminal first and it might be lost on the casual readerindeed it was lost on the present author until he went back after the fact and found it therein figure 2 we show that this one factor improves performance by nearly 2it may not be obvious why this should make so great a difference since most words are effectively unambiguouswe believe that two factors contribute to this performance gainthe first is simply that if we first guess the preterminal when we go to guess the head the first thing we can condition upon is the preterminal ie we compute pthis quantity is a relatively intuitive one and it seems particularly good to condition upon here since we use it in effect as the unsmoothed probability upon which all smoothing of p is basedthis one quotfixquot makes slightly over a percent difference in the resultsthe second major reason why first guessing the preterminal makes so much difference is that it can be used when backing off the lexical head in computing the probability of the rule expansionfor example when we first guess the lexical head we can move from computing p to pso eg even if the word quotconflatingquot does not appear in the training corpus the quotngquot ending allows our program to guess with relative security that the word has the vbg preterminal and thus the probability of various rule expansions can be considerable sharpenedfor example the treebank pcfg probability of the rule quotvp vbg npquot is 00145 whereas once we condition on the fact that the lexical head is a vbg we get a probability of 0214the second modification is the explicit marking of noun and verbphrase coordinationwe have already noted the importance of conditioning on the parent label pso for example information about an np is conditioned on the parent eg an s vp pp etcnote that when an np is part of an np coordinate structure the parent will itself be an np and similarly for a vpbut nps and vps can occur with np and vp parents in noncoordinate structures as wellfor example in the penn treebank a vp with both main and auxiliary verbs has the structure shown in figure 3note that the subordinate vp has a vp parentthus np and vp parents of constituents are marked to 
indicate if the parents are a coordinate structurea vp coordinate structure is defined here as a constituent with two or more vp children one or more of the constituents comma cc conjp and nothing else coordinate np phrases are defined similarlysomething very much like this is done in 15as shown in figure 2 conditioning on this information gives a 06 improvementwe believe that this is mostly due to improvements in guessing the subconstituent preterminal and headgiven we are already at the 88 level of accuracy we judge a 06 improvement to be very much worth whilenext we add the less obvious conditioning events noted in our previous discussion of the final model grandparent label lg and left sibling label bwhen we do so using our maximumentropyinspired conditioning we get another 045 improvement in average precisionrecall as indicated in figure 2 on the line labeled quotmaxentinspirednote that we also tried including this information using a standard deletedinterpolation modelthe results here are shown in the line quotstandard interpolationquotincluding this information within a standard deletedinterpolation model causes a 06 decrease from the results using the less conventional modelindeed the resulting performance is worse than not using this information at allup to this point all the models considered in this section are treebank grammar modelsthat is the pcfg grammar rules are read directly off the training corpusas already noted our best model uses a markovgrammar approachas one can see in figure 2 a firstorder markov grammar performs slightly worse than the equivalent treebankgrammar parserhowever a secondorder grammar does slightly better and a thirdorder grammar does significantly better than the treebank parserwe have presented a lexicalized markov grammar parsing model that achieves an average precisionrecall of 911 on sentences of length 40 and 895 on sentences of length 100this corresponds to an error reduction of 13 over the best previously published single parser results on this test set those of collins 9that the previous three best parsers on this test 5917 all perform within a percentage point of each other despite quite different basic mechanisms led some researchers to wonder if there might be some maximum level of parsing performance that could be obtained using the treebank for training and to conjecture that perhaps we were at itthe results reported here disprove this conjecturethe results of 13 achieved by combining the aforementioned threebest parsers also suggest that the limit on treebank trained parsers is much higher than previously thoughtindeed it may be that adding this new parser to the mix may yield still higher resultsfrom our perspective perhaps the two most important numbers to come out of this research are the overall error reduction of 13 over the results in 9 and the intermediateresult improvement of nearly 2 on labeled precisionrecall due to the simple idea of guessing the head preterminal before guessing the headneither of these results were anticipated at the start of this researchas noted above the main methodological innovation presented here is our quotmaximumentropyinspiredquot model for conditioning and smoothingtwo aspects of this model deserve some commentthe first is the slight but important improvement achieved by using this model over conventional deleted interpolation as indicated in figure 2we expect that as we experiment with other more semantic conditioning information the importance of this aspect of the model will increasemore 
important in our eyes though is the flexibility of the maximumentropyinspired modelthough in some respects not quite as flexible as true maximum entropy it is much simpler and in our estimation has benefits when it comes to smoothingultimately it is this flexibility that let us try the various conditioning events to move on to a markov grammar approach and to try several markov grammars of different orders without significant programmingindeed we initiated this line of work in an attempt to create a parser that would be flexible enough to allow modifications for parsing down to more semantic levels of detailit is to this project that our future parsing work will be devoted
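The conditioning scheme of equation 7 and the deleted-interpolation smoothing it relies on can be summarised in a few lines. The sketch below shows only the general form: which conditioning events feed each ratio factor, and how the interpolation weights are set, are left to the caller and are not taken from the paper.

    def deleted_interpolation(p_specific, p_general, lam):
        # Standard deleted-interpolation smoothing: mix a sparse, heavily
        # conditioned estimate with a more general one; lam would normally
        # be tuned on held-out data (an assumption here).
        return lam * p_specific + (1.0 - lam) * p_general

    def maxent_inspired_probability(base, ratio_factors):
        # Equation 7 style: start from a simple conditional estimate and
        # multiply in one factor per extra conditioning event.  Each factor
        # is a (numerator, denominator) pair of smoothed probabilities; a
        # ratio above one raises the running estimate, below one lowers it.
        p = base
        for with_event, without_event in ratio_factors:
            if without_event > 0:
                p *= with_event / without_event
        return p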
A00-2018
A maximum-entropy-inspired parser. We present a new parser for parsing down to Penn Treebank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less, when trained and tested on the previously established [5, 9, 10, 15, 17] standard sections of the Wall Street Journal treebank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is the use of a maximum-entropy-inspired model for conditioning and smoothing that let us successfully test and combine many different conditioning events. We also present some partial results showing the effects of different conditioning information, including a surprising 2% improvement due to guessing the lexical head's preterminal before guessing the lexical head. As an alternative to hard-coded heuristics, we proposed to recover the Penn functional tags automatically. Our parser is also a two-stage CTF model where the first stage is a smoothed Markov grammar and the second stage is a lexicalized Markov grammar with extra annotations about parents and grandparents.
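The Markov grammar mentioned in the summary generates each rule right-hand side label by label rather than looking the whole rule up in a treebank grammar. The sketch below follows the generative order described in the paper, head label first and then the left and right labels out to a stop symbol, with a third-order history; the callable interfaces for the smoothed distributions are assumptions, and the additional conditioning on the lexical head, parent and grandparent is folded into those callables rather than spelled out.

    STOP = "<STOP>"

    def expansion_probability(middle, left_labels, right_labels,
                              p_middle, p_left, p_right):
        # left_labels and right_labels are ordered outward from the head
        # constituent M; p_middle, p_left and p_right are smoothed
        # conditional-probability lookups supplied by the caller.
        prob = p_middle(middle)
        history = [middle]
        for lab in list(left_labels) + [STOP]:
            prob *= p_left(lab, tuple(history[-3:]))   # third-order history
            history.append(lab)
        history = [middle]
        for lab in list(right_labels) + [STOP]:
            prob *= p_right(lab, tuple(history[-3:]))
            history.append(lab)
        return prob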
an unsupervised method for detecting grammatical errors we present an unsupervised method for detecting grammatical errors by inferring negative evidence from edited textual corpora the system was developed and tested using essaylength responses to prompts on the test of english as a foreign language the errorrecognition system alek performs with about 80 precision and 20 recall a good indicator of whether a person knows the meaning of a word is the ability to use it appropriately in a sentence much information about usage can be obtained from quite a limited context choueka and lusignan found that people can typically recognize the intended sense of a polysemous word by looking at a narrow window of one or two words around itstatisticallybased computer programs have been able to do the same with a high level of accuracy the goal of our work is to automatically identify inappropriate usage of specific vocabulary words in essays by looking at the local contextual cues around a target wordwe have developed a statistical system alek that uses statistical analysis for this purposea major objective of this research is to avoid the laborious and costly process of collecting errors for each word that we wish to evaluateinstead we train alek on a general corpus of english and on edited text containing example uses of the target wordthe system identifies inappropriate usage based on differences between the word local context cues in an essay and the models of context it has derived from the corpora of wellformed sentencesa requirement for alek has been that all steps in the process be automated beyond choosing the words to be tested and assessing the resultsonce a target word is chosen preprocessing building a model of the word appropriate usage and identifying usage errors in essays is performed without manual interventionalek has been developed using the test of english as a foreign language administered by the educational testing servicetoefl is taken by foreign students who are applying to us undergraduate and graduatelevel programsapproaches to detecting errors by nonnative writers typically produce grammars that look for specific expected error types under this approach essays written by esl students are collected and examined for errorsparsers are then adapted to identify those error types that were found in the essay collectionwe take a different approach initially viewing error detection as an extension of the word sense disambiguation problemcorpusbased wsd systems identify the intended sense of a polysemous word by collecting a set of example sentences for each of its various senses and extracting salient contextual cues from these sets to build a statistical model for each sensethey identify the intended sense of a word in a novel sentence by extracting its contextual cues and selecting the most similar word sense model yarowsky golding showed how methods used for wsd could be adapted to detect errors resulting from common spelling confusions among sets such as there their and they arehe extracted contexts from correct usage of each confusable word in a training corpus and then identified a new occurrence as an error when it matched the wrong contexthowever most grammatical errors are not the result of simple word confusionsthis complicates the task of building a model of incorrect usageone approach we considered was to proceed without such a model represent appropriate word usage in a single model and compare a novel example to that modelthe most appealing part of this formulation was 
that we could bypass the knowledge acquisition bottleneckall occurrences of the word in a collection of edited text could be automatically assigned to a single training set representing appropriate usageinappropriate usage would be signaled by contextual cues that do not occur in trainingunfortunately this approach was not effective for error detectionan example of a word usage error is often very similar to the model of appropriate usagean incorrect usage can contain two or three salient contextual elements as well as a single anomalous elementthe problem of error detection does not entail finding similarities to appropriate usage rather it requires identifying one element among the contextual cues that simply does not fitwhat kinds of anomalous elements does alek identifywriters sometimes produce errors that violate basic principles of english syntax while other mistakes show a lack of information about a specific vocabulary item in order to detect these two types of problems alek uses a 30million word general corpus of english from the san jose mercury news and for each target word a set of 10000 example sentences from north american newspaper text the corpora are extracted from the acldci corporain selecting the sentences for the word alek infers negative evidence from the contextual cues that do not cooccur with the target word either in the word specific corpus or in the general english oneit uses two kinds of contextual cues in a 2 word window around the target word function words and partofspeech tags the brill tagger output is postprocessed to quotenrichquot some closed class categories of its tag set such as subject versus object pronoun and definite versus indefinite determinerthe enriched tags were adapted from francis and kaera after the sentences have been preprocessed alek counts sequences of adjacent partofspeech tags and function words for example the sequence aat fulltimei1 jobinn contributes one occurrence each to the bigrams atjj jjnn ajj and to the partofspeech tag trigram atjjnneach individual tag and function word also contributes to its own unigram countthese frequencies form the basis for the error detection measuresfrom the general corpus alek computes a mutual information measure to determine which sequences of partofspeech tags and function words are unusually rare and are therefore likely to be ungrammatical in english mutual information has often been used to detect combinations of words that occur more frequently than we would expect based on the assumption that the words are independenthere we use this measure for the opposite purpose to find combinations that occur less often than expectedalek also looks for sequences that are common in general but unusual in the word specific corpus these divergences between the two corpora reflect syntactic properties that are peculiar to the target wordthe system computes mutual information comparing the proportion of observed occurrences of bigrams in the general corpus to the proportion expected based on the assumption of independence as shown below here p is the probability of the occurrence of the ab bigram estimated from its frequency in the general corpus and p and p are the probabilities of the first and second elements of the bigram also estimated from the general corpusungrammatical sequences should produce bigram probabilities that are much smaller than the product of the unigram probabilities trigram sequences are also used but in this case the mutual information computation compares the cooccurrence of abc to 
a model in which a and c are assumed to be conditionally independent given b once again a negative value is often indicative of a sequence that violates a rule of englishalek also uses mutual information to compare the distributions of tags and function words in the wordspecific corpus to the distributions that are expected based on the general corpusthe measures for bigrams and trigrams are similar to those given above except that the probability in the numerator is estimated from the wordspecific corpus and the probabilities in the denominator come from the general corpusto return to a previous example the phrase a knowledge contains the tag bigram for singular determiner followed by singular noun this sequence is much less common in the wordspecific corpus for knowledge than would be expected from the general corpus unigram probabilities of at and nnin addition to bigram and trigram measures alek compares the target word partofspeech tag in the wordspecific corpus and in the general corpusspecifically it looks at the conditional probability of the partofspeech tag given the major syntactic category in both distributions by computing the following valuefor example in the general corpus about half of all noun tokens are plural but in the training set for the noun knowledge the plural knowledges occurs rarely if at allthe mutual information measures provide candidate errors but this approach overgenerates it finds rare but still quite grammatical sequencesto reduce the number of false positives no candidate found by the mi measures is considered an error if it appears in the wordspecific corpus at least two timesthis increases alek precision at the price of reduced recallfor example a knowledge will not be treated as an error because it appears in the training corpus as part of the longer a knowledge of sequence alek also uses another statistical technique for finding rare and possibly ungrammatical tag and function word bigrams by computing the x2 statistic for the difference between the bigram proportions found in the wordspecific and in the general corpus the x2 measure faces the same problem of overgenerating errorsdue to the large sample sizes extreme values can be obtained even though effect size may be minusculeto reduce false positives alek requires that effect sizes be at least in the moderatetosmall range direct evidence from the word specific corpus can also be used to control the overgeneration of errorsfor each candidate error alek compares the larger context in which the bigram appears to the contexts that have been analyzed in the wordspecific corpusfrom the wordspecific corpus alek forms templates sequences of words and tags that represent the local context of the targetif a test sentence contains a low probability bigram the local context of the target is compared to all the templates of which it is a partexceptions to the error that is longer grammatical sequences that contain rare subsequences are found by examining conditional probabilitiesto illustrate this consider the example of a knowledge and a knowledge of the conditional probability of of given a knowledge is high as it accounts for almost all of the occurrences of a knowledge in the wordspecific corpusbased on this high conditional probability the system will use the template for a knowledge of to keep it from being marked as an errorother function words and tags in the 1 position have much lower conditional probability so for example a knowledge is will not be treated as an exception to the errortoefl essays 
are graded on a 6 point scale where 6 demonstrates quotclear competencequot in writing on rhetorical and syntactic levels and 1 demonstrates quotincompetence in writingquotif low probability ngrams signal grammatical errors then we would expect toefl essays that received lower scores to have more of these ngramsto test this prediction we randomly selected from the toefl pool 50 essays for each of the 6 score values from 10 to 60for each score value all 50 essays were concatenated to form a superessayin every superessay for each adjacent pair and triple of tags containing a noun verb or adjective the bigram and trigram mutual information values were computed based on the general corpustable 1 shows the proportions of bigrams and trigrams with mutual information less than 360as predicted there is a significant negative correlation between the score and the proportion of low probability bigrams and trigrams alek was developed using three target words that were extracted from toefl essays concentrate interest and knowledgethese words were chosen because they represent different parts of speech and varying degrees of polysemyeach also occurred in at least 150 sentences in what was then a small pool of toefl essaysbefore development began each occurrence of these words was manually labeled as an appropriate or inappropriate usage without taking into account grammatical errors that might have been present elsewhere in the sentence but which were not within the target word scopecritical values for the statistical measures were set during this development phasethe settings were based empirically on alek performance so as to optimize precision and recall on the three development wordscandidate errors were those local context sequences that produced a mutual information value of less than 360 based on the general corpus mutual information of less than 500 for the specificgeneral comparisons or a x2 value greater than 1282 with an effect size greater than 030precision and recall for the three words are shown belowalek was tested on 20 wordsthese words were randomly selected from those which met two criteria they appear in a university word list as words that a student in a us university will be expected to encounter and there were at least 1000 sentences containing the word in the toefl essay poolto build the usage model for each target word 10000 sentences containing it were extracted from the north american news corpuspreprocessing included detecting sentence boundaries and partofspeech taggingas in the development system the model of general english was based on bigram and trigram frequencies of function words and partofspeech tags from 30million words of the san jose mercury newsfor each test word all of the test sentences were marked by alek as either containing an error or not containing an errorthe size of the test set for each word ranged from 1400 to 20000 with a mean of 8000 sentencesto evaluate the system for each test word we randomly extracted 125 sentences that alek classified as containing no error and 125 sentences which it labeled as containing an error these 250 sentences were presented to a linguist in a random order for blind evaluationthe linguist who had no part in alek development marked each usage of the target word as incorrect or correct and in the case of incorrect usage indicated how far from the target one would have to look in order to recognise that there was an errorfor example in the case of quotan periodquot the error occurs at a distance of one word from periodwhen 
the error is an omission as in quotlived in victorian periodquot the distance is where the missing word should have appearedin this case the missing determiner is 2 positions away from the targetwhen more than one error occurred the distance of the one closest to the target was markedtable 3 lists the precision and recall for the 20 test wordsthe column labelled quotrecallquot is the proportion of humanjudged errors in the 250sentence sample that were detected by alekquottotal recallquot is an estimate that extrapolates from the human judgements of the sample to the entire test setwe illustrate this with the results for pollutionthe human judge marked as incorrect usage 912 of the sample from alek eset and 184 of the sample from its csetto estimate overall incorrect usage we computed a weighted mean of these two rates where the weights reflected the proportion of sentences that were in the eset and csetthe eset contained 83 of the pollution sentences and the cset had the remaining 917with the human judgements as the gold standard the estimated overall rate of incorrect usage is 245alek estimated recall is the proportion of sentences in the eset times its precision divided by the overall estimated error rate 245 310the precision results vary from word to wordconclusion and pollution have precision in the low to middle 90 while individual precision is 57overall alek predictions are about 78 accuratethe recall is limited in part by the fact that the system only looks at syntactic information while many of the errors are semanticnicholls identifies four error types an unnecessary word a missing word a word or phrase that needs replacing a word used in the wrong form alek recognizes all of these types of errorsfor closed class words alek identified whether a word was missing the wrong word was used and when an extra word was usedopen class words have a fourth error category form including inappropriate compounding and verb agreementduring the development stage we found it useful to add additional error categoriessince teofl graders are not supposed to take punctuation into account punctuation errors were only marked when they caused the judge to quotgarden pathquot or initially misinterpret the sentencespelling was marked either when a function word was misspelled causing partofspeech tagging errors or when the writer intent was unclearthe distributions of categories for hits and misses shown in table 4 are not strikingly differenthowever the hits are primarily syntactic in nature while the misses are both semantic and syntactic alek is sensitive to openclass word confusions where the part of speech differs or where the target word is confused with another word in both cases the system recognizes that the target is in the wrong syntactic environmentmisses can also be syntactic when the target word is confused with another word but the syntactic environment fails to trigger an errorin addition alek does not recognize semantic errors when the error involves the misuse of an openclass word in combination with the target closed class words typically are either selected by or agree with a head wordso why are there so many misses especially with prepositionsthe problem is caused in part by polysemy when one sense of the word selects a preposition that another sense does notwhen concentrate is used spatially it selects the preposition in as quotthe stores were concentrated in the downtown areaquotwhen it denotes mental activity it selects the preposition on as in quotsusan concentrated on her 
studiesquotsince alek trains on all senses of concentrate it does not detect the error in quotsusan concentrated in her studiesquotanother because is that adjuncts especially temporal and locative adverbials distribute freely in the wordspecific corpora as in quotsusan concentrated in her roomquot this second problem is more tractable than the polysemy problem and would involve training the system to recognize certain types of adjunctsfalse positives when alek quotidentifiesquot an error where none exists fall into six major categoriesthe percentage of each false positive type in a random sample of 200 false positives is shown in table 5domain mismatch mismatch of the newspaperdomain wordspecific corpora and essaydomain test corpusone notable difference is that some toefl essay prompts call for the writer opinionconsequently toefl essays often contain first person references whereas newspaper articles are written in the third personwe need to supplement the wordspecific corpora with material that more closely resembles the test corpustagger incorrect analysis by the partofspeech taggerwhen the partofspeech tag is wrong alek often recognizes the resulting ngram as anomalousmany of these errors are caused by training on the brown corpus instead of a corpus of essayssyntactic analysis errors resulting from using partofspeech tags instead of supertags or a full parse which would give syntactic relations between constituentsfor example alek false alarms on arguments of ditransitive verbs such as offer and flags as an error quotyou benefitsquot in quotoffers you benefitsquotfree distribution elements that distribute freely such as adverbs and conjunctions as well as temporal and locative adverbial phrases tend to be identified as errors when they occur in some positionspunctuation most notably omission of periods and commassince these errors are not indicative of one ability to use the target word they were not considered as errors unless they caused the judge to misanalyze the sentenceinfrequent tagsan undesirable result of our quotenrichedquot tag set is that some tags eg the postdeterminer last occur too infrequently in the corpora to provide reliable statisticssolutions to some of these problems will clearly be more tractable than to otherscomparison of these results to those of other systems is difficult because there is no generally accepted test set or performance baselinegiven this limitation we compared alek performance to a widely used grammar checker the one incorporated in microsoft word97we created files of sentences used for the three development words concentrate interest and knowledge and manually corrected any errors outside the local context around the target before checking them with word97the performance for concentrate showed overall precision of 089 and recall of 007for interest precision was 085 with recall of 011in sentences containing knowledge precision was 099 and recall was 030word97 correctly detected the ungrammaticality of knowledges as well as a knowledge while it avoided flagging a knowledge ofin summary word97 precision in error detection is impressive but the lower recall values indicate that it is responding to fewer error types than does alekin particular word97 is not sensitive to inappropriate selection of prepositions for these three words of course word97 detects many kinds of errors that alek does notresearch has been reported on grammar checkers specifically designed for an esl populationthese have been developed by hand based on small training and test 
setsschneider and mccoy developed a system tailored to the error productions of american sign language signersthis system was tested on 79 sentences containing determiner and agreement errors and 101 grammatical sentenceswe calculate that their precision was 78 with 54 recallpark palmer and washburn adapted a categorial grammar to recognize quotclasses of errors that dominatequot in the nine essays they inspectedthis system was tested on eight essays but precision and recall figures are not reportedthe unsupervised techniques that we have presented for inferring negative evidence are effective in recognizing grammatical errors in written textpreliminary results indicate that alek error detection is predictive of toefl scoresif alek accurately detects usage errors then it should report more errors in essays with lower scores than in those with higher scoreswe have already seen in table 1 that there is a negative correlation between essay score and two of alek component measures the general corpus ngramshowever the data in table 1 were not based on specific vocabulary items and do not reflect overall system performance which includes the other measures as welltable 6 shows the proportion of test word occurrences that were classified by alek as containing errors within two positions of the target at each of 6 toefl score pointsas predicted the correlation is negative these data support the validity of the system as a detector of inappropriate usage even when only a limited number of words are targeted and only the immediate context of each target is examinedalek and by a human judge for comparison table 6 also gives the estimated proportions of inappropriate usage by score point based on the human judge classificationhere too there is a negative correlation rs 90 n 5 p 05 twotailedalthough the system recognizes a wide range of error types as table 6 shows it detects only about onefifth as many errors as a human judge doesto improve recall research needs to focus on the areas identified in section 32 and to improve precision efforts should be directed at reducing the false positives described in 33alek is being developed as a diagnostic tool for students who are learning english as a foreign languagehowever its techniques could be incorporated into a grammar checker for native speakerswe thank susanne wolff for evaluating the test sentences and robert kantor ken sheppard and 3 anonymous reviewers for their helpful suggestions
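The general-corpus mutual information measures described above reduce to two small formulas: for a bigram AB, log2(P(AB) / (P(A)P(B))), and for a trigram ABC, a comparison of P(ABC) against a model in which A and C are conditionally independent given B, i.e. log2(P(ABC) / (P(AB) * P(C|B))). The sketch below illustrates both on a toy corpus of enriched tag / function-word sequences; it is not the ALEK implementation. The corpus, the candidate bigrams, and the cutoff are placeholders (the excerpt prints the MI cutoff as "360", which has evidently lost its decimal point and minus sign), and real counts would come from the 30-million-word general corpus.

```python
import math
from collections import Counter

def ngram_counts(tag_sequences):
    """Unigram, bigram, and trigram counts over sequences of enriched POS
    tags and function words (one sequence per sentence)."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for seq in tag_sequences:
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
        tri.update(zip(seq, seq[1:], seq[2:]))
    return uni, bi, tri

def bigram_mi(a, b, uni, bi):
    """log2( P(AB) / (P(A) P(B)) ): strongly negative values mark bigrams
    that are much rarer than independence would predict."""
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    p_ab = bi[(a, b)] / n_bi
    p_a, p_b = uni[a] / n_uni, uni[b] / n_uni
    if min(p_ab, p_a, p_b) == 0:
        return float("-inf")
    return math.log2(p_ab / (p_a * p_b))

def trigram_mi(a, b, c, uni, bi, tri):
    """Compare P(ABC) with a model in which A and C are conditionally
    independent given B:  log2( P(ABC) / (P(AB) * P(C|B)) )."""
    n_bi, n_tri = sum(bi.values()), sum(tri.values())
    p_abc = tri[(a, b, c)] / n_tri
    p_ab = bi[(a, b)] / n_bi
    p_c_given_b = bi[(b, c)] / uni[b] if uni[b] else 0.0
    if min(p_abc, p_ab, p_c_given_b) == 0:
        return float("-inf")
    return math.log2(p_abc / (p_ab * p_c_given_b))

# Toy stand-in for the general corpus of tag / function-word sequences.
general = [
    ["AT", "JJ", "NN", "of", "NN"],
    ["AT", "NN", "of", "JJ", "NNS"],
    ["AT", "JJ", "NN", "VBZ", "JJ"],
    ["NNS", "VBP", "AT", "JJ", "NN"],
]
uni, bi, tri = ngram_counts(general)

MI_CUTOFF = -3.60   # placeholder for the paper's (garbled) threshold
for a, b in [("AT", "NN"), ("AT", "NNS"), ("of", "VBZ")]:
    mi = bigram_mi(a, b, uni, bi)
    status = "candidate error" if mi < MI_CUTOFF else "ok"
    print((a, b), round(mi, 2), status)

print(round(trigram_mi("AT", "JJ", "NN", uni, bi, tri), 2))
```

Sequences whose score falls below the cutoff become candidate errors; the later filtering steps (frequency in the word-specific corpus, chi-square with an effect-size floor, and context templates) then prune the overgenerated candidates.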
A00-2019
An Unsupervised Method for Detecting Grammatical Errors

We present an unsupervised method for detecting grammatical errors by inferring negative evidence from edited textual corpora. The system was developed and tested using essay-length responses to prompts on the Test of English as a Foreign Language (TOEFL). The error-recognition system, ALEK, performs with about 80% precision and 20% recall. We attempt to identify errors on the basis of context, more specifically a 2-word window around the word of interest, from which we consider function words and POS tags. We use a mutual information measure in addition to raw frequency of n-grams. The grammar feature covers errors such as sentence fragments, verb form errors, and pronoun errors. We utilize mutual information and chi-square statistics to identify typical contexts for a small set of targeted words from a large well-formed corpus.
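A second guard described above is the chi-square test on the difference between a bigram's proportion in the word-specific corpus and in the general corpus, accepted only when the effect size is at least in the moderate-to-small range, so that huge sample sizes cannot flag negligible differences. The sketch below is a hand-rolled two-proportion chi-square; the excerpt does not say which effect-size statistic was used, so Cohen's h (the arcsine-transform measure for two proportions) is an assumption of this sketch, and the cutoffs 12.82 and 0.30 are the development-phase settings quoted in the text with their decimal points restored.

```python
import math

def two_proportion_chi2(x_specific, n_specific, x_general, n_general):
    """Chi-square (z^2) statistic for the difference between the bigram
    proportion in the word-specific corpus and in the general corpus."""
    p1 = x_specific / n_specific
    p2 = x_general / n_general
    pooled = (x_specific + x_general) / (n_specific + n_general)
    se2 = pooled * (1 - pooled) * (1 / n_specific + 1 / n_general)
    return (p1 - p2) ** 2 / se2 if se2 > 0 else 0.0

def cohens_h(p1, p2):
    """Effect size for two proportions (arcsine transform).  The paper only
    requires a 'moderate to small' effect; Cohen's h is an assumption made
    for this sketch, not a statistic named in the excerpt."""
    return abs(2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2)))

def is_candidate_error(x_specific, n_specific, x_general, n_general,
                       chi2_cutoff=12.82, effect_cutoff=0.30):
    """Flag a tag/function-word bigram only if the divergence is both
    statistically large and non-trivial in magnitude."""
    chi2 = two_proportion_chi2(x_specific, n_specific, x_general, n_general)
    h = cohens_h(x_specific / n_specific, x_general / n_general)
    return chi2 > chi2_cutoff and h > effect_cutoff

# Hypothetical counts: a bigram seen 3 times in 120,000 word-specific
# bigram positions vs. 9,000 times in 30,000,000 general positions.
print(is_candidate_error(3, 120_000, 9_000, 30_000_000))
```

With these hypothetical counts the raw chi-square is large (around 30) purely because of the sample sizes, but the effect size is tiny, so the bigram is not flagged; this is exactly the overgeneration that the effect-size floor is meant to suppress.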
cut and paste based text summarization we present a cut and paste based text summarizer which uses operations derived from an analysis of human written abstracts the summarizer edits extracted sentences using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentences our work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document producing an aligned corpus of summaries and articles which we used to develop the summarizer there is a big gap between the summaries produced by current automatic summarizers and the abstracts written by human professionalscertainly one factor contributing to this gap is that automatic systems can not always correctly identify the important topics of an articleanother factor however which has received little attention is that automatic summarizers have poor text generation techniquesmost automatic summarizers rely on extracting key sentences or paragraphs from an article to produce a summarysince the extracted sentences are disconnected in the original article when they are strung together the resulting summary can be inconcise incoherent and sometimes even misleadingwe present a cut and paste based text summarization technique aimed at reducing the gap between automatically generated summaries and humanwritten abstractsrather than focusing on how to identify key sentences as do other researchers we study how to generate the text of a summary once key sentences have been extractedthe main idea of cut and paste summarization is to reuse the text in an article to generate the summaryhowever instead of simply extracting sentences as current summarizers do the cut and paste system will quotsmoothquot the extracted sentences by editing themsuch edits mainly involve cutting phrases and pasting them together in novel waysthe key features of this work areing operationswe identified six operations that can be used alone or together to transform extracted sentences into sentences in humanwritten abstractsthe operations were identified based on manual and automatic comparison of humanwritten abstracts and the original articlesexamples include sentence reduction sentence combination syntactic transformation and lexical paraphrasing development of an automatic system to perform cut and paste operationstwo operations sentence reduction and sentence combination are most effective in transforming extracted sentences into summary sentences that are as concise and coherent as in humanwritten abstractswe implemented a sentence reduction module that removes extraneous phrases from extracted sentences and a sentence combination module that merges the extracted sentences or the reduced forms resulting from sentence reductionour sentence reduction model determines what to cut based on multiple sources of information including syntactic knowledge context and statistics learned from corpus analysisit improves the conciseness of extracted sentences making them concise and on targetour sentence combination module implements combination rules that were identified by observing examples written by human professionalsit improves the coherence of extracted sentences sentencesthe cut and paste technique we propose here is a new computational model which we based on analysis of humanwritten abstractsto do this analysis we developed an automatic system that can match a phrase in a humanwritten abstract to the corresponding phrase in the article 
identifying its most likely locationthis decomposition program allows us to analyze the construction of sentences in a humanwritten abstractits results have been used to train and test the sentence reduction and sentence combination modulein section 2 we discuss the cut and paste technique in general from both a professional and computational perspectivewe also describe the six cut and paste operationsin section 3 we describe the system architecturethe major components of the system including sentence reduction sentence combination decomposition and sentence selection are described in section 4the evaluation results are shown in section 5related work is discussed in section 6finally we conclude and discuss future workprofessionals take two opposite positions on whether a summary should be produced by cutting and pasting the original textone school of scholars is opposed quot your own words do not keep too close to the words before youquot states an early book on abstracting for american high school students another study however shows that professional abstractors actually rely on cutting and pasting to produce summaries quottheir professional role tells abstractors to avoid inventing anythingthey follow the author as closely as possible and reintegrate the most important points of a document in a shorter textquot some studies are somewhere in between quotsummary language may or may not follow that of authorquot other guidelines or books on abstracting do not discuss the issueour cut and paste based summarization is a computational model we make no claim that humans use the same cut and paste operationswe manually analyzed 30 articles and their corresponding humanwritten summaries the articles and their summaries come from different domains and the summaries were written by professionals from different organizationswe found that reusing article text for summarization is almost universal in the corpus we studiedwe defined six operations that can be used alone sequentially or simultaneously to transform selected sentences from an article into the corresponding summary sentences in its humanwritten abstractremove extraneous phrases from a selected sentence as in the following example 1 the examples in this section were produced by human professionals document sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to seesummary sentence the vchip will give parents a device to block out programs they do not want their children to seethe deleted material can be at any granularity a word a phrase or a clausemultiple components can be removedmerge material from several sentencesit can be used together with sentence reduction as illustrated in the following example which also uses paraphrasing text sentence 1 but it also raises serious questions about the privacy of such highly personal information wafting about the digital worldtext sentence 2 the issue thus fits squarely into the broader debate about privacy and security on the internet whether it involves protecting credit card number or keeping children from offensive informationsummary sentence but it also raises the issue of privacy of such personal information and this issue hits the head on the nail in the broader debate about privacy and security on the internetin both sentence reduction and combination syntactic transformations may be involvedfor example the position of the subject in a sentence may be moved 
from the end to the frontreplace phrases with their paraphrasesfor instance the summaries substituted point out with note and fits squarely into with a more picturesque description hits the head on the nail in the previous examples generalization or specification replace phrases or clauses with more general or specific descriptionsexamples of generalization and specification include generalization quota proposed new law that would require web publishers to obtain parental consent before collecting personal information from childrenquot quotlegislation to protect children privacy onlinequot specification quotthe white house top drug officialquot 4 quotgen barry r mccaffrey the white house top drug officialquotchange the order of extracted sentencesfor instance place an ending sentence in an article at the beginning of an abstractin humanwritten abstracts there are of course sentences that are not based on cut and paste but completely written from scratchwe used our decomposition program to automatically analyze 300 humanwritten abstracts and found that 19 of sentences in the abstracts were written from scratchthere are also other cut and paste operations not listed here due to their infrequent occurrencethe architecture of our cut and paste based text summarization system is shown in figure 1input to the system is a single document from any domainin the first stage extraction key sentences in the article are identified as in most current summarizersin the second stage cut and paste based generation a sentence reduction module and a sentence combination module implement the operations we observed in humanwritten abstractsthe cut and paste based component receives as input not only the extracted key sentences but also the original articlethis component can be ported to other singledocument summarizers to serve as the generation component since most current summarizers extract key sentences exactly what the extraction module in our system doesother resources and tools in the summarization system include a corpus of articles and their humanwritten abstracts the automatic decomposition program a syntactic parser a coreference resolution system the wordnet lexical database and a largescale lexicon we combined from multiple resourcesthe components in dotted lines are existing tools or resources all the others were developed by ourselvesthe main focus of our work is on decomposition of summaries sentence reduction and sentence combinationwe also describe the sentence extraction module although it is not the main focus of our workthe decomposition program see for details is used to analyze the construction of sentences in humanwritten abstractsthe results from decomposition are used to build the training and testing corpora for sentence reduction and sentence combinationthe decomposition program answers three questions about a sentence in a humanwritten abstract is the sentence constructed by cutting and pasting phrases from the input article if so what phrases in the sentence come from the original article where in the article do these phrases come fromwe used a hidden markov model solution to the decomposition problemwe first mathematically formulated the problem reducing it to a problem of finding for each word in a summary sentence a document position that it most likely comes fromthe position of a word in a document is uniquely identified by the position of the sentence where the word appears and the position of the word within the sentencebased on the observation of cut and paste practice by 
humans we produced a set of general heuristic rulessample heuristic rules include two adjacent words in a summary sentence are most likely to come from two adjacent words in the original document adjacent words in a summary sentence are not very likely to come from sentences that are far apart in the original documentwe use these heuristic rules to create a hidden markov modelthe viterbi algorithm is used to efficiently find the most likely document position for each word in the summary sentencefigure 2 shows sample output of the programfor the given summary sentence the program correctly identified that the sentence was combined from four sentences in the input articleit also divided the summary sentence into phrases and pinpointed the exact document origin of each phrasea phrase in the summary sentence is annotated as where fnum is the sequential number of the phrase and snum is the number of the document sentence where the phrase comes fromsnum 1 means that the component does not come from the original documentthe phrases in the document sentences are annotated as the task of the sentence reduction module described in detail in is to remove extraneous phrases from extracted sentencesthe goal of reduction is to quotreduce without major lossquot that is we want to remove as many extraneous phrases as possible from an extracted sentence so that it can be concise but without detracting from the main idea that the sentence conveysideally we want to remove a phrase from an extracted sentence only if it is irrelavant to the main topicour reduction module makes decisions based on multiple sources of knowledge original sentence when it arrives sometime next year in new tv sets the vchip will give parents a new and potentially revolutionary device to block out programs they do not want their children to see by adding up the scores of its children nodes in the parse treethis score indicates how important the phrase is to the main topic in discussionthe phrases we remove from an extracted sentence include clauses prepositional phrases gerunds and toinfinitivesthe result of sentence reduction is a shortened version of an extracted sentence 2this shortened text can be used directly as a summary or it can be fed to the sentence combination module to be merged with other sentencesfigure 3 shows two examples produced by the reduction programthe corresponding sentences in humanwritten abstracts are also provided for comparison21t is actually also possible that the reduction program decides no phrase in a sentence should be removed thus the result of reduction is the same as the inputto build the combination module we first manually analyzed a corpus of combination examples produced by human professionals automatically created by the decomposition program and identified a list of combination operationstable 1 shows the combination operationsto implement a combination operation we need to do two things decide when to use which combination operation and implement the combining actionsto decide when to use which operation we analyzed examples by humans and manually wrote a set of rulestwo simple rules are shown in figure 4sample outputs using these two simple rules are shown in figure 5we are currently exploring using machine learning techniques to learn the combination rules from our corpusthe implementation of the combining actions involves joining two parse trees substituting a subtree with another or adding additional nodeswe implemented these actions using a formalism based on tree adjoining grammar the 
extraction module is the front end of the summarization system and its role is to extract key sentencesour method is primarily based on lexical relationsfirst we link words in a sentence with other words in the article through repetitions morphological relations or one of the lexical relations encoded in wordnet similar to step 2 in sentence reductionan importance score is computed for each word in a sentence based on the number of lexical links it has with other words the type of links and the directions of the linksafter assigning a score to each word in a sentence we then compute a score for a sentence by adding up the scores for each wordthis score is then normalif and then replace the phrase with the full name plus the full description if and and then merge the two sentences by removing the subject in the second sentence and then combining it with the first sentence using connective quotandquot ized over the number of words a sentence containsthe sentences with high scores are considered importantthe extraction system selects sentences based on the importance computed as above as well as other indicators including sentence positions cue phrases and tfidf scoresour evaluation includes separate evaluations of each module and the final evaluations of the overall systemwe evaluated the decomposition program by two experiments described in in the first experiment we selected 50 humanwritten abstracts consisting of 305 sentences in totala human subject then read the decomposition results of these sentences to judge whether they are correct938 of the sentences were correctly decomposedin the second experiment we tested the system in a summary alignment taskwe ran the decomposition program to identify the source document sentences that were used to construct the sentences in humanwritten abstractshuman subjects were also asked to select the document sentences that are semanticequivalent to the sentences in the abstractswe compared the set of sentences identified by the program with the set of sentences selected by the majority of human subjects which is used as the gold standard in the computation of precision and recallthe program achieved an average 815 precision 785 recall and 791 fmeasure for 10 documentsthe average performance of 14 human judges is 888 precision 844 recall and 857 fmeasurerecently we have also tested the system on legal documents and the program works well on those documents toothe evaluation of sentence reduction for details used a corpus of 500 sentences and their reduced forms in humanwritten abstracts400 sentences were used to compute corpus probabilities and 100 sentences were used for testingthe results show that 813 of the reduction decisions made by the system agreed with those of humansthe humans reduced the length of the 500 sentences by 442 on average and the system reduced the length of the 100 test sentences by 327the evaluation of sentence combination module is not as straightforward as that of decomposition or reduction since combination happens later in the pipeline and it depends on the output from prior the new measure is an echo of the original bad idea blurred just enough to cloud prospects both for enforcement and for court reviewunlike the 1996 act this one applies only to commercial web sites thus sidestepping conversation deemed quotindecentquot by somebody somewherethe new version also replaces the vague quotindecencyquot standard to which the court objected with the betterdefined one of material ruled quotharmful to minorsquot combined sentences 
the new measure is an echo of the original bad ideathe new version applies only to commercial web sites and replaces the vague quotindecencyquot standard with the betterdefined one of material ruled quotharmful to minorsquot modulesto evaluate just the combination component we assume that the system makes the same reduction decision as humans and the coreference system has a perfect performancethis involves manual tagging of some examples to prepare for the evaluation this preparation is in progressthe evaluation of sentence combination will focus on the accessment of combination rulesthe overall system evaluation includes both intrinsic and extrinsic evaluationin the intrinsic evaluation we asked human subjects to compare the quality of extractionbased summaries and their revised versions produced by our sentence reduction and combination moduleswe selected 20 documents three different automatic summarizers were used to generate a summary for each document producing 60 summaries in totalthese summaries are all extractionbasedwe then ran our sentence reduction and sentence combination system to revise the summaries producing a revised version for each summarywe presented human subjects with the full documents the extractionbased summaries and their revised versions and asked them to compare the extractionbased summaries and their revised versionsthe human subjects were asked to score the conciseness of the summaries based on a scale from 0 to 10 the higher the score the more concise a summary isthey were also asked to score the coherence of the summaries based on a scale from 0 to 10on average the extractionbased summaries have a score of 42 for conciseness while the revised summaries have a score of 79 the average improvement for the three systems are 78 105 and 88 respectivelythe revised summaries are on average 41 shorter than the original extractionbased summariesfor summary coherence the average score for the extractionbased summaries is 39 while the average score for the revised summaries is 61 the average improvement for the three systems are 69 57 and 53 respectivelywe are preparing a taskbased evaluation in which we will use the data from the summarization evaluation conference and compare how our revised summaries can influence humans performance in tasks like text categorization and adhoc retrieval addressed the problem of revising summaries to improve their qualitythey suggested three types of operations elimination aggregation and smoothingthe goal of the elimination operation is similar to that of the sentence reduction operation in our systemthe difference is that while elimination always removes parentheticals sentenceinitial pps and certain adverbial phrases for every extracted sentence our sentence reduction module aims to make reduction decisions according to each case and removes a sentence component only if it considers it appropriate to do sothe goal of the aggregation operation and the smoothing operation is similar to that of the sentence combination operation in our systemhowever the combination operations and combination rules that we derived from corpus analysis are significantly different from those used in the above system which mostly came from operations in traditional natural language generationthis paper presents a novel architecture for text summarization using cut and paste techniques observed in humanwritten abstractsin order to automatically analyze a large quantity of humanwritten abstracts we developed a decomposition programthe automatic 
decomposition allows us to build large corpora for studying sentence reduction and sentence combination which are two effective operations in cut and pastewe developed a sentence reduction module that makes reduction decisions using multiple sources of knowledgewe also investigated possible sentence combination operations and implemented the combination modulea sentence extraction module was developed and used as the front end of the summarization systemwe are preparing the taskbased evaluation of the overall systemwe also plan to evaluate the portability of the system by testing it on another corpuswe will also extend the system to querybased summarization and investigate whether the system can be modified for multiple document summarizationwe thank ibm for licensing us the esg parser and the mitre corporation for licensing us the coreference resolution systemthis material is based upon work supported by the national science foundation under grant noiri 9619124 and mi 9618797any opinions findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation
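The decomposition step is, at its core, a Viterbi decoding problem: each word of a summary sentence is assigned the document position (sentence number, word number) it most likely came from, or a "not from the document" state, with heuristic preferences such as "adjacent summary words tend to come from adjacent document words" and "jumps between far-apart sentences are unlikely" encoded as transition scores. The sketch below illustrates that formulation only; the transition weights are invented illustrative values, not the paper's rules or parameters, and a real implementation would work over the full article with richer heuristics.

```python
# Minimal Viterbi sketch of summary decomposition: for each summary word,
# choose the document position (sentence index, word index) it most likely
# came from.  The scores below are illustrative stand-ins.
NOT_IN_DOC = (-1, -1)

def candidate_positions(word, document):
    """All positions where the word occurs, plus a 'not from document' state."""
    positions = [(s, w) for s, sent in enumerate(document)
                 for w, tok in enumerate(sent) if tok == word]
    return positions or [NOT_IN_DOC]

def transition_score(prev, cur):
    """Heuristics: adjacent summary words most likely come from adjacent
    document words; far-apart sentences are penalized."""
    if prev == NOT_IN_DOC or cur == NOT_IN_DOC:
        return -4.0
    if prev[0] == cur[0] and cur[1] == prev[1] + 1:
        return 0.0                      # same sentence, next word: best case
    if prev[0] == cur[0]:
        return -1.0                     # same sentence, non-adjacent
    return -2.0 - abs(cur[0] - prev[0]) # distant sentences cost more

def decompose(summary_words, document):
    """Viterbi over candidate document positions for each summary word."""
    lattice = [candidate_positions(w, document) for w in summary_words]
    best = {pos: (0.0, [pos]) for pos in lattice[0]}
    for column in lattice[1:]:
        new_best = {}
        for cur in column:
            score, path = max(
                (prev_score + transition_score(prev, cur), prev_path + [cur])
                for prev, (prev_score, prev_path) in best.items())
            new_best[cur] = (score, path)
        best = new_best
    return max(best.values())[1]

document = [["the", "vchip", "will", "give", "parents", "a", "device"],
            ["parents", "can", "block", "out", "programs"]]
summary = ["the", "vchip", "will", "give", "parents", "a", "device",
           "to", "block", "out", "programs"]
print(decompose(summary, document))
```

On this toy input, "to" has no source position and falls back to the not-from-document state, while the remaining words chain through adjacent positions in the two document sentences, mirroring how the program pinpoints the document origin of each phrase.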
A00-2024
Cut and Paste Based Text Summarization

We present a cut and paste based text summarizer, which uses operations derived from an analysis of human-written abstracts. The summarizer edits extracted sentences, using reduction to remove inessential phrases and combination to merge resulting phrases together as coherent sentences. Our work includes a statistically based sentence decomposition program that identifies where the phrases of a summary originate in the original document, producing an aligned corpus of summaries and articles which we used to develop the summarizer. We first extract sentences, then remove redundant phrases, and use recombination rules to produce coherent output. We manually analyze 30 human-written summaries and find that 19% of sentences cannot be explained by cut-and-paste operations from the source text.
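As a companion sketch, the sentence-extraction front end described above can be approximated by linking each word to its other occurrences in the article, summing per-word link counts for each sentence, and normalizing by sentence length. Only repetition links with a uniform weight are used here; the system described in the text also uses morphological and WordNet links, weights links by type and direction, and folds in sentence position, cue phrases and tf*idf, so this is an illustration of the scoring scheme, not a reimplementation. The stopword list and the sample article are invented.

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "for", "is"}

def tokens(sentence):
    return [w.lower().strip(".,") for w in sentence.split()]

def sentence_scores(sentences):
    """Score each sentence by summing per-word importance and normalizing by
    sentence length.  Importance here is just the number of repetition links
    (other occurrences of the same word in the article)."""
    toks = [tokens(s) for s in sentences]
    freq = Counter(w for sent in toks for w in sent if w not in STOPWORDS)
    scores = []
    for sent in toks:
        links = sum(freq[w] - 1 for w in sent if w not in STOPWORDS)
        scores.append(links / len(sent) if sent else 0.0)
    return scores

def extract(sentences, k=2):
    """Return the k highest-scoring sentences, kept in document order."""
    scores = sentence_scores(sentences)
    ranked = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [sentences[i] for i in sorted(ranked[:k])]

article = [
    "The v-chip will give parents a device to block out programs.",
    "Parents want a simple device to block out violent programs.",
    "The measure passed the committee yesterday.",
]
print(extract(article, k=2))
```

The selected sentences would then be handed to the reduction and combination modules rather than output directly.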
trainable methods for surface natural language generation we present three systems for surface natural language generation that are trainable from annotated corpora the first two systems called nlg1 and nlg2 require a corpus marked only with domainspecific semantic attributes while the last system called nlg3 requires a corpus marked with both semantic attributes and syntactic dependency information all systems attempt to produce a grammatical natural language phrase from a domainspecific semantic representation nlg1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step while nlg2 and nlg3 use maximum entropy probability models to individually generate each word in the phrase the systems nlg2 and nlg3 learn to determine both the word choice and the word order of the phrase we present experiments in which we generate phrases to describe flights in the air travel domain this paper presents three trainable systems for surface natural language generation surface nlg for our purposes consists of generating a grammatical natural language phrase that expresses the meaning of an input semantic representationthe systems take a quotcorpusbasedquot or quotmachinelearningquot approach to surface nlg and learn to generate phrases from semantic input by statistically analyzing examples of phrases and their corresponding semantic representationsthe determination of the content in the semantic representation or quotdeepquot generation is not discussed hereinstead the systems assume that the input semantic representation is fixed and only deal with how to express it in natural languagethis paper discusses previous approaches to surface nlg and introduces three trainable systems for surface nlg called nlg1 nlg2 and nlg3quantitative evaluation of experiments in the air travel domain will also be discussedtemplates are the easiest way to implement surface nlga template for describing a flight noun phrase in the air travel domain might be flight departing from cityfr at timedep and arriving in cityto at timearr where the words starting with quotquot are actually variables representing the departure city and departure time the arrival city and the arrival time respectively whose values will be extracted from the environment in which the template is usedthe approach of writing individual templates is convenient but may not scale to complex domains in which hundreds or thousands of templates would be necessary and may have shortcomings in maintainability and text quality for a discussionthere are more sophisticated surface generation packages such as fufsurge kpml mumble and realpro which produce natural language text from an abstract semantic representationthese packages require linguistic sophistication in order to write the abstract semantic representation but they are flexible because minor changes to the input can accomplish major changes to the generated textthe only trainable approaches to surface generation are the purely statistical machine translation systems such as and the corpusbased generation system described in the mt systems of learn to generate text in the target language straight from the source language without the aid of an explicit semantic representationin contrast uses corpusderived statistical knowledge to rank plausible hypotheses from a grammarbased surface generation componentin trainable surface nlg the goal is to learn the mapping from semantics to words that would otherwise need to be specified in a grammar or knowledge baseall systems in this 
paper use attributevalue pairs as a semantic representation which suffice as a representation for a limited domain like air travelfor example the set of attributevalue pairs cityfr new york city cityto seattle timedep 6 am datedep wednesday represent the meaning of the noun phrase quota flight to seattle that departs from new york city at 6 am on wednesdayquotthe goal more specifically is then to learn the optimal attribute ordering and lexical choice for the text to be generated from the attributevalue pairsfor example the nlg system should automatically decide if the attribute ordering in quotflights to new york in the eveningquot is better or worse than the ordering in quotflights in the evening to new yorkquotfurthermore it should automatically decide if the lexical choice in quotflights departing to new yorkquot is better or worse than the choice in quotflights leaving to new yorkquotthe motivation for a trainable surface generator is to solve the above two problems in a way that reflects the observed usage of language in a corpus but without the manual effort needed to construct a grammar or knowledge baseall the trainable nlg systems in this paper assume the existence of a large corpus of phrases in which the values of interest have been replaced with their corresponding attributes or in other words a corpus of generation templatesfigure 1 shows a sample of training data where only words marked with a quot8quot are attributesall of the nlg systems in this paper work in two steps as shown in table 2the systems nlg1 nlg2 and nlg3 all implement step 1 they produce a sequence of words intermixed with attributes ie a template from the the attributes alonethe values are ignored until step 2 when they replace their corresponding attributes in the phrase produced by step 1the surface generation model nlg1 simply chooses the most frequent template in the training data that corresponds to a given set of attributesits performance is intended to serve as a baseline result to the more sophisticated models discussed laterspecifically nlgi returns the phrase that corresponds to the attribute set a empty string ta where ta are the phrases that have occurred with a in the training data and where c is the training data frequency of the natural language phrase phrase and the set of attributes a nlg1 will fail to generate anything if a is a novel combination of attributesthe surface generation system nlg2 assumes that the best choice to express any given attributevalue set is the word sequence with the highest probability that mentions all of the input attributes exactly oncewhen generating a word it uses local information captured by word ngrams together with certain nonlocal information namely the subset of the original attributes that remain to be generatedthe local and nonlocal information is integrated with use of features in a maximum entropy probability model and a highly pruned search procedure attempts to find the best scoring word sequence according to the modelthe probability model in nlg2 is a conditional distribution over v you stop where v is the generation vocabulary and where stop is a special quotstopquot symbolthe generation vocabulary v consists of all the words seen in the training datathe form of the maximum entropy probability model is identical to the one used in where wi ranges over v you stop and wii wi2 attri is the history where wi denotes the ith word in the phrase and attri denotes the attributes that remain to be generated at position i in the phrasethe h where f3 e 01 are 
called features and capture any information in the history that might be useful for estimating pthe features used in nlg2 are described in the next section and the feature weights ai obtained from the improved iterative scaling algorithm are set to maximize the likelihood of the training datathe probability of the sequence w wn given the attribute set a is the feature patterns used in nlg2 are shown in table 3the actual features are created by matching the patterns over the training data eg an actual feature derived from the word higram template might be f01 if w from and wii flight and city fr e attri otherwise input to step 1 1 cityfr cityto timedep datedep 1 output of step 1 quota flight to cityto that departs from cityfr at timedep on datedepquot input to step 2 quota flight to cityto that departs from cityfr at timedep on datedepquot cityfr new york city cityto seattle timedep 6 am datedep wednesday output of step 2 quota flight to seattle that departs from new york city at 6 am on wednesdayquot low frequency features involving word ngrams tend to be unreliable the nlg2 system therefore only uses features which occur k times or more in the training datathe search procedure attempts to find a word sequence wi wn of any length n m for the input attribute set a such that and where m is an heuristically set maximum phrase lengththe search is similar to a lefttoright breadthfirstsearch except that only a fraction of the word sequences are consideredmore specifically the search procedure implements the recurrence vvna top 147mi11 top the set wni is the top n scoring sequences of length i and the expression next returns all sequences w1 w1 such that wi w e wn and wifi e v you stopthe expression top finds the top n sequences in nextduring the search any sequence that ends with stop is removed and placed in the set of completed sequencesif n completed hypotheses are discovered or if wnm is computed the search terminatesany incomplete sequence which does not satisfy condition is discarded and any complete sequence that does not satisfy condition is also discardedwhen the search terminates there will be at most n completed sequences of possibly differing lengthscurrently there is no normalization for different lengths ie all sequences of length n m are equiprobable nlg2 chooses the best answer to express the attribute set a as follows where wnig2 are the completed word sequences that satisfy the conditions of the nlg2 search described abovenlg3 addresses a shortcoming of nlg2 namely that the previous two words are not necessarily the best informants when predicting the next wordinstead nlg3 assumes that conditioning on syntactically related words in the history will result on more accurate surface generationthe search procedure in nlg3 generates a syntactic dependency tree from toptobottom instead of a word sequence from lefttoright where each word is predicted in the context of its syntactically related parent grandparent and siblingsnlg3 requires a corpus that has been annotated with tree structure like the sample dependency tree shown in figure 1the probability model for nlg3 shown in figure 2 conditions on the parent the two closest siblings the direction of the child relative to the parent and the attributes that remain to be generatedjust as in nlg2 p is a distribution over v you stop and the improved iterative scaling algorithm is used to find the feature weights a3the expression chi denotes the ith closest child to the headword w par denotes the parent of the headword w dir e left right 
denotes the direction of the child relative to the parent and attri denotes the attributes that remain to be generated in the tree when headword w is predicting its ith childfor example in figure 1 if w quotflightsquot then chi quoteveningquot when generating the left children and chl quotfromquot when generating the right childrenas shown in figure 3 the probability of a dependency tree that expresses an attribute set a can be found by computing for each word in the tree the probability of generating its left children and then its right childrenin this formulation the left children are generated independently from the right childrenas in nlg2 nlg3 assumes the uniform distribution for the length probabilities pr and pr up to a certain maximum length m 10the feature patterns for nlg3 are shown in table 4as before the actual features are created by matching the patterns over the training datathe features in nlg3 have access to syntactic information whereas the features in nlg2 do notlow frequency features involving word ngrams tend to be unreliable the nlg3 system therefore only uses features which occur k times or more in the training datafurthermore if a feature derived from table 4 looks at a particular word chi and attribute a we only allow it if a has occurred as a descendent of 1we use a dummy root node to generate the top most head word of the phrase chi in some dependency tree in the training setas an example this condition allows features that look at chi quottoquot and citytoe attri but disallows features that look at chi quottoquot and cityfre the idea behind the search procedure for nlg3 is similar to the search procedure for nlg2 namely to explore only a fraction of the possible trees by continually sorting and advancing only the top n trees at any given pointhowever the dependency trees are not built lefttoright like the word sequences in nlg2 instead they are built from the current head in the following order as before any incomplete trees that have generated a particular attribute twice as well as completed trees that have not generated a necessary attribute are discarded by the searchthe search terminates when either n complete trees or n trees of the maximum length m are discoverednlg3 chooses the best answer to express the attribute set a as follows where tn193 are the completed dependency trees that satisfy the conditions of the nlg3 search described abovethe training and test sets used to evaluate nlg1 nlg2 and nlg3 were derived semiautomatically from a preexisting annotated corpus of user queries in the air travel domainthe annotation scheme used a total of 26 attributes to represent flightsthe training set consisted of 6000 templates describing flights while the test set consisted of 1946 templates describing flightsall systems used the same training set and were tested on the attribute sets extracted from the phrases in the test setfor example if the test set contains the template quotflights to cityto leaving at timedepquot the surface generation systems will be told to generate a phrase for the attribute set cityto timedep the output of nlg3 on the attribute set cityto cityfr timedep is shown in table 9there does not appear to be an objective automatic evaluation method2 for generated text that correlates with how an actual person might judge the outputtherefore two judges the author and a colleague manually evaluated the output of all three systemseach judge assigned each phrase from each of the three systems one of the following rankings ok tense or agreement is 
wrong but word choice is correctbad words are missing or extraneous words are present no output the system failed to produce any output while there were a total 1946 attribute sets from the test examples the judges only needed to evaluate the 190 unique attribute sets eg the attribute set cityfr cityto occurs 741 times in the test datasubjective evaluation of generation output is 2measuring word overlap or edit distance between the system output and a quotreferencequot set would be an automatic scoring methodwe believe that such a method does not accurately measure the correctness or grammaticality of the text not ideal but is arguably superior than an automatic evaluation that fails to correlate with human linguistic judgementthe results of the manual evaluation as well as the values of the search and feature selection parameters for all systems are shown in tables 5 6 7 and 8the weighted results in tables 5 and 6 account for multiple occurrences of attribute sets whereas the unweighted results in tables 7 and 8 count each unique attribute set once ie cityfr scityto is counted 741 times in the weighted results but once in the unweighted resultsusing the weighted results which represent testing conditions more realistically than the unweighted results both judges found an improvement from nlg1 to nlg2 and from nlg2 to nlg3nlg3 cuts the error rate from nlg1 by at least 33 nlg2 cuts the error rate by at least 22 and underperforms nlg3 but requires far less annotation in its training datanlg1 has no chance of generating anything for 3 of the data it fails completely on novel attribute setsusing the unweighted results both judges found an improvement from nlg1 to nlg2 but surprisingly judge a found a slight decrease while judge b found an increase in accuracy from nlg2 to nlg3the unweighted results show that the baseline nlg1 does well on the common attribute sets since it correctly generates only less than 50 of the unweighted cases but over 80 of the weighted casesthe nlg2 and nlg3 systems automatically attempt to generalize from the knowledge inherent in the training corpus of templates so that they can generate templates for novel attribute setsthere probability 0107582 000822441 000564712 000343372 00012465 generated text timedep flights from cityfr to cityto timedep flights between cityfr and cityto timedep flights cityfr to cityto flights from cityfr to cityto at timedep timedep flights from cityfr to to cityto table 9 sample output from nlg3typical values for attributes timedep quot10 amquot cityfr quotnew yorkquot cityto quotmiamiquot is some additional cost associated with producing the syntactic dependency annotation necessary for nlg3 but virtually no additional cost is associated with nlg2 beyond collecting the data itself and identifying the attributesthe trainable surface nlg systems in this paper differ from grammarbased systems in how they determine the attribute ordering and lexical choicenlg2 and nlg3 automatically determine attribute ordering by simultaneously searching multiple orderingsin grammarbased approaches such preferences need to be manually encodednlg2 and nlg3 solve the lexical choice problem by learning the words that correlate with a given attribute and local context whereas uses a rulebased approach to decide the word choicewhile trainable approaches avoid the expense of crafting a grammar to determine attribute ordering and lexical choice they are less accurate than grammarbased approachesfor short phrases accuracy is typically 100 with grammarbased 
approaches since the grammar writer can either correct or add a rule to generate the phrase of interest once an error is detectedwhereas with nlg2 and nlg3 one can tune the feature patterns search parameters and training data itself but there is no guarantee that the tuning will result in 100 generation accuracyour approach differs from the corpusbased surface generation approaches of and maps from semantics to words with a concept ontology grammar and lexicon and ranks the resulting word lattice with corpusbased statistics whereas nlg2 and nlg3 automatically learn the mapping from semantics to words from a corpus describes a statistical machine translation approach that generates text in the target language directly from the source textnlg2 and nlg3 are also statistical learning approaches but generate from an actual semantic representationthis comparison suggests that statistical mt systems could also generate text from an quotinterlinguaquot in a way similar to that of knowledgebased translation systemswe suspect that our statistical generation approach should perform accurately in domains of similar complexity to air travelin the air travel domain the length of a phrase fragment to describe an attribute is usually only a few wordsdomains which require complex and lengthy phrase fragments to describe a single attribute will be more challenging to model with features that only look at word ngrams for n e 2 3domains in which there is greater ambiguity in word choice will require a more thorough search ie a larger value of n at the expense of cpu time and memorymost importantly the semantic annotation scheme for air travel has the property that it is both rich enough to accurately represent meaning in the domain but simple enough to yield useful corpus statisticsour approach may not scale to domains such as freely occurring newspaper text in which the semantic annotation schemes do not have this propertyour current approach has the limitation that it ignores the values of attributes even though they might strongly influence the word order and word choicethis limitation can be overcome by using features on values so that nlg2 and nlg3 might discover to use a hypothetical example that quotflights leaving cityfrquot is preferred over quotflights from cityfrquot when cityfr is a particular value such as quotmiamiquotthis paper presents the first systems that use a statistical learning approach to produce natural language text directly from a semantic representationinformation to solve the attribute ordering and lexical choice problems which would normally be specified in a large handwritten grammar is automatically collected from data with a few feature patterns and is combined via the maximum entropy frameworknlg2 shows that using just local ngram information can outperform the baseline and nlg3 shows that using syntactic information can further improve generation accuracywe conjecture that nlg2 and nlg3 should work in other domains which have a complexity similar to air travel as well as available annotated datathe author thanks scott mccarley for serving as the second judge and scott axelrod kishore papineni and todd ward for their helpful comments on this workthis work was supported in part by darpa contract mda97297c0012
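the top-n search described in this entry can be made concrete with a short sketch; this is an assumed illustration, not the authors' code: the scoring function score stands in for the maximum entropy model, vocab is a hypothetical list of candidate words each paired with the attribute it realises (or none for function words), and completion is simplified to the point at which every required attribute has been expressed

from heapq import nlargest

def nbest_generate(attrs, vocab, score, n=10, max_len=20):
    # attrs: set of required attributes; vocab: list of (word, attribute-or-None) pairs
    beams = [((), frozenset(), 0.0)]          # (words so far, attributes used, log score)
    complete = []
    while beams and len(complete) < n:
        expanded = []
        for words, used, logp in beams:
            for word, attr in vocab:
                if attr is not None and attr in used:
                    continue                  # attribute generated twice: discard candidate
                cand = (words + (word,),
                        used | ({attr} if attr else frozenset()),
                        logp + score(words, word))
                if cand[1] >= attrs:          # every required attribute covered: complete
                    complete.append(cand)
                elif len(cand[0]) < max_len:
                    expanded.append(cand)
        beams = nlargest(n, expanded, key=lambda c: c[2])   # keep only the top n partials
    return max(complete, key=lambda c: c[2]) if complete else None

the nlg3 variant applies the same sorting and pruning to partial dependency trees built outward from the head rather than to left-to-right word sequences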
A00-2026
trainable methods for surface natural language generationwe present three systems for surface natural language generation that are trainable from annotated corporathe first two systems called nlg1 and nlg2 require a corpus marked only with domainspecific semantic attributes while the last system called nlg3 requires a corpus marked with both semantic attributes and syntactic dependency informationall systems attempt to produce a grammatical natural language phrase from a domainspecific semantic representationnlg1 serves a baseline system and uses phrase frequencies to generate a whole phrase in one step while nlg2 and nlg3 use maximum entropy probability models to individually generate each word in the phrasethe systems nlg2 and nlg3 learn to determine both the word choice and the word order of the phrasewe present experiments in which we generate phrases to describe flights in the air travel domainwe use maximum entropy models to drive generation with word bigram or dependency representations taking into account semantic featureswe use a large collection of generation templates for surface realizationwe present maximum entropy models to learn attribute ordering and lexical choice for sentence generation from a semantic representation of attributevalue pairs restricted to an air travel domain
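a minimal sketch of the kind of conditional log-linear (maximum entropy) scorer assumed in nlg2 and nlg3 for choosing each word: the probability of a word given its history is proportional to the exponentiated sum of the weights of the features active on that pair; the feature function, feature names and weights below are illustrative assumptions rather than the systems' actual feature set

import math

def maxent_prob(word, history, candidates, weights, feature_fn):
    # feature_fn(w, h) returns the active feature names for a (word, history)
    # pair, e.g. ['bigram:flights+to', 'unused-attr:city-to'] (hypothetical names)
    def score(w):
        return math.exp(sum(weights.get(f, 0.0) for f in feature_fn(w, history)))
    z = sum(score(w) for w in candidates)     # normalisation over the candidate words
    return score(word) / z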
a novel use of statistical parsing to extract information from text since 1995 a few statistical parsing algorithms have demonstrated a breakthrough in parsing accuracy as measured against the upenn treebank as a gold standard in this paper we report adapting a lexic al ized probabilistic contextfree parser to information extraction and evaluate this new technique on muc7 template elements and template relations since 1995 a few statistical parsing algorithms demonstrated a breakthrough in parsing accuracy as measured against the university of pennsylvania treebank as a gold standardyet relatively few have embedded one of these algorithms in a taskchiba was able to use such a parsing algorithm to reduce perplexity with the long term goal of improved speech recognitionin this paper we report adapting a lexicalized probabilistic contextfree parser with head rules to information extractionthe technique was benchmarked in the seventh message understanding conference in 1998several technical challenges confronted us and were solved treebank on wall street journal adequately train the algorithm for new york times newswire which includes dozens of newspapersmanually creating sourcespecific training data for syntax was not requiredinstead our parsing algorithm trained on the upenn treebank was run on the new york times source to create unsupervised syntactic training which was constrained to be consistent with semantic annotationwe evaluated the new approach to information extraction on two of the tasks of the seventh message understanding conference and reported in the template element task identifies organizations persons locations and some artifacts for each organization in an article one must identify all of its names as used in the article its type and any significant description of itfor each person one must find all of the person names within the document hisher type and any significant descriptions for each location one must also give its type for the following example the the template relations task involves identifying instances of three relations in the text tr builds on te in that tr reports binary relations between elements of tefor the following example the template relation in figure 2 was to be generated quotdonald m goldstein a historian at the university of pittsburgh who helped writequotalmost all approaches to information extraction even at the sentence level are based on the divideandconquer strategy of reducing a complex problem to a set of simpler onescurrently the prevailing architecture for dividing sentential processing is a fourstage pipeline consisting of since we were interested in exploiting recent advances in parsing replacing the syntactic analysis stage of the standard pipeline with a modern statistical parser was an obvious possibilityhowever pipelined architectures suffer from a serious disadvantage errors accumulate as they propagate through the pipelinefor example an error made during partofspeechtagging may cause a future error in syntactic analysis which may in turn because a semantic interpretation failurethere is no opportunity for a later stage such as parsing to influence or correct an earlier stage such as partofspeech taggingan integrated model can limit the propagation of errors by making all decisions jointlyfor this reason we focused on designing an integrated model in which tagging namefinding parsing and semantic interpretation decisions all have the opportunity to mutually influence each othera second consideration influenced our decision toward 
an integrated modelwe were already using a generative statistical model for partofspeech tagging and more recently had begun using a generative statistical model for name finding finally our newly constructed parser like that of was based on a generative statistical modelthus each component of what would be the first three stages of our pipeline was based on the same general class of statistical modelalthough each model differed in its detailed probability structure we believed that the essential elements of all three models could be generalized in a single probability modelif the single generalized model could then be extended to semantic analysis all necessary sentence level processing would be contained in that modelbecause generative statistical models had already proven successful for each of the first three stages we were optimistic that some of their properties especially their ability to learn from large amounts of data and their robustness when presented with unexpected inputs would also benefit semantic analysisour integrated model represents syntax and semantics jointly using augmented parse treesin these trees the standard treebank structures are augmented to convey semantic information that is entities and relationsan example of an augmented parse tree is shown in figure 3the five key facts in this example are here each quotreportablequot name or description is identified by a quotrquot suffix attached to its semantic labelfor example quotperrquot identifies quotnancequot as a named person and quotperdescrquot identifies quota paid consultant to abc newsquot as a person descriptionother labels indicate relations among entitiesfor example the coreference relation between quotnancequot and quota paid consultant to abc newsquot is indicated by quotperdescofquot in this case because the argument does not connect directly to the relation the intervening nodes are labeled with semantics quotptrquot to indicate the connectionfurther details are discussed in the section tree augmentationto train our integrated model we required a large corpus of augmented parse treessince it was known that the muc7 evaluation data would be drawn from a variety of newswire sources and that the articles would focus on rocket launches it was important that our training corpus be drawn from similar sources and that it cover similar eventsthus we did not consider simply adding semantic labels to the existing penn treebank which is drawn from a single source the wall street journal and is impoverished in articles about rocket launchesinstead we applied an information retrieval system to select a large number of articles from the desired sources yielding a corpus rich in the desired types of eventsthe retrieved articles would then be annotated with augmented tree structures to serve as a training corpusinitially we tried to annotate the training corpus by hand marking for each sentence the entire augmented treeit soon became painfully obvious that this task could not be performed in the available timeour annotation staff found syntactic analysis particularly complex and slow goingby necessity we adopted the strategy of hand marking only the semanticsfigure 4 shows an example of the semantic annotation which was the only type of manual annotation we performedto produce a corpus of augmented parse trees we used the following multistep training procedure which exploited the penn treebank applying this procedure yielded a new version of the semantically annotated corpus now annotated with complete augmented 
trees like that in figure 3in this section we describe the algorithm that was used to automatically produce augmented trees starting with a humangenerated semantic annotations and b machinegenerated syntactic parse treesfor each sentence combining these two sources involved five stepsthese steps are given belowsyntactic modifier of the other the inserted node serves to indicate the relation as well as the argumentfor example in the phrase quotlt cmdrdavid edwin lewisquot a node is inserted to indicate that quotlt cmdrquot is a descriptor for quotdavid edwin lewisquot 5whenever a relation involves an entity that is not a direct descendant of that relation in the parse tree semantic pointer labels are attached to all of the intermediate nodesthese labels serve to form a continuous chain between the relation and its argumentin our statistical model trees are generated according to a process similar to that described in the detailed probability structure differs however in that it was designed to jointly perform partofspeech tagging name finding syntactic parsing and relation finding in a single processfor each constituent the head is generated first followed by the modifiers which are generated from the head outwardhead words along with their partofspeech tags and features are generated for each modifier as soon as the modifier is createdword features are introduced primarily to help with unknown words as in we illustrate the generation process by walking through a few of the steps of the parse shown in figure 3at each step in the process a choice is made from a statistical distribution with the probability of each possible selection dependent on particular features of previously generated elementswe pick up the derivation just after the topmost s and its head word said have been producedthe next steps are to generate in order in this case there are none8postmodifier constituents for the pernpfirst a comma then an sbar structure and then a second comma are each generated in turnthis generation process is continued until the entire tree has been producedwe now briefly summarize the probability structure of the modelthe categories for head constituents cl are predicted based solely on the category of the parent node cp modifier constituent categories cm are predicted based on their parent node cp the head constituent of their parent node chp the previously generated modifier c_1 and the head word of their parent wpseparate probabilities are maintained for left and right modifiers partofspeech tags t for modifiers are predicted based on the modifier cm the partofspeech tag of the head word th and the head word itself wh head words w for modifiers are predicted based on the modifier cm the partofspeech tag of the modifier word t the partofspeech tag of the head word th and the head word itself wh lawmicmtmthwh egfinally word features fm for modifiers are predicted based on the modifier cm the partofspeech tag of the modifier word t the partofspeech tag of the head word th the head word itself wh and whether or not the modifier head word w is known or unknownthe probability of a complete tree is the product of the probabilities of generating each element in the treeif we generalize the tree components and treat them all as simply elements e and treat all the conditioning factors as the history h we can writemaximum likelihood estimates for the model probabilities can be obtained by observing frequencies in the training corpushowever because these estimates are too sparse to be relied upon we use 
interpolated estimates consisting of mixtures of successively lowerorder estimates for modifier constituents the mixture components are for partofspeech tags the mixture components are finally for word features the mixture components aregiven a sentence to be analyzed the search program must find the most likely semantic and syntactic interpretationmore precisely it must find the most likely augmented parse treealthough mathematically the model predicts tree elements in a topdown fashion we search the space bottomup using a chartbased searchthe search is kept tractable through a combination of ckystyle dynamic programming and pruning of low probability elementswhenever two or more constituents are equivalent relative to all possible later parsing decisions we apply dynamic programming keeping only the most likely constituent in the charttwo constituents are considered equivalent if threshold of the highest scoring constituent are maintained all others are prunedfor purposes of pruning and only for purposes of pruning the prior probability of each constituent category is multiplied by the generative probability of that constituent we can think of this prior probability as an estimate of the probability of generating a subtree with the constituent category starting at the topmost nodethus the scores used in pruning can be considered as the product of 1the probability of generating a constituent of the specified category starting at the topmost node2the probability of generating the structure beneath that constituent having already generated a constituent of that categorygiven a new sentence the outcome of this search process is a tree structure that encodes both the syntactic and semantic structure of the sentencethe semantics that is the entities and relations can then be directly extracted from these sentential treesour system for muc7 consisted of the sentential model described in this paper coupled with a simple probability model for crosssentence mergingthe evaluation results are summarized in table 1in both template entity and template relation our system finished in second place among all entrantsnearly all of the work was done by the sentential model disabling the crosssentence model entirely reduced our overall fscore by only 2 pointsgiven multiple constituents that cover identical spans in the chart only those constituents with probabilities within a while our focus throughout the project was on te and tr we became curious about how well the model did at partofspeech tagging syntactic parsing and at name findingwe evaluated partofspeech tagging and parsing accuracy on the wall street journal using a now standard procedure and evaluated name finding accuracy on the muc7 named entity testthe results are summarized in table 2while performance did not quite match the best previously reported results for any of these three tasks we were pleased to observe that the scores were at or near stateoftheart levels for all caseswe have demonstrated at least for one problem that a lexicalized probabilistic contextfree parser with head rules can be used effectively for information extractiona single model proved capable of performing all necessary sentential processing both syntactic and semanticwe were able to use the penn treebank to estimate the syntactic parameters no additional syntactic training was requiredthe semantic training corpus was produced by students according to a simple set of guidelinesthis simple semantic annotation was the only source of task knowledge used to configure the 
modelthe work reported here was supported in part by the defense advanced research projects agencytechnical agents for part of this work were fort huachucha and afrl under contract numbers dabt6394c0062 f3060297c0096 and 4132bbn001the views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies either expressed or implied of the defense advanced research projects agency or the united states governmentwe thank michael collins of the university of pennsylvania for his valuable suggestions
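the interpolated estimates described in this entry can be sketched as follows; the witten-bell style confidence weight n/(n+5) is an assumed choice standing in for whatever weighting the system actually uses, and the context tuples are supplied from most specific to least specific, for example from the full context (cm, tm, th, wh) of the modifier head word distribution down to progressively smaller ones

from collections import Counter

class BackoffEstimator:
    def __init__(self, orders):
        self.joint = [Counter() for _ in range(orders)]   # (context, outcome) counts per order
        self.ctx = [Counter() for _ in range(orders)]     # context counts per order

    def add(self, outcome, contexts):
        # contexts: tuple of conditioning contexts, most specific first
        for i, c in enumerate(contexts):
            self.joint[i][(c, outcome)] += 1
            self.ctx[i][c] += 1

    def prob(self, outcome, contexts, floor=1e-6):
        p = floor                                         # flat floor below the lowest order
        for i in reversed(range(len(contexts))):          # least to most specific
            c = contexts[i]
            n = self.ctx[i][c]
            lam = n / (n + 5.0)                           # assumed confidence weight
            ml = self.joint[i][(c, outcome)] / n if n else 0.0
            p = lam * ml + (1.0 - lam) * p                # mix this order with the backoff
        return p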
A00-2030
a novel use of statistical parsing to extract information from textsince 1995 a few statistical parsing algorithms have demonstrated a breakthrough in parsing accuracy as measured against the upenn treebank as a gold standardin this paper we report adapting a lexicalized probabilistic contextfree parser to information extraction and evaluate this new technique on muc7 template elements and template relationsour rulebased methods employ a number of linguistic rules to capture relation patternswe take the view that relation extraction is just a form of probabilistic parsing where parse trees are augmented to identify all relationswe integrate various tasks such as partofspeech tagging named entity recognition template element extraction and relation extraction in a single modelwe combine entity recognition parsing and relation extraction into a jointlytrained single statistical parsing model that achieves improved performance on all the subtasks part of the contribution of the current work is to suggest that joint decoding can be effective even when joint training is not possible because jointlylabeled data is unavailable
assigning function tags to parsed text and vincent j della pietra 1996 a maximum entropy approach to natural lanprocessing linguistics parsing sentences using statistical information gathered from a treebank was first examined a decade ago in and is by now a fairly wellstudied problem but to date the end product of the parsing process has for the most part been a bracketing with simple constituent labels like np vp or sbarthe penn treebank contains a great deal of additional syntactic and semantic information from which to gather statistics reproducing more of this information automatically is a goal which has so far been mostly ignoredthis paper details a process by which some of this informationthe function tags may be recovered automaticallyin the penn treebank there are 20 tags that can be appended to constituent labels in order to indicate additional information about the syntactic or semantic role of the constituentwe have divided them into four categories based on those in the bracketing guidelines a constituent can be tagged with multiple tags but never with two tags from the same category1 in actuality the case where a constituent has tags from all four categories never happens but constituents with three tags do occur at a high level we can simply say that having the function tag information for a given text is useful just because any further information would helpbut specifically there are distinct advantages for each of the various categoriesgrammatical tags are useful for any application trying to follow the thread of the textthey find the who does what of each clause which can be useful to gain information about the situation or to learn more about the behaviour of the words in the sentencethe formfunction tags help to find those constituents behaving in ways not conforming to their labelled type as well as further clarifying the behaviour of adverbial phrasesinformation retrieval applications specialising in describing events as with a number of the muc applications could greatly benefit from some of these in determining the wherewhenwhy of thingsnoting a topicalised constituent could also prove useful to these applications and it might also help in discourse analysis or pronoun resolutionfinally the miscellaneous tags are convenient at various times particularly the clr closely related tag which among other things marks phrasal verbs and prepositional ditransitivesto our knowledge there has been no attempt so far to recover the function tags in parsing treebank textin fact we know of only one project that used them at all defines certain constituents as complements based on a combination of label and function tag informationthis boolean condition is then used to train an improved parserwe have found it useful to define our statistical model in terms of featuresa feature in this context is a booleanvalued function generally over parse tree nodes and either node labels or lexical itemsfeatures can be fairly simple and easily read off the tree or slightly more complex this is concordant with the usage in the maximum entropy literature when using a number of known features to guess an unknown one the usual procedure is to calculate the value of each feature and then essentially look up the empirically most probable value for the feature to be guessed based on those known valuesdue to sparse data some of the features later in the list may need to be ignored thus the probability of an unknown feature value would be estimated as where p refers to an empirically observed 
probabilityof course if features 1 through i only cooccur a few times in the training this value may not be reliable so the empirical probability is usually smoothed the values for ai can then be determined according to the number of occurrences of features 1 through i together in the trainingone way to think about equation 1 is as follows we begin with the prior probability of f if we have data indicating p we multiply in that likelihood while dividing out the original priorif we have data for p we multiply that in while dividing out the p termthis is repeated for each piece of feature data we have at each point we are adjusting the probability we already have estimatedif knowledge about feature fi makes f more likely than with just fi_i the term where fi is added will be greater than one and the running probability will be adjusted upwardthis gives us the new probability shown in equation 3 which is exactly equivalent to equation 1 since everything except the last numerator cancels out of the equationthe value of j is chosen such that features ii fi are sufficiently represented in the training data sometimes all n features are used but often that would cause sparse data problemssmoothing is performed on this equation exactly as before each term is interpolated between the empirical value and the prior estimated probability according to a value of ai that estimates confidencebut aside from perhaps providing a new way to think about the problem equation 3 is not particularly useful as it isit is exactly the same as what we had beforeits real usefulness comes as shown in when we move from the notion of a feature chain to a feature treethese feature chains do not capture everything we would like them toif there are two independent features that are each relatively sparse but occasionally carry a lot of information then putting one before the other in a chain will effectively block the second from having any effect since its information is conditioned on the first one whose sparseness will completely dilute any gainwhat we would really like is to be able to have a feature tree whereby we can condition those two sparse features independently on one common predecessor featureas we said before equation 3 represents for each feature the probability of f based on fi and all its predecessors divided by the probability of f based only on the predecessorsin the chain case this means that the denominator is conditioned on every feature from 1 to i 1 but if we use a feature tree it is conditioned only on those features along the path to the root of the treea notable issue with feature trees as opposed to feature chains is that the terms do not all cancel outevery leaf on the tree will be represented in the numerator and every fork in the tree will be represented at least once in the denominatorfor example in figure 3 we have a small feature tree that has one target feature and four conditioning featuresfeatures b and d are independent of each other but each depends on a c depends directly only on bthe unsmoothed version of the corresponding equation would be which after cancelling of terms and smoothing results in note that strictly speaking the result is not a probability distributionit could be made into one with an appropriate normalisationthe socalled partition function in the maximumentropy literaturehowever if the independence assumptions made in the derivation of equation 4 are good ones the partition function will be close to 10we assume this to be the case for our feature treesnow we return 
the discussion to function taggingthere are a number of features that seem tar et feature to condition strongly for one function tag or another we have assembled them into the feature tree shown in figure 42 this figure should be relatively selfexplanatory except for the notion of an alternate head currently an alternate head is only defined for prepositional phrases and is the head of the object of the prepositional phrasethis data is very important in distinguishing for example by john from by next year and by selling it in the training phase of our experiment we gathered statistics on the occurrence of function tags in sections 221 of the penn treebankspecifically for every constituent in the treebank we recorded the presence of its function tags along with its conditioning informationfrom this we calculated the empirical probabilities of each function tag referenced in section 2 of this papervalues of a were determined using them on the development corpus to test then we simply took the output of our parser on the test corpus and applied a postprocessing step to add function tagsfor each constituent in the tree we calculated the likelihood of each function tag according to the feature tree in figure 4 and for each category we assigned the most likely function tag 2the reader will note that the features listed in the tree are in fact not booleanvalued each node in the given tree can be assumed to stand for a chain of boolean features one per potential value at that node exactly one of which will be trueto evaluate our results we first need to determine what is correctthe definition we chose is to call a constituent correct if there exists in the correct parse a constituent with the same start and end points label and function tag since we treated each of the four function tag categories as a separate feature for the purpose of tagging evaluation was also done on a percategory basisthe denominator of the accuracy measure should be the maximum possible number we could get correctin this case that means excluding those constituents that were already wrong in the parser output the parser we used attains 89 labelled precisionrecall so roughly 11 of the constituents are excluded from the function tag accuracy evaluationanother consideration is whether to count nontagged constituents in our evaluationon the one hand we could count as correct any constituent with the correct tag as well as any correctly nontagged constituent and use as our denominator the number of all correctlylabelled constituentson the other hand we could just count constituents with the correct tag and use as our denominators the total number of tagged correctlylabelled constituentswe believe the latter number to be a better performance metric as it is not overwhelmed by the large number of untagged constituentsboth are reported belowthere are it seems two reasonable baselines for this and future workfirst of all most constituents in the corpus have no tags at all so obviously one baseline is to simply guess no tag for any constituenteven for the most common type of function tag this method performs with 87 accuracythus the withnull accuracy of a function tagger needs to be very high to be significant herethe second baseline might be useful in examining the nonull accuracy values always guess the most common tag in a categorythis means that every constituent gets labelled with sbjtmptpcclr this combination of tags is in fact entirely illegal by the treebank guidelines but performs adequately for a baselinethe precision is 
of course abysmal for the same reasons the first baseline did so well but the recall is substantialthe performances of the two baseline measures are given in table 1in table 2 we give the results for each categorythe first column is the withnull accuracy and the precision and recall values given are the nonull accuracy as noted in section 4grammatical tagging performs the best of the four categorieseven using the more difficult nonull accuracy measure it has a 96 accuracythis seems to reflect the fact that grammatical relations can often be guessed based on constituent labels parts of speech and highfrequency lexical items largely avoiding sparsedata problemstopicalisation can similarly be guessed largely on highfrequency information and performed almost as well on the other hand we have the formfunction tags and the miscellaneous tagsthese are characterised by much more semantic information and the relationships between lexical items are very important making sparse data a real problemall the same it should be noted that the performance is still far better than the baselinesthe feature tree given in figure 4 is by no means the only feature tree we could have usedindeed we tried a number of different trees on the development corpus this tree gave among the best overall results with no category performing too badlyhowever there is no reason to use only one feature tree for all four categories the best results can be got by using a separate tree for each oneone can thus achieve slight gains in each categorythe overall performance given in table 3 appears promisingwith a tagging accuracy of about 87 various information retrieval and knowledge base applications can reasonably expect to extract useful informationthe performance given in the first row is the functiontagger performance on the correctlylabelled constituents output by our parserfor comparison we also give its performance when run directly on the original treebank parse since the parser accuracy is about 89 working directly with the treebank means our statistics are over roughly 12 more constituentsthis second version does slightly betterthe main reason that tagging does worse on the parsed version is that although the constituent itself may be correctly bracketed and labelled its exterior conditioning information can still be incorrectan example of this that actually occurred in the development corpus is the that clause in the phrase can swallow the premise that the rewards for such ineptitude are sixfigure salaries correctly diagrammed in figure 5the function tagger gave this sbar an adv tag indicating an unspecified adverbial functionthis seems extremely odd given that its conditioning information clearly show that it is part of an np and hence probably modifies the preceding nnindeed the statistics give the probability of an adv tag in this conditioning environment as vanishingly smallhowever this was not the conditioning information that the tagger receivedthe parser had instead decided on the parse in figure 6as such the tagger decision makes much more sense since an sbar under two vps whose heads are vb and md is rather likely to be an advthe sbar itself is correctly bracketed and labelled so it still gets counted in the statisticshappily this sort of case seems to be relatively rareanother thing that lowers the overall performance somewhat is the existence of error and inconsistency in the treebank taggingsome tags seem to have been relatively easy for the human treebank taggers and have few errorsother tags have explicit 
caveats that however welljustified proved difficult to remember for the taggersfor instance there are 37 instances of a pp being tagged with lgs in spite of the guidelines specifically saying lgs attaches to the np object of by and not to the pp node itself each mistagging in the test corpus can cause up to two spurious errors one in precision and one in recallstill another source of difficulty comes when the guidelines are vague or silent on a specific issueto return to logical subjects it is clear that the loss is a logical subject in the company was hurt by the loss but what about in the company was unperturbed by the lossin addition a number of the function tags are authorised for metaphorical use but what exactly constitutes such a use is somewhat inconsistently markedit is as yet unclear just to what degree these tagging errors in the corpus are affecting our resultsthis work presents a method for assigning function tags to text that has been parsed to the simple label levelbecause of the lack of prior research on this task we are unable to compare our results to those of other researchers but the results do seem promisinghowever a great deal of future work immediately suggests itself although we tested twenty or so feature trees besides the one given in figure 4 the space of possible trees is still rather unexploreda more systematic investigation into the advantages of different feature trees would be useful we could add to the feature tree the values of other categories of function tag or the function tags of various treerelatives one of the weaknesses of the lexical features is sparse data whereas the part of speech is too coarse to distinguish by john from by monday the lexical information may be too sparsethis could be assisted by clustering the lexical items into useful categories and adding those categories as an additional feature type there is no reason to think that this work could not be integrated directly into the parsing process particularly if one parser is already geared partially or entirely towards featurebased statistics the function tag information could prove quite useful within the parse itself to rank several parses to find the most plausible
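the feature-tree estimate of equations 1 through 4 in this entry can be sketched as below; counts is a hypothetical map from a tuple of feature values (the path from the root of the conditioning tree) to a table of tag counts, the n/(n+10) weight stands in for the lambdas tuned on the development corpus, and the root call passes the prior probability of the tag as its starting estimate

def tree_estimate(tag, node, prior, counts, path=()):
    # node: (feature_value, list_of_child_nodes) in the conditioning feature tree
    feat, children = node
    path = path + (feat,)
    dist = counts.get(path, {})
    n = sum(dist.values())
    empirical = dist.get(tag, 0) / n if n else prior
    lam = n / (n + 10.0)                         # assumed confidence in this context
    p = lam * empirical + (1.0 - lam) * prior    # smoothed P(tag | features on path)
    if not children:
        return p                                 # leaves contribute their full-path estimate
    product = 1.0
    for child in children:
        # children condition independently on this node, so each child is
        # smoothed toward (and effectively divided by) the estimate at this fork
        product *= tree_estimate(tag, child, p, counts, path)
    return product / (p ** (len(children) - 1))  # divide the fork out once per extra child

with a chain rather than a tree every node has a single child, so nothing is divided out and the expression collapses to the recursively smoothed chain estimate of equation 1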
A00-2031
assigning function tags to parsed textit is generally recognized that the common nonterminal labels for syntactic constituents do not exhaust the syntactic and semantic information one would like about parts of a syntactic treefor example the penn treebank gives each constituent zero or more function tags indicating semantic roles and other related information not easily encapsulated in the simple constituent labelswe present a statistical algorithm for assigning these function tags that on text already parsed to a simplelabel level achieves an fmeasure of 87 which rises to 99 when considering no tag as a valid choiceas an alternative to hard coded heuristics we propose to recover the penn functional tags automatically
using semantic preferences to identify verbal participation in role switching alternations we propose a method for identifying diathesis alternations where a particular argument type is seen in slots which have different grammatical roles in the alternating forms the method uses selectional preferences acquired as probability distributions over wordnet preferences for the target slots are compared using a measure of distributional similarity the method is evaluated on the causative and conative alternations but is generally applicable and does not require a priori knowledge specific to the alternation diathesis alternations are alternate ways in which the arguments of a verb are expressed syntacticallythe syntactic changes are sometimes accompanied by slight changes in the meaning of the verban example of the causative alternation is given in belowin this alternation the object of the transitive variant can also appear as the subject of the intransitive variantin the conative alternation the transitive form alternates with a prepositional phrase construction involving either at or onan example of the conative alternation is given in we refer to alternations where a particular semantic role appears in different grammatical roles in alternate realisations as quotrole switching alternationsquot it is these alternations that our method applies torecently there has been interest in corpusbased methods to identify alternations and associated verb classifications these have either relied on a priori knowledge specified for the alternations in advance or are not suitable for a wide range of alternationsthe fully automatic method outlined here is applied to the causative and conative alternations but is applicable to other rsasdiathesis alternations have been proposed for a number of nlp tasksseveral researchers have suggested using them for improving lexical acquisitionkorhonen uses them in subcategorization frame acquisition to improve the performance of a statistical filter which determines whether a scf observed for a particular verb is genuine or notthey have also been suggested for the recovery of predicate argument structure necessary for scf acquisition and ribas showed that selectional preferences acquired using alternations performed better on a word sense disambiguation task compared to preferences acquired without alternationshe used alternations to indicate where the argument head data from different slots can be combined since it occupies the same semantic relationship with the predicatedifferent diathesis alternations give different emphasis and nuances of meaning to the same basic contentthese subtle changes of meaning are important in natural language generation alternations provide a means of reducing redundancy in the lexicon since the alternating scfs need not be enumerated for each individual verb if a marker is used to specify which verbs the alternation applies toalternations also provide a means of generalizing patterns of behaviour over groups of verbs typically the group members are semantically relatedlevin provides a classification of over 3000 verbs according to their participation in alternations involving np and pp constituentslevin classification is not intended to be exhaustiveautomatic identification of alternations would be a useful tool for extending the classification with new participantslevin taxonomy might also be used alongside observed behaviour to predict unseen behaviourlevin classification has been extended by other nlp researchers dang et al modify it 
by adding new classes which remove the overlap between classes from the original schemedorr and jones extend the classification by using grammatical information in ldoce alongside semantic information in wordnetwhat is missing is a way of classifying verbs when the relevant information is not available in a manmade resourceusing corpora bypasses reliance on the availability and adequacy of mrdsadditionally the frequency information in corpora is helpful for estimating alternation productivity estimations of productivity have been suggested for controlling the application of alternations we propose a method to acquire knowledge of alternation participation directly from corpora with frequency information available as a byproductwe use both syntactic and semantic information for identifying participants in rsasfirstly syntactic processing is used to find candidates taking the alternating scfssecondly selectional preference models are acquired for the argument heads associated with a specific slot in a specific scf of a verbwe use the scf acquisition system of briscoe and carroll with a probabilistic lr parser for syntactic processingthe corpus data is pos tagged and lemmatised before the lr parser is appliedsubcategorization patterns are extracted from the parses these include both the syntactic categories and the argument heads of the constituentsthese subcategorization patterns are then classified according to a set of 161 scf classesthe scf entries for each verb are then subjected to a statistical filter which removes scfs that have occurred with a frequency less than would be expected by chancethe resulting scf lexicon lists each verb with the scfs it takeseach scf entry includes a frequency count and lists the argument heads at all slotsselectional preferences are automatically acquired for the slots involved in the role switchingwe refer to these as the target slotsfor the causative alternation the slots are the direct object slot of the transitive scf and the subject slot of the intransitivefor the conative the slots are the direct object of the transitive and the pp of the up v pp scfselectional preferences are acquired using the method devised by li and abe the preferences for a slot are represented as a tree cut model this is a set of disjoint classes that partition the leaves of the wordnet noun hypernym hierarchya conditional probability is attached to each of the classes in the setto ensure the tcm covers all the word senses in wordnet we modify li and abe original scheme by creating hyponym leaf classes below all wordnet hypernym classeseach leaf holds the word senses previously held at the internal classthe nominal argument heads from a target slot are collected and used to populate the wordnet hierarchy with frequency informationthe head lemmas are matched to the classes which contain them as synonymswhere a lemma appears as a synonym in more than one class its frequency count is divided between all classes for which it has direct membershipthe frequency counts from hyponym classes are added to the count for each hypernym classa root node created above all the wordnet roots contains the total frequency count for all the argument head lemmas found within wordnetthe minimum description length principle is used to find the best tcm by considering the cost of describing both the model and the argument head data encoded in the modelthe cost for a tcm is calculated according to equation 1the number of parameters of the model is given by k this is the number of classes in the tcm 
minus ones is the sample size of the argument head datathe cost of describing each argument head is calculated using the log of the probability estimate for the classes on the tcm that n belongs to a small portion of the tcm for the object slot of start in the transitive frame is displayed in figure 1wordnet classes are displayed in boxes with a label which best reflects the sense of the classthe probability estimates are shown for the classes along the tcmexamples of the argument head data are displayed below the wordnet classes with dotted lines indicating membership at a hyponym class beneath these classeswe assume that verbs which participate will show a higher degree of similarity between the preferences at the target slots compared with nonparticipating verbsto compare the preferences we compare the probability distributions across wordnet using a measure of distributional similaritysince the probability distributions may be at different levels of wordnet we map the tcms at the target slots to a common tree cut a quotbase cutquotwe experiment with two different types of base cutthe first is simply a base cut at the eleven root classes of wordnetwe refer to this as the quotroot base cutquot the second is termed the quotunion base cutquot this is obtained by taking all classes from the union of the two tcms which are not subsumed by another class in this unionduplicates are removedprobabilities are assigned to the classes of a base cut using the estimates on the original tcmthe probability estimate for a hypernym class is obtained by combining the probability estimates for all its hyponyms on the original cutfigure 2 exemplifies this process for two tcms in an imaginary hierarchythe ubc is at the classes b c and d to quantify the similarity between the probability distributions for the target slots we use the askew divergence proposed by lee 1 this measure defined in equation 2 is a smoothed version of the kulbackliebler divergence p1 and p2 are the two probability distributions which are being comparedthe a constant is a value between 0 and we also experimented with euclidian distance the li norm and cosine measuresthe differences in performance of these measures were not statistically significant1 which smooths p1 with p2 so that asri is always definedwe use the same value for a as leeif a is set to 1 then this measure is equivalent to the kulbackliebler divergencewe experiment with a scf lexicon produced from 193 million words of parsed text from the bnc we used the causative and conative alternations since these have enough candidates in our lexicon for experimentationevaluation is performed on verbs already filtered by the syntactic processingthe scf acquisition system has been evaluated elsewhere we selected candidate verbs which occurred with 10 or more nominal argument heads at the target slotsthe argument heads were restricted to those which can be classified in the wordnet hypernym hierarchycandidates were selected by hand so as to obtain an even split between candidates which did participate in the alternation and those which did not four human judges were used to determine the quotgold standardquotthe judges were asked to specify a yes or no decision on participation for each verbthey were also permitted a do not know verdictthe kappa statistic was calculated to ensure that there was significant agreement between judges for the initial set of candidatesfrom these verbs were selected which had 75 or more agreement ie three or more judges giving the same yes or no decision 
for the verbfor the causative alternation we were left with 46 positives and 53 negativesfor the conative alternation we had 6 of eachin both cases we used the mann whitney you test to see if there was a significant relationship between the similarity measure and participationwe then used a threshold on the similarity scores as the decision point for participation to determine a level of accuracywe experimented with both the mean and median of the scores as a thresholdseven of the negative causative candidates were randomly chosen and removed to ensure an even split between positive and negative candidates for determining accuracy using the mean and median as thresholdsthe following subsection describes the results of the experiments using the method described in section 3 abovesubsection 42 describes an experiment on the same data to determine participation using a similarity measure based on the intersection of the lemmas at the target slotsthe results for the causative alternation are displayed in table 1 for both the rbc and the ubcthe relationship between participation and asp is highly significant in both cases with values of p well below 001accuracy for the mean and median thresholds are displayed in the fourth and fifth columnsboth thresholds outperform the random baseline of 50the results for the ubc are slightly improved compared to those for the rbc however the improvement is not significantthe numbers of false negative and false positive errors for the mean and median thresholds are displayed in table 2 along with the threshold and accuracythe outcomes for each individual verb for the experiment using the rbc and the mean threshold are as follows add admit answer believe borrow cost declare demand expect feel imagine know notice pay perform practise proclaim read remember sing survive understand win write accelerate bang bend boil break burn change close cook cool crack decrease drop dry end expand fly improve increase match melt open ring rip rock roll shatter shut slam smash snap spill split spread start stop stretch swing tilt turn wake ask attack catch choose climb drink eat help kick knit miss outline pack paint plan prescribe pull remain steal suck warn wash the results for the ubc experiment are very similarif the median is used the number of fps and fns are evenly balancedthis is because the median threshold is by definition taken midway between the test items arranged in order of their similarity scoresthere are an even number of items on either side of the decision point and an even number of positive and negative candidates in our test samplethus the errors on either side of the decision point are equal in numberfor both base cuts there are a larger number of false positives than false negatives when the mean is usedthe mean produces a higher accuracy than the median but gives an increase in false positivesmany false positives arise where the preferences at both target slots are near neighbours in wordnetfor example this occurred for eat and drinkthere verbs have a high probability mass under the entity class in both target slots since both people and types of food occur under this classin cases like these the probability distributions at the rbc and frequently the ubc are not sufficiently distinctivethe polysemy of the verbs may provide another explanation for the large quantity of false positivesthe scfs and data of different senses should not ideally be combined at least not for coarse grained sense distinctionswe tested the false positive and true negative 
candidates to see if there was a relationship between the polysemy of a verb and its misclassificationthe number of senses was used to indicate the polysemy of a verbthe mann whitney you test was performed on the verbs found to be true negative and false positive using the rbca significant relationship was not found between participation and misclassificationboth groups had an average of 5 senses per verbthis is not to say that distinguishing verb senses would not improve performance provided that there was sufficient datahowever verb polysemy does not appear to be a major source of error from our preliminary analysisin many cases such as read which was classified both by the judges and the system as a negative candidate the predominant sense of the verb provides the majority of the dataalternate senses for example the book reads well often do not contribute enough data so as to give rise to a large proportion of errorsfinding an appropriate inventory of senses would be difficult since we would not wish to separate related senses which occur as alternate variants of one anotherthe inventory would therefore require knowledge of the phenomena that we are endeavouring to acquire automaticallyto show that our method will work for other rsas we use the conativeour sample size is rather small since we are limited by the number of positive candidates in the corpus having sufficient frequency for both scfsthe sparse data problem is acute when we look at alternations with specific prepositionsa sample of 12 verbs remained after the selection process outlined abovefor this small sample we obtained a significant result with a mean accuracy of 67 and a median accuracy of 83on this occasion the median performed better than the meanmore data is required to see if this difference is significantthis experiment was conducted using the same data as that used in the previous subsectionin this experiment we used a similarity score on the argument heads directly instead of generalizing the argument heads to wordnet classesthe venn diagram in figure 3 shows a subset of the lemmas at the transitive and intransitive scfs for the verb breakthe lemma based similarity measure is termed lemma overlap and is given in equation 3 where a and b represent the target slotslo is the size of the intersection of the multisets of argument heads at the target slots divided by the size of the smaller of the two multisetsthe intersection of two multisets includes duplicate items only as many times as the item is in both setsfor example if one slot contained the argument heads and the other slot contained person person child chair collection then the intersection would be person person child and lo would be t this measure ranges between zero and i using the mann whitney you test on the lo scores we obtained a z score of 200this is significant to the 95 level a lower level than that for the classbased experimentsthe results using the mean and median of the lo scores are shown in table 3performance is lower than that for the classbased experimentsthe outcome for the individual verbs using the mean as a threshold wasadd admit answer borrow choose climb cost declare demand drink eat feel imagine notice outline pack paint perform plan practise prescribe proclaim read remain sing steal suck survive understand wash win write accelerate bang break cook crack decrease drop expand flood land march repeat rip rock shatter interestingly the errors for the lo measure tend to be false negatives rather than false positivesthe lo measure is much 
more conservative than the approach using the tcmsin this case the median threshold produces better resultsfor the conative alternation the lemma based method does not show a significant relationship between participation and the lo scoresmoreover there is no difference between the sums of the ranks of the two groups for the mann whitney you testthe mean produces an accuracy of 58 whilst the median produces an accuracy of 50there has been some recent interest in observing alternations in corpora and predicting related verb classifications earlier work by resnik demonstrated a link between selectional preference strength and participation in alternations where the direct object is omittedresnik used syntactic information from the bracketing within the penn treebank corpusresearch into the identification of other diathesis alternations has been advanced by the availability of automatic syntactic processingmost work using corpus evidence for verb classification has relied on a priori knowledge in the form of linguistic cues specific to the phenomena being observed our approach whilst being applicable only to rsas does not require human input specific to the alternation at handlapata identifies participation in the dative and benefactive alternationslapata strategy is to identify participants using a shallow parser and various linguistic and semantic cues which are specified manually for these two alternationspp attachments are resolved using hindle and rooth lexical association scorecompound nouns which could be mistaken for the double object construction were filtered using the loglikelihood ratio testthe semantic cues were obtained by manual analysisthe relative frequency of a scf for a verb compared to the total frequency of the verb was used for filtering out erroneous scfslapata does not report recall and precision figures against a gold standardthe emphasis is on the phenomena actually evident in the corpus datamany of the verbs listed in levin as taking the alternation were not observed with this alternation in the corpus datathis amounted to 44 of the verbs for the benefactive and 52 for the dativethese figures only take into account the verbs for which at least one of the scfs were observed54 of the verbs listed for the dative and benefactive by levin were not acquired with either of the target scfsconversely many verbs not listed in levin were identified as taking the benefactive or dative alternation using lapata criteriamanual analysis of these verbs revealed 18 false positives out of 52 candidatesstevenson and merlo use syntactic and lexical cues for classifying 60 verbs in three verb classes unergative unaccusative and verbs with an optional direct objectthese three classes were chosen because a few well defined features specified a priori can distinguish the three groupstwenty verbs from levin classification were used in each classthey were selected by virtue of having sufficient frequency in a combined corpus of 65 million wordsthe verbs were also chosen for having one predominant intended sense in the corpusstevenson and merlo used four linguistically motivated features to distinguish these groupscounts from the corpus data for each of the four features were normalised to give a score on a scale of 1 to 100one feature was the causative noncausative distinctionfor this feature a measure similar to our lo measure was usedthe four features were identified in the corpus using automatic pos tagging and parsing of the datathe data for half of the verbs in each class was subject to 
manual scrutiny after initial automatic processingthe rest of the data was produced fully automaticallythe verbs were classified automatically using the four featuresthe accuracy of automatic classification was 52 using all four features compared to a baseline of 33the best result was obtained using a combination of three featuresthis gave an accuracy of 66mccarthy and korhonen proposed a method for identifying asas using mdlthis method relied on an estimation of the cost of using tcms to encode the argument head data at a target slotthe sum of the costs for the two target slots was compared to the cost of a tcm for encoding the union of the argument head data over the two slotsresults are reported for the causative alternation with 15 verbsthis method depends on there being similar quantities of data at the alternating slots otherwise the data at the more frequent slot overwhelms the data at the less frequent slothowever many alternations involve scfs with substantially different relative frequencies especially when one scf is specific to a particular prepositionwe carried out some experiments using the mdl method and our tcmsfor the causative we used a sample of 110 verbs and obtained 63 accuracyfor the conative a sample of 16 verbs was used and this time accuracy was only 56notably only one negative decision was made because of the disparate frame frequencies which reduces the cost of combining the argument head datawe have discovered a significant relationship between the similarity of selectional preferences at the target slots and participation in the causative and conative alternationsa threshold such as the mean or median can be used to obtain a level of accuracy well above the baselinea lemma based similarity score does not always indicate a significant relationship and generally produces a lower accuracythere are patterns of diathesis behaviour among verb groups accuracy may be improved by considering several alternations collectively rather than in isolationcomplementary techniques to identify alternations for example might be combined with oursalthough we have reported results on only two rsas our method is applicable to other such alternationsfurthermore such application requires no human endeavour apart from that required for evaluationhowever a considerably larger corpus would be required to overcome the sparse data problem for other rsa alternationssome funding for this work was provided by uk epsrc project grl53175 pset practical simplification of english textwe also acknowledge gerald gazdar for his helpful comments on this paper
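The decision procedure described above reduces to comparing each verb's slot-similarity score against a single global threshold (the mean or the median of the scores) and measuring accuracy against a Levin-derived gold standard. The sketch below illustrates that procedure; the per-verb scores and participation labels are invented for illustration and are not the paper's data.

```python
from statistics import median

# Hypothetical per-verb similarity scores between the selectional preferences
# acquired at the two target slots, paired with gold-standard participation
# labels taken from a resource such as Levin (1993).  All numbers are invented.
scores = {
    "break": (0.81, True),
    "melt":  (0.74, True),
    "cut":   (0.22, False),
    "touch": (0.18, False),
    "bend":  (0.69, True),
    "hit":   (0.31, False),
}

def classify_by_threshold(scores, threshold):
    """Predict participation when the similarity of the two slots'
    preferences is at least the threshold."""
    return {verb: sim >= threshold for verb, (sim, _) in scores.items()}

def accuracy(predictions, scores):
    gold = {verb: label for verb, (_, label) in scores.items()}
    return sum(predictions[v] == gold[v] for v in scores) / len(scores)

sims = [sim for sim, _ in scores.values()]
for name, thr in (("mean", sum(sims) / len(sims)), ("median", median(sims))):
    preds = classify_by_threshold(scores, thr)
    print(f"{name} threshold {thr:.2f}: accuracy {accuracy(preds, scores):.2f}")
```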
A00-2034
using semantic preferences to identify verbal participation in role switching alternations. we propose a method for identifying diathesis alternations where a particular argument type is seen in slots which have different grammatical roles in the alternating forms. the method uses selectional preferences acquired as probability distributions over wordnet. preferences for the target slots are compared using a measure of distributional similarity. the method is evaluated on the causative and conative alternations but is generally applicable and does not require a priori knowledge specific to the alternation. we use skew divergence to compare the sense profile of one argument of a verb to another argument of the same verb to determine if the verb participates in an argument alternation involving the two positions
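As a concrete illustration of the comparison step mentioned in this summary, here is a minimal sketch of skew divergence between two preference distributions represented as dictionaries over WordNet classes. The class names, the probabilities and the mixing constant alpha = 0.99 are assumptions made for the example, not values taken from the paper.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) for distributions stored as dicts."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

def skew_divergence(p, q, alpha=0.99):
    """Skew divergence: KL of p against a mixture of q and p, which stays
    finite even when q assigns zero probability to some of p's classes."""
    mix = {x: alpha * q.get(x, 0.0) + (1 - alpha) * p.get(x, 0.0)
           for x in set(p) | set(q)}
    return kl(p, mix)

# Invented preference distributions over a handful of WordNet classes for two
# argument positions of a verb; real distributions would be acquired from
# parsed corpus data.
object_slot  = {"food": 0.50, "artifact": 0.30, "person": 0.20}
subject_slot = {"food": 0.45, "artifact": 0.35, "person": 0.20}

print(skew_divergence(object_slot, subject_slot))
```

Lower values indicate more similar preferences at the two slots, so a decision rule analogous to the one sketched earlier would predict participation when the divergence falls below a threshold.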
a stochastic parts program and noun phrase parser for unrestricted text
[appendix: a tagged and noun-phrase-bracketed sample of roughly 400 words of newswire text concerning the investigation of edwin meese and fairchild industries. five brackets are missing from the sample, and words with a second np tag were identified as proper nouns in a prepass.]
it is well-known that part of speech depends on context. the word "table", for example, can be a verb in some contexts and a noun in others. a program has been written which tags each word in an input sentence with the most likely part of speech. the program produces the following output for the two "table" sentences just mentioned [tagged output omitted; notation: at = article, nn = noun, bez = present 3rd sg form of "to be", jj = adjective; the notation is borrowed from francis and kucera, p. 68]. part of speech tagging is an important practical problem with potential applications in many areas including speech synthesis, speech recognition, spelling correction, proofreading, query answering, machine translation and searching large text data bases. the author is particularly interested in speech synthesis applications, where it is clear that pronunciation sometimes depends on part of speech. consider the following three examples where pronunciation depends on part of speech. first, there are words like "wind" where the noun has a different vowel than the verb: the noun "wind" has a short vowel as in "the wind is strong", whereas the verb "wind" has a long vowel as in "do not forget to wind your watch". secondly, the pronoun "that" is stressed as in "did you see that?", unlike the complementizer "that", as in "it is a shame that he is leaving". thirdly, note the difference between "oily fluid" and "transmission fluid": as a general rule, an adjective-noun sequence such as "oily fluid" is typically stressed on the right, whereas a noun-noun sequence such as "transmission fluid" is typically stressed on the left. these are but three of the many constructions which would sound more natural if the synthesizer had access to accurate part of speech information. perhaps the most important application of tagging programs is as a tool for future research. a number of large projects such as cobuild have recently been collecting large corpora in order to better describe how language is actually used in practice: "for the first time, a dictionary has been compiled by the thorough examination of a representative group of english texts, spoken and written, running to many millions of words. this means that in addition to all the tools of the conventional dictionary makers, the dictionary is based on hard, measurable evidence" (cobuild, p. xv). it is likely that there will be more and more research projects collecting larger and larger corpora. a reliable parts program might greatly enhance the value of these corpora to many of these researchers. the program uses a linear time dynamic programming algorithm to find an assignment of parts of speech to words that optimizes the product of lexical probabilities and contextual probabilities. probability estimates were obtained by training on the tagged brown corpus (francis and kucera), a corpus of approximately 1,000,000 words with part of speech tags assigned laboriously by hand over many years. program performance is encouraging: a small 400 word sample is presented in the appendix and is judged to be 99.5% correct. it is surprising that a local "bottom-up" approach can perform so well. most errors are attributable to defects in the lexicon; remarkably few errors are related to the inadequacies of the extremely oversimplified grammar. apparently, "long distance" dependences are not very important, at least most of the time. one might have thought that n-gram models were not adequate
for the task since it is wellknown that they are inadequate for determining grammaticality quotwe find that no finitestate markov process that produces symbols with transition from state to state can serve as an english grammarfurthermore the particular subclass of such processes that produce norder statistical approximations to english do not come closer with increasing n to matching the output of an english grammarquot chomsky p 113 chomslcy conclusion was based on the observation that constructions such as have long distance dependencies that span across any fixed length window n thus ngram models are clearly inadequate for many natural language applicationshowever for the tagging application the ngram approximation may be acceptable since long distance dependencies do not seem to be very importantstatistical ngram models were quite popular in the 1950s and have been regaining popularity over the past few yearsthe ibm speech group is perhaps the strongest advocate of ngram methods especially in other applications such as speech recognitionrobert mercer has experimented with the tagging application using a restricted corpus and small vocabulary another group of researchers working in lancaster around the same time leech garside and atwell also found ngram models highly effective they report 967 success in automatically tagging the lob corpus using a bigram model modified with heuristics to cope with more important trigramsthe present work developed independently from the lob projectmany people who have not worked in computational linguistics have a strong intuition that lexical ambiguity is usually not much of a problemit is commonly believed that most words have just one part of speech and that the few exceptions such as quottablequot are easily disambiguated by context in most casesin contrast most experts in computational linguists have found lexical ambiguity to be a major issue it is said that practically any content word can be used as a noun verb or adjectivei and that local context is not always adequate to disambiguateintroductory texts are full of ambiguous sentences such as where no amount of syntactic parsing will helpthese examples are generally taken to indicate that the parser must allow for multiple possibilities and that grammar formalisms such as lr are inadequate for natural language since these formalisms cannot cope with ambiguitythis argument was behind a large set of objections to marcus quotlrlikequot deterministic parseralthough it is clear that an expert in computational linguistics can dream up arbitrarily hard sentences it may be as marcus suggested that most texts are not very hard in practicerecall that marcus hypothesized most decisions can be resolved by the parser within a small window and there are only a few problematic cases where the parser becomes confusedhe called these confusing cases quotgarden pathsquot by analogy with the famous example the horse raced past the barn fellwith just a few exceptions such as these quotgarden pathsquot marcus assumes there is almost always a unique quotbestquot interpretation which can be found with very limited resourcesthe proposed stochastic approach is largely compatible with this the proposed approach 1from an information theory point of view one can quantity ambiguity in bitsin the case of the brown tagged corpus the lexical entropy the conditional entropy of the part of speech given the word is about 025 bits per part of speechthis is considerably smaller than the contextual entropy the conditional entropy of 
the part of speech given the next two parts of speechthis entropy is estimated to be about 2 bits per part of speech assumes that it is almost always sufficient to assign each word a unique quotbestquot part of speech after reading introductory discussions of quotflying planes can be dangerousquot one might have expected that lexical ambiguity was so pervasive that it would be hopeless to try to assign just one part of speech to each word and in just one linear time pass over the input wordshowever the proposed stochastic method is considerably simpler than what marcus had in mindhis thesis parser used considerably more syntax than the proposed stochastic methodconsider the following pair described in marcus where it appears that the parser needs to look past an arbitrarily long noun phrase in order to correctly analyze quothavequot which could be either a tenseless main verb or a tensed auxiliary verb marcus rather unusual example can no longer be handled by fidditch a more recent marcusstyle parser with very large coveragein order to obtain such large coverage fidditch has had to take a more robustmodest view of lexical disambiguationwhereas marcus parsifal program distinguished patterns such as quothave np tenselessquot and quothave np pastparticiplequot most of fidditch diagnostic rules are less ambitious and look only for the start of a noun phrase and do not attempt to look past an arbitrarily long noun phrasefor example fidditch has the following lexical disambiguation rule which says that a preposition is more likely than a noun before a noun phrasemore precisely the rule says that if a nounpreposition ambiguous word is followed by something that starts a noun phrase then rule out the noun possibilitythis type of lexical diagnostic rule can be captured with bigram and trigram statistics it turns out that the sequence preposition determiner is much more common in the brown corpus than the sequence noun determiner most lexical disambiguation rules in fidditch can be reformulated in terms of bigram and trigram statistics in this waymoreover it is worth doing so because bigram and trigram statistics are much easier to obtain than fidditchtype disambiguation rules which are extremely tedious to program test and debugin addition the proposed stochastic approach can naturally take advantage of lexical probabilities in a way that is not easy to capture with parsers that do not make use of frequency informationconsider for example the word quotseequot which is almost always a verb but does have an archaic nominal usage as in quotthe holy seequot for practical purposes quotseequot should not be considered nounverb ambiguous in the same sense as truly ambiguous words like quotprogramquot quothousequot and quotwindquot the nominal usage of quotseequot is possible but not likelyif every possibility in the dictionary must be given equal weight parsing is very difficultdictionaries tend to focus on what is possible not on what is likelyconsider the trivial sentence quoti see a birdquot for all practical purposes every word in the sentence is unambiguousaccording to francis and kucera the word quotiquot appears as a pronoun in 5837 out of 5838 observations quotseequot appears as a verb in 771 out of 772 observations quotaquot appears as an article in 23013 out of 23019 observations c100 and quotbirdquot appears as a noun in 26 out of 26 observations however according to webster seventh new collegiate dictionary every word is ambiguousin addition to the desired assignments of tags the first three 
words are listed as nouns and the last as an intransitive verbone might hope that these spurious assignments could be ruled out by the parser as syntactically illformedunfortunately this is unlikely to workif the parser is going to accept noun phrases of the form similarly the parser probably also has to accept quotbirdquot as an intransitive verb since there is nothing syntactically wrong with these part of speech assignments are not wrong they are just extremely improbableconsider once again the sentence quoti see a birdquot the problem is to find an assignment of parts of speech to words that optimizes both lexical and contextual probabilities both of which are estimated from the tagged brown corpusthe lexical probabilities are estimated from the following frequencies the lexical probabilities are estimated in the obvious wayfor example the probability that quotiquot is a pronoun prob is estimated as the freqfreq or 58375838the probability that quotseequot is a verb is estimated to be 771772the other lexical probability estimates follow the same patternthe contextual probability the probability of observing part of speech x given the following two parts of speech y and z is estimated by dividing the trigram frequency xyz by the bigram frequency yzthus for example the probability of observing a verb before an article and a noun is estimated to be the ratio of the freq over the freq or 341253091 0064the probability of observing a noun in the same context is estimated as the ratio of freq over 53091 or 62953091 001the other contextual probability estimates follow the same patterna search is performed in order to find the assignment of part of speech tags to words that optimizes the product of the lexical and contextual probabilitiesconceptually the search enumerates all possible assignments of parts of speech to input wordsin this case there are four input words three of which are two ways ambiguous producing a set of 22218 possible assignments of parts of each of the eight sequences are then scored by the product of the lexical probabilities and the contextual probabilities and the best sequence is selectedin this case the first sequence is by far the bestin fact it is not necessary to enumerate all possible assignments because the scoring function cannot see more than two words awayin other words in the process of enumerating part of speech sequences it is possible in some cases to know that some sequence cannot possibly compete with another and can therefore be abandonedbecause of this fact only 0 paths will be enumeratedlet us illustrate this optimization with an example find all assignments of parts of speech to quotbirdquot and score the partial sequencehenceforth all scores are to be interpreted as log probabilitiesfind all assignments of parts of speech to quotaquot and scoreat this point there are two paths now find assignments of quotseequot and scoreat this point the number of paths seem to be growing exponentiallynow find assignments of quotiquot and scorenote however that it is no longer necessary to hypothesize that quotaquot might be a french preposition in because all four paths ppss vb in nn nn vb in nn ppss uh in nn and np uh at nn score less well than some other path and there is no way that any additional input could make any differencein particular the path ppss vb in nn scores less well than the path ppss vb at nn and additional input will not help ppss vb in nn because the contextual scoring function has a limited window of three parts of speech and that is not 
enough to see past the existing ppss and vb. similar stochastic methods have been applied to locate simple noun phrases with very high accuracy. the program inserts brackets into a sequence of parts of speech, producing bracketed output such as the sample given in the appendix. the proposed method is a stochastic analog of precedence parsing. recall that precedence parsing makes use of a table that says whether to insert an open or close bracket between any two categories; the proposed method makes use of a table that gives the probabilities of an open and close bracket between all pairs of parts of speech. a sample table covers the five parts of speech at, nn, nns, vb and in; it says, for example, that there is no chance of starting a noun phrase after an article and that there is a large probability of starting a noun phrase between a verb and a noun. the probabilities were estimated from training material selected from the brown corpus. the training material was parsed into noun phrases by laborious semi-automatic means; it took about a man-week to prepare the training material. the stochastic parser is given a sequence of parts of speech as input and is asked to insert brackets corresponding to the beginning and end of noun phrases. conceptually, the parser enumerates all possible parsings of the input and scores each of them by the precedence probabilities. consider, for example, the input sequence nn vb. there are 5 possible ways to bracket this sequence. each of these parsings is scored by multiplying 6 precedence probabilities, the probability of an open/close bracket appearing in any one of the three positions. the parsing with the highest score is returned as output. a small sample of the output is given in the appendix. the method works remarkably well considering how simple it is. there is some tendency to underestimate the number of brackets and run two noun phrases together, as in "[np the time fairchild]". the proposed method omitted only 5 of 243 noun phrase brackets in the appendix
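The bracketing search just described is small enough to spell out directly. The sketch below enumerates the legal single-level bracketings of a part-of-speech sequence and scores each with per-gap open and close probabilities; the probability values are invented placeholders, not the table estimated from the Brown corpus.

```python
from itertools import product

# Hypothetical precedence probabilities: for each pair of adjacent categories
# (with "$" standing for a sentence boundary), the probability that a noun
# phrase bracket opens / closes in the gap between them.
P_OPEN  = {("$", "NN"): 0.9, ("NN", "VB"): 0.01, ("VB", "$"): 0.0}
P_CLOSE = {("$", "NN"): 0.0, ("NN", "VB"): 0.8,  ("VB", "$"): 0.05}

def gap_pairs(tags):
    padded = ["$"] + list(tags) + ["$"]
    return list(zip(padded, padded[1:]))

def score(tags, opens, closes):
    """Product of 2 * (len(tags) + 1) precedence probabilities: one open and
    one close decision for every gap between adjacent categories."""
    p = 1.0
    for gap, o, c in zip(gap_pairs(tags), opens, closes):
        p *= P_OPEN[gap] if o else 1 - P_OPEN[gap]
        p *= P_CLOSE[gap] if c else 1 - P_CLOSE[gap]
    return p

def best_bracketing(tags):
    """Enumerate every balanced, non-nested bracketing and keep the best one."""
    n_gaps = len(tags) + 1
    best = None
    for opens in product((0, 1), repeat=n_gaps):
        for closes in product((0, 1), repeat=n_gaps):
            depth, ok = 0, True
            for o, c in zip(opens, closes):
                depth -= c          # a close bracket ends the current NP
                if depth < 0:
                    ok = False
                    break
                depth += o          # an open bracket starts a new NP
                if depth > 1:
                    ok = False
                    break
            if not ok or depth != 0:
                continue
            s = score(tags, opens, closes)
            if best is None or s > best[0]:
                best = (s, opens, closes)
    return best

# The "NN VB" example from the text: five legal bracketings, each scored by
# six precedence probabilities.
print(best_bracketing(["NN", "VB"]))
```

In practice one would not enumerate all decision vectors; a dynamic program over the gaps gives the same result in time linear in the sentence length, but the brute-force version keeps the example short.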
A88-1019
a stochastic parts program and noun phrase parser for unrestricted text. our part-of-speech tagger can now input unrestricted text and assign each word the most likely part of speech with fairly reasonable accuracy and efficiency. our part-of-speech tagger performs not only part-of-speech analysis but also identifies the most simple kinds of noun phrases, mostly sequences of determiners, premodifiers and nominal heads, by inserting brackets around them
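The lexical-times-contextual scoring that drives the tagger (the "I see a bird" walk-through above) can be reduced to a few lines. The lexical frequencies below are the ones quoted in the text; the contextual values other than P(VB | AT NN) = 0.064 and P(NN | AT NN) = 0.01, the end-of-sentence padding and the fallback constant are assumptions made so the sketch runs end to end, and the exhaustive enumeration stands in for the pruned dynamic-programming search actually described.

```python
from itertools import product

# Lexical probabilities P(tag | word), from the Brown-corpus counts quoted in
# the text (e.g. "I" is a pronoun in 5837 of 5838 occurrences).  Treating the
# remaining 6 occurrences of "a" as the French-preposition reading is an
# assumption made for this example.
LEX = {
    "I":    {"PPSS": 5837 / 5838, "NN": 1 / 5838},
    "see":  {"VB": 771 / 772,     "NN": 1 / 772},
    "a":    {"AT": 23013 / 23019, "IN": 6 / 23019},
    "bird": {"NN": 1.0},
}

# Contextual probabilities P(tag | next two tags).  Only the two values for
# the context "AT NN" are given in the text; every other context falls back
# to an invented small constant.
CTX = {("VB", "AT", "NN"): 0.064, ("NN", "AT", "NN"): 0.01}
FALLBACK = 1e-4

def score(words, tags):
    """Product of lexical and contextual probabilities, scoring each word
    with its tag and the two following tags (padded with '$' at the end)."""
    padded = list(tags) + ["$", "$"]
    p = 1.0
    for i, w in enumerate(words):
        p *= LEX[w][tags[i]]
        p *= CTX.get((padded[i], padded[i + 1], padded[i + 2]), FALLBACK)
    return p

words = ["I", "see", "a", "bird"]
assignments = list(product(*(LEX[w].keys() for w in words)))   # the 8 sequences
best = max(assignments, key=lambda tags: score(words, tags))
print(best)        # expected: ('PPSS', 'VB', 'AT', 'NN')
```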
applied text generation this paper presents the joyce system as an example of a fullyimplemented applicationoriented text generation systemjoyce covers the whole range of tasks associated with text generation from content selection to morphological processingit was developped as part of the interface of the software design environment ulyssesthe following design goals were set for it while we were able to exploit existing research for many of the design issues it turned out that we needed to develop our own approach to text planning this paper will present the system and attempt to show how these design objectives led to particular design decisionsthe structure of the paper is as followsin section 2 we will present the underlying application and give examples of the output of the systemin section 3 we will discuss the overall structure of joycewe then discuss the three main components in turn the text planner in section 4 the sentence planner in section 5 and the realizer in section 6we will discuss the text planner in some detail since it represents a new approach to the problemsection 7 traces the generation of a short textin section 8 we address the problem of portability and wind up by discussing some shortcomings of joyce in the conclusionthe joyce text generation system was developped part of the software design environment ulysses ulysses includes a graphical environment for the design of secure distributed software systemsthe use manipulates icons that symbolize components data ports of components and data flow between ports additional information princi pally about the security level of the components an ports is entered through menusthe design proceed hierarchically by topdown structural decompositionas a tool in the user interface joyce generates twc different types of texts about software designs the texts joyce generates are specifically conceived o as written textsthe text output is integrated into th graphical environment in such a way that much of th same interactive functionality is available either throng the text output window or through the graphical inter face windowfor example if a designer reads the desig annotation generated by joyce and notices that the secu rity level of a component has been entered wrong the the error can be corrected by clicking at the name of th component in the text window and accessing the appro priate menuas an example of the output of joyce consider the tex in figure 2it is an annotation of the component quothostquot the top level decomposition of this component is show in figure 1the text annotates the software design b describing its structure and interpreting it in terms c its security characteristicsthe text in figure 4 is generated by joyce in order to report the results of the flow analyzer the graphical representation of the underlying system can be seen in figure 3note that the structures of the two texts are quite different while the host text is structured according to more abstract categories such as design structure and functionality the black box text follows the path of the insecure flow through the componentjoyce has been fully implemented in common lisp and runs on the symbolics lisp machine and on sun workstationsa successor version has been ported to the apple macintoshthe multilevel host is a complex component of the stationit contains a kernel a tip a process a net handler and a group of managersthe process the tip the managers and the net handler communicate only through the kernelthe manifestly secure process and the 
managers perform auxiliary functionsthe process is lowlevelthe tip serves as interface to a user the net handler handles communication with a netthe security statuses of the tip the managers and the net handler have not yet been specifiedthe kernel is a complex componentits security status has not yet been specifiedthe kernel contains a message switch an address register and a locatorthe address register the locator and the message switch communicate directly with each otherthe lowlevel address register and the multilevel locator are databasesthe message switch handles communication with the tip the process the managers and the net handlerthe security status of the message switch has not yet been specifiedin the black box an insecure flow occursclassified information enters the black box through p6it is passed through the inputter to the merger which may upgrade it to topsecretthe merger passes it to the analyzer which has been assumed securethe analyzer downgrades it to secretit passes it through the incrementor to the formater which downgrades it when a classified corrected reading leaves through p34joyce consists of three separate modules which perform distinct tasks and access their own knowledge bases usually the different tasks of text generation are divided among two modules rather than threehowever there is a certain amount of disagreement about where the line between the two is to be drawnfor example mckeown text performs the tasks that joyce classifies as sentence planning as part of the realization process whereas meteer spokesman classifies them as part of text planning for a useful summary of the terminological issuesin this paper quottext planningquot will always be used in the narrow sense of quotcontent selection and organizationquotthe architecture of joyce is directly influenced by that of the semsyn system rosner divides the realization component into two parts the quotgenerator kernelquot and the quotgenerator front endquotthis distinction is mirrored exactly by the distinction between sentence planning and realization in joycethere are two main advantages to such a tripartite architecture one conceptual and the other practicalconceptually the advantage is that linguistic planning tasks are clearly separated from the actual grammar which comprises word order and morphological rulesthese rules can be stated independently of the formulation of purely semantic rules that determine lexical and syntactic choicesthis modularity makes the system more maintainablethe linguistic planning tasks should however be clearly separated from the textual planning tasks while the linguistic planning tasks are languagedependent the textual planning tasks appear not to be21note that the tasks meteer groups together as quotsyntaxquot choosing the syntactic structure and linearization are inseparable only in certain syntactic representationsin joyce the deepsyntactic representation encodes syntactic structure but not linear order 2we are not aware of any example in which different text plans are needed for different languagesthe fact that functionally similar texts may display different structures in different cultures should not be confused with languagespecific constraints on text structurethus if multilingual generation is desired text planning and sentence planning ought to be performed by distinct componentson a more practical level modularity in design and implementation can be exploited by parallel processing of independent moduleswhile the current implementations of joyce do not allow 
for parallel execution the incremental processing of parallel computing tasks on a serial machine is also advantageous as is argued in the wip project 3incrementality reduces the initial response time of the system this can be crucial if multiparagraph text is to be generated by an interface toolin the joyce system the text planner cedes control to the sentence planner as soon as the text planner has defined a propositiononce the sentence planner has constructed the dsyntr of a complete sentence it sends it to the realizer which generates the english sentencethus the first sentence is output by joyce shortly after the text generator is invoked text continues to be output approximately at reading speedthe effect is that a user of the text generator has the impression that he or she never has to wait for the system to respond even when it is generating lengthy textsthroughout the system processing is messagedriven in the sense of control lies in the input which is used to construct the next level of representationthere is no need for backtracking or feedback from one level of processing to an earlier oneas is argued by mcdonald et al such an architecture contributes to processing efficiencywe will now discuss the three modules of joyce in more detailprior to the design of the text planning component of joyce several existing approaches were studiedsince the structure of the descriptive text does not mirror the structure of the domain paris quotprocedural strategyquot cannot be used in generalhovy rstbased planner assumes that content selection has already been performed contrary to the situation in the ulysses application furthermore there are efficiency problems in a pure stripslike planning paradigmwe therefore found mckeown schemabased approach to be the most promisinghowever it turned out that general rhetorical schemas cannot adequately capture the structure of the intended textsin we argue that planning certain types of texts such as reports and descriptions requires domainspecific knowledge about how to communicate in that domainthat knowledge we call quotdomain communication knowledgequot for example in describing secure system designs 3incrementality within the realizer has little practical benefit when the realizer is reasonably fast its study is mainly motivated by psycholinguistic considerationstherefore there was no attempt in joyce to make the realizer incremental you must relate the security level of each component but not say the number of ports or their security levelsfurthermore the connectivity of components should be stated before their functionalityin the flow analyzer text the security levels of the components need not be communicated at all but if a component downgrades information it must be stated whether and why the component is securethis very precise knowledge about which domain information needs to communicated and in what order cannot simply be derived from general principleswe have also argued that in many existing text planning systems such dck has been encoded implicitlyin the interest of efficiency modularity and portability we have decided to represent dck explicitly in joycewe have developed a quotschema languagequot for easy representation of dck called dickens the schemas are similar in form to those used by mckeownbasically schemas can be seen as a description of text structurethe system however interprets each schema as a list of instructionsthe instructions can be calls to other schemas recursive calls to the same schema or they can be one of a set of 
special commands provided by the schema languageone special command produces a specific proposition and sends it to the sentence plannerother special commands support conditional branching and iterationduring execution each schema is associated with a particular subset of the domain representation which is called the focus in the ulysses application the focus always corresponds to one componentthere are special commands to shift the focusin addition to the focus which limits the domain representation from which information can be communicated a theme can be set which determines information structure within individual propositionsthe theme corresponds to mckeown quotlocal focusquotas has been widely recognized thematic structure affects issues such as grammatical voice at the linguistic levelin addition two further special commands were found to be necessary in order to perform text planning this type of revision is different from the revision discussed in and in these systems the linguistic specification of the target texts is revisedin joyce it is the text plan itself ie the prelinguistic representation of text content and structure that is subject to revisioninstead of being templates that limit the structure of the text to certain preconceived types the schemas are now an explicit and compact representation of domain communication knowledgethe sentence planner combines all those planning tasks that are specific to the target languageit receives propositions from the text planner and sends the dsyntr of complete sentences to the realizer for processingit has two main tasks first it chooses lexical and syntactic realizations by consulting the conceptualenglish dictionary second it determines sentence scope by merging the dsyntr of individual propositionswe will discuss each of these steps in turnthe conceptualenglish dictionary is implemented as a set of procedures that operate on the propositionseach proposition is mapped into the dsyntr of a clause lexicalization can take pragmatic factors into accountit can also refer to a history of lexicalizations if lexical variation is desiredafter a dsyntr has been constructed certain syntactic paraphrase operations are performed if necessary for example passivization if a grammatical object is the theme of the sentence or if the subject is absentthe second task of the sentence planner is to determine the scope of sentencescombining the linguistic realization of propositions into larger sentences is a crucial issue because it increases the quality of the generated textfor example the lowlevel address register and the multilevel locator are databases is significantly better than the four clauses from which it was formed the address register is a databaseit is lowlevelthe locator is a databaseit is multilevelan informal study in which subjects were asked to revise a text containing only singleproposition sentences supported the claim that longer sentences are preferred over shorter ones whenever possible and reasonablethe first question that arises is at what level propositions should be combinedto date the issue of sentence scoping has always been dealt with at a prelinguistic conceptual level or however different languages have different syntactic means of combining clauses clause combining must refer to the specific linguistic resources of the target languagetherefore in joyce the task is performed by the sentence planner rather than the text plannerjoyce performs the following syntactic clausecombining operations relative clause formation 
adjectival attachment and conjunctionconjunction includes multiple conjunctions of more than one clause and may lead to elision of repeated sentence elements for example in the example quoted above the lexeme data base occurs only once in the conjoined sentencethe second question that arises is how clause combination should be restrictedwe have identified stylistic and discourse constraintsthe stylistic constraints are constraints against the sentence becoming too long and a constraint on recursive embedding of relative clausesdiscourse constraints are imposed by the structure of the text clauses belonging to conceptually distinct text units should not be combinedthe text planner can send a special message called conceptualbreak to the sentence plannerit signals the beginning of a new textual unitthese special messages are triggered by appropriate indications in the dickens specifcation of the dckthe algorithm is as followsthe sentence planner maintains a quotcurrentquot dsyntreach incoming proposition is translated into a dsyntr which the sentence planner then attempts to merge with the current dsyntrif none of the clause combination strategies work or if stylistic heuristics interfere or if the incoming proposition is a conceptualbreak the current dsyntr is sent to the realizer and the new dsyntr becomes the current onethe process of clause combination can be very easily modeled at the dsyntr level relative clause formation and conjunction reduce to simple tree composition operationsissues such as word order in relative clauses the morphological form of the complementizer and conjunction reduction can be dealt with at further stages of processingthe linguistic component is based on meaningtext theory and is a reimplementation of polguere prolog implementation of a meaningtext model for english mtt defines three successive levels of representationwith each level of representation is associated a component which transforms the representation into the next higher leveleach component is implemented as a separate module in joyce words of a sentencesister nodes are unordered with respect to each otherthe nodes are labelled with lexemes which are annotated with featuresnumerical arc labels represent the syntactic arguments of the governing lexeme while attr represents the attributive relationan example is shown in figure 6note that the function words the is to are not yet represented the written form of the english sentencemorphological processing is done by a component closely based on sutras while linguistic realizers based on other theories coulc have been used this mttbased approach offers the following advantagesas an example consider the sample text in figure 4i describes the occurrence of an insecure flow in compo nent black boxthe texts that explain insecure flow are generated by a set of eight schemas one of which i shown in figure 7it is the first that is invokedspecial commands are preceded by a colon command not starting with a colon are calls to other schemas the arguments to special commands immediately follow the commandthe title special command generates a titlecommand theme sets the initial theme of the paragraph influencing issues such as passivizationthen follow three makepropos it ion commands which each produce one propositionthe first argument to makeproposition is the class of the propositionthe slots are typically filled with pointers into the domain representation of the application program focus is a pointer maintained by the text planner which refers to the global 
focus while getinformation and entryport are functions provided by the underlying application programnot all arguments must be filled by a makeproposition command the sentence planner will choose lexical and syntactic realizations accordinglythe text planner sends an insecureflow proposition to the sentence planner which translates it into a dsyntr tree and returns control to the text plannerthe text planner then proceeds to the next makeproposition command and sends the proposition shown in figure 8 to the sentence plannerwhen the sentence planner receives the enter proposition it translates it into the dsyntr tree shown in figure 9 which could be expressed as the clause information enters the black box through p6note that the choice of enter as verb is due to the fact that information is currently the theme if black box were the theme the choice would have been receivesthe sentence planner then tries to combine the new dsyntr with the current one this fails so the current dsyntr is sent to the realizer which prints out the first sentencethe new dsyntr becomes the current onecontrol is returned to the text planner which processes the third makepropos it ion command and sends the appropriate proposition to the sentence plannerthe sentence planner generates the clausal dsyntr tree shown in figure 10 it then attempts to combine the new clause with the quotcurrent dsyntrquot first using the adjectival attachment strategythis succeeds yielding the tree shown in figure 11it then returns control to the text planner since another clause could be merged with the current dsyntrthe text planner then calls schema conceptualbreakthe only effect of this schema is to send a conceptualbreak message to the sentence planner which thereupon sends its current dsyntr to the realizerthe realizer prints out the surface sentence classified information enters the black box through p6the last command of the schema first shifts the focus to nextcomponent which is the next component traversed by the insecure flowthe second argument of the shiftfocusandedit command designates the next schema to be calledthis command also initiates the editing processall propositions that are generated as a result of this command are kept on a list rather than sent to the sentence plannerwhen the command has been executed the list is edited by the function given as the third argument mergesenddatathe effect of this function is to combine two successive send propositions into a single new one so that two clauses such as the analyzer sends the information to the incrementor and the incrementor sends the information to the formater yield the analyzer sends the information to the formater through the incrementornote that this combination is not a linguistic one but a conceptual one since it relies on facts about sending data in this domain rather than on the syntax or lexical semantics about the verb sendit must therefore be performed by the text planner and not the sentence plannerporting is an important way to evaluate complete applied text generation systems since there is no canonical set of tasks that such a system must be able to perform and on which it can be testedjoyce was originally designed to generate only component descriptions the quotflow analyzerquot heuristic tool was added later to the system and the completely different type of text it required was a first successful test of joyce and its text planner in particularthe modular design of joyce proved beneficial during the porting to the new applicationthe following 
conceptually welldefined tasks were required during the development of the quotflow analyzerquot application 1since the flow analyzer is a new type of tool no corpus of texts was available for studyinstead sample texts were written by hand and critiqued by domain expertsthe texts were then revised and resubmitted to the expertsthe quotideal textquot that emerged was then analyzed and the dck needed to generate it expressed in terms of schemaswe interpret the cycle of writing critiquing and revising as a process of dck acquisition2new classes of proposition were definedthese include enter upgrade and downgradesome of the proposition classes from the earlier descriptive application could be reused such as send3the conceptualenglish dictionary was extended to account for the new proposition classes4several new lexical items were entered into the english lexiconfor example the english lexeme downgrade subcategorizes for two nouns and a propositional phrase obligatorily headed by tonote that those parts of joyce that deal with facts of english needed no attention we are currently working on porting a successor of joyce to several new applications including the generation of project management reportsinitial results including a prototype are encouragingwe are aware of several shortcomings of joyce which we will address in future versions of the system ple in text planning it appears to play an important role as a constraint on possible text structuresfurthermore it helps determine the use of connectives between rhetorically related clausesfinally it may determine when conceptual breaks occur in text structure which affect sentence scoping we are currently investigating the option of augmenting the dck schemas with rhetorical annotations the current form of the conceptualenglish dictionary is not satisfactory since the dictionary writer is too free in writing dictionary entriesfor example the dictionary could be used as a back door for the introduction of new content which the text planner was unable to planmeteer discusses the same problem in mckeown original text system an interface to the dictionary that is more restrictive is needed while it is possible to set a theme in the text plan thematic structure has not received sufficient attentionrules of thematic progression are not taken into considerationfurthermore clause combination is also sensitive to thematic structure which is currently not taken intc accountdespite these shortcomings joyce has proven to be a successful and useful tool in the ulysses user interfaceii has met the design objectives of speed and quality and our experience in porting the text generator to new task and to new applications indicates that joyce is a flexibl system that can adapt to a variety of text generatior tasks
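To make the sentence planner's control flow concrete, here is a heavily simplified sketch of the clause-combination loop described above: a "current" sentence is grown clause by clause and flushed to the realizer when merging fails, a stylistic limit is reached, or a conceptual break arrives. The merge test (shared subject) and the string-level realizer callback are stand-ins for JOYCE's actual DSyntR operations (adjectival attachment, relative clauses, conjunction) and its MTT-based realizer.

```python
MAX_CLAUSES = 3     # stand-in for JOYCE's stylistic constraints on sentence length

class SentencePlanner:
    """Keeps a 'current' sentence and tries to fold each incoming clause into
    it, flushing to the realizer when combination is not possible."""

    def __init__(self, realize):
        self.realize = realize       # callback standing in for the realizer module
        self.current = []

    def add(self, clause):
        if self.current and (len(self.current) >= MAX_CLAUSES
                             or not self.mergeable(self.current[-1], clause)):
            self.flush()
        self.current.append(clause)

    def mergeable(self, a, b):
        # Toy merge test: the real strategies operate on deep-syntactic trees.
        return a["subject"] == b["subject"]

    def conceptual_break(self):
        # Sent by the text planner at the boundary of a textual unit.
        self.flush()

    def flush(self):
        if self.current:
            self.realize(self.current)
            self.current = []

def toy_realizer(clauses):
    print(" and ".join(f'{c["subject"]} {c["predicate"]}' for c in clauses) + ".")

planner = SentencePlanner(toy_realizer)
planner.add({"subject": "the address register", "predicate": "is a database"})
planner.add({"subject": "the address register", "predicate": "is low-level"})
planner.add({"subject": "the locator", "predicate": "is a database"})
planner.conceptual_break()
```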
A92-1006
applied text generation. we divide tasks in the generation process into three stages: the text planner has access only to information about communicative goals, the discourse context and semantics, and generates a non-linguistic representation of text structure and content; the sentence planner chooses abstract linguistic resources; it passes an abstract lexico-syntactic specification to the realizer, which inflects, adds function words and linearizes, thus producing the surface string
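A minimal sketch of the three-stage division of labour described in this summary, using Python generators so that each sentence can be emitted as soon as it is planned, in the spirit of the incremental, message-driven control regime described for JOYCE. The proposition format, the example design data and the string-template realization are all invented for illustration.

```python
from typing import Dict, Iterator, List

Proposition = Dict[str, str]

def text_planner(design: List[Dict[str, str]]) -> Iterator[Proposition]:
    """Content selection and ordering: yields propositions one at a time so
    that downstream modules can start before planning has finished."""
    for component in design:
        yield {"pred": "contain", "whole": "the host", "part": component["name"]}
        yield {"pred": "security", "part": component["name"], "level": component["level"]}

def sentence_planner(props: Iterator[Proposition]) -> Iterator[str]:
    """Chooses a clause pattern per proposition; a real sentence planner would
    build abstract lexico-syntactic structures and merge clauses."""
    for p in props:
        if p["pred"] == "contain":
            yield f'{p["whole"]} contains {p["part"]}'
        else:
            yield f'{p["part"]} is {p["level"]}'

def realizer(clauses: Iterator[str]) -> Iterator[str]:
    """Inflection, function words and linearisation are trivialised to
    capitalisation and punctuation here."""
    for c in clauses:
        yield c[0].upper() + c[1:] + "."

design = [{"name": "a kernel", "level": "multilevel"},
          {"name": "a net handler", "level": "low-level"}]
for sentence in realizer(sentence_planner(text_planner(design))):
    print(sentence)
```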
a practical partofspeech tagger we present an implementation of a partofspeech tagger based on a hidden markov model the methodology enables robust and accurate tagging with few resource requirements only a lexicon and some unlabeled training text are required accuracy exceeds 96 we describe implementation strategies and optimizations which result in highspeed operation three applications for tagging are described phrase recognition word sense disambiguation and grammatical function assignment 1 desiderata many words are ambiguous in their part of speech for example quottagquot can be a noun or a verb however when a word appears in the context of other words the ambiguity is often reduced in quota tag is a partofspeech labelquot the quottagquot can only be a noun a tagger is a system that uses context to assign parts of speech to words automatic text tagging is an important first step in discovering the linguistic structure of large text corpora partofspeech information facilitates higherlevel analysis such as recognizing noun phrases and other patterns in text for a tagger to function as a practical component in a language processing system we believe that a tagger must be corpora contain ungrammatical constructions isolated phrases and nonlinguistic data corpora are also likely to contain words that are unknown to the tagger it is desirable that a tagger deal gracefully with these situations a tagger is to be used to analyze arbitrarily large corpora it must be efficientperforming in time linear in the number of words tagged any training required should also be fast enabling rapid turnaround with new corpora and new text genres a should attempt to assign the correct partofspeech tag to every word encountered a should be able to take advantage of linguistic insights one should be able to correct errors by supplying appropriate priori quothintsquot it should be possible to give different hints for different corpora effort required to retarget a tagger to new corpora new tagsets and new languages should be minimal 2 methodology 21 background several different approaches have been used for building text taggers greene and rubin used a rulebased approach in the taggit program greene and rubin 1971 which was an aid in tagging the brown corpus francis and kueera 1982 taggit disambiguated 77 of the corpus the rest was done manually over a period of several years more recently koskenniemi also used a rulebased approach implemented with finitestate machines koskenniemi 1990 statistical methods have also been used have been used with this model the first makes use of a tagged training corpus derouault and merialdo use a bootstrap method for training derouault and merialdo 1986 at first a relatively small amount of text is manually tagged and used to train a partially accurate model the model is then used to tag more text and the tags are manually corrected and then used to retrain the model church uses the tagged brown corpus for training church 1988 these models involve probabilities for each word in the lexicon so large tagged corpora are required for reliable estimation the second method of training does not require a tagged training corpus in this situation the baumwelch algorithm can be used baum 1972 under this regime the model is a markov model as state transitions are assumed to be unobservable jelinek has used this method for training a text tagger jelinek 1985 parameter smoothing can be conachieved using the method of interpolawhich weighted estimates are taken from secondand firstorder 
models and a uniform probability distribution jelinek and mercer 1980 kupiec used word equivclasses on parts of speech to pool data from individual words kupiec 1989b the most common words are still represented individually as sufficient data exist for robust estimation 133 however all other words are represented according to the set of possible categories they can assume in this manner the vocabulary of 50000 words in the brown corpus can be reduced to approximately 400 distinct ambiguity classes kupiec 1992 to further reduce the number of parameters a firstorder model can be employed in kupiec 1989a networks are used to selectively augment the context in a basic firstorder model rather than using uniformly secondorder dependencies 22 our approach we next describe how our choice of techniques satisfies the listed in section 1 the use of an complete flexibility in the choice of training corpora text from any desired domain can be used and a tagger can be tailored for use with a particular text database by training on a portion of that database lexicons containing alternative tag sets can be easily accommodated without any need for relabeling the training corpus affording further flexibility in the use of specialized tags as the resources required are simply a lexicon and a suitably large sample of ordinary text taggers can be built with minimal effort even for other languages such as french the use of ambiguity classes and a firstorder model reduces the number of parameters to be estimated without significant reduction in accuracy this also enables a tagger to be reliably trained using only moderate amounts of text we have produced reasonable results training on as few as 3000 sentences fewer parameters also reduce the time required for training relatively few ambiguity classes are sufficient for wide coverage so it is unlikely that adding new words to the lexicon requires retraining as their ambiguity classes are already accommodated vocabulary independence is achieved by predicting categories for words not in the lexicon using both context and suffix information probabilities corresponding to category sequences that never occurred in the training data are assigned small nonzero values ensuring that the model will accept any sequence of tokens while still providing the most likely tagging by using the fact that words are typically associated with only a few partofspeech categories and carefully ordering the computation the algorithms have linear complexity 3 hidden markov modeling the hidden markov modeling component of our tagger is implemented as an independent module following the specgiven in levinson et with special attention to space and time efficiency issues only firstorder modeling is addressed and will be presumed for the remainder of this discussion 31 formalism brief an a doubly stochastic process that generates sequence of symbols sls2 1 t 2the baumwelch algorithm and the viterbi algorithm involve operations on products of numbers constrained to be between 0 and 1since these products can easily underflow measures must be taken to now define t cicit and use a in place of a in equation 1 to define for the next iteration note that ein_i eet 1 for 1 t 1 where the scaled backward and forward probabilities and 0 can be exchanged for the unscaled probabilities in equations 35 without affecting the value of the ratiosto see this note that t cat and 0t where now in terms of the scaled probabilities equation 5 for example can be seen to be unchanged a slight difficulty occurs in 
equation 3 that can be cured by the addition of a new term cti in each product of the upper sum numerical instability in the viterbi algorithm can be ameliorated by operating on a logarithmic scale levinson et al 1983that is one maximizes the log probability of each sequence of state transitions care must be taken with zero probabilitieshowever this can be elegantly handled through the use of ieee negative infinity p754 1981as can be seen from equations 15 the time cost of training is 0similarly as given in equation 6 the viterbi algorithm is also 0however in partofspeech tagging the problem structure dictates that the matrix of symbol probabilities b is sparsely populatedthat is 0 if the ambiguity class corresponding to symbol j includes the partofspeech tag associated with state iin practice the degree of overlap between ambiguity classes is relatively low some tokens are assigned unique tags and hence have only one nonzero symbol probabilitythe sparseness of b leads one to consider restructuring equations 16 so a check for zero symbol probability can obviate the need for further computationequation 1 is already conveniently factored so that the dependence on bj is outside the inner sumhence if k is the average number of nonzero entries in each row of b the cost of computing equation 1 can be reduced to 0equations 24 can be similarly reduced by switching the order of iterationfor example in equation 2 rather than for a given t computing ot for each i one at a time one can accumulate terms for all i in parallelthe net effect of this rewriting is to place a b3 0 check outside the innermost iterationequations 3 and 4 submit to a similar approachequation 5 is already only 0hence the overall cost of training can be reduced to 0 which in our experience amounts to an order of magnitude speedup4 the time complexity of the viterbi algorithm can also be reduced to 0 by noting that b3 can be factored out of the maximization of equation 6adding up the sizes of the probability matrices a b and h it is easy to see that the storage cost for directly representing one model is proportional to nrunning the baumwelch algorithm requires storage for the sequence of observations the a and 3 probabilities the vector c and copies of the a and b matrices hence the grand total of space required for training is proportional to t 2nsince n and m are fixed by the model the only parameter that can be varied to reduce storage costs is t now adequate training requires processing from tens of thousands to hundreds of thousands of tokens kupiec 1989athe training set can be considered one long sequence it which case t is very large indeed or it can be broken up into a number of smaller sequences at convenient boundariesin firstorder hidden markov modeling the stochastic process effectively restarts at unambiguous tokens such as sentence and paragraph markers hence these tokens are convenient points at which to break the training setif the baumwelch algorithm is run separately on each piece the resulting trained models must be recombined in some wayone obvious approach is simply to averagehowever this fails if any two an equivalent approach maintains a mapping from states i to nonzero symbol probabilities and simply avoids in the inner iteration computing products which must be zero kupiec 1992 states are indistinguishable because states are then not matched across trained modelsit is therefore important that each state have a distinguished role which is relatively easy to achieve in partofspeech taggingour implementation 
of the baumwelch algorithm breaks up the input into fixedsized pieces of training textthe baumwelch algorithm is then run separately on each piece and the results are averaged togetherrunning the viterbi algorithm requires storage for the sequence of observations a vector of current maxes a scratch array of the same size and a matrix of ib indices for a total proportional to t n and a grand total of t nagain n and m are fixedhowever t need not be longer than a single sentence since as was observed above the hmm and hence the viterbi algorithm restarts at sentence boundariesan hmm for partofspeech tagging can be tuned in a variety of waysfirst the choice of tagset and lexicon determines the initial modelsecond empirical and a priori information can influence the choice of starting values for the baumwelch algorithmfor example counting instances of ambiguity classes in running text allows one to assign nonuniform starting probabilities in a for a particular tag realization as a particular ambiguity classalternatively one can state a priori that a particular ambiguity class is most likely to be the reflection of some subset of its component tagsfor example if an ambiguity class consisting of the open class tags is used for unknown words one may encode the fact that most unknown words are nouns or proper nouns by biasing the initial probabilities in banother biasing of starting values can arises from noting that some tags are unlikely to be followed by othersfor example the lexical item quottoquot maps to an ambiguity class containing two tags infinitivemarker and toaspreposition neither of which occurs in any other ambiguity classif nothing more were stated the hmm would have two states which were indistinguishablethis can be remedied by setting the initial transition probabilities from infinitivemarker to strongly favor transitions to such states as verbuninflected and adverbour implementation allows for two sorts of biasing of starting values ambiguity classes can be annotated with favored tags and states can be annotated with favored transitionsthese biases may be specified either as sets or as set complementsbiases are implemented by replacing the disfavored probabilities with a small constant and redistributing mass to the other possibilitiesthis has the effect of disfavoring the indicated outcomes without disallowing them sufficient converse data can rehabilitate these valuesin support of this and other work we have developed a system architecture for text access cutting et al 1991this architecture defines five components for such systems corpus which provides text in a generic manner analysis which extracts terms from the text index which stores term occurrence statistics and search which utilizes these statistics to resolve queriesthe partofspeech tagger described here is implemented as an analysis modulefigure 1 illustrates the overall architecture showing the tagger analysis implementation in detailthe tagger itself has a modular architecture isolating behind standard protocols those elements which may vary enabling easy substitution of alternate implementationsalso illustrated here are the data types which flow between tagger componentsas an analysis implementation the tagger must generate terms from textin this context a term is a word stem annotated with part of speechtext enters the analysis subsystem where the first processing module it encounters is the tokenizer whose duty is to convert text into a sequence of tokenssentence boundaries are also identified by the tokenizer and 
are passed as reserved tokensthe tokenizer subsequently passes tokens to the lexiconhere tokens are converted into a set of stems each annotated with a partofspeech tagthe set of tags identifies an ambiguity classthe identification of these classes is also the responsibility of the lexiconthus the lexicon delivers a set of stems paired with tags and an ambiguity classthe training module takes long sequences of ambiguity classes as inputit uses the baumwelch algorithm to produce a trained hmm an input to the tagging moduletraining is typically performed on a sample of the corpus at hand with the trained hmm being saved for subsequent use on the corpus at largethe tagging module buffers sequences of ambiguity classes between sentence boundariesthese sequences are disambiguated by computing the maximal path through the hmm with the viterbi algorithmoperating at sentence granularity provides fast throughput without loss of accuracy as sentence boundaries are unambiguousthe resulting sequence of tags is used to select the appropriate stemspairs of stems and tags are subsequently emittedthe tagger may function as a complete analysis component providing tagged text to search and indexing components or as a subsystem of a more elaborate analysis such as phrase recognitionthe problem of tokenization has been well addressed by much work in compilation of programming languagesthe accepted approach is to specify token classes with regular expressionsthese may be compiled into a single deterministic finite state automaton which partitions character streams into labeled tokens aho et al 1986 lesk 19751in the context of tagging we require at least two token classes sentence boundary and wordother classes may include numbers paragraph boundaries and various sorts of punctuation however for simplicity we will henceforth assume only words and sentence boundaries are extractedjust as with programming languages with text it is not always possible to unambiguously specify the required token classes with regular expressionshowever the addition of a simple lookahead mechanism which allows specification of right context ameliorates this aho et al 1986 lesk 1975for example a sentence boundary in english text might be identified by a period followed bywhitespace followed by an uppercase letterhowever the uppercase letter must not be consumed as it is the first component of the next tokena lookahead mechanism allows us to specify in the sentenceboundary regular expression that the final character matched should not be considered a part of the tokenthis method meets our stated goals for the overall systemit is efficient requiring that each character be examined only once it is easily parameterizable providing the expressive power to concisely define accurate and robust token classesthe lexicon module is responsible for enumerating parts of speech and their associated stems for each word it is givenfor the english word quotdoesquot the lexicon might return quotdo verbquot and quotdoe pluralnounquot it is also responsible for identifying ambiguity classes based upon sets of tagswe have employed a threestage implementation first we consult a manuallyconstructed lexicon to find stems and parts of speechexhaustive lexicons of this sort are expensive if not impossible to producefortunately a small set of words accounts for the vast majority of word occurencesthus high coverage can be obtained without prohibitive effortwords not found in the manually constructed lexicon are generally both open class and regularly 
inflectedas a second stage a languagespecific method can be employed to guess ambiguity classes for unknown wordsfor many languages word suffixes provide strong cues to words possible categoriesprobabalistic predictions of a word category can be made by analyzing suffixes in untagged text kupiec 1992 meteer et al 1991as a final stage if a word is not in the manually constructed lexicon and its suffix is not recognized a default ambiguity class is usedthis class typically contains all the open class categories in the languagedictionaries and suffix tables are both efficiently implementable as letter trees or tries knuth 1973 which require that each character of a word be examined only once during a lookupin this section we detail how our tagger meets the desiderata that we outlined in section 1the system is implemented in common lisp steele 1990all timings reported are for a sun sparcstation2the english lexicon used contains 38 tags and 174 ambiguity classes training was performed on 25000 words in articles selected randomly from grolier encyclopediafive iterations of training were performed in a total time of 115 cpu secondsfollowing is a time breakdown by component training average pseconds per token tokenizer lexicon 1 iteration 5 iterations total 640 400 680 3400 4600 tagging was performed on 115822 words in a collection of articles by the journalist dave barrythis required a total of of 143 cpu secondsthe time breakdown for this was as follows tagging average pseconds per token tokenizer lexicon viterbi total 604 388 233 1235 it can be seen from these figures that training on a new corpus may be accomplished in a matter of minutes and that tens of megabytes of text may then be tagged per hourwhen using a lexicon and tagset built from the tagged text of the brown corpus francis and kueera 1982 training on one half of the corpus and tagging the other 96 of word instances were assigned the correct tageight iterations of training were usedthis level of accuracy is comparable to the best achieved by other taggers church 1988 merialdo 1991the brown corpus contains fragments and ungrammaticalities thus providing a good demonstration of robustnessa tagger should be tunable so that systematic tagging errors and anomalies can be addressedsimilarly it is important that it be fast and easy to target the tagger to new genres and languages and to experiment with different tagsets reflecting different insights into the linguistic phenomena found in textin section 35 we describe how the hmm implementation itself supports tuningin addition our implementation supports a number of explicit parameters to facilitate tuning and reuse including specification of lexicon and training corpusthere is also support for a flexible tagsetfor example if we want to collapse distinctions in the lexicon such as those between positive comparative and superlative adjectives we only have to make a small change in the mapping from lexicon to tagsetsimilarly if we wish to make finer grain distinctions than those available in the lexicon such as case marking on pronouns there is a simple way to note such exceptionswe have used the tagger in a number of applicationswe describe three applications here phrase recognition word sense disambiguation and grammatical function assignmentthese projects are part of a research effort to use shallow analysis techniques to extract content from unrestricted textwe have constructed a system that recognizes simple phrases when given as input the sequence of tags for a sentencethere are 
recognizers for noun phrases verb groups adverbial phrases and prepositional phraseseach of these phrases comprises a contiguous sequence of tags that sat is fies a simple grammarfor example a noun phrase can be a unary sequence containing a pronoun tag or an arbitrar ily long sequence of noun and adjective tags possibly pre ceded by a determiner tag and possibly with an embeddec possessive markerthe longest possible sequence is founc conjunctions are not recognized as part of any phrase for example in the fragment quotthe cats and dogsquot quotthe catsquot and quotdogsquot will be recognized as two noun phrasesprepositional phrase attachment is not performed at this stage of processingthis approach to phrase recognition in some cases captures only parts of some phrases however our approach minimizes false positives so that we can rely on the recognizers resultspartofspeech tagging in and of itself is a useful tool in lexical disambiguation for example knowing that quotdigquot is being used as a noun rather than as a verb indicates the word appropriate meaningbut many words have multiple meanings even while occupying the same part of speechto this end the tagger has been used in the implementation of an experimental noun homograph disambiguation algorithm hearst 1991the algorithm performs supervised training over a large text corpus gathering lexical orthographic and simple syntactic evidence for each sense of the ambiguous nounafter a period of training catch word classifies new instances of the noun by checking its context against that of previously observed instances and choosing the sense for which the most evidence is foundbecause the sense distinctions made are coarse the disambiguation can be accomplished without the expense of knowledge bases or inference mechanismsinitial tests resulted in accuracies of around 90 for nouns with strongly distinct sensesthis algorithm uses the tagger in two ways to determine the part of speech of the target word and as a step in the phrase recognition analysis of the context surrounding the nounthe phrase recognizers also provide input to a system sopa sibun 1991 which recognizes nominal arguments of verbs specifically subject object and predicative argumentssopa does not rely on information specific to the particular verbs involvedthe first step in assigning grammatical functions is to partition the tag sequence of each sentence into phrasesthe phrase types include those mentioned in section 61 additional types to account for conjunctions complementizers and indicators of sentence boundaries and an quotunknownquot typeafter a sentence has been partitioned each simple noun phrase is examined in the context of the phrase to its left and the phrase to its righton the basis of this local context and a set of rules the noun phrase is marked as a syntactic subject object predicative or is not marked at alla label of predicative is assigned only if it can be determined that the governing verb group is a form of a predicating verb because this cannot always be determined some predicatives are labeled objectsif a noun phrase is labeled it is also annotated as to whether the governing verb is the closest verb group to the right or to the leftthe algorithm has an accuracy of approximately 80 in assigning grammatical functionswe would like to thank marti hearst for her contributions to this paper lauri karttunen and annie zaenen for their work on lexicons and kris halvorsen for supporting this project
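The log-scale Viterbi decoding and the zero-symbol-probability shortcut described in the implementation notes above can be sketched roughly as follows. This is a minimal illustration in Python rather than the paper's Common Lisp system; the dictionary-based model representation, the NEG_INF sentinel and the helper names are assumptions made for the sketch, not the authors' actual code.

    import math

    NEG_INF = float("-inf")   # plays the role of IEEE negative infinity for zero probabilities

    def viterbi_log(obs, states, log_a, log_b, log_init):
        """Log-domain Viterbi decoding over a sparse emission table.
        log_b[s] maps only the symbols state s can emit to log probabilities, so
        states that cannot emit the current symbol are skipped, and the emission
        term is factored outside the maximisation over predecessor states."""
        delta = {s: log_init.get(s, NEG_INF) + log_b[s].get(obs[0], NEG_INF) for s in states}
        backptrs = []
        for sym in obs[1:]:
            prev, delta, ptr = delta, {}, {}
            for j in states:
                emit = log_b[j].get(sym, NEG_INF)
                if emit == NEG_INF:
                    continue                      # zero symbol probability: no further work
                best_i = max((i for i in states if prev.get(i, NEG_INF) > NEG_INF),
                             key=lambda i: prev[i] + log_a[i][j],
                             default=None)
                if best_i is None:
                    continue
                delta[j] = prev[best_i] + log_a[best_i][j] + emit
                ptr[j] = best_i
            backptrs.append(ptr)
        last = max(delta, key=delta.get)          # assumes at least one state survived
        path = [last]
        for ptr in reversed(backptrs):
            path.append(ptr[path[-1]])
        path.reverse()
        return path

Since the model restarts at unambiguous sentence boundaries, this decoding step would be run once per buffered sentence, as the tagging-module description above indicates.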
A92-1018
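As an aside on the tokenizer described in the paper above, the look-ahead mechanism for sentence boundaries can be illustrated with a regular-expression lookahead, so that the upper-case letter signalling a boundary is not consumed as part of the token. The token classes and expressions below are simplified assumptions for the sketch; the actual system compiles its token classes into a single deterministic finite-state automaton.

    import re

    TOKEN_RE = re.compile(r"""
          (?P<sent>\.(?=\s+[A-Z]))              # period + whitespace + upper-case letter:
                                                # the lookahead leaves the letter unconsumed
        | (?P<word>[A-Za-z]+(?:'[A-Za-z]+)?)    # a deliberately simple word class
    """, re.VERBOSE)

    def tokenize(text):
        """Yield (class, token) pairs; sentence boundaries come out as reserved tokens."""
        for m in TOKEN_RE.finditer(text):
            yield ("SENT" if m.lastgroup == "sent" else "WORD", m.group())

    print(list(tokenize("The dog barks. It runs.")))
    # [('WORD', 'The'), ('WORD', 'dog'), ('WORD', 'barks'), ('SENT', '.'),
    #  ('WORD', 'It'), ('WORD', 'runs')]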
a practical partofspeech tagger. we present an implementation of a partofspeech tagger based on a hidden markov model. the methodology enables robust and accurate tagging with few resource requirements: only a lexicon and some unlabeled training text are required. accuracy exceeds 96%. we describe implementation strategies and optimizations which result in highspeed operation. three applications for tagging are described: phrase recognition, word sense disambiguation, and grammatical function assignment. our semisupervised model makes use of both labeled training text and some amount of unlabeled text. we train statistical models using unlabeled data with the expectation maximization algorithm. we report very high results for unsupervised pos tagging using hidden markov models by exploiting handbuilt tag dictionaries and equivalence classes.
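The three-stage lexicon lookup of the tagger summarised above (hand-built lexicon, then suffix guessing, then a default open-class ambiguity class) might look roughly like the sketch below. The entries, suffix table, tag names and fallback order are invented for illustration; the real system stores the dictionary and suffix tables as letter trees (tries) rather than hash tables.

    LEXICON = {                      # stage 1: manually constructed lexicon (word -> stem/tag pairs)
        "does": [("do", "verb"), ("doe", "plural-noun")],
    }
    SUFFIX_GUESSES = {               # stage 2: suffix cues for unknown words
        "ing": ["verb-ing", "noun"],
        "ous": ["adjective"],
    }
    OPEN_CLASS = ["noun", "proper-noun", "verb", "adjective", "adverb"]   # stage 3 default

    def ambiguity_class(word):
        """Return the set of candidate tags (the ambiguity class) for a word."""
        entry = LEXICON.get(word.lower())
        if entry:
            return sorted({tag for _, tag in entry})
        for n in (4, 3, 2):
            tags = SUFFIX_GUESSES.get(word[-n:].lower())
            if tags:
                return list(tags)
        return list(OPEN_CLASS)

    print(ambiguity_class("does"))         # ['plural-noun', 'verb']
    print(ambiguity_class("blahblahous"))  # ['adjective']
    print(ambiguity_class("zyx"))          # falls back to the open-class ambiguity class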
a simple rulebased part of speech tagger automatic part of speech tagging is an area of natural language processing where statistical techniques have been more successful than rulebased methods in this paper we present a simple rulebased part of speech tagger which automatically acquires its rules and tags with accuracy comparable to stochastic taggers the rulebased tagger has many advantages over these taggers including a vast reduction in stored information required the perspicuity of a small set of meaningful rules ease of finding and implementing improvements to the tagger and better portability from one tag set corpus genre or language to another perhaps the biggest contribution of this work is in demonstrating that the stochastic method is not the only viable method for part of speech tagging the fact that a simple rulebased tagger that automatically learns its rules can perform so well should offer encouragement for researchers to further explore rulebased tagging searching for a better and more expressive set of rule templates and other variations on the simple but effective theme described below there has been a dramatic increase in the application of probabilistic models to natural language processing over the last few yearsthe appeal of stochastic techniques over traditional rulebased techniques comes from the ease with which the necessary statistics can be automatically acquired and the fact that very little handcrafted knowledge need be built into the systemin contrast the rules in rulebased systems are usually difficult to construct and are typically not very robustone area in which the statistical approach has done particularly well is automatic part of speech tagging assigning each word in an input sentence its proper part of speech church 88 cutting et al 92 derose 88 deroualt and merialdo 86 garside et al87 jelinek 85 kupiec 89 meteer et al 911stochastic taggers have obtained a high degree of accuracy without performing any syntactic analysis on the inputthese stochastic part of speech taggers make use of a markov model which captures lexical and contextual informationthe parameters of the model can be estimated from tagged or untagged textonce the parameters of the model are estimated a sentence can then be automatically tagged by assigning it the tag sequence which is assigned the highest probability by the modelperformance is often enhanced with the aid of various higher level pre and postprocessing procedures or by manually tuning the modela number of rulebased taggers have been built klein and simmons 63 green and rubin 71 hindle 89klein and simmons 63 and green and rubin 71 both have error rates substantially higher than state of the art stochastic taggershindle 89 disambiguates words within a deterministic parserwe wanted to determine whether a simple rulebased tagger without any knowledge of syntax can perform as well as a stochastic tagger or if part of speech tagging really is a domain to which stochastic techniques are better suitedin this paper we describe a rulebased tagger which performs as well as taggers based upon probabilistic modelsthe rulebased tagger overcomes the limitations common in rulebased approaches to language processing it is robust and the rules are automatically acquiredin addition the tagger has many advantages over stochastic taggers including a vast reduction in stored information required the perspicuity of a small set of meaningful rules as opposed to the large tables of statistics needed for stochastic taggers ease of finding and 
implementing improvements to the tagger and better portability from one tag set or corpus genre to anotherthe tagger works by automatically recognizing and remedying its weaknesses thereby incrementally improving its performancethe tagger initially tags by assigning each word its most likely tag estimated by examining a large tagged corpus without regard to contextin both sentences below run would be tagged as a verb the run lasted thirty minutes3one of the two preceding words is tagged we run three miles every daythe initial tagger has two procedures built in to improve performance both make use of no contextual informationone procedure is provided with information that words that were not in the training corpus and are capitalized tend to be proper nouns and attempts to fix tagging mistakes accordinglythis information could be acquired automatically but is prespecified in the current implementationin addition there is a procedure which attempts to tag words not seen in the training corpus by assigning such words the tag most common for words ending in the same three lettersfor example blahblahous would be tagged as an adjective because this is the most common tag for words ending in ousthis information is derived automatically from the training corpusthis very simple algorithm has an error rate of about 79 when trained on 90 of the tagged brown corpus francis and kueera 82 and tested on a separate 5 of the corpus2 training consists of compiling a list of the most common tag for each word in the training corpusthe tagger then acquires patches to improve its performancepatch templates are of the form the initial tagger was trained on 90 of the corpus 5 was held back to be used for the patch acquisition procedure and 5 for testingonce the initial tagger is trained it is used to tag the patch corpusa list of tagging errors is compiled by comparing the output of the tagger to the correct tagging of the patch corpusthis list consists of triples indicating the number of times the tagger mistagged a word with taga when it should have been tagged with tagb in the patch corpusnext for each error triple it is determined which instantiation of a template from the prespecified set of pdtch templates results in the greatest error reductioncurrently the patch templates are change tag a to tag b when 8the previous word is capitalizedfor each error triple and patch we compute the reduction in error which results from applying the patch to remedy the mistagging of a word as taga when it should have been tagged tagbwe then compute the number of new errors caused by applying the patch that is the number of times the patch results in a word being tagged as tagb when it should be tagged tagathe net improvement is calculated by subtracting the latter value from the formerfor example when the initial tagger tags the patch corpus it mistags 159 words as verbs when they should be nounsif the patch change the tag from verb to noun if one of the two preceding words is tagged as a determiner is applied it corrects 98 of the 159 errorshowever it results in an additional 18 errors from changing tags which really should have been verb to nounthis patch results in a net decrease of 80 errors on the patch corpusthe patch which results in the greatest improvement to the patch corpus is added to the list of patchesthe patch is then applied in order to improve the tagging of the patch corpus and the patch acquisition procedure continuesthe first ten patches found by the system are listed below3the first patch states that if 
a word is tagged to and the following word is tagged at then switch the tag from to to inthis is because a noun phrase is much more likely to immediately follow a preposition than to immediately follow infinitive tothe second patch states that a tag should be switched from vbn to vbd if the preceding word is capitalizedthis patch arises from two facts the past verb tag is more likely than the past participle verb tag after a proper noun and is also the more likely tag for the second word of the sentence4 the third patch states that vbd should be changed to vbn if any of the preceding three words are tagged hvdonce the list of patches has been acquired new text can be tagged as followsfirst tag the text using the basic lexical taggernext apply each patch in turn to the corpus to decrease the error ratea patch which changes the tagging of a word from a to b only applies if the word has been tagged b somewhere in the training corpusnote that one need not be too careful when constructing the list of patch templatesadding a bad template to the list will not worsen performanceif a template is bad then no rules which are instantiations of that template will appear in the final list of patches learned by the taggerthis makes it easy to experiment with extensions to the taggerthe tagger was tested on 5 of the brown corpus including sections from every genrefirst the test corpus was tagged by the simple lexical taggernext each of the patches was in turn applied to the corpusbelow is a graph showing the improvement in accuracy from applying patchesit is significant that with only 71 patches an error rate of 51 was obtainedof the 71 patches 66 resulted in a reduction in the number of errors in the test corpus 3 resulted in no net change and 2 resulted in a higher number of errorsalmost all patches which were effective on the training corpus were also effective on the test corpusunfortunately it is difficult to compare our results with other published resultsin meteer et at91 an error rate of 34 on one domain wall street journal articles and 56 on another domain texts on terrorism in latin american countries is quotedhowever both the domains and the tag set are different from what we usechurch 88 reports an accuracy of quot9599 correct depending on the definition of correctquotwe implemented a version of the algorithm described by churchwhen trained and tested on the same samples used in our experiment we found the error rate to be about 45derose 88 quotes a 4 error rate however the sample used for testing was part of the training corpusgarside et al 87 reports an accuracy of 9697their probabilistic tagger has been augmented with a handcrafted procedure to pretag problematic quotidiomsquotthis procedure which requires that a list of idioms be laboriously created by hand contributes 3 toward the accuracy of their tagger according to derose 88the idiom list would have to be rewritten if one wished to use this tagger for a different tag set or a different corpusit is interesting to note that the information contained in the idiom list can be automatically acquired by the rulebased taggerfor example their tagger had difficulty tagging as old asan explicit rule was written to pretag as old as with the proper tagsaccording to the tagging scheme of the brown corpus the first as should be tagged as a qualifier and the second as a subordinating conjunctionin the rulebased tagger the most common tag for as is subordinating conjunctionso initially the second as is tagged correctly and the first as is tagged 
incorrectlyto remedy this the system acquires the patch if the current word is tagged as a subordinating conjunction and so is the word two positions ahead then change the tag of the current word to gualifier6 the rulebased tagger has automatically learned how to properly tag this quotidiomquot regardless of the precise rankings of the various taggers we have demonstrated that a simple rulebased tagger with very few rules performs on par with stochastic taggers6this was one of the 71 patches acquired by the rulebased taggerwe have presented a simple part of speech tagger which performs as well as existing stochastic taggers but has significant advantages over these taggersthe tagger is extremely portablemany of the higher level procedures used to improve the performance of stochastic taggers would not readily transfer over to a different tag set or genre and certainly would not transfer over to a different languageeverything except for the proper noun discovery procedure is automatically acquired by the rulebased tagger7 making it much more portable than a stochastic taggerif the tagger were trained on a different corpus a different set of patches suitable for that corpus would be found automaticallylarge tables of statistics are not needed for the rulebased taggerin a stochastic tagger tens of thousands of lines of statistical information are needed to capture contextual informationthis information is usually a table of trigram statistics indicating for all tags taga tag and tag the probability that tag follows taga and tagbin the rulebased tagger contextual information is captured in fewer than eighty rulesthis makes for a much more perspicuous tagger aiding in better understanding and simplifying further development of the taggercontextual information is expressed in a much more compact and understandable formas can be seen from comparing error rates this compact representation of contextual information is just as effective as the information hidden in the large tables of contextual probabilitiesperhaps the biggest contribution of this work is in demonstrating that the stochastic method is not the only viable approach for part of speech taggingthe fact that the simple rulebased tagger can perform so well should offer encouragement for researchers to further explore rulebased tagging searching for a better and more expressive set of patch templates and other variations on this simple but effective theme
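The patch-acquisition loop described above scores each candidate instantiation of a template by the errors it repairs minus the new errors it introduces, and the tagging phase then replays the learnt patches in order. A rough sketch follows; the corpus representation, the trigger closures and the seen-with-tag lookup are assumptions made for illustration, not the author's code.

    def net_improvement(guessed, correct, tag_a, tag_b, trigger):
        """Score a candidate patch 'change tag_a to tag_b when trigger(guessed, i) holds'
        on the patch corpus; guessed and correct are parallel tag sequences."""
        fixed = introduced = 0
        for i, guess in enumerate(guessed):
            if guess != tag_a or not trigger(guessed, i):
                continue
            if correct[i] == tag_b:
                fixed += 1            # the patch would repair this mistagging
            elif correct[i] == tag_a:
                introduced += 1       # the patch would turn a correct tag into an error
        return fixed - introduced     # e.g. 98 repaired - 18 introduced = 80 in the text's example

    def apply_patches(words, tags, patches, seen_with_tag):
        """Replay acquired patches in order; a patch only changes a word to tag_b if that
        word was tagged tag_b somewhere in the training corpus."""
        for tag_a, tag_b, trigger in patches:
            for i, word in enumerate(words):
                if (tags[i] == tag_a and tag_b in seen_with_tag.get(word, set())
                        and trigger(tags, i)):
                    tags[i] = tag_b
        return tags

    # the first acquired patch in the text: TO -> IN when the following word is tagged AT
    patches = [("TO", "IN", lambda tags, i: i + 1 < len(tags) and tags[i + 1] == "AT")]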
A92-1021
a simple rulebased part of speech tagger. automatic part of speech tagging is an area of natural language processing where statistical techniques have been more successful than rulebased methods. in this paper we present a simple rulebased part of speech tagger which automatically acquires its rules and tags with accuracy comparable to stochastic taggers. the rulebased tagger has many advantages over these taggers, including a vast reduction in stored information required, the perspicuity of a small set of meaningful rules, ease of finding and implementing improvements to the tagger, and better portability from one tag set, corpus genre, or language to another. perhaps the biggest contribution of this work is in demonstrating that the stochastic method is not the only viable method for part of speech tagging. the fact that a simple rulebased tagger that automatically learns its rules can perform so well should offer encouragement for researchers to further explore rulebased tagging, searching for a better and more expressive set of rule templates and other variations on the simple but effective theme described below. our rulebased pos tagging methods extract rules from a training corpus and use these rules to tag new sentences. we also show that assigning the most common part of speech for each lexical item gives a baseline of 90% accuracy.
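The roughly 90% baseline mentioned in the summary above comes from assigning each known word its most frequent tag from the training corpus, with two context-free heuristics for unknown words (capitalised unknowns tend to be proper nouns; otherwise guess from the last three letters). A hedged sketch, with invented tag names and simplified data structures:

    from collections import Counter, defaultdict

    def train_initial_tagger(tagged_corpus):
        """tagged_corpus: list of (word, tag) pairs.  Returns the most-frequent-tag lexicon
        and a table mapping three-letter suffixes to their most common tag."""
        by_word, by_suffix = defaultdict(Counter), defaultdict(Counter)
        for word, tag in tagged_corpus:
            by_word[word.lower()][tag] += 1
            if len(word) > 3:
                by_suffix[word[-3:].lower()][tag] += 1
        lexicon = {w: c.most_common(1)[0][0] for w, c in by_word.items()}
        suffixes = {s: c.most_common(1)[0][0] for s, c in by_suffix.items()}
        return lexicon, suffixes

    def initial_tag(word, lexicon, suffixes, default="NN", proper="NP"):
        tag = lexicon.get(word.lower())
        if tag is not None:
            return tag
        if word[:1].isupper():
            return proper                        # unseen capitalised words: proper noun
        return suffixes.get(word[-3:].lower(), default)

The learnt patches, as in the previous sketch, are then applied on top of this initial tagging.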
termight identifying and translating technical terminology 1993 we that part of speech tagging and word alignment could have an important role in glossary construction for translation glossaries are extremely important for translation how would microsoft or some other software vendor want the term quotcharacter menuquot to be translated in their manuals technical terms are difficult for translators because they are generally not as familiar with the subject domain as either the author of the source text or the reader of the target text in many cases there may be a number of acceptable translations but it is important for the sake of consistency to standardize on a single one it would be unacceptable for a manual to use a variety of synonyms for a particular menu or button customarily translation houses make extensive jobspecific glossaries to ensure consistency and correctness of technical terminology for large jobs a glossary is a list of terms and their translations we will subdivide the task of constructing a glossary into two subtasks generating a list of terms and finding the translation equivalents the first task will be referred to as the monolingual task and the second as the bilingual task how should a glossary be constructed translation schools teach their students to read as much background material as possible in both the source and target languages an extremely timeconsuming the statistical corpusbased renaissance in computational linguistics has produced a number of interesting technologies including partofspeech tagging and bilingual word alignmentunfortunately these technologies are still not as widely deployed in practical applications as they might bepartofspeech taggers are used in a few applications such as speech synthesis and question answering word alignment is newer found only in a few places it is used at ibm for estimating parameters of their statistical machine translation prototype we suggest that part of speech tagging and word alignment could have an important role in glossary construction for translationglossaries are extremely important for translationhow would microsoft or some other software vendor want the term quotcharacter menuquot to be translated in their manualstechnical terms are difficult for translators because they are generally not as familiar with the subject domain as either the author of the source text or the reader of the target textin many cases there may be a number of acceptable translations but it is important for the sake of consistency to standardize on a single oneit would be unacceptable for a manual to use a variety of synonyms for a particular menu or buttoncustomarily translation houses make extensive jobspecific glossaries to ensure consistency and correctness of technical terminology for large jobsa glossary is a list of terms and their translationswe will subdivide the task of constructing a glossary into two subtasks generating a list of terms and finding the translation equivalentsthe first task will be referred to as the monolingual task and the second as the bilingual taskhow should a glossary be constructedtranslation schools teach their students to read as much background material as possible in both the source and target languages an extremely timeconsuming process as the introduction to hann text on technical translation indicates contrary to popular opinion the job of a technical translator has little in common with other linguistic professions such as literature translation foreign correspondence or interpretingapart 
from an expert knowledge of both languages all that is required for the latter professions is a few general dictionaries whereas a technical translator needs a whole library of specialized dictionaries encyclopedias and the source and target fields are standard though many other fields can also be found eg usage notes part of speech constraints comments etc technical literature in both languages he is more concerned with the exact meanings of terms than with stylistic considerations and his profession requires certain detective skills as well as linguistic and literary onesbeginners in this profession have an especially hard timethis book attempts to meet this requirementunfortunately the academic prescriptions are often too expensive for commercial practicetranslators need justintime glossariesthey cannot afford to do a lot of background reading and quotdetectivequot work when they are being paid by the wordthey need something more practicalwe propose a tool termight that automates some of the more tedious and laborious aspects of terminology researchthe tool relies on partofspeech tagging and wordalignment technologies to extract candidate terms and translationsit then sorts the extracted candidates and presents them to the user along with reference concordance lines supporting efficient construction of glossariesthe tool is currently being used by the translators at att business translation services termight may prove useful in contexts other than humanbased translationprimarily it can support customization of machine translation lexicons to a new domainin fact the arguments for constructing a jobspecific glossary for humanbased translation may hold equally well for an mtbased process emphasizing the need for a productivity toolthe monolingual component of termight can be used to construct terminology lists in other applications such as technical writing book indexing hypertext linking natural language interfaces text categorization and indexing in digital libraries and information retrieval while the bilingual component can be useful for information retrieval in multilingual text collections although partofspeech taggers have been around for a while there are relatively few practical applications of this technologythe monolingual task appears to be an excellent candidateas has been noticed elsewhere most technical terms can be found by looking for multiword noun phrases that satisfy a rather restricted set of syntactic patternswe follow justeson and katz who emphasize the importance of term frequency in selecting good candidate termsan expert terminologist can then skim the list of candidates to weed out spurious candidates and clichesvery simple procedures of this kind have been remarkably successfulthey can save an enormous amount of time over the current practice of reading the document to be translated focusing on tables figures index table of contents and so on and writing down terms that happen to catch the translator eyethis current practice is very laborious and runs the risk of missing many important termstermight uses a part of speech tagger to identify a list of candidate terms which is then filtered by a manual passwe have found however that the manual pass dominates the cost of the monolingual task and consequently we have tried to design an interactive user interface that minimizes the burden on the expert terminologistthe terminologist is presented with a list of candidate terms and corrects the list with a minimum number of key strokesthe interface is designed to make 
it easy for the expert to pull up evidence from relevant concordance lines to help identify incorrect candidates as well as terms that are missing from the lista single keypress copies the current candidate term or the content of any marked emacs region into the upperleft screenthe candidates are sorted so that the better ones are found near the top of the list and so that related candidates appear near one anothercandidate termsthe list of candidate terms contains both multiword noun phrases and single wordsthe multiword terms match a small set of syntactic patterns defined by regular expressions and are found by searching a version of the document tagged with parts of speech the set of syntactic patterns is considered as a parameter and can be adopted to a specific domain by the usercurrently our patterns match only sequences of nouns which seem to yield the best hit rate in our environmentsingleword candidates are defined by taking the list of all words that occur in the document and do not appear in a standard stoplist of quotnoisequot wordsgrouping and sorting of termsthe list of candidate terms is sorted to group together all noun phrase terms that have the same head word which is simply the last word of the term for our current set of noun phrase patternsthe order of the groups in the list is determined by decreasing frequency of the head word in the document which usually correlates with the likelihood that this head word is used in technical termssorting within groupsunder each head word the terms are sorted alphabetically according to reversed order of the wordssorting in this order reflects the order of modification in simple english noun phrases and groups together terms that denote different modifications of a more general term concordance linesto decide whether a candidate term is indeed a term and to identify multiword terms that are missing from the candidate list one must view relevant lines of the documentfor this purpose we present a concordance line for each occurrence of a term if however a term ti is contained in a longer term t2 then occurrences of t2 are not displayed for tithis way the occurrences of a general term are classified into disjoint sets corresponding to more specific terms leaving only unclassified occurrences under the general termin the case of point for example five specific terms are identified that account for 61 occurrences of point and accordingly for 61 concordance linesonly 20 concordance lines are displayed for the word point itself and it is easy to identify in them 5 occurrences of the term tarting point which is missing from the candidate list to facilitate scanning concordance lines are sorted so that all occurrences of identical preceding contexts of the head word like tarting are grouped togethersince all the words of the document except for stop list words appear in the candidate list as singleword terms it is guaranteed that every term that was missed by the automatic procedure will appear in the concordance linesin summary our algorithm performs the following steps using the monolingual component a terminologist at att business translation services constructs terminology lists at the impressive rate of 150200 terms per hourfor example it took about 10 hours to construct a list of 1700 terms extracted from a 300000 word documentthe tool has at least doubled the rate of constructing terminology lists which was previously performed by simpler lexicographic toolsalternative proposals are likely to miss important but infrequent 
termstranslations such as format disk dialog box and label disk dialog box which occur just oncein particular mutual information and other statistical methods such as and frequencybased methods such as exclude infrequent phrases because they tend to introduce too much noisewe have found that frequent head words are likely to generate a number of terms and are therefore more important for the glossary consider the frequent head word boxin the microsoft windows manual for example almost any type of box is a technical termby sorting on the frequency of the headword we have been able to find many infrequent terms and have not had too much of a problem with noise another characteristic of previous work is that each candidate term is scored independently of other termswe score a group of related terms rather than each term at a timefuture work may enhance our simple headword frequency score and may take into account additional relationships between terms including common words in modifying positionstermight uses a partofspeech tagger to identify candidate noun phrasesjusteson and katz only consult a lexicon and consider all the possible parts of speech of a wordin particular every word that can be a noun according to the lexicon is considered as a noun in each of its occurrencestheir method thus yields some incorrect noun phrases that will not be proposed by a tagger but on the other hand does not miss noun phrases that may be missed due to tagging errorsbilingual alignment methods have been used in statistical machine translation terminology research and translation aids bilingual lexicography wordsense disambiguation and information retrieval in a multilingual environment most alignment work was concerned with alignment at the sentence levelalgorithms for the more difficult task of word alignment were proposed in and were applied for parameter estimation in the ibm statistical machine translation system previously translated texts provide a major source of information about technical termsas isabelle argues quotexisting translations contain more solutions to more translation problems than any other existing resourcequot even if other resources such as general technical dictionaries are available it is important to verify the translation of terms in previously translated documents of the same customer to ensure consistency across documentsseveral translation workstations provide sentence alignment and allow the user to search interactively for term translations in aligned archives some methods use sentence alignment and additional statistics to find candidate translations of terms we suggest that word level alignment is better suitable for term translationthe bilingual component of termight gets as input a list of source terms and a bilingual corpus aligned at the word levelwe have been using the output of word_align a robust alignment program that proved useful for bilingual concordancing of noisy texts word_align produces a partial mapping between the words of the two texts skipping words that cannot be aligned at a given confidence level for each occurrence of a source term termight identifies a candidate translation based on the alignment of its wordsthe candidate translation is defined as the sequence of words between the first and last target positions that are aligned with any of the words of the source termin the example of figure 2 the candidate translation of optional parameters box is zone parametres optionnels since zone and optionnels are the first and last french words that are 
aligned with the words of the english termnotice that in this case the candidate translation is correct even though the word parameters is aligned incorrectlyin other cases alignment errors may lead to an incorrect candidate translation for a specific occurrence of the termit is quite likely however that the correct translation or at least a string that overlaps with it will be identified in some occurrences of the termtermight collects the candidate translations from all occurrences of a source term and sorts them in decreasing frequency orderthe sorted list is presented to the user followed by bilingual concordances for all occurrences of each candidate translation the user views the concordances to verify correct candidates or to find translations that are missing from the candidate listthe latter task becomes especially easy when a candidate overlaps with the correct translation directing the attention of the user to the concordance lines of this particular candidate which are likely to be aligned correctlya single keystroke copies a verified candidate translation or a translation identified as a marked emacs region in a concordance line into the appropriate place in the glossarywe evaluated the bilingual component of termight in translating a glossary of 192 terms found in the english and german versions of a technical manualthe correct answer was often the first choice or the second choice in the candidate listfor the remaining 53 of the terms the correct answer was always somewhere in the concordancesusing the interface the glossary was translated at a rate of about 100 terms per hoursmadja and van der eijk describe term translation methods that use bilingual texts that were aligned at the sentence leveltheir methods find likely translations by computing statistics on term cooccurrence within aligned sentences and selecting sourcetarget pairs with statistically significant associationswe found that explicit word alignments enabled us to identify translations of infrequent terms that would not otherwise meet statistical significance criteriaif the words of a term occur at least several times in the document then word_align is likely to align them correctly and termight will identify the correct translationif only some of the words of a term are frequent then termight is likely to identify a translation that overlaps with the correct one directing the user quickly to correctly aligned concordance lineseven if all the words of the term were not aligned by word_align it is still likely that most concordance lines are aligned correctly based on other words in the near contexttermight motivates future improvements in word alignment quality that will increase recall and precision of the candidate listin particular taking into account local syntactic structures and phrase boundaries will impose more restrictions on alignments of complete termsfinally termight can be extended for verifying translation consistency at the proofreading step of a translation job after the document has been translatedfor example in an englishgerman document pair the tool identified the translation of the term controls menu as menu steuerung in 4 out of 5 occurrencesin the fifth occurrence word_align failed to align the term correctly because another translation steuernienu was uniquely used violating the consistency requirementtermight or a similar tool can thus be helpful in identifying inconsistent translationswe have shown that terminology research provides a good application for robust natural language 
technology in particular for partofspeech tagging and wordalignment algorithmsalthough the output of these algorithms is far from perfect it is possible to extract from it useful information that is later corrected and augmented by a userour extraction algorithms emphasize completeness and identify also infrequent candidates that may not meet some of the statistical significance criteria proposed in the literatureto make the entire process efficient however it is necessary to analyze the user work process and provide interfaces that support itin many cases improving the way information is presented to the user may have a larger effect on productivity than improvements in the underlying natural language technologyin particular we have found the following to be very effective as the need for efficient knowledge acquisition tools becomes widely recognized we hope that this experience with termight will be found useful for other textrelated systems as wellwe would like to thank pat callow from att buiseness translation services for her indispensable role in designing and testing termightwe would also like to thank bala satish and jon helfman for their part in the project
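The monolingual candidate extraction and ordering described above (noun-sequence candidates, grouping by head word, groups ordered by head-word frequency, reversed-word alphabetical order within a group) can be sketched as follows; the tag names, the input format and the example sentence are assumptions for illustration.

    from collections import Counter, defaultdict

    def extract_candidates(tagged_doc):
        """tagged_doc: list of (word, tag).  Multi-word candidates are maximal runs of nouns."""
        terms, run = [], []
        for word, tag in tagged_doc + [("", "EOS")]:     # sentinel flushes the last run
            if tag == "NOUN":
                run.append(word.lower())
            else:
                if len(run) > 1:
                    terms.append(tuple(run))
                run = []
        return terms

    def order_candidates(terms, word_freq):
        groups = defaultdict(set)
        for term in terms:
            groups[term[-1]].add(term)                   # group by head word (last word)
        ordered = []
        for head in sorted(groups, key=lambda h: -word_freq[h]):
            ordered.extend(sorted(groups[head], key=lambda t: tuple(reversed(t))))
        return ordered

    doc = [("the", "DET"), ("dialog", "NOUN"), ("box", "NOUN"), ("appears", "VERB"),
           ("in", "PREP"), ("the", "DET"), ("list", "NOUN"), ("box", "NOUN")]
    freq = Counter(w.lower() for w, _ in doc)
    print(order_candidates(extract_candidates(doc), freq))
    # [('dialog', 'box'), ('list', 'box')]   both grouped under the head word 'box'

Sorting within a group by the reversed order of the words is what brings together candidates that are different modifications of the same more general term.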
A94-1006
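A rough sketch of the candidate-translation rule in the bilingual component of the tool above: the candidate is the target-word span between the first and last positions aligned with any word of the source term, so a single misaligned word inside the term often does no harm. The alignment representation (source index to target indices) and the example indices are assumptions about word_align's output, not its actual format.

    def candidate_translation(term_positions, alignment, target_words):
        """term_positions: source token indices of one occurrence of the term.
        alignment: dict source index -> list of aligned target indices (a partial mapping)."""
        targets = [t for s in term_positions for t in alignment.get(s, [])]
        if not targets:
            return None                          # no word of the term was aligned
        lo, hi = min(targets), max(targets)
        return target_words[lo:hi + 1]

    # hypothetical occurrence of "optional parameters box" at source positions 0..2;
    # "parameters" is aligned incorrectly, yet the spanned phrase is still the right one
    target = ["dans", "la", "zone", "parametres", "optionnels", "tapez"]
    alignment = {0: [4], 1: [2], 2: [2]}
    print(candidate_translation([0, 1, 2], alignment, target))
    # ['zone', 'parametres', 'optionnels']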
termight: identifying and translating technical terminology. we propose a semiautomatic tool, termight, that helps professional translators and terminologists identify technical terms and their translations. the tool makes use of partofspeech tagging and wordalignment programs to extract candidate terms and their translations. although the extraction programs are far from perfect, it is not too hard for the user to filter out the wheat from the chaff. the extraction algorithms emphasize completeness; alternative proposals are likely to miss important but infrequent terms/translations. to reduce the burden on the user during the filtering phase, candidates are presented in a convenient order, along with some useful concordance evidence, in an interface that is designed to minimize keystrokes. termight is currently being used by the translators at att business translation services.
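Collecting the per-occurrence candidates, ranking them by frequency, and flagging terms whose translations are not used consistently (as in the controls menu example above) might look like the sketch below; the threshold and the data layout are illustrative assumptions.

    from collections import Counter

    def rank_translations(candidates_per_occurrence):
        """candidates_per_occurrence: one candidate translation (list of words) per occurrence."""
        counts = Counter(tuple(c) for c in candidates_per_occurrence if c)
        return counts.most_common()              # most frequent candidate first

    def inconsistent(candidates_per_occurrence, min_share=1.0):
        ranked = rank_translations(candidates_per_occurrence)
        if not ranked:
            return False
        total = sum(n for _, n in ranked)
        return ranked[0][1] / total < min_share  # top candidate does not cover enough occurrences

    occurrences = [["menu", "steuerung"]] * 4 + [["steuermenu"]]
    print(rank_translations(occurrences))   # [(('menu', 'steuerung'), 4), (('steuermenu',), 1)]
    print(inconsistent(occurrences))        # True: one of five occurrences deviates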
does baumwelch reestimation help taggers in part of speech tagging by hidden markov model a statistical model is used to assign grammatical categories to words in a text early work in the field relied on a corpus which had been tagged by a human annotator to train the model recently cutting al suggest that training can be achieved with a minimal lexicon and a limited amount priori about probabilities by using an baumwelch reestimation to automatically refine the model in this paper i report two experiments designed to determine how much manual training information is needed the first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy the second experiment reveals that there are three distinct patterns of baumwelch reestimation in two of the patterns the reestimation ultimately reduces the accuracy of the tagging rather than improving it the pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus and the corpus to be tagged heuristics for deciding how to use reestimation in an effective manner are given the conclusions are broadly in agreement with those of merialdo but give greater detail about the contributions of different parts of the model 1 background partofspeech tagging is the process of assigning grammatical categories to individual words in a corpus one widely used approach makes use of a statistical technique called a hidden markov model the model is defined by two collections of the probabilities express the probability that a tag follows the preceding and the the probability that a word has a given tag without regard to words on either side of it to tag a text the tags with nonzero probability are hypothesised for each word and the most probable sequence of tags given the sequence of words is determined from the probabilities two algorithms are commonly used known as the forwardbackward and viterbi algorithms fb assigns a probability to every tag on every word while viterbi prunes tags which cannot be chosen because their probability is lower than the ones of competing hypotheses with a corresponding gain in computational ef partofspeech tagging is the process of assigning grammatical categories to individual words in a corpusone widely used approach makes use of a statistical technique called a hidden markov model the model is defined by two collections of parameters the transition probabilities which express the probability that a tag follows the preceding one and the lexical probabilities giving the probability that a word has a given tag without regard to words on either side of itto tag a text the tags with nonzero probability are hypothesised for each word and the most probable sequence of tags given the sequence of words is determined from the probabilitiestwo algorithms are commonly used known as the forwardbackward and viterbi algorithmsfb assigns a probability to every tag on every word while viterbi prunes tags which cannot be chosen because their probability is lower than the ones of competing hypotheses with a corresponding gain in computational efficiencyfor an introduction to the algorithms see cutting et at or the lucid description by sharman there are two principal sources for the parameters of the modelif a tagged corpus prepared by a human annotator is available the transition and lexical probabilities can be estimated from the frequencies of pairs of tags and of tags associated with wordsalternatively a 
procedure called baumwelch reestimation may be used in which an untagged corpus is passed through the fb algorithm with some initial model and the resulting probabilities used to determine new values for the lexical and transition probabilitiesby iterating the algorithm with the same corpus the parameters of the model can be made to converge on values which are locally optimal for the given textthe degree of convergence can be measured using a perplexity measure the sum of plog2p for hypothesis probabilities p which gives an estimate of the degree of disorder in the modelthe algorithm is again described by cutting et ad and by sharman and a mathematical justification for it can be found in huang et atthe first major use of hmms for part of speech tagging was in claws in the 1970swith the availability of large corpora and fast computers there has been a recent resurgence of interest and a number of variations on and alternatives to the fb viterbi and bw algorithms have been tried see the work of for example church brill derose and kupiec one of the most effective taggers based on a pure hmm is that developed at xerox an important aspect of this tagger is that it will give good accuracy with a minimal amount of manually tagged training data96 accuracy correct assignment of tags to word token compared with a human annotator is quoted over a 500000 word corpusthe xerox tagger attempts to avoid the need for a handtagged training corpus as far as possibleinstead an approximate model is constructed by hand which is then improved by bw reestimation on an untagged training corpusin the above example 8 iterations were sufficientthe initial model set up so that some transitions and some tags in the lexicon are favoured and hence having a higher initial probabilityconvergence of the model is improved by keeping the number of parameters in the model downto assist in this low frequency items in the lexicon are grouped together into equivalence classes such that all words in a given equivalence class have the same tags and lexical probabilities and whenever one of the words is looked up then the data common to all of them is usedreestimation on any of the words in a class therefore counts towards reestimation for all of themthe results of the xerox experiment appear very encouragingpreparing tagged corpora either by hand is labourintensive and potentially errorprone and although a semiautomatic approach can be used it is a good thing to reduce the human involvement as much as possiblehowever some careful examination of the experiment is neededin the first place cutting et a do not compare the success rate in their work with that achieved from a handtagged training text with no reestimationsecondly it is unclear how much the initial biasing contributes the success rateif significant human intervention is needed to provide the biasing then the advantages of automatic training become rather weaker especially if such intervention is needed on each new text domainthe kind of biasing cutting et a describe reflects linguistic insights combined with an understanding of the predictions a tagger could reasonably be expected to make and the ones it could notthe aim of this paper is to examine the role that training plays in the tagging process by an experimental evaluation of how the accuracy of the tagger varies with the initial conditionsthe results suggest that a completely unconstrained initial model does not produce good quality results and that one the technique was originally developed by kupiec accurately 
trained from a handtagged corpus will generally do better than using an approach based on reestimation even when the training comes from a different sourcea second experiment shows that there are different patterns of reestimation and that these patterns vary more or less regularly with a broad characterisation of the initial conditionsthe outcome of the two experiments together points to heuristics for making effective use of training and reestimation together with some directions for further researchwork similar to that described here has been carried out by merialdo with broadly similar conclusionswe will discuss this work belowthe principal contribution of this work is to separate the effect of the lexical and transition parameters of the model and to show how the results vary with different degree of similarity between the training and test datathe experiments were conducted using two taggers one written in c at cambridge university computer laboratory and the other in c at sharp laboratoriesboth taggers implement the fb viterbi and bw algorithmsfor training from a handtagged corpus the model is estimated by counting the number of transitions from each tag i to each tag j the total occurrence of each tag i and the total occurrence of word w with tag iwriting these as f f and f respectively the transition probability from tag i to tag j is estimated as f f and the lexical probability as f f other estimation formulae have been used in the pastfor example claws normalises the lexical probabilities by the total frequency of the word rather than of the tagconsulting the baumwelch reestimation formulae suggests that the approach described is more appropriate and this is confirmed by slightly greater tagging accuracyany transitions not seen in the training corpus are given a small nonzero probabilitythe lexicon lists for each word all of tags seen in the training corpus with their probabilitiesfor words not found in the lexicon all openclass tags are hypothesised with equal probabilitiesthese words are added to the lexicon at the end of first iteration when reestimation is being used so that the probabilities of their hypotheses subsequently diverge from being uniformto measure the accuracy of the tagger we compare the chosen tag with one provided by a human annotatorvarious methods of quoting accuracy have been used in the literature the most common being the proport ion of words receiving the correct taga better measure is the proportion of ambiguous words which are given the correct tag where by ambiguous we mean that more than one tag was hypothesisedthe former figure looks more impressive but the latter gives a better measure of how well the tagger is doing since it factors out the trivial assignment of tags to nonambiguous wordsfor a corpus in which a fraction a of the words are ambiguous and p is the accuracy on ambiguous words the overall accuracy can be recovered from 1 a pa all of the accuracy figures quoted below are for ambiguous words onlythe training and test corpora were drawn from the lob corpus and the penn treebankthe hand tagging of these corpora is quite differentfor example the lob tagset used 134 tags while the penn treebank tagset has 48the general pattern of the results presented does not vary greatly with the corpus and tagset usedthe first experiment concerned the effect of the initial conditions on the accuracy using baumwelch reestimationa model was trained from a handtagged corpus in the manner described above and then degraded in various ways to simulate the 
effect of poorer training as followsdo undegraded lexical probabilities calculated from f f d1 lexical probabilities are correctly ordered so that the most frequent tag has the highest lexical probability and so on but the absolute values are otherwise unreliabled2 lexical probabilities are proportional to the overall tag frequencies and are hence independent of the actual occurrence of the word in the training corpusd3 all lexical probabilities have the same value so that the lexicon contains no information other than the possible tags for each wordto undegraded transition probabilities calculated from f f ti all transition probabilities have the same valuewe could expect to achieve d1 from say a printed dictionary listing parts of speech in order of frequencyperfect training is represented by case dotothe xerox experiments correspond to something between d1 and d2 and between to and ti in that there is some initial biasing of the probabilitiesfor the test four corpora were constructed from the lob corpus lobb from part b lobl from part l lobbg from parts b to g inclusive and lobl33 from parts b to j inclusivecorpus lobbj was used to train the model and lobb lobl and lobbg were passed through thirty iterations of the bw algorithm as untagged datain each case the best accuracy from the fb algorithm was notedas an additional test we tried assigning the most probable tag from the do lexicon completely ignoring tagtag transitionsthe results are summarised in table 1 for various corpora where f denotes the quotmost frequent tagquot testas an example of how these figures relate to overall accuracies lobb contains 3235 ambiguous tokens with respect to the lexicon from lobbj and the overall accuracy in the doto case is hence 9869the general pattern of the results is similar across the three test corpora with the only difference of interest being that case d3to does better for lobl than for the other two cases and in particular does better than cases dot1 and dltla possible explanation is that in this case the test data does not overlap with the training data and hence the good quality lexicons have less of an influenceit is also interesting that d3t1 does better than d2ft1the reasons for this are unclear and the results are not always the same with other corpora which suggests that they are not statistically significantseveral followup experiments were used to confirm the results using corpora from the penn treebank using equivalence classes to ensure that all lexical entries have a total relative frequency of at least 001 and using larger corporathe specific accuracies were different in the various tests but the overall patterns remained much the same suggesting that they are not an artifact of the tagset or of details of the textthe observations we can make about these results are as followsfirstly two of the tests d2t1 and d31t1 give very poor performancetheir accuracy is not even as good as that achieved by picking the most frequent tag it follows that if baumwelch reestimation is to be an effective technique the initial data must have either biasing in the transitions or in the lexical probabilities but it is not necessary to have both secondly training from a handtagged corpus always does best even when the test data is from a different source to the training data as it is for loblso perhaps it is worth investing effort in handtagging training corpora after all rather than just building a lexicon and letting reestimation sort out the probabilitiesbut how can we ensure that reestimation will 
produce a good quality modelwe look further at this issue in the next sectionduring the first experiment it became apparent that baumwelch reestimation sometimes decreases the accuracy as the iteration progressesa second experiment was conducted to decide when it is appropriate to use baumwelch reestimation at allthere seem to be three patterns of behaviour classical a general trend of rising accuracy on each iteration with any falls in accuracy being localit indicates that the model is converging towards an optimum which is better than its starting pointinitial maximum highest accuracy on the first iteration and falling thereafterin this case the initial model is of better quality than bw can achievethat is while bw will converge on an optimum the notion of optimality is with respect to the hmm rather than to the linguistic judgements about correct taggingearly maximum rising accuracy for a small number of iterations and then falling as in initial maximuman example of each of the three behaviours is shown in figure 1the values of the accuracies and the test conditions are unimportant here all we want to show is the general patternsthe second experiment had the aim of trying to discover which pattern applies under which circumstances in order to help decide how to train the modelclearly if the expected pattern is initial maximum we should not use bw at all if early maximum we should halt the process after a few iterations and if classical we should halt the process in a quotstandardquot way such as comparing the perplexity of successive modelsthe tests were conducted in a similar manner to those of the first experiment by building a lexicon and transitions from a hand tagged training corpus and then applying them to a test corpus with varying degrees of degradationfirstly four different degrees of degradation were used no degradation at all d2 degradation of the lexicon ti degradation of the transitions and the two togethersecondly we selected test corpora with varying degrees of similarity to the training corpus the same text text from a similar domain and text which is significantly differenttwo tests were conducted with each combination of the degradation and similarity using different corpora ranging in size from approximately 50000 words to 500000 wordsthe reestimation was allowed to run for ten iterationsthe results appear in table 2 showing the best accuracy achieved the iteration at which it occurred and the pattern of reestimation the patterns are summarised in table 3 each entry in the table showing the patterns for the two tests under the given conditionsalthough there is some variations in the readings for example in the quotsimilardotoquot case we can draw some general conclusions about the patterns obtained from different sorts of datawhen the lexicon is degraded the pattern is always classicalwith a good lexicon but either degraded transitions or a test corpus differing from the training corpus the pattern tends to be early maximumwhen the test corpus is very similar to the model then the pattern is initial maximumfurthermore examining the accuracies in table 2 in the cases of initial maximum and early maximum the accuracy tends to be significantly higher than with classical behaviourit seems likely that what is going on is that the model is converging to towards something of similar quotqualityquot in each case but when the pattern is classical the convergence starts from a lower quality model and improves and in the other cases it starts from a higher quality one and 
deterioratesin the case of early maximum the few iterations where the accuracy is improving correspond to the creation of entries for unknown words and the fine tuning of ones for known ones and these changes outweigh those produced by the reestimationfrom the observations in the previous section we propose the following guidelines for how to train a hmm for use in tagging able use bw reestimation with standard convergence tests such as perplexitywithout a lexicon some initial biasing of the transitions is needed if good results are to be obtainedsimilar results are presented by merialdo who describes experiments to compare the effect of training from a handtagged corpora and using the baumwelch algorithm with various initial conditionsas in the experiments above bw reestimation gave a decrease in accuracy when the starting point was derived from a significant amount of handtagged textin addition although merialdo does not highlight the point bw reestimation starting from less than 5000 words of handtagged text shows early maximum behaviourmerialdo conclusion is that taggers should be trained using as much handtagged text as possible to begin with and only then applying bw reestimation with untagged textthe step forward taken in the work here is to show that there are three patterns of reestimation behaviour with differing guidelines for how to use bw effectively and that to obtain a good starting point when a handtagged corpus is not available or is too small either the lexicon or the transitions must be biasedwhile these may be useful heuristics from a practical point of view the next step forward is to look for an automatic way of predicting the accuracy of the tagging process given a corpus and a modelsome preliminary experiments with using measures such as perplexity and the average probability of hypotheses show that while they do give an indication of convergence during reestimation neither shows a strong correlation with the accuracyperhaps what is needed is a quotsimilarity measurequot between two models m and m such that if a corpus were tagged with model m m is the model obtained by training from the output corpus from the tagger as if it were a handtagged corpushowever preliminary experiments using such measures as the kullbackliebler distance between the initial and new models have again showed that it does not give good predictions of accuracyin the end it may turn out there is simply no way of making the prediction without a source of information extrinsic to both model and corpusthe work described here was carried out at the cambridge university computer laboratory as part of esprit br project 7315 quotthe acquisition of lexical knowledgequot the results were confirmed and extended at sharp laboratories of europei thank ted briscoe for his guidance and advice and the an lp referees for their comments
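The estimation step and the lexicon degradations described above (transition probability f(i,j)/f(i), lexical probability f(i,w)/f(i), and the D0–D3 conditions) can be sketched as follows. This is a minimal illustration, not the Cambridge or Sharp taggers: the corpus format, the epsilon floor for unseen transitions, and the concrete values used to realise the D1 ordering are assumptions made for the sketch.

```python
from collections import defaultdict

def estimate(tagged_sentences, epsilon=1e-6):
    """Transition p(j|i) = f(i,j)/f(i) and lexical p(w|i) = f(i,w)/f(i),
    counted from a hand-tagged corpus given as lists of (word, tag) pairs."""
    f_tag = defaultdict(int)      # f(i)
    f_trans = defaultdict(int)    # f(i, j)
    f_lex = defaultdict(int)      # f(i, w)
    for sent in tagged_sentences:
        for k, (w, t) in enumerate(sent):
            f_tag[t] += 1
            f_lex[(t, w)] += 1
            if k + 1 < len(sent):
                f_trans[(t, sent[k + 1][1])] += 1
    trans = {(i, j): c / f_tag[i] for (i, j), c in f_trans.items()}
    lex = {(t, w): c / f_tag[t] for (t, w), c in f_lex.items()}
    return trans, lex, epsilon    # epsilon: small non-zero value for unseen transitions

def degrade_lexicon(lex, f_tag, level):
    """D0: untouched.  D1: only the ordering of tags per word is kept.
    D2: proportional to overall tag frequencies.  D3: uniform over the word's tags."""
    if level == 0:
        return dict(lex)
    total = sum(f_tag.values())
    by_word = defaultdict(list)
    for (t, w), p in lex.items():
        by_word[w].append((t, p))
    degraded = {}
    for w, tags in by_word.items():
        tags.sort(key=lambda tp: -tp[1])              # most frequent tag first
        for rank, (t, _) in enumerate(tags):
            if level == 1:
                degraded[(t, w)] = 1.0 / (rank + 2)   # correct order, otherwise arbitrary values
            elif level == 2:
                degraded[(t, w)] = f_tag[t] / total   # overall tag frequency
            else:                                     # level 3: uniform
                degraded[(t, w)] = 1.0 / len(tags)
    return degraded
```

The T1 transition degradation would analogously replace every transition probability with the same constant value.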
A94-1009
does baumwelch reestimation help taggersin part of speech tagging by hidden markov model a statistical model is used to assign grammatical categories to words in a textearly work in the field relied on a corpus which had been tagged by a human annotator to train the modelmore recently cutting et al suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities by using an baumwelch reestimation to automatically refine the modelin this paper i report two experiments designed to determine how much manual training information is neededthe first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracythe second experiment reveals that there are three distinct patterns of baumwelch reestimationin two of the patterns the reestimation ultimately reduces the accuracy of the tagging rather than improving itthe pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus and the corpus to be taggedheuristics for deciding how to use reestimation in an effective manner are giventhe conclusions are broadly in agreement with those of merialdo but give greater detail about the contributions of different parts of the modelwe report an accuracy of 7549 8087 and 7912 for unsupervised wordbased hmm trained on parts of the lob corpora with a tag set of 134 tags
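Two small helpers make the measures used above concrete: the convergence measure quoted as "the sum of p log2 p" over hypothesis probabilities (written below with the conventional minus sign) and the recovery of overall accuracy from accuracy on ambiguous words, (1 − a) + p·a. Function names and the numbers in the usage comment are illustrative only.

```python
import math

def disorder(hypothesis_probs):
    """Perplexity-style convergence measure: -sum(p * log2(p)) over hypothesis
    probabilities; the text quotes the sum without the conventional minus sign."""
    return -sum(p * math.log2(p) for p in hypothesis_probs if p > 0)

def overall_accuracy(p_ambiguous, ambiguous_fraction):
    """Overall tagging accuracy recovered from accuracy on ambiguous words only:
    (1 - a) + p * a, since unambiguous words are trivially assigned correctly."""
    a = ambiguous_fraction
    return (1.0 - a) + p_ambiguous * a

# e.g. with 96% accuracy on ambiguous words and a third of the words ambiguous:
# overall_accuracy(0.96, 1/3) -> about 0.987
```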
three heads are better than one machinereadable dictionary the lexicons used by the kbmt modules a large set of usergenerated bilingual glossaries as well as a gazetteer and a list of proper and organization namesthe outputs from these engines are recorded in a chart whose positions correspond to words in the source language inputas a result of the operation of each of the mt engines new edges are added to the chart each labeled with the translation of a region of the input string and indexed by this region beginning and end positionswe will refer to all of these edges as components for the remainder of this articlethe kbmt and ebmt engines also carry a quality score for each output elementthe kbmt scores are produced based on whether any questionable heuristics were used in the source analysis or target generationthe ebmt scores are produced using a technique based on human judgements as described in submittedfigure 1 presents a general view of the operation of our multiengine mt systemthe chart manager selects the overall best cover from the collection of candidate partial translations by normalizing each component quality score and then selecting the best combination of components with the help of the chart walk algorithmfigure 2 illustrates the result of this process on the example spanish sentence al momenta de su yenta a iberia viasa contaba con ocho aviones que tenzan en promedio 13 anos de vuelo which can be translated into english as at the moment of its sale to iberia viasa had eight airplanes which had on average thirteen years of flight this is a sentence from one of the 1993 arpa mt evaluation textsfor each component the starting and ending positions in the chart the corresponding source language words and alternative translations are shown as well as the engine and the engineinternal quality scoresinspection of these translations shows numerous problems for example at position 12 quotavionesquot is translated among other things as quotaircraftsquotit must be remembered that these were generated automatically from an online dictionary without any lexical feature marking or other human interventionit is well known that such automatic methods are at the moment less than perfect to say the leastin our current system this is not a major problem since the results go through a mandatory editing step as described belowthe chart manager normalizes the internal scores to make them directly comparablein the case of kbmt and ebmt the preexisting scores are modified while lexical transfer results are scored based on the estimated reliability of individual databases from 05 up to 15currently the kbmt scores are reduced by a constant except for known erroneous output which has its score set to zerothe internal ebmt scores range from 0 being perfect to 10000 being worthless but the scores are nonlinearso a region selected by a threshold is converted linearly into scores ranging from zero to a normalized maximum ebmt scorethe normalization levels were empirically determined in the initial experiment by having several individuals judge the comparative average quality of the outputs in an actual translation runin every case the base score produced by the scoring functions is currently multiplied by the length of the candidate in words on the assumption that longer items are betterwe intend to test a variety of functions in order to find the right contribution of the length factorfigure 3 presents the chart walk algorithm used to produce a single best nonoverlapping contiguous combination of the 
available component translations assuming correct component quality scoresthe code is organized as a recursive divideandconquer procedure to calculate the cover of a region of the input it is repeatedly split into two parts at each possible positioneach time the best possible cover for each part is recursively found and the two scores are combined to give a score for the chart walk containing the two best subwalksthese different splits are then compared with each other and with components from the chart spanning the whole region and the overall best result is without dynamic programming this would have a d 2 combinatorial time complexitydynamic programl 25 ming utilizes a large array to store partial results so that the best cover of any given subsequence is only computed once the second time that a recursive call would compute the same result it is retrieved from the array insteadthis reduces the time complexity to 0 and in practice it uses an insignificant part of total processing timeg 5 all possible combinations of components are cornd 2 pared this is not a heuristic method but an efficient exhaustive onethis is what assures that the chog 5 sen cover is optimalthis assumes in addition to the scores actually being correct that the scores are compositional in the sense that the combined score for a set of components really represents their quality as a groupthis might not be the case for example if gaps or overlaps are allowed in some cases we calculate the combined score for a sequence of d 2 components as the weighted average of their individual scoresweighting by length is necessary so that g 5 the same components when combined in a different order produce the same combined scoresotherwise the algorithm can produce inconsistent resultse 88 the chart walk algorithm can also be thought of as filling in the twodimensional dynamicprogramming arrayl figure 4 shows an intermediate point in the filling of the arrayin this figure each element is initially the best score of any single chart compod 2 nent covering the input region from word i to word j dashes indicate that no one component covers exnote that this array is a different data structure from the chart actly that regionafter processing each element is the score for the best set of components covering the input from word i to word j 2the array is upper triangular since the starting position of a component i must be less than or equal to its ending position jfor any position the score is calculated based on a combination of scores in the row to its left and in the column below it versus the previous contents of the array cell for its positionso the array must be filled from the bottomup and left to rightintuitively this is because larger regions must be built up from smaller regions within themfor example to calculate element we compute the lengthweighted averages of the scores of the best walks over the pair of elements and versus the pair and and compare them with the scores of any single chart components going from 8 to 10 and take the maximumreferring to figure 2 again this corresponds to a choice between combining the translations of viasa and contaba con versus combining the translations of viasa contaba and con element was itself previously built up from single word componentsthus we compare 3 733 with 3 30 and select the first 733the first wins because contaba con has a high score as an idiom from the glossaryfigure 5 shows the final arraywhen the element in the topright corner is produced the algorithm is finished and the 
associated set of components is the final chart walk result shown in figure 2it may seem that the scores should increase towards the topright cornerthis has not generally been the casewhile the system produces a number of highscoring short components many lowscoring components have to be included to span the entire inputsince the score is a weighted average these lowscoring components pull the combined score downa clear example can be seen at position which has a score of 15the scores above and to its right each average this 15 with a 5 for total values of 100 and the score continues to decrease with distance from this point as one moves towards the final score which does include the component for in the coverthe chartoriented integration of mt engines does not easily support deviations from the linear order of the source text elements as when discontinuous constituents translate contiguous strings or in the case of crosscomponent substring order differenceswe use a language pairdependent set of postprocessing rules to alleviate this results of multiengine mt were fed in our experiment into a translator workstation through which a translator either approved the system output or modified itthe main option for human interaction in tws currently is the component machineaided translation editor the user sees the original source language text in one editor window and phrases marked by double angle brackets in another each of which is the first translation from a candidate chosen by the chart walkmenus function keys and mouse clicks are used to perform both regular and enhanced editing actionsthe most important enhancement provided is the ability to select an alternate translation with a popup menu and instantly replace the system initially chosen candidate translation string which becomes the first alternative in this menu if it is used againthe alternate translations are the other translations from the chosen component3as mentioned above figure 2 shows the sets of candidates in the best chart walk that are presented as choices to the human user through the cmat editor in our exampleautomatically assessing the utility of the multiengine system relative to the engines taken separately would be a useful development toolthe best method we could find was counting the number of keystrokes in the tws to convert the outputs of individual engines and the multiengine configuration to a quotcanonicalquot human translationa sample test on a passage of 2060 characters from the june 1993 evaluation of pangloss is shown in figure 6the difference in keystrokes was calculated as follows one keystroke for deleting a character two the cmat editor may also include translations from other candidates lower in the menu if they have the same boundaries as the chosen candidate and the menu is not too long keystrokes for inserting a character three keystrokes for deleting a word three keystrokes plus the number of characters in the word being inserted for inserting a wordit is clear from the above table that the multiengine configuration works better than any of our available individual engines though it still does not reach the quality of a level 2 translatorit is also clear that using keystrokes as a measure is not very satisfactoryit would be much better to make the comparison against the closest member of a set of equivalent paraphrastic translations since there are many quotcorrectquot ways of translating a given inputhowever this is predicated on the availability of a quotparaphraserquot system developing which 
is not a trivial taskultimately a multiengine system depends on the quality of each particular enginewe expect the performance of kbmt and ebmt to growwe plan to use a standard regression mechanism to modify the scoring system based on feedback from having humans select the best covers for test textsthe current system is humanaidedwe have begun an experiment with a fullyautomated mode with the understanding that the quality will dropthe most important effect of this change is that accurate quality scores become much more important since the first choice becomes the only choicebesides improving the kbmt and ebmt scoring mechanisms we need to provide finer distinctions for the lexical transfer engine outputas the databases for this are quite large adding scores to individual entries is in the short run prohibitivewe have not as yet discovered any feasible automatic technique for generating such scoresinstead we are planning to use an english language model on the output in a manner similar to that done by speech and statistical translation systems statistically generating such a model is feasible since it does not rely on knowing correspondences between source and target languagesit is a weaker approach but should go some distance in selecting between otherwise indistinguishable outputsanother possible direction for future development would be to employ ideas from the area of heuristic search and only run the highestqualityscore engine on each unit of source textthis assumes that we can reliably estimate scores in advance and that the engines can be run on fragmentsa less ambitious version of this idea would be to run the lowscoring engines only where there are gaps in the normally highscoring engines
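A compact sketch of the chart-walk dynamic programme described above: the best cover of a region is either a single chart component spanning it exactly or the best split into two sub-covers, scored by the length-weighted average. The data layout (a dict of scored edges keyed by word positions) and the return format are assumptions, not the Pangloss implementation, and memoisation stands in for the explicit upper-triangular array.

```python
from functools import lru_cache

def chart_walk(components, n_words):
    """components: {(i, j): (score, translation)} for edges covering words i..j
    inclusive, with engine scores already normalised and length-weighted."""

    @lru_cache(maxsize=None)
    def best(i, j):
        candidates = []
        if (i, j) in components:                     # a single component spans i..j
            score, translation = components[(i, j)]
            candidates.append((score, (translation,)))
        for k in range(i, j):                        # every split point
            left_score, left_cover = best(i, k)
            right_score, right_cover = best(k + 1, j)
            if left_cover is None or right_cover is None:
                continue
            llen, rlen = k - i + 1, j - k
            combined = (left_score * llen + right_score * rlen) / (llen + rlen)
            candidates.append((combined, left_cover + right_cover))
        if not candidates:
            return 0.0, None                         # no cover of this region
        return max(candidates, key=lambda c: c[0])

    return best(0, n_words - 1)

# toy usage:
# chart_walk({(0, 0): (5, 'at'), (1, 2): (7, 'the moment'), (0, 2): (4, 'right now')}, 3)
# -> (6.333..., ('at', 'the moment'))
```

Weighting by length when averaging keeps the combined score independent of the order in which sub-walks are merged, which is what makes the exhaustive comparison of splits consistent.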
A94-1016
three heads are better than one. machine translation systems do not currently achieve optimal quality translation on free text whatever translation method they employ. our hypothesis is that the quality of mt will improve if an mt environment uses output from a variety of mt systems working on the same text. in the latest version of the pangloss mt project we collect the results of three translation engines (typically subsentential chunks) in a chart data structure. since the individual mt systems operate completely independently their results may be incomplete conflicting or redundant. we use simple scoring heuristics to estimate the quality of each chunk and find the highestscore sequence of chunks. this paper describes in detail the combining method presenting the algorithm and illustrations of its progress on one of many actual translations it has produced. it uses dynamic programming to efficiently compare weighted averages of sets of adjacent scored component translations. the current system operates primarily in a humanaided mt mode. the translation delivery system and its associated postediting aide are briefly described as is an initial evaluation of the usefulness of this method. individual mt engines will be reported separately and are not therefore described in detail here. we produce the first memt system by combining outputs from three different mt engines based on their knowledge of the inner workings of the engines. we develop a multiengine mt system which builds a chart using the translation units inside each input system and then uses a chart walk algorithm to find the best cover of the source sentence.
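The keystroke measure used in the evaluation above can be written down directly; how the four operation counts are obtained (the alignment against the canonical human translation) is not specified in the text and is left to the caller here.

```python
def keystroke_cost(deleted_chars=0, inserted_chars=0, deleted_words=0, inserted_words=()):
    """Keystroke distance to a canonical human translation: 1 per deleted character,
    2 per inserted character, 3 per deleted word, 3 + len(word) per inserted word."""
    return (deleted_chars
            + 2 * inserted_chars
            + 3 * deleted_words
            + sum(3 + len(w) for w in inserted_words))
```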
a maximum entropy approach to identifying sentence boundaries we present a trainable model for identifying sentence boundaries in raw text given a corpus annotated with sentence boundaries our model learns to classify each occurrence of and as either a valid or invalid sentence boundary the training procedure requires no handcrafted rules lexica partofspeech tags or domainspecific information the model can therefore be trained easily on any genre of english and should be trainable on any other romanalphabet language performance is comparable to or better than the performance of similar systems but we emphasize the simplicity of retraining for new domains the task of identifying sentence boundaries in text has not received as much attention as it deservesmany freely available natural language processing tools require their input to be divided into sentences but make no mention of how to accomplish this others perform the division implicitly without discussing performance on first glance it may appear that using a short list of sentencefinal punctuation marks such as and is sufficienthowever these punctuation marks are not used exclusively to mark sentence breaksfor example embedded quotations may contain any of the sentenceending punctuation marks and is used as a decimal point in email addresses to indicate ellipsis and in abbreviationsboth and are somewhat less ambiguous the authors would like to acknowledge the support of arpa grant n6600194c6043 aro grant daah0494g0426 and nsf grant sbr8920230 but appear in proper names and may be used multiple times for emphasis to mark a single sentence boundarylexicallybased rules could be written and exception lists used to disambiguate the difficult cases described abovehowever the lists will never be exhaustive and multiple rules may interact badly since punctuation marks exhibit absorption propertiessites which logically should be marked with multiple punctuation marks will often only have one as summarized in for example a sentenceending abbreviation will most likely not be followed by an additional period if the abbreviation already contains one as a result we believe that manually writing rules is not a good approachinstead we present a solution based on a maximum entropy model which requires a few hints about what information to use and a corpus annotated with sentence boundariesthe model trains easily and performs comparably to systems that require vastly more informationtraining on 39441 sentences takes 18 minutes on a sun ultra sparc and disambiguating the boundaries in a single wall street journal article requires only 14 secondsto our knowledge there have been few papers about identifying sentence boundariesthe most recent work will be described in there is also a less detailed description of palmer and hearst system satz in the satz architecture uses either a decision tree or a neural network to disambiguate sentence boundariesthe neural network achieves 985 accuracy on a corpus of wall street journal we recommend these articles for a more comprehensive review of sentenceboundary identification work than we will be able to provide here articles using a lexicon which includes partofspeech tag informationby increasing the quantity old training data and decreasing the size of their test corpus palmer and hearst achieved performance of 89 with the neural networkthey obtained similar results using the decision treeall the results we will present for our algorithms are on their initial larger test corpusin riley describes a decisiontree based 
approach to the problemhis performance on i he brown corpus is 998 using a model learned from a corpus of 25 million wordsliberman and church suggest in that a system could be quickly built to divide newswire text into sentences with a nearly negligible error rate but do not actually build such a systemwe present two systems for identifying sentence boundariesone is targeted at high performance and uses some knowledge about the structure of english financial newspaper text which may not be applicable to text from other genres or in other languagesthe other system uses no domainspecific knowledge and is aimed at being portable across english text genres and roman alphabet languagespotential sentence boundaries are identified by scanning the text for sequences of characters separated by whitespace containing one of the symbols or we use information about the token containing the potential sentence boundary as well as contextual information about the tokens immediately to the left and to the rightwe also conducted tests using wider contexts but performance did not improvewe call the token containing the symbol which marks a putative sentence boundary the candidatehie portion of the candidate preceding the potential sentence boundary is called the prefix and the portion following it is called the suffixthe system that focused on maximizing performance used the following hints or contextual quottemplatesquot the templates specify only the form of the informationthe exact information used by the maximum entropy model for the potential sentence boundary marked by in corp in example 1 would be previouswordiscapitalized prefix corp suffixnull prefixfeaturecorporatedesignatorthe highly portable system uses only the identity of the candidate and its neighboring words and a list of abbreviations induced from the training data2 specifically the quottemplatesquot used are the information this model would use for example 1 would be previouswordanlp followingwordchairmon prefixcorp suffixnull prefixfeatureinducedabbreviationthe abbreviation list is automatically produced from the training data and the contextual questions are also automatically generated by scanning the training data with question templatesas a result no handcrafted rules or lists are required by the highly portable system and it can be easily retrained for other languages or text genresthe model used here for sentenceboundary detection is based on the maximum entropy model used for pos tagging in for each potential sentence boundary token we estimate a joint probability distribution p of the token and its surrounding context both of which are denoted by c occurring as an actual sentence boundarythe distribution is given by p ir f where b e no yes where the cri are the unknown parameters of the model and where each aj corresponds to a fi or a featurethus the probability of seeing an actual sentence boundary in the context c is given by pthe contextual information deemed useful for sentenceboundary detection which we described earlier must be encoded using featuresfor example a useful feature might be this feature will allow the model to discover that the period at the end of the word mr seldom occurs as a sentence boundarytherefore the parameter corresponding to this feature will hopefully boost the probability p if the prefix is mrthe parameters are chosen to maximize the likelihood of the training data using the generalized iterative scaling algorithmthe model also can be viewed under the maximum entropy framework in which we choose a 
distribution p that maximizes the entropy h where 5 is the observed distribution of sentenceboundaries and contexts in the training dataas a result the model in practice tends not to commit towards a particular outcome unless it has seen sufficient evidence for that outcome it is maximally uncertain beyond meeting the evidenceall experiments use a simple decision rule to classify each potential sentence boundary a potential sentence boundary is an actual sentence boundary if and only if p 5 where and where c is the context including the potential sentence boundarywe trained our system on 39441 sentences of wall street journal text from sections 00 through 24 of the second release of the penn treebank3 we corrected punctuation mistakes and erroneous sentence boundaries in the training dataperformance figures for our best performing system which used a handcrafted list of honorifics and corporate designators are shown in table 1the first test set wsj is palmer and hearst initial test data and the second is the entire brown corpuswe present the brown corpus performance to show the importance of training on the genre of text on which testing will be performedtable 1 also shows the number of sentences in each corpus the number of candidate punctuation marks the accuracy over potential sentence boundaries the number of false positives and the number of false negativesperformance on the wsj corpus was as we expected higher than performance on the brown corpus since we trained the model on financial newspaper textpossibly more significant than the system performance is its portability to new domains and languagesa trimmed down system which used no information except that derived from the training corpus performs nearly as well and requires no resources other than a training corpusits performance on the same two corpora is shown in table 2since 39441 training sentences is considerably more than might exist in a new domain or a language other than english we experimented with the quantity of training data required to maintain performancetable 3 shows performance on the wsj corpus as a function of training set size using the best performing system and the more portable systemas can seen from the table performance degrades as the quantity of training data decreases but even with only 500 example sentences performance is beter than the baselines of 64000 if a sentence boundary is guessed at every potential site and 784 if only tokenfinal instances of sentenceending punctuation are assumed to be boundarieswe have described an approach to identifying sentence boundaries which performs comparably to other stateoftheart systems that require vastly more resourcesfor example riley performance on the brown corpus is higher than ours but his system is trained on the brown corpus and uses thirty times as much data as our systemalso palmer l hearst system requires pos tag information which limits its use to those genres or languages for which there are either pos tag lexica or pos tag annotated corpora that could be used to train automatic taggersin comparison our system does not require pos tags or any supporting resources beyond the sentenceboundary annotated corpusit is therefore easy and inexpensive to retrain this system for different genres of text in english and text in other romanalphabet languagesfurthermore we showed that a small training corpus is sufficient for good performance and we estimate that annotating enough data to achieve good performance would require only several hours of work in 
comparison to the many hours required to generate pos tag and lexical probabilitieswe would like to thank david palmer for giving us the test data he and marti hearst used for their sentence detection experimentswe would also like to thank the anonymous reviewers for their helpful insights
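The decision rule and the contextual templates described above can be sketched as follows. The candidate pattern, the split of the candidate at its first period, and the weight representation (log-alphas keyed by feature and outcome) are simplifying assumptions; the real system estimates its parameters with generalised iterative scaling, which is not shown here.

```python
import math
import re

CANDIDATE = re.compile(r"\S*[.!?]\S*")   # whitespace-delimited tokens containing . ! or ?

def features(prev_word, candidate, next_word, induced_abbreviations):
    """Contextual predicates for one potential boundary (portable-system style)."""
    prefix, _, suffix = candidate.partition(".")   # simplification: split at first period
    feats = {
        "prefix=" + (prefix or "NULL"),
        "suffix=" + (suffix or "NULL"),
        "prev=" + prev_word,
        "next=" + next_word,
    }
    if prefix.lower() in induced_abbreviations:
        feats.add("prefix_is_induced_abbreviation")
    if prev_word[:1].isupper():
        feats.add("previous_word_is_capitalised")
    return feats

def p_boundary(feats, weights):
    """p(yes | c) for a two-outcome log-linear model: proportional to the product of
    alpha_j over the active features, renormalised over {yes, no}."""
    scores = {}
    for outcome in ("yes", "no"):
        scores[outcome] = math.exp(sum(weights.get((f, outcome), 0.0) for f in feats))
    return scores["yes"] / (scores["yes"] + scores["no"])

def is_sentence_boundary(feats, weights):
    return p_boundary(feats, weights) >= 0.5       # the decision rule from the text
```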
A97-1004
a maximum entropy approach to identifying sentence boundaries. we present a trainable model for identifying sentence boundaries in raw text. given a corpus annotated with sentence boundaries our model learns to classify each occurrence of and as either a valid or invalid sentence boundary. the training procedure requires no handcrafted rules lexica partofspeech tags or domainspecific information. the model can therefore be trained easily on any genre of english and should be trainable on any other romanalphabet language. performance is comparable to or better than the performance of similar systems but we emphasize the simplicity of retraining for new domains. our statistical system mxterminator employs simpler lexical features of the words to the left and right of the candidate period.
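The portable system relies on an abbreviation list induced from the training corpus, but the exact induction procedure is not given above. The rule below — a token type ending in a period counts as an abbreviation if it usually does not end a sentence in the annotated data — is only one plausible reading, and the thresholds are illustrative assumptions.

```python
from collections import Counter

def induce_abbreviations(annotated_candidates, min_count=3, max_boundary_rate=0.5):
    """annotated_candidates: iterable of (token, ends_sentence) pairs, one per
    period-final candidate token in the training corpus.  The thresholds are
    assumptions for this sketch, not values from the paper."""
    seen = Counter()
    as_boundary = Counter()
    for token, ends_sentence in annotated_candidates:
        key = token.lower()
        seen[key] += 1
        as_boundary[key] += int(ends_sentence)
    return {t for t, n in seen.items()
            if n >= min_count and as_boundary[t] / n < max_boundary_rate}
```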
a nonprojective dependency parser we describe a practical parser for unrestricted dependencies the parser creates links between words and names the links according to their syntactic functions we first describe the older constraint grammar parser where many of the ideas come from then we proceed to describe the central ideas of our new parser finally the parser is evaluated we are concerned with surfacesyntactic parsing of running textour main goal is to describe syntactic analyses of sentences using dependency links that show the headmodifier relations between wordsin addition these links have labels that refer to the syntactic function of the modifying worda simplified example is in figure 1 where the link between i and see denotes that i is the modifier of see and its syntactic function is that of subjectsimilarly a modifies bird and it is a determinerfirst in this paper we explain some central concepts of the constraint grammar framework from which many of the ideas are derivedthen we give some linguistic background to the notations we are using with a brief comparison to other current dependency formalisms and systemsnew formalism is described briefly and it is utilised in a small toy grammar to illustrate how the formalism worksfinally the real parsing system with a grammar of some 2 500 rules is evaluatedthe parser corresponds to over three manyears of work which does not include the lexical analyser and the morphological disambiguator both parts of the existing english constraint grammar parser the parsers can be tested via wwwour work is partly based on the work done with the constraint grammar framework that was originally proposed by fred karlsson a detailed description of the english constraint grammar is in karlsson et al the basic rule types of the constraint grammar 2 are remove and select for discarding and selecting an alternative reading of a wordrules also have contextual tests that describe the condition according to which they may be appliedfor example the rule discards a verb reading if the preceding word is unambiguously a determiner more than one such test can be appended to a rulethe rule above represents a local rule the test checks only neighbouring words in a foreknown position before or after the target wordthe test may also refer to the positions somewhere in the sentence without specifying the exact locationfor instancemeans that a nominal head may not appear anywhere to the left this quotanywherequot to the left or right may be restricted by barriers which restrict the area of the testbasically the barrier can be used to limit the test only to the current clause or to a constituent instead of the whole sentencein addition another test may be added relative to the unrestricted context position using keyword linkfor example the following rule discards the syntactic function ciobj the rule holds if the closest finite verb to the left is unambiguously a finite verb and there is no ditransitive verb or participle between the verb and the indirect objectif in addition the verb does not take indirect objects ie there is no svo0 in the same verb the ciobj reading will be discardedin essence the same formalism is used in the syntactic analysis in jarvinen and anttila after the morphological disambiguation all legitimate surfacesyntactic labels are added to the set of morphological readingsthen the syntactic rules discard contextually illegitimate alternatives or select legitimate onesthe syntactic tagset of the constraint grammar provides an underspecific dependency 
descriptionfor example labels for functional heads mark the word which is a head of a noun phrase having that function in the clause but the parent is not indicatedin addition the representation is shallow which means that eg objects of infinitives and participles receive the same type of label as objects of finite verbson the other hand the nonfinite verb forms functioning as objects receive only verbal labelswhen using the grammar formalism described above a considerable amount of syntactic ambiguity can not be resolved reliably and is therefore left pending in the parseas a consequence the output is not optimal in many applicationsfor example it is not possible to reliably pick headmodifier pairs from the parser output or collect arguments of verbs which was one of the tasks we originally were interested into solve the problems we developed a more powerful rule formalism which utilises an explicit dependency representationthe basic constraint gramthe convention in the constraint grammar is that the tags for syntactic functions begin with the 0sign mar idea of introducing the information in a piecemeal fashion is retained but the integration of different pieces of information is more efficient in the new system3 dependency grammars in a nutshell our notation follows the classical model of dependency theory introduced by lucien tesniere and later advocated by igor melauk in tesniere and melauk dependency notation every element of the dependency tree has a unique headthe verb serves as the head of a clause and the top element of the sentence is thus the main verb of the main clausein some other theories eghudson several heads are allowedprojectivity was not an issue for tesniere because he thought that the linear order of the words does not belong to the syntactic level of representation which comprises the structural order onlysome early formalisations cf have brought the strict projectivity requirement into the dependency frameworkthis kind of restriction is present in many dependencybased parsing systems but obviously any recognition grammar should deal with nonprojective phenomena to the extent they occur in natural languages as for example in the analysis shown in figure 2our system has no inbuilt restrictions concerning projectivity though the formalism allows us to state when crossing links are not permittedwe maintain that one is generally also interested in the linear order of elements and therefore it is presented in the tree diagramsbut for some purposes presenting all arguments in a canonical order might be more adequatethis however is a matter of output formatting for which the system makes several options availablethe verbs have a valency that describes the number and type of the modifiers they may havein valency theory usually complements and adjuncts are distinguishedour notation makes a difference between valency and subcategorisation the valency tells which arguments are expected the subcategorisation tells which combinations are legitimatethe valency merely provides a possibility to have an argumentthus a verb having three valency slots may have eg subcategorisation svoo or svocthe former denotes subject verb indirect object and object and the latter subject verb object and object complementthe default is a nominal type of complement but there might also be additional information concerning the range of possible complements eg the verb say may have an object which may also be realised as a toinfinitive clause whclause thatclause or quote structurethe adjuncts are not 
usually marked in the verbs because most of the verbs may have eg spatiotemporal argumentsinstead adverbial complements and adjuncts that are typical of particular verbs are indicatedfor instance the verb decide has the tag which means that the prepositional phrase on is typically attached to itthe distinction between the complements and the adjuncts is vague in the implementation neither the complements nor the adjuncts are obligatoryusually both the dependent element and its head are implicitly present in the constraint grammar type of rulehere we make this dependency relation explicitthis is done by declaring the heads and the dependents in the context testsfor example the subject label is chosen and marked as a dependent of the immediately following auxiliary in the following rule select if to get the full benefit of the parser it is also useful to name the valency slot in the rulethis has two effects the valency slot is unique ie no more than one subject is linked to a finite verb and we can explicitly state in rules which kind of valency slots we expect to be filledthe rule thus is of the formif the rule above works well in an unambiguous context but there is still need to specify more tolerant rules for ambiguous contextsthe rule differs from the previous rule in that it leaves the other readings of the noun intact and only adds a subject dependency while both the previous rules disambiguated the noun reading alsobut especially in the rule above the contextual test is far from being sufficient to select the subject reading reliablyinstead it leaves open a possibility to attach a dependency from another syntactic function ie the dependency relations remain ambiguousthe grammar tries to be careful not to introduce false dependencies but for an obvious reason this is not always possibleif several syntactic functions of a word have dependency relations they form a dependency foresttherefore when the syntactic function is not rashly disambiguated the correct reading may survive even after illegitimate linking as the global pruning later extracts dependency links that form consistent treeslinks formed between syntactic labels constitute partial trees usually around verbal nucleibut a new mechanism is needed to make full use of the structural information provided by multiple rulesonce a link is formed between labels it can be used by the other rulesfor example when a head of an object phrase is found and indexed to a verb the noun phrase to the right is probably an object complement it should have the same head as the existing object if the verb has the proper subcategorisation tag the following rule establishes a dependency relation of a verb and its object complement if the object already existsthe rule says that a dependency relation should be added but the syntactic functions should not be disambiguated the object complement opcompl0 is linked to the verb readings having the subcategorisation svocthe relation of the object complement and its head is such that the noun phrase to the left of the object complement is an object that has established a dependency relation to the verbnaturally the dependency relations may also be followed downwards but it is also possible to declare the last item in a chain of the links using the keywords top and bottomwe pursue the following strategy for linking and disambiguation in the new dependency grammarin practice these rules are most likely to cause errors apart from their linguistic interpretation often being rather obscuremoreover there is no 
longer any need to remove these readings explicitly by rules because the global pruning removes readings which have not obtained any quotextra evidencequotroughly one could say that the remove rules of the constraint grammar are replaced by the index rulesthe overall result is that the rules in the new framework are much more careful than those of engcgas already noted the dependency grammar has a big advantage over engcg in dealing with ambiguitybecause the dependencies are supposed to form a tree we can heuristically prune readings that are not likely to appear in such a treewe have the following hypotheses the dependency forest is quite sparse and a whole parse tree can not always be found pruning should favour large trees unlinked readings of a word can be removed when there is a linked reading present among the alternatives unambiguous subtrees are more likely to be correct than ambiguous ones and pruning need not force the words to be unambiguousinstead we can apply the rules iteratively and usually some of the rules apply when the ambiguity is reducedpruning is then applied again and so onfurthermore the pruning mechanism does not contain any language specific statistics but works on a topological basis onlysome of the most heuristic rules may be applied only after pruningthis has two advantages very heuristic links would confuse the pruning mechanism and words that would not otherwise have a head may still get onein this section we present a set of rules and show how those rules can parse the sentence quotjoan said whatever john likes to decide suits herquotthe toy grammar containing 8 rules is presented in figure 3the rules are extracted from the real grammar and they are then simplified some tests are omitted and some tests are made simplerthe grammar is applied to the input sentence in figure 4 where the tags are almost equivalent to those used by the english constraint grammar and the final result equals figure 2 where only the dependencies between the words and certain tags are printedsome comments concerning the rules in the toy grammar are in order the rule states the first noun phrase head label to the right is a subject link subj exists and is followed up to the finite verb in a verb chain which is then followed up to the main verbthen object or complement links are followed downwards to the last verbal reading if then a verb with subcategorisation for objects is encountered an object link from the whpronoun is formedthis kind of rule that starts from word a follows links up to word b and then down to word c introduces a nonprojective dependency link if word b is between words a and c note that the conditions top and bottom follow the chain of named link if any to the upper or lower end of a chain of a multiple links with the same nametherefore top vch ma inv always ends with the main verb in the verb chain whether this be a single finite verb like likes or a chain like would have been liked6the whclause itself may function as a subject object etctherefore there is a set of rules for each functionthe quotwhclause as subjectquot rule looks for a finite verb to the rightno intervening subject labels and clause boundaries are allowed rules 15 are applied in the first roundafter that the pruning operation disambiguates finite verbs and rule 6 will applypruning will be applied once againthe sentence is thus disambiguated both morphologically and morphosyntactically and a syntactic phosyntactic alternatives eg whatever is ambiguous in 10 waysthe subcategorisationvalency 
information is not printed here. a syntactic reading from each word thus belongs to a subtree of which the root is said or suits. the syntactic relationship between the verbs is established by a rule stating that the rightmost main verb is the object of a main verb to the left which allows such objects. finally there is a single main verb which is indexed to the root. the evaluation was done using small excerpts of data not used in the development of the system. all text samples were excerpted from three different genres in the bank of english data american national public radio british books data and the independent (broadcast literature and newspaper respectively). figure 5 lists the samples their sizes and the average and maximum sentence lengths. the measure is in words excluding punctuation. in addition figure 5 shows the total processing time required for the syntactic analysis of the samples. the syntactic analysis has been done in a normal pc with the linux operating system. the pc has a pentium 90 mhz processor and 16 mb of memory. the speed roughly corresponds to 200 words per second. the time does not include morphological analysis and disambiguation. one obvious point of reference is the engcg syntax which shares a level of similar representation with an almost identical tagset to the new system. in addition both systems use the front parts of the engcg system for processing the input. these include the tokeniser lexical analyser and morphological disambiguator. figure 6 shows the results of the comparison of the engcg syntax and the morphosyntactic level of the dependency grammar. because both systems leave some amount of the ambiguity pending two figures are given the success rate which is the percentage of correct morphosyntactic labels present in the output and the ambiguity rate which is the percentage of words containing more than one label. the engcg results compare to those reported elsewhere. the dg success rate is similar or maybe even slightly better than in engcg. more importantly the ambiguity rate is only about a quarter of that in the engcg output. the overall result should be considered good in the sense that the output contains information about the syntactic functions not only partofspeech tags. the major improvement over engcg is the level of explicit dependency representation which makes it possible to excerpt modifiers of certain elements such as arguments of verbs. this section evaluates the success of the level of dependencies. one of the crude measures to evaluate dependencies is to count how many times the correct head is found. the results are listed in figure 7. precision is received correct links divided by received links and recall is received correct links divided by desired links. the difference between precision and recall is due to the fact that the parser does not force a head on every word. trying out some very heuristic methods to assign heads would raise recall but lower precision. a similar measure is used in except that every word has a head ie the precision equals recall reported as 792. we evaluated our parser against the selected dependencies in the test samples. the samples being rather small only the most common dependencies are evaluated subject object and predicative. these dependencies are usually resolved more reliably than say appositions prepositional attachments etc. the results of the test samples are listed in figure 8. it seems the parser leaves some amount of the words unlinked but what it has recognised is generally correct. dekang lin has earlier used this kind of evaluation where precision and recall were for subjects 87 and 78 and for
complements 84 and 72 respectivelythe results are not strictly comparable because the syntactic description is somewhat differentin this paper we have presented some main features of our new framework for dependency syntaxthe most important result is that the new framework allows us to describe nonprojective dependency grammars and apply them efficientlythis is a property that will be crucial when we will apply this framework to a language having free wordorderbasically the parsing framework combines the constraint grammar framework with a mechanism that adds dependencies between readings or tagsthis means that while the parser disambiguates it also builds up a dependency forest that in turn is reduced by other disambiguation rules and a global pruning mechanismthis setup makes it possible to operate on several layers of information and use and combine structural information more efficiently than in the original constraint grammar framework without any further disadvantage in dealing with ambiguityfirst preliminary evaluations are presentedcompared to the engcg syntactic analyser the output not only contains more information but it is also more accurate and explicitthe ambiguity rate is reduced to a quarter without any compromise in correctnesswe did not have access to other systems and care must be taken when interpreting the results which are not strictly comparablehowever the comparison to other current systems suggests that our dependency parser is very promising both theoretically and practicallywe are using atro voutilainen improved partofspeech disambiguation grammar which runs in the cg2 parservoutilainen and juha heikkild created the original engcg lexicon
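Two of the ingredients discussed above lend themselves to short sketches: the topological pruning heuristic (drop unlinked readings of a word once a linked alternative exists, without forcing full disambiguation) and the link-based precision and recall used in the evaluation. The data structures are assumptions made for the sketch; the actual parser operates on constraint grammar readings.

```python
def prune_readings(readings):
    """readings: alternative analyses of one word, each a dict with a 'links' list of
    (function, head_position) dependencies attached to that reading.  Keep only the
    linked readings when any exist; remaining ambiguity is allowed to stand."""
    linked = [r for r in readings if r.get("links")]
    return linked if linked else list(readings)

def link_scores(received, desired):
    """received, desired: sets of (dependent, head) links.  Precision and recall differ
    because the parser does not force a head onto every word."""
    correct = len(received & desired)
    precision = correct / len(received) if received else 0.0
    recall = correct / len(desired) if desired else 0.0
    return precision, recall
```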
A97-1011
a nonprojective dependency parserwe describe a practical parser for unrestricted dependenciesthe parser creates links between words and names the links according to their syntactic functionswe first describe the older constraint grammar parser where many of the ideas come fromthen we proceed to describe the central ideas of our new parserfinally the parser is evaluated
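As a toy illustration of the kind of linking rule sketched in the grammar discussion above — add a named subject dependency from a @SUBJ reading to an immediately following, unambiguous auxiliary without disambiguating the noun — one might write something like the following. The reading representation and tag names are assumptions for illustration only, not the parser's rule formalism.

```python
def apply_subject_link_rule(sentence, position):
    """sentence: list of words; each word is a list of alternative readings, each a
    dict like {'tags': {'N', '@SUBJ'}, 'links': []}.  If the next word is an
    unambiguous auxiliary, attach a 'subject' link from every @SUBJ reading of the
    word at `position` to it, leaving the other readings of the noun intact."""
    if position + 1 >= len(sentence):
        return
    following = sentence[position + 1]
    if len(following) == 1 and "AUX" in following[0]["tags"]:   # unambiguous context
        for reading in sentence[position]:
            if "@SUBJ" in reading["tags"]:
                # named link: fills the 'subject' slot of the word at position + 1
                reading["links"].append(("subject", position + 1))
```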
an annotation scheme for free word order languages we describe an annotation scheme and a tool developed for creating linguistically annotated corpora for nonconfigurational languages since the requirements for such a formalism differ from those posited for configurational languages several features have been added influencing the architecture of the scheme the resulting scheme reflects a stratificational notion of language and makes only minimal assumpabout the interrelation of the particu lar representational strata the work reported in this paper aims at providing syntactically annotated corpora for stochastic grammar inductionin particular we focus on several methodological issues concerning the annotation of nonconfigurational languagesin section 2 we examine the appropriateness of existing annotation schemeson the basis of these considerations we formulate several additional requirementsa formalism complying with these requirements is described in section 3section 4 deals with the treatment of selected phenomenafor a description of the annotation tool see section 5combining raw language data with linguistic information offers a promising basis for the development of new efficient and robust nlp methodsrealworld texts annotated with different strata of linguistic information can be used for grammar inductionthe datadrivenness of this approach presents a clear advantage over the traditional idealised notion of competence grammarcorpora annotated with syntactic structures are commonly referred to as trctbank5existing treebank annotation schemes exhibit a fairly uniform architecture as they all have to meet the same basic requirements namely descriptivity grammatical phenomena are to be described rather than explainedtheoryindependence annotations should not be influenced by theoryspecific considerationsnevertheless different theoryspecific representations shall be recoverable from the annotation cfmultistratal representation clear separation of different description levels is desirabledatadrivenness the scheme must provide representational means for all phenomena occurring in textsdisambiguation is based on human processing skills the typical treebank architecture is as follows structures a contextfree backbone is augmented with tracefiller representations of nonlocal dependenciesthe underlying argument sirlteilitc is not represented directly but can be recovered from the tree and tracefiller annotationssyntactic category is encoded in node labelsgrammatical functions constitute a complex label system partofspeech is annotated at word levelthus the contextfree constituent backbone plays a pivotal role in the annotation schemedue to the substantial differences between existing models of constituent structure the question arises of how the theory independencf requirement can be satisfiedat this point the importance of the underlying argument structure is emphasised treebanks of the format described in the above section have been designed for englishtherefore the solutions they offer are not always optimal for other language typesas for free word order languages the following features may cause problems sition between the two polesin light of these facts serious difficulties can be expected arising from the structural component of the existing formalismsdue to the frequency of discontinuous constituents in nonconfigurational languages the fillertrace mechanism would be used very often yielding syntactic trees fairly different from the underlying predicateargument structuresconsider the 
german sentence daran wird ihn anna erkennen di er weint atit will him anna recognise that he cries anna will recognise him at his cry a sample constituent structure is given below the fairly short sentence contains three nonlocal dependencies marked by coreferences between traces and the corresponding nodesthis hybrid representation makes the structure less transparent and therefore more difficult to annotateapart from this rather technical problem two further arguments speak against phrase structure as the structural pivot of the annotation scheme finally the structural handling of free word order means stating wellformedness constraints on structures involving many tracefiller dependencies which has proved tedioussince most methods of handling discontinuous constituents make the formalism more powerful the efficiency of processing deteriorates tooan alternative solution is to make argument structure the main structural component of the formalismthis assumption underlies a growing number of recent syntactic theories which give up the contextfree constituent backbone cf these approaches provide an adequate explanation for several issues problematic for phrasestructure grammars argument structure can be represented in terms of unordered trees in order to reduce their ambiguity potential rather simple flat trees should be employed while more information can be expressed by a rich system of function labelsfurthermore the required theoryindependence means that the form of syntactic trees should not reflect theoryspecific assumptions eg every syntactic structure has a unique headthus notions such as head should be distinguished at the level of syntactic functions rather than structuresthis requirement speaks against the traditional sort of dependency trees in which heads are represented as nonterminal nodes cfa tree meeting these requirements is given below adv v np np v cpl np v damn wird ihn anna erkennen dais er veinsuch a word order independent representation has the advantage of all structural information being encoded in a single data structurea uniform representation of local and nonlocal dependencies makes the structure more transparentwe distinguish the following levels of representation a contextfree constituent backbone can still be recovered from the surface string and argument structure by reattaching extracted structures to a higher nodeargument structure represented in terms of unordered treesgrammatical functions encoded in edge labels egsb mo hd syntactic categories expressed by category labels assigned to nonterminal nodes and by partofspeech tags assigned to terminalsa structure for is shown in fig2 schade dab kein arzt anwesend ist der pity that no doctor present is who sich auskennt is competent pity that no competent doctor is here note that the root node does not have a head descendant as the sentence is a predicative construction consisting of a subject and a predicate without a copulathe subject is itself a sentence in which the copula does occur and is assigned the tag hdthe tree resembles traditional constituent structuresthe difference is its word order independence structural units need not be contiguous substringsfor instance the extraposed relative clause is still treated as part of the subject npas the annotation scheme does not distinguish different bar levels or any similar intermediate categories only a small set of node labels is needed due to the rudimentary character of the argument structure representations a great deal of information has to be expressed 
by grammatical functionstheir further classification must reflect different kinds of linguistic information morphology category dependency type thematic role etchowever there is a tradeoff between the granularity of information encoded in the labels and the speed and accuracy of annotationin order to avoid inconsistencies the corpus is annotated in two stages basic annotation and nfirtellte714while in the first phase each annotator has to annotate structures as well as categories and functions the refinement call be done separately for each representation levelduring the first phase the focus is on annotating correct structures and a coarsegrained classification of grammatical functions which represent the following areas of information dependency type complements are further classified according to features such as category and case clausal complements accusative objects datives etcmodifiers are assigned the label mo separate labels are defined for dependencies that do not fit the complementmodifier dichotomy eg pre and postnominal genitives headed and nonheaded structures are distinguished by the presence or absence of a branch labeled hdmorphological information another set of labels represents morphological informationpm stands for morphological particle a label for german infinitival z7t and superlative amseparable verb prefixes are labeled svpduring the second annotation stage the annotation is enriched with information about thematic roles quantifier scope and anaphoric referenceas already mentioned this is done separately for each of the three information areasa phrase or a lexical item can perform multiple functions in a sentenceconsider er bat mich zu kommen he asked me to come in such cases an additional edge is drawn from the embedded vp node to the controller thus changing the syntactic tree into a graphwe call such additional edges secondary links and represent them as dotted lines see fig4 showing the structure of as theoryindependence is one of our objectives the annotation scheme incorporates a number of widely accepted linguistic analyses especially in the area of verbal adverbial and adjectival syntaxhowever some other standard analysts turn out to be problematic mainly due to the partial idealised character of competence grammars which often marginalise or ignore such important phenomena as deficient constructions appositions temporal expressions etcin the following paragraphs we give annotations for a number of such phenomenamost linguistic theories treat nps as structures headed by a unique lexical item however this idealised model needs several additional assumptions in order to account for such important phenomena as complex nominal np components or nominalised adjectives in different theories make different headedness predictionsin either a lexical nominalisation rule for the adjective gliickliche is stipulated or the existence of an empty nominal headmoreover the socalled dp analysis views the article der as the head of the phrasefurther differences concern the attachment of the degree modifier sehrbecause of the intended theoryindependence of the scheme we annotate only the common minimumwe distinguish an np kernel consisting of determiners adjective phrases and nounsall components of this kernel are assigned the label nk and treated as sibling nodesthe difference between the particular nk lies in the positional and partofspeech information which is also sufficient to recover theoryspecific structures from our underspecified representationsfor instance the first 
determiner among the nk can be treated as the specifier of the phrasethe head of the phrase can be determined in a similar way according to theoryspecific assumptionsin addition a number of clearcut np cornponents can be defined outside that juxtapositional kernel pre and postnominal genitives relative clauses clausal and sentential complements they are all treated as siblings of nk regardless of their position adjunct attachment often gives rise to structural ambiguities or structural uncertaintyhowever full or partial disambiguation takes place in context and the annotators do not consider unrealistic readingsin addition we have adopted a simple convention for those cases in which context information is insufficient for total disambiguation the highest possible attachment site is chosena similar convention has been adopted for constructions in which scope ambiguities have syntactic effects but a onetoone correspondence between scope and attachment does not seem reasonable cf focus particles such as only or alsoif the scope of such a word does not directly correspond to a tree node the word is attached to the lowest node dominating all subconstituents appearing in its scopea problem for the rudimentary argument structure representations is the use of incomplete structures in natural language ie phenomena such as coordination and ellipsissince a precise structural description of nonconstituent coordination would require a rich inventory of incomplete phrase types we have agreed on a sort of underspecified representations the coordinated units are assigned structures in which missing lexical material is not represented at the level of primary linksfig3 shows the representation of the sentence sic wurde von preubischen truppen besetzt she was by prussian troops occupied und 1887 dem preuliischen staat angegliedert and 1887 tothe prussian state incorporated it was occupied by prussian troops and incorporated into prussia in 1887 the category of the coordination is labeled cvp here where c stands for coordination and vp for the actual categorythis extra marking makes it easy to distinguish between normal and coordinated categoriesmultiple coordination as well as enumerations are annotated in the same wayan explicit coordinating conjunction need not be presentstructuresharing is expressed using secondary linksthe development of linguistically interpreted corpora presents a laborious and timeconsuming taskin order to make the annotation process more efficient extra effort has been put into the development of an annotation toolthe tool supports immediate graphical feedback and automatic error checkingsince our scheme permits crossing edges visualisation as bracketing and indentation would be insufficientinstead the complete structure should be representedthe tool should also permit a convenient handling of node and edge labelsin particular variable tagsets and label collections should be allowedas the need for certain functionalities becomes obvious with growing annotation experience we have decided to implement the tool in two stagesin the first phase the main functionality for building and displaying unordered trees is suppliedin the second phase secondary links and additional structural functions are supportedthe implementation of the first phase as described in the following paragraphs is completedas keyboard input is more efficient than mouse input most effort has been put in developing an efficient keyboard interfacemenus are supported as a useful way of getting help on commands and 
labelsin addition to pure annotation we can attach comments to structuresfigure 1 shows a screen dump of the toolthe largest part of the window contains the graphical representation of the structure being annotatedthe following commands are available the three tagsets used by the annotation tool are variable and are stored together with the corpusthis allows easy modification if neededthe tool checks the appropriateness of the inputfor the implementation we used tcltk version 41the corpus is stored in a sql databasethe degree of automation increases with the amount of data availablesentences annotated in previous steps are used as training material for further processingwe distinguish five degrees of automation so far about 1100 sentences of our corpus have been annotatedthis amount of data suffices as training material to reliably assign the grammatical functions if the user determines the elements of a phrase and its type grammatical functions are assigned using standard statistical partofspeech tagging methods and for a phrase q with children of type t ta and grammatical functions gga we use the lexical probabilities and the contextual probabilities the lexical and contextual probabilities are determined separately for each type of phraseduring annotation the highest rated grammatical function labels gi are calculated using the viterbi algorithm and assigned to the structure ie we calculate argmax11 pq pq to keep the human annotator from missing errors made by the tagger we additionally calculate the strongest competitor for each label g if its probability is close to the winner the assignment is regarded as unreliable and the annotator is asked to confirm the assignmentfor evaluation the already annotated sentences were divided into two disjoint sets one for training the other one for testing the procedure was repeated 10 times with different partitioningsthe tagger rates 90 of all assignments as reliable and carries them out fully automaticallyaccuracy for these cases is 97most errors are due to wrong identification of the subject and different kinds of objects in sentences and vpsaccuracy of the unreliable 10 of assignments is 75 ie the annotator has to alter the choice in 1 of 4 cases when asked for confirmationoverall accuracy of the tagger is 95owing to the partial automation the average annotation efficiency improves by 25 as the annotation scheme described in this paper focusses on annotating argument structure rather than constituent trees it differs from existing treebanks in several aspectsthese differences can be illustrated by a comparison with the penn treebank annotation schemethe following features of our formalism are then of particular importance the current tagset comprises only 16 node labels and 34 function tags yet a finely grinned classification will take place in the near futurewe have argued that the selected approach is better suited for producing high quality interpreted corpora in languages exhibiting free constituent orderin general the resulting interpreted data also are closer to semantic annotation and more neutral with respect to particular syntactic theoriesas modern linguistics is also becoming more aware of the importance of larger sets of naturally occurring data interpreted corpora are a valuable resource for theoretical and descriptive linguistic researchin addition the approach provides empirical material for psycholinguistic investigation since preferences for the choice of certain syntactic constructions linearizations and attachments that have 
been observed in online experiments of language production and comprehension can now be put in relation with the frequency of these alternatives in larger amounts of textssyntactically annotated corpora of german have been missing until nowin the second phase of the project verbmobil a treebank for 30000 german spoken sentences as well as for the same amount of english and japanese sentences will be createdwe will closely coordinate the further development of our corpus with the annotation work in verbmobil and with other german efforts in corpus annotationsince the combinatorics of syntactic constructions creates a demand for very large corpora efficiency of annotation is an important criterion for the success of the developed methodology and toolsour annotation tool supplies efficient manipulation and immediate visualization of argument structurespartial automation included in the current version significantly reduces the manna1 effortits extension is subject to further investigationsthis work is part of the dfg sonderforschungsbereich 378 rcsourceadaptim cogniiivc proccsses we wish to thank tania avgustinova berthold crysmann lars konieczny stephan oepen karel oliva christian weil3 and two anonymous reviewers for their helpful comments on the content of this paperwe also wish to thank robert macintyre and ann taylor for valuable discussions on the penn treebank annotationspecial thanks go to oliver plaehn who implemented the annotation tool and to our fearless annotators roland hendriks kerstin klockner thomas schulz and berndpaul simon
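To make the partial automation described above concrete, here is a minimal sketch of grammatical-function assignment with a reliability check. It is not the project's implementation: the exact conditioning and smoothing of the lexical and contextual probabilities are not reproduced. The sketch assumes a lexical model (how often a child of category t inside a phrase of category q received function g) and a bigram model over neighbouring function labels, picks the best label sequence with Viterbi, and flags a label as unreliable when its strongest competitor scores within a factor of four of the winner, so the annotator is asked to confirm it. The threshold, the "<s>" start symbol and all names are assumptions for illustration.

    import math
    from collections import defaultdict

    lex_counts = defaultdict(lambda: defaultdict(int))   # (q, t) -> {g: count}
    ctx_counts = defaultdict(lambda: defaultdict(int))   # (q, g_prev) -> {g: count}

    def prob(table, key, outcome):
        outcomes = table[key]
        total = sum(outcomes.values())
        # tiny probability floor instead of proper smoothing (sketch only)
        return max(outcomes.get(outcome, 0), 1e-6) / total if total else 1e-6

    def assign_functions(q, child_tags, labels):
        # Viterbi over function-label sequences for the children of a phrase
        # of category q; returns the best sequence plus per-child flags that
        # mark assignments the annotator should confirm
        chart = [dict() for _ in child_tags]
        for i, t in enumerate(child_tags):
            for g in labels:
                lex = math.log(prob(lex_counts, (q, t), g))
                if i == 0:
                    chart[i][g] = (lex + math.log(prob(ctx_counts, (q, "<s>"), g)), None)
                else:
                    prev = max(labels, key=lambda gp: chart[i - 1][gp][0]
                               + math.log(prob(ctx_counts, (q, gp), g)))
                    score = (chart[i - 1][prev][0]
                             + math.log(prob(ctx_counts, (q, prev), g)) + lex)
                    chart[i][g] = (score, prev)
        seq = [max(labels, key=lambda g: chart[-1][g][0])]
        for i in range(len(child_tags) - 1, 0, -1):
            seq.append(chart[i][seq[-1]][1])
        seq.reverse()
        unreliable = []
        for i, g in enumerate(seq):
            runner_up = max((chart[i][g2][0] for g2 in labels if g2 != g),
                            default=float("-inf"))
            unreliable.append(runner_up > chart[i][g][0] - math.log(4))
        return seq, unreliable

    # toy training: a few annotated S phrases whose children are tagged NN VVFIN ADV
    for g_seq, t_seq in [(["SB", "HD", "MO"], ["NN", "VVFIN", "ADV"])] * 3:
        prev = "<s>"
        for g, t in zip(g_seq, t_seq):
            lex_counts[("S", t)][g] += 1
            ctx_counts[("S", prev)][g] += 1
            prev = g
    print(assign_functions("S", ["NN", "VVFIN", "ADV"], ["SB", "HD", "MO"]))
    # best sequence ['SB', 'HD', 'MO'], all assignments rated reliable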
A97-1014
an annotation scheme for free word order languages. we describe an annotation scheme and a tool developed for creating linguistically annotated corpora for nonconfigurational languages. since the requirements for such a formalism differ from those posited for configurational languages several features have been added influencing the architecture of the scheme. the resulting scheme reflects a stratificational notion of language and makes only minimal assumptions about the interrelation of the particular representational strata. we release the negra corpus a hand parsed corpus of german newspaper text containing approximately 20000 sentences
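The scheme summarised above can be pictured with a small data structure: terminal nodes carry part-of-speech tags, nonterminal nodes carry category labels, every edge to a parent carries a grammatical-function label such as SB, MO, HD or NK, constituents need not be contiguous (so branches may cross), and secondary links express structure sharing. The encoding below is purely illustrative; field and function names are assumptions, not the annotation tool's actual format.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Node:
        label: str                      # POS tag for terminals, category (NP, S, ...) otherwise
        form: Optional[str] = None      # the word itself, terminals only
        function: Optional[str] = None  # edge label to the parent: SB, MO, HD, NK, ...
        children: List["Node"] = field(default_factory=list)
        secondary: List["Node"] = field(default_factory=list)   # dotted secondary links

    def attach(parent, child, function):
        # primary edges carry the grammatical function of the child
        child.function = function
        parent.children.append(child)
        return child

    # the NP kernel of "kein arzt": determiner and noun are NK siblings
    np = Node("NP")
    attach(np, Node("ART", form="kein"), "NK")
    attach(np, Node("NN", form="arzt"), "NK")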
nymble a highperformance learning namefinder this paper presents a statistical learned approach to finding names and other nonrecursive entities in text using a variant of the standard hidden markov model we present our justification for the problem and our approach a detailed discussion of the model itself and finally the successful results of this new approach in the past decade the speech recognition community has had huge successes in applying hidden markov models or hmm to their problemsmore recently the natural language processing community has effectively employed these models for partofspeech tagging as in the seminal and other more recent efforts we would now propose that hmm have successfully been applied to the problem of namefindingwe have built a namedentity recognition system using a slightlymodified version of an hmm we call our system quotnymblequotto our knowledge nymble outperforms the best published results of any other learning namefinderfurthermore it performs at or above the 90 accuracy level often considered quotnearhuman performancequotthe system arose from the ne task as specified in the last message understanding conference where organization names person names location names times dates percentages and money amounts were to be delimited in text using sgmlmarkupwe will describe the various models employed the methods for training these models and the method for quotdecodingquot on test data to date we have successfully trained and used the model on both english and spanish the latter for met the multilingual entity taskthe basic premise of the approach is to consider the raw text encountered when decoding as though it had passed through a noisy channel where it had been originally marked with named entitiesthe job of the generative model is to model the original process which generated the nameclassannotated words before they went through the noisy channelmore formally we must find the most likely sequence of nameclasses given a sequence of words pr in order to treat this as a generative model and since the a priori probability of the word sequencethe denominatoris constant for any given sentence we can maximize equation 22 by maximizing the numerator alonei see ch2 for an excellent overview of the principles of information theory previous approaches have typically used manually constructed finite state patterns for every new language and every new class of new information to spot one has to write a new set of rules to cover the new language and to cover the new class of informationa finitestate pattern rule attempts to match against a sequence of tokens in much the same way as a general regular expression matcherin addition to these finitestate pattern approaches a variant of brill rules has been applied to the problem as outlined in the atomic elements of information extraction indeed of language as a wholecould be considered the who where when and how much in a sentencea namefinder performs what is known as surface or lightweightparsing delimiting sequences of tokens that answer these important questionsit can be used as the first step in a chain of processors a next level of processing could relate two or more named entities or perhaps even give semantics to that relationship using a verbin this way further processing could discover the quotwhatquot and quothowquot of a sentence or body of textfurthermore namefinding can be useful in its own right an internet query system might use namefinding to construct more appropriatelyformed queries quotwhen was bill 
gates bornquot could yield the query quotbill gatesquotbornalso namefinding can be directly employed for link analysis and other information retrieval problemswe will present the model twice first in a conceptual and informal overview then in a moredetailed formal description of it as a type of hmmthe model bears resemblance to scott miller novel work in the air traffic information system task as documented in figure 31 is a pictorial overview of our modelinformally we have an ergodic hmm with only eight internal states with two special states the start and endofsentence stateswithin each of the nameclass states we use a statistical bigram language model with the usual onewordperstate emissionthis means that the number of states in each of the nameclass states is equal to the vocabulary size i vi the generation of words and nameclasses proceeds in three steps these three steps are repeated until the entire observed word sequence is generatedusing the viterbi algorithm we efficiently search the entire space of all possible nameclass assignments maximizing the numerator of equation 22 prinformally the construction of the model in this manner indicates that we view each type of quotnamequot to be its own language with separate bigram probabilities for generating its wordswhile the number of wordstates within each nameclass is equal to i vi this quotinteriorquot bigram language model is ergodic ie there is a probability associated with every one of the 1v12transitionsas a parameterized trained model if such a transition were never observed the model quotbacks off to a lesspowerful model as described below in 333 on p 4throughout most of the model we consider words to be ordered pairs composed of word and wordfeature denoted the word feature is a simple deterministic computation performed on each word as it is added to or feature computation is an extremely small part of the implementation at roughly ten lines of codealso most of the word features are used to distinguish types of numbers which are languageindependent2 the rationale for having such features is clear in roman languages capitalization gives good evidence of names3 this section describes the model formally discussing the transition probabilities to the wordstates which quotgeneratequot the words of each nameclassas with most trained probabilistic models we looked up in the vocabularyit produces one of the fourteen values in table 31these values are computed in the order listed so that in the case of nondisjoint featureclasses such as containsdigitandalpha and containsdigitanddash the former will take precedencethe first eight features arise from the need to distinguish and annotate monetary amounts percentages times and datesthe rest of the features distinguish types of capitalization and all other words in particular the f irstword feature arises from the fact that if a word is capitalized and is the first word of the sentence we have no good information as to why it is capitalized the word feature is the one part of this model which is languagedependentfortunately the word have a most accurate most powerful model which will quotback off to a lesspowerful model when there is insufficient training and ultimately backoff to unigram probabilitiesin order to generate the first word we must make a transition from one nameclass to another as well as calculate the likelihood of that wordour intuition was that a word preceding the start of a nameclass and the word following a nameclass would be strong indicators of the subsequent and 
preceding nameclasses respectively2 nonenglish languages tend to use the comma and period in the reverse way in which english does ie the comma is a decimal point and the period separates groups of three digits in large numbershowever the reordering of the precedence of the two relevant wordfeatures had little effect when decoding spanish so they were left as is3 although spanish has many lowercase words in organization namessee 41 on p 6 for more detailsaccordingly the probabilitiy for generating the first word of a nameclass is factored into two parts pr prfirs i nc nc_ the top level model for generating all but the first word in a nameclass is pr there is also a magical quotendquot word so that the probability may be computed for any current word to be the final word of its nameclass ie pr ifind nc as one might imagine it would be useless to have the first factor in equation 31 be conditioned off of the end word so the probability is conditioned on the previous real word of the previous nameclass ie we compute w1 last observed word otherwise nc start of sentence note that the above probability is not conditioned on the wordfeature of w_1 the intuition of which is that in the cases where the previous word would help the model predict the next nameclass the world featurecapitalization in particularis not important quotmrquot is a good indicator of the next word beginning the person nameclass regardless of capitalization especially since it is almost never seen as quotmrquotthe calculation of the above probabilities is straightforward using eventssamplesize where c represents the number of times the events occurred in the training data ideally we would have sufficient training every event whose conditional probability we wish to calculatealso ideally we would have sufficient samples of that upon which each conditional probability is conditioned eg for pr we would like to have seen sufficient numbers of nc_ w1unfortunately there is rarely enough training data to compute accurate probabilities when quotdecodingquot on new data3331 unknown words the vocabulary of the system is built as it trainsnecessarily then the system knows about all words for which it stores bigram counts in order to compute the probabilities in equations 31 33the question arises how the system should deal with unknown words since there are three ways in which they can appear in a bigram as the current word as the previous word or as botha good answer is to train a separate unknown wordmodel off of heldout data to gather statistics of unknown words occurring in the midst of known wordstypically one holds out 1020 of one training for smoothing or unknown wordtrainingin order to overcome the limitations of a small amount of training dataparticularly in spanishwe hold out 50 of our data to train the unknown word model save these counts in training data file then hold out the other 50 and concatentate these bigram counts with the first unknown wordtraining filethis way we can gather likelihoods of an unknown word appearing in the bigram using all available training datathis approach is perfectly valid as we art trying to estimate that which we have not legitimately seen in trainingwhen decoding if either word of the bigram is unknown the model used to estimate the probabilities of equations 313 is the unknown word model otherwise it is the model from the normal trainingthe unknown wordmodel can be viewed as a first level of backoff therefore since it is used as a backup model when an unknown word is encountered and is 
necessarily not as accurate as the bigram model formed from the actual training3332 further backoff models and smoothing whether a bigram contains an unknown word or not it is possible that either model may not have seen this bigram in which case the model backs off to a lesspowerful lessdescriptive modeltable 32 shows a graphic illustration of the backoff scheme the weight for each backoff model is computed onthefly using the following formula if computing pr assign weight of a to the direct computation and a weight of to the backoff model where where quotold cquot is the sample size of the model from which we are backing offthis is a rather simple method of smoothing which tends to work well when there are only three or four levels of backoff4 this method also overcomes the problem when a backoff model has roughly the same amount of training as the current model via the first factor of equation 38 which essentially ignores the backoff model and puts all the weight on the primary model in such an equitrained situationas an exampledisregarding the first factorif we saw the bigram quotcome hitherquot once in training and we saw quotcome herequot three times and nowhere else did we see the word quotcomequot in the notaname class when computing pr we would back off to the unigram probability pr with a weight of since the number of unique outcomes for the wordstate for quotcomequot would be two and the total number of times quotcomequot had been the preceding word in a bigram would be four unlike a traditional hmm the probability of generating a particular word is 1 for each wordstate inside each of the nameclass statesan alternative and more traditionalmodel would have a small number of states within each nameclass each having perhaps some semantic signficance eg three states in the person nameclass representing a first middle and last name where each of these three states would have some probability associated with emitting any word from the vocabularywe chose to use a bigram language model because while less semantically appealing such ngram language models work remarkably well in practicealso as a first research attempt an ngram model captures the most general significance of the words in each nameclass without presupposing any specifics of the structure of names a la the person nameclass example abovemore important either approach is mathematically valid as long as all transitions out of a given state sum to oneall of this modeling would be for naught were it not for the existence of an efficient algorithm for finding the optimal state sequence thereby quotdecodingquot the original sequence of nameclassesthe number of possible state sequences for n states in an ergodic model for a sentence of m words is alm but using dynamic programming and an appropriate merging of multiple theories when they converge on a particular statethe viterbi decoding algorithma sentence can be quotdecodedquot in time linear to the number of tokens in the sentence 0 since we are interested in recovering the nameclass state sequence we pursue eight theories at every given step of the algorithminitially the wordfeature was not in the model instead the system relied on a thirdlevel backoff partofspeech tag which in turn was computed by our stochastic partofspeech taggerthe tags were taken at face value there were not kbest tags the system treated the partofspeech tagger as a quotblack boxquotalthough the partofspeech tagger used capitalization to help it determine propernoun tags this feature was only implicit in 
the model and then only after two levels of backoffalso the capitalization of a word was submerged in the muddiness of partofspeech tags which can quotsmearquot the capitalization probability mass over several tagsbecause it seemed that capitalization would be a good namepredicting feature and that it should appear earlier in the model we eliminated the reliance on partofspeech altogether and opted for the more direct wordfeature model described above in 3originally we had a very small number of features indicating whether the word was a number the first word of a sentence all uppercase initalcapitalized or lowercasewe then expanded the feature set to its current state in order to capture more subtleties related mostly to numbers due to increased performance on every test we kept the enlarged feature setcontrary to our expectations spanish contained many examples of lowercase words in organization and location namesfor example departamento could often start an organization name and adjectival placenames such as coreana could appear in locations and by convention are not capitalizedthe entire system is implemented in c atop a quothomebrewedquot generalpurpose class library providing a rapid codecompiletraintest cyclein fact many nlp systems suffer from a lack of software and computerscience engineering effort runtime efficiency is key to performing numerous experiments which in turn is key to improving performancea system may have excellent performance on a given task but if it takes long to compile andor run on test data the rate of improvement of that system will be miniscule compared to that which can run very efficientlyon a sparc20 or sgi indy with an appropritae amount of ram nymble can compile in 10 minutes train in 5 minutes and run at 6mbhrthere were days in which we had as much as a 15 reduction in error rate to borrow the performance measure used by the speech community where error rate 100 fmeasurein this section we report the results of evaluating the final version of the learning softwarewe report the results for english and for spanish and then the results of a set of experiments to determine the impact of the training set size on the algorithm performance in both english and spanishfor each language we have a heldout development test set and a heldout blind test setwe only report results on the blind test set for each respective languagethe scoring program measures both precision i recall terms borrowed from the informationretrieval community where number of correct responses and number responses number of correct responses number correct in key put informally recall measures the number of quothitsquot vs the number of possible correct answers as specified in the key file whereas precision measures how many answers were correct ones compared to the number of answers deliveredthese two measures of performance combine to form one measure of performance the fmeasure which is computed by the weighted harmonic mean of precision and recall rp where if represents the relative weight of recall to precision to our knowledge our learned namefinding system has achieved a higher fmeasure than any other learned system when compared to stateoftheart manual systems on similar dataour test set of english data for reporting results is that of the muc6 test set a collection of 30 wsj documents our spanish test set is that used for met comprised of articles from the news agency afptable 41 illustrates nymble performance as compared to the best reported scores for each categorywith any learning 
technique one of the important questions is how much training data is required to get acceptable performancemore generally how does performance vary as the training set size is increased or decreasedwe ran a sequence of experiments in english and in spanish to try to answer this question for the final model that was implementedfor english there were 450000 words of training databy that we mean that the text of the document itself was 450000 words longgiven this maximum size of training available to us we successfully divided the training material in half until we were using only one eighth of the original training set size or a training set of 50000 words for the smallest experimentto give a sense of the size of 450000 words that is roughly half the length of one edition of the wall street journalthe results are shown in a histogram in figure 41 belowthe positive outcome of the experiment is that half as much training data would have given almost equivalent performancehad we used only one quarter of the data or approximately 100000 words performance would have degraded slightly only about 12 percentreducing the training set size to 50000 words would have had a more significant decrease in the performance of the system however the performance is still impressive even with such a small training seton the other hand the result also shows that merely annotating more data will not yield dramatic improvement in the performancewith increased training data it would be possible to use even more detailed models that require more data and could achieve significantly improved overall system performance with those more detailed modelsfor spanish we had only 223000 words of training datawe also measured the performance of the system with half the training data or slightly more than 100000 words of textfigure 42 shows the resultsthere is almost no change in performance by using as little as 100000 words of training datatherefore the results in both languages were comparableas little as 100000 words of training data produces performance nearly comparable to handcrafted systemswhile our initial results have been quite favorable there is still much that can be done potentially to improve performance and completely close the gap between learned and rulebased namefinding systemswe would like to incorporate the following into the current modelwe have shown that using a fairly simple probabilistic model finding names and other numerical entities as specified by the muc tasks can be performed with quotnearhuman performancequot often likened to an f of 90 or abovewe have also shown that such a system can be trained efficiently and that given appropriately and consistently marked answer keys it can be trained on languages foreign to the trainer of the system for example we do not speak spanish but trained nymble on answer keys marked by native speakersnone of the formalisms or techniques presented in this paper is new rather the approach to this taskthe model itselfis wherein lies the noveltygiven the incredibly difficult nature of many nlp tasks this example of a learned stochastic approach to namefinding lends credence to the argument that the nlp community ought to push these approaches to find the limit of phenomena that may be captured by probabilistic finitestate methods
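The on-the-fly back-off weighting and its "come hither" / "come here" example lend themselves to a short worked sketch. The exact formula is not reproduced in the text above, so the code below implements only the part the example relies on and, like that example, disregards the first factor: the direct model receives weight 1 / (1 + unique_outcomes / sample_size), computed from the word-state being backed off from, and the back-off model receives the remainder. Treat this as a hedged reconstruction rather than the paper's equation 3.8; all names are illustrative.

    def backoff_weight(bigram_counts, history):
        # bigram_counts maps a history (previous word, name-class) to a dict of
        # successor-word counts; "sample size" is how often the history was seen
        outcomes = bigram_counts.get(history, {})
        sample_size = sum(outcomes.values())
        unique_outcomes = len(outcomes)
        if sample_size == 0:
            return 0.0, 1.0              # nothing seen: all weight to the back-off model
        lam = 1.0 / (1.0 + unique_outcomes / sample_size)
        return lam, 1.0 - lam            # (direct weight, back-off weight)

    # the example from the text: "come" preceded a bigram 4 times in NOT-A-NAME
    # with 2 distinct successors ("hither" once, "here" three times), so the
    # bigram model gets weight 2/3 and the unigram back-off gets 1/3
    counts = {("come", "NOT-A-NAME"): {"hither": 1, "here": 3}}
    print(backoff_weight(counts, ("come", "NOT-A-NAME")))   # (0.666..., 0.333...)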
A97-1029
nymble a highperformance learning namefinder. this paper presents a statistical learned approach to finding names and other nonrecursive entities in text using a variant of the standard hidden markov model. we present our justification for the problem and our approach a detailed discussion of the model itself and finally the successful results of this new approach. we develop nymble an hmmbased name tagging system operating in english and spanish. nymble uses statistical learning to acquire a hidden markov model that recognises nes in text
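The deterministic word-feature computation described in the paper above (roughly ten lines of code, fourteen values) can be sketched as a cascade of checks applied in order, so that earlier classes such as containsDigitAndAlpha take precedence over later ones like containsDigitAndDash. The full fourteen-value table is not reproduced in the text, so the sketch below covers only the classes it names (numbers, digit/alpha and digit/dash mixtures, firstWord, all-uppercase, initial-capital, lowercase); the value strings and the regular expressions are assumptions.

    import re

    def word_feature(word, first_in_sentence=False):
        # checks are ordered: the first matching class wins
        if word.isdigit():
            return "number"
        if re.search(r"\d", word) and re.search(r"[A-Za-z]", word):
            return "containsDigitAndAlpha"
        if re.search(r"\d", word) and "-" in word:
            return "containsDigitAndDash"
        if first_in_sentence and word[:1].isupper():
            return "firstWord"              # capitalisation is uninformative here
        if word.isupper():
            return "allCaps"
        if word[:1].isupper():
            return "initCap"
        if word.islower():
            return "lowerCase"
        return "other"

    print([word_feature(w, i == 0)
           for i, w in enumerate("Mr Jordan of Steptoe".split())])
    # ['firstWord', 'initCap', 'lowerCase', 'initCap']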
disambiguation of proper names in text trieving information from full text using linguisknowledge in of the fifteenth online meeting new york may text processing applications such as machine translation systems information retrieval systems or naturallanguage understanding systems need to identify multiword expressions that refer to proper names of people organizations places laws and other entitieswhen encountering mrs candy hill in input text for example a machine translation system should not attempt to look up the translation of candy and hill but should translate mrs to the appropriate personal title in the target language and preserve the rest of the name intactsimilarly an information retrieval system should not attempt to expand candy to all of its morphological variants or suggest synonyms the need to identify proper names has two aspects the recognition of known names and the discovery of new namessince obtaining and maintaining a name database requires significant effort many applications need to operate in the absence of such a resourcewithout a database names need to be discovered in the text and linked to entities they refer toeven where name databases exist text needs to be scanned for new names that are formed when entities such as countries or commercial companies are created or for unknown names which become important when the entities they refer to become topicalthis situation is the norm for dynamic applications such as news providing services or internet information indexingthe next section describes the different types of proper name ambiguities we have observedsection 3 discusses the role of context and world knowledge in their disambiguation section 4 describes the process of name discovery as implemented in nominator a module for proper name recognition developed at the ibm tj watson research centersections 57 elaborate on nominator disambiguation heuristicsname identification requires resolution of a subset of the types of structural and semantic ambiguities encountered in the analysis of nouns and noun phrases in natural language processinglike common nouns and proper names exhibit structural ambiguity in prepositional phrase attachment and in conjunction scopea pp may be attached to the preceding np and form part of a single large name as in np midwest center ppfor npcomputer researchalternatively it may be independent of the preceding np as in npcarnegie hall ppfor npirwin berlin where for separates two distinct names carnegie hall and irwin berlinas with ppattachment of common noun phrases the ambiguity is not always resolved even in human sentence parsing the location of an organization for instance could be part of its name or an phrasesthe components of victoria and albert museum and ibm and bell laboratories look identical however and is part of the name of the museum in the first example but a conjunction joining two computer company names in the secondalthough this problem is well known a search of the computational literature shows that few solutions have been proposed perhaps because the conjunct ambiguity problem is harder than pp attachment for a method of conjunct identification that relies on syntactic category and semantic labelsimilar structural ambiguity exists with respect to the possessive pronoun which may indicate a relationship between two names or may constitute a component of a single name the resolution of structural ambiguity such as pp attachment and conjunction scope is required in order to automatically establish the exact boundaries 
of proper namesonce these boundaries have been established there is another type of wellknown structural ambiguity involving the internal structure of the proper namefor example professor of far eastern art john blake is parsed as professor of far eastern art john blake whereas professor art klein is professor art kleinproper names also display semantic ambiguityidentification of the type of proper nouns resembles the problem of sense disambiguation for common nouns where for instance state taken out of context may refer either to a government body or the condition of a person or entitya name variant taken out of context may be one of many types eg ford by itself could be a person an organization a make of car or a place entitytype ambiguity is quite common as places are named after famous people and companies are named after their owners or locationsin addition naming conventions are sometimes disregarded by people who enjoy creating novel and unconventional namesa store named mrtall and a woman named april wednesday come to mindlike common nouns proper nouns exhibit systematic metonymy united states refers either to a geographical area or to the political body which governs this area wall street journal refers to the printed object its content and the commercial entity that produces itin addition proper names resemble definite noun phrases in that their intended referent may be ambiguousthe man may refer to more than one male individual previously mentioned in the discourse or present in the nonlinguistic context j smith may similarly refer to more than one individual named joseph smith john smith jane smith etcsemantic ambiguity of names is very common because of the standard practice of using shorter names to stand for longer onesshared knowledge and context are crucial disambiguation factorsparis usually refers to the capital of france rather than a city in texas or the trojan prince but in a particular context such as a discussion of greek mythology the presumed referent changesbeyond the ambiguities that proper names share with common nouns some ambiguities are particular to names noun phrases may be ambiguous between a name reading and a common noun phrase as in candy the person name versus candy the food or the house as an organization versus a house referring to a buildingin english capitalization usually disambiguates the two though not at sentence beginnings at the beginning of a sentence the components and capitalization patterns of new coke and new sears are identical only world knowledge informs us that new coke is a product and sears is a companyfurthermore capitalization does not always disambiguate names from nonnames because what constitutes a name as opposed to a nonname is not always clearaccording to names which consist of proper nouns have unique referenceproper nouns differ in their linguistic behavior from common nouns in that they mostly do not take determiners or have a plural formhowever some names do take determiners as in the new york times in this case they quotare perfectly regular in taking the definite article since they are basically premodified count nounsthe difference between an ordinary common noun and an ordinary common noun turned name is that the unique reference of the name has been institutionalized as is made overt in writing by initial capital letterquot quirk et al description of names seems to indicate that capitalized words like egyptian or frenchmen are not namesit leaves capitalized sequences like minimum alternative tax annual report and 
chairman undetermined as to whether or not they are namesall of these ambiguities must be dealt with if proper names are to be identified correctlyin the rest of the paper we describe the resources and heuristics we have designed and implemented in nominator and the extent to which they resolve these ambiguitiesin general two types of resources are available for disambiguation context and world knowledgeeach of these can be exploited along a continuum from cheaper to computationally and manually more expensive usagecheaper models which include no context or world knowledge do very little disambiguationmore expensive models which use full syntactic parsing discourse models inference and reasoning require computational and human resources that may not always be available as when massive amounts of text have to be rapidly processed on a regular basisin addition given the current state of the art full parsing and extensive world knowledge would still not yield complete automatic ambiguity resolutionin designing nominator we have tried to achieve a balance between high accuracy and speed by adopting a model which uses minimal context and world knowledgenominator uses no syntactic contextual informationit applies a set of heuristics to a list of strings based on patterns of capitalization punctuation and location within the sentence and the documentthis design choice differentiates our approach from that of several similar projectsmost proper name recognizers that have been reported on in print either take as input text tagged by partofspeech and or perform syntactic andor morphological analysis on all words including capitalized ones that are part of candidate proper names and several and look in the local context of the candidate proper name for external information such as appositives or for humansubject verbs in order to determine the category of the candidate proper namenominator does not use this type of external contextinstead nominator makes use of a different kind of contextual information proper names cooccuring in the documentit is a fairly standard convention in an edited document for one of the first references to an entity to include a relatively full form of its namein a kind of discourse anaphora other references to the entity take the form of shorter more ambiguous variantsnominator identifies the referent of the full form and then takes advantage of the discourse context provided by the list of names to associate shorter more ambiguous name occurrences with their intended referentsin terms of world knowledge the most obvious resource is a database of known namesin fact this is what many commercially available name identification applications use a reliable database provides both accuracy and efficiency if fast lookup methods are incorporateda database also has the potential to resolve structural ambiguity for example if ibm and apple computers are listed individually in the database but ibm and apple computers is not it may indicate a conjunction of two distinct namesa database may also contain default world knowledge information eg with no other overriding information it may be safe to assume that the string mcdonald refers to an organizationbut even if an existing database is reliable names that are not yet in it must be discovered and information in the database must be overridden when appropriatefor example if a new name such as ibm credit corp occurs in the text but not in the database while ibm exists in the database automatic identification of ibm should be blocked in 
favor of the new name ibm credit corpif a name database exists nominator can take advantage of ithowever our goal has been to design nominator to function optimally in the absence of such a resourcein this case nominator consults a small authority file which contains information on about 3000 special name words and their relevant lexical featureslisted are personal titles organizational identifiers and names of large places also listed are exception words such as uppercase lexical items that are unlikely to be singleword proper names and lowercase lexical items that can be parts of proper namesin addition the authority file contains about 20000 first namesour choice of disambiguation resources makes nominator fast and robustthe precision and recall of nominator operating without a database of preexisting proper names is in the 90 while the processing rate is over 40mg of text per hour on a risc6000 machine for detailsthis efficient processing has been achieved at the cost of limiting the extent to which the program can understand the text being analyzed and resolve potential ambiguitymany wordsequences that are easily recognized by human readers as names are ambiguous for nominator given the restricted set of tools available to itin cases where nominator cannot resolve an ambiguity with relatively high confidence we follow the principle that noisy information is to be preferred to data omitted so that no information is lostin ambiguous cases the module is designed to make conservative decisions such as including nonnames or nonname parts in otherwise valid name sequencesit assigns weak types such as human or fails to assign a type if the available information is not sufficientin this section we give an overview of the process by which nominator identifies and classifies proper namesnominator first step is to build a list of candidate names for a documentnext plitting heuristics are applied to all candidate names for the purpose of breaking up complex names into smaller onesfinally nominator groups together name vanants that refer to the same entityafter information about names and their referents has been extracted from individual documents an aggregation process combines the names collected from all the documents into a dictionary or database of names representative of the document collectionwe illustrate the process of name discovery with an excerpt taken from a wall street journal article in the tipster cdrom collection paragraph breaks are omitted to conserve spacethe professional conduct of lawyers in other jurisdictions is guided by american bar association rules or by state bar ethics codes none of which permit nonlawyers to be partners in law firmsthe aba has steadfastly reserved the title of partner and partnership perks for those with law degreesbut robert jordan a partner at steptoe johnson who took the lead in drafting the new district bar code said the aba rules were viewed as quottoo restrictivequot by lawyers herequotthe practice of law in washington is very different from what it is in dubuquequot he saidsome of these nonlawyer employees are paid at partners levelsyet not having the partner title quotmakes nonlawyers working in law firms secondclass citizensquot said mr jordan of steptoe johnson before the text is processed by nominator it is analyzed into tokens sentences words tags and punctuation elementsnominator forms a candidate name list by scanning the tokenized document and collecting sequences of capitalized tokens as well as some special lowercase tokens such as 
conjunctions and prepositionsthe list of candidate names extracted from the sample document contains each candidate name is examined for the presence of conjunctions prepositions or possessive a set of heuristics is applied to determine whether each candidate name should be split into smaller independent namesfor example mr jordan of steptoe johnson is split into mr jordan and steptoe johnsonfinally nominator links together variants that refer to the same entitybecause of standard englishlanguage naming conventions mr jordan is grouped with robert jordanaba is grouped with american bar association as a possible abbreviation of the longer nameeach linked group is categorized by an entity type and assigned a canonical name as its identifierthe canonical name is the fullest least ambiguous label that can be used to refer to the entityit may be one of the variants found in the document or it may be constructed from components of different ones as the links are formed each group is assigned a typein the sample output shown below each canonical name is followed by its entity type and by the variants linked to itafter the whole document collection has been processed linked groups are merged across documents and their variants combinedthus if in one document president clinton was a variant of william clinton while in another document governor clinton was a variant of william clinton both are treated as variants of an aggregated william clinton groupin this minimal sense nominator uses the larger context of the document collection to learn more variants for a given namein the following sections we describe how ambiguity is resolved as part of the name discovery processwe identify three indicators of potential structural ambiguity prepositions conjunctions and possessive pronouns which we refer to as ambiguous operatorsin order to determine whether plitting should occur a name sequence containing an ambiguous operator is divided into three segments the operator the substring to its left and the substring to its rightthe splitting process applies a set of heuristics based on patterns of capitalization lexical features and the relative cope of operators to name sequences containing these operators to determine whether or not they should be split into smaller nameswe can describe the splitting heuristics as determining the scope of ambiguous operators by analogy to the standard linguistic treatment of quantifiersfrom nominator point of view all three operator types behave in similar ways and often interact when they cooccur in the same name sequence as in new york moma and the victoria and albert museum in londonthe scope of ambiguous operators also interacts with the cope of npheads if we define the scope of npheads as the constituents they dominatefor example in victoria and albert museum the conjunction is within the scope of the lexical head museum because museum is a noun that can take pp modification and hence premodification since premodifiers can contain conjunctions with modifiers within its scopethese two substrings are better balanced than the substrings of the food and drug administration where the left substring does not contain a strongscope nphead while the right one does because of the principle that noisy data is preferable to loss of information nominator does not split names if relative strength cannot be determinedas a result there occur in nominator output certain names such as american television 6 commuspecial treatment is required for words in sentenceinitial position which may 
be capitalized because they are part of a proper name or simply because they are sentence initialwhile the heuristics for splitting names are linguistically motivated and rulegoverned the heuristics for handling sentenceinitial names are based on patterns of word occurrence in the documentwhen all the names have been collected and split names containing sentenceinitial words are compared to other names on the listif the sentenceinitial candidate name also occurs as a nonsentenceinitial name or as a substring of it the candidate name is assumed to be valid and is retainedotherwise it is removed from the listfor example if white occurs at sentenceinitial position and also as a substring of another name it is keptif it is found only in sentenceinitial position white is discardeda more difficult situation arises when a sentenceinitial candidate name contains a valid name that begins at the second word of the stringif the preceding word is an adverb a pronoun a verb or a preposition it can safely be discardedthus a sentence beginning with yesterday columbia yields columbia as a namebut cases involving other parts of speech remain unresolvedif they are sentenceinitial nominator accepts as names both new sears and new coke it also accepts sentenceinitial five reagan as a variant of president reagan if the two cooccur in a documentin a typical document a single entity may be referred to by many name variants which differ in their degree of potential ambiguityas noted above paris and washington are highly ambiguous out of context but in well edited text they are often disambiguated by the occurrence of a single unambiguous variant in the same documentthus washington is likely to cooccur with either president washington or washington dc but not with bothindeed we have observed that if several unambiguous variants do cooccur as in documents that mention both the owner of a company and the company named after the owner the editors refrain from using a variant that is ambiguous with respect to bothto disambiguate highly ambiguous variants then we link them to unambiguous ones occurring within the same documentnominator cycles through the list of names identifying anchors or variant names that unambiguously refer to certain entity typeswhen an anchor is identified the list of name candidates is scanned for ambiguous variants that could refer to the same entitythey are linked to the anchorour measure of ambiguity is very pragmaticit is based on the confidence scores yielded by heuristics that analyze a name and determine the entity types it can refer toif the heuristic for a certain entity type results in a high condifence score we determine that the name unambiguously refers to this typeotherwise we choose the highest score obtained by the various heuristicsa few simple indicators can unambiguously determine the entity type of a name such as mr for a person or inc for an organizationmore commonly however several pieces of positive and negative evidence are accumulated in order to make this judgementwe have defined a set of obligatory and optional components for each entity typefor a human name these components include a professional title a personal title a first name middle name nickname last name and suffix the combination of the various components is inspectedsome combinations may result in a high negative score highly confident that this cannot be a person namefor example if the name lacks a personal title and a first name and its last name is listed as an organization word in the authority list it 
receives a high negative scorethis is the case with justice department or frank sinatra buildingthe same combination but with a last name that is not a listed organization word results in a low positive score as for justice johnson or frank sinatrathe presence or absence of a personal title is also important for determining confidence if present the result is a high confidence score no personal title with a known first name results in a low positive confidence score and no personal title with an unknown first name results in a zero score by the end of the analysis process justice department has a high negative score for person and a low positive score for organization resulting in its classification as an organizationbeverly hills by contrast has low positive scores both for place and for personnames with low or zero scores are first tested as possible variants of names with high positive scoreshowever if they are incompatible with any they are assigned a weak entity typethus in the absence of any other evidence in the document beverly hills is classified as a personthis analysis of course can be overridden by a name database listing beverly hills as a placefurther disambiguation may be possible during aggregation across documentsas mentioned before during aggregation linked groups from different documents are merged if their canonical forms are identicalas a rule their entity types should be identical as well to prevent a merge of boston and boston weak entity types however are allowed to merge with stronger entity typesthus jordan hills from one document is aggregated with jordan hills from another where there was sufficient evidence such as mr hills to make a firmer decisionan evaluation of an earlier version of nominator was performed on 88 wall street journal documents that had been set aside for testingwe chose the wall street journal corpus because it follows standard stylistic conventions especially capitalization which is essential for nominator to worknominator performance deteriorates if other conventions are not consistently followeda linguist manually identified 2426 occurrences of proper names which reduced to 1354 unique tokensof these nominator correctly identified the boundaries of 91 the precision rate was 92 for the 1409 names nominator identified in terms of semantic disambiguation nominator failed to assign an entity type to 21 of the names it identifiedthis high percentage is due to a decision not to assign a type if the confidence measure is too lowthe payoff of this choice is a very high precision rate 99 for the assignment of semantic type to those names that were disambiguated for detailsthe main reason that names remain untyped is insufficent evidence in the documentif ibm for example occurs in a document without international business machines nominator does not type it rather it let us later processes inspect the local context for further cluesthese processess form part of the talent tool set under development at the tj watson research centerthey take as their input text processed by nominator and further disambiguate untyped names appearing in certain contexts such as an appositive eg president of citibank corp other untyped names such as star bellied sneetches or george melloan business world are neither people places organizations nor any of the other legal or financial entities we categorize intomany of these uncategorized names are titles of articles books and other works of art that we currently do not handleambiguity remains one of the main challenges in 
the processing of natural language textefforts to resolve it have traditionally focussed on the development of fullcoverage parsers extensive lexicons and vast repositories of world knowledgefor some naturallanguage applications the tremendous effort involved in developing these tools is still required but in other applications such as information extraction there has been a recent trend towards favoring minimal parsing and shallow knowledge in its minimal use of resources nominator follows this trend it relies on no syntactic information and on a small semantic lexicon an authority list which could easily be modified to include information about new domainsother advantages of using limited resources are robustness and execution speed which are important in processing large amounts of textin another sense however development of a module like nominator still requires considerable human effort to discover reliable heuristics particularly when only minimal information is usedthese heuristics are somewhat domain dependent different generalizations hold for names of drugs and chemicals than those identified for names of people or organizationsin addition as the heuristics depend on linguistic conventions they are language dependent and need updating when stylistic conventions changenote for example the recent popularity of software names which include exclamation points as part of the namebecause of these difficulties we believe that for the forseeable future practical applications to discover new names in text will continue to require the sort of human effort invested in nominator
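As a concrete illustration of the sentence-initial heuristic described above, the following minimal sketch keeps a sentence-initial candidate only if it, or its remainder from the second word on, is validated by a non-sentence-initial occurrence elsewhere in the document. This is not Nominator's code: the function names, the (name, is_sentence_initial) interface and the coarse pos_of lookup are assumptions introduced here for illustration.

DISCARD_POS = {"adverb", "pronoun", "verb", "preposition"}

def occurs_elsewhere(name, non_initial_names):
    # A candidate is validated if it also appears outside sentence-initial
    # position, either as a whole name or as a substring of one.
    return any(name == other or name in other for other in non_initial_names)

def filter_sentence_initial(candidates, pos_of):
    """candidates: (name, is_sentence_initial) pairs collected from one document.
    pos_of(word): coarse part of speech for a single word (assumed available)."""
    non_initial = [n for n, initial in candidates if not initial]
    kept = list(non_initial)
    for name, initial in candidates:
        if not initial:
            continue
        if occurs_elsewhere(name, non_initial):
            kept.append(name)                 # e.g. "White" validated by "Mr White"
            continue
        first, _, tail = name.partition(" ")
        if tail and occurs_elsewhere(tail, non_initial):
            if pos_of(first) in DISCARD_POS:
                kept.append(tail)             # "Yesterday Columbia" -> "Columbia"
            else:
                kept.append(name)             # unresolved: keep "New Sears" as is
        # otherwise the sentence-initial candidate is discarded
    return kept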
A97-1030
Disambiguation of proper names in text. Identifying the occurrences of proper names in text and the entities they refer to can be a difficult task because of the many-to-many mapping between names and their referents. We analyze the types of ambiguity, structural and semantic, that make the discovery of proper names difficult in text, and describe the heuristics used to disambiguate names in Nominator, a fully implemented module for proper name recognition developed at the IBM T.J. Watson Research Center. We use hand-written rules and knowledge bases to classify proper names into broad categories.
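To make the evidence-combination step behind Nominator's confidence scores concrete, here is a minimal sketch of the person-name scoring described in the Nominator text above (personal title, first name, and whether the last name is listed as an organization word in the authority list). The numeric score values, helper data structures and the classify step are illustrative assumptions; the actual weights and component inventory used by Nominator are not given in this text.

def person_confidence(name_parts, org_words, known_first_names):
    """name_parts: dict with optional 'personal_title', 'first' and 'last' strings."""
    title = name_parts.get("personal_title")     # e.g. "Mr"; "Justice" is not one
    first = name_parts.get("first")
    last = (name_parts.get("last") or "").lower()
    if title:
        return +10                               # personal title: high confidence
    if not first and last in org_words:
        return -10                               # "Justice Department": high negative
    if not first:
        return +2                                # "Justice Johnson", "Frank Sinatra": low positive
    if first.lower() in known_first_names:
        return +2                                # known first name, no personal title
    return 0                                     # unknown first name, no personal title

def classify(scores_by_type):
    # Pick the entity type with the highest score; a low best score yields only a
    # weak type, which later evidence (aggregation, a name database) may override.
    best_type, best = max(scores_by_type.items(), key=lambda kv: kv[1])
    return best_type, ("strong" if best >= 10 else "weak")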
a fast and portable realizer for text generation systems systems that generate natural language output as part of their interaction with a user have become a major area of research and developmenttypically natural language generation is divided into several phases namely text planning sentence planning and realization while text and sentence planning may sometimes be combined a realizer is almost always included as a distinct moduleit is in the realizer that knowledge about the target language resides realization is fairly well understood both from a linguistic and from a computational point of view and therefore most projects that use text generation do not include the realizer in the scope of their researchinstead such projects use an offtheshelf realizer among which penman and surgefuf are probably the most popularin this technical note and demo we present a new offtheshelf realizer realprorealpro is derived from previous systems but represents a new design and a completely new implementationrealpro has the following characteristics which we believe are unique in this combination we reserve a more detailed comparison with penman and fuf as well as with alethgengl for a more extensive paperthis technical note presents realpro concentrating on its structure its coverage its interfaces and its performancethe input to realpro is a syntactic dependency structureit is called the deepsyntactic structure or quotdsyntsquot for short and is inspired in this form by i meleuk meaningtext theory this representation has the following salient features lexemes which are in the lexicon are in uppercase those that are not are in lowercasefor lexemes not in the lexicon it is necessary to specify the word class this illustrates that function words need not be included in the input dsynts and that syntactic issues such as subjectverb and noundeterminer agreement are handled automaticallythe tree in figure 2 yields note that realpro does not perform the task of lexical choice the input to realpro must specify all meaningbearing lexemes including features for free pronominalizationalso realpro does not map any sort of semantic labels to syntactic categoriesthese tasks we assume are handled by a separate component this has the advantage that the sentence planner can be unabashedly domainspecific which is necessary in today applications since a broadcoverage implementation of a domainindependent theory of conceptual representations and their mapping to linguistic representations is still far from being realisticfurthermore there is no nondeterminism in realpro the input to realpro fully determines the output though the input is a very abstract linguistic representation which is well suited for interfacing with knowledgebased applicationsthis means that realpro gives the developer control over the output while taking care of the linguistic detailsthe architecture of realpro is based on meaningtext theory which posits a sequence of correspondences between different levels of representationin realpro each transformation is handled by a separate modulerealpro is really a realizer she will which allows for a configuration using specially formatted linguistic knowledge bases which state grammar rules lexical entries and feature defaultseach module draws on one or several lkbsthe lexicon is an lkb which is used by all componentsfigure 3 shows the architectureas mentioned in section 3 realpro is configured by specifying several lkbsthe system comes with lkbs for english french is currently under developmentnormally the 
user need not change the two grammar lkbs unless the grammar of the target sublanguage is not a subset of english however the user may want to extend the lexicon if a lexeme with irregular morphology is not in it yetfor example in order to generate saw for the past tense of to see the following entry would be added to the lexiconthe user may also want to change the defaultsfor example if in hisher application all sentences must be in past tense the user can set the default tense to be past rather than present as follows the english grammar currently covers a wide range of syntactic phenomena most of these points are illustrated by the input in figure 2phenomena currently not handled automatically include certain types of quotfancy syntaxquot such as clefts and itclefts as well as longdistance dependencies such as these are books which i think you should buy realpro is currently distributed with a socket interface which allows it to be run as a standalone serverit has an application programming interface available in c and java which can be used to integrate realpro in applicationsfor training debugging and demonstration purposes realpro can also be used in interactive mode to realize sentences from ascii files containing syntactic specificationsthe following asciibased specification corresponds to the dsynts of sentence in this definition parentheses are used to specify the scope of dependency while square brackets are used to specify features associated with a lexemerealpro can output text formatted as ascii html or rtfin addition realpro can also output an ascii representation of the dgraphs that a user application can format in applicationspecific waysthe following table shows the runtime for sentences of different lengthsthese sentences are all of the form this small girl often claims that that boy often claims that mary likes red wine where the middle clause that that boy often claims is iterated for the longer sentencesthe row labeled quotlengthquot refers to the length of the output string in wordsnote that the number of output words is equal to the number of nodes in the ssynts and furthermore the number of nodes in the ssynts is greater than or equal to the number of nodes in the dsyntsthe row labeled quotsecquot represents average execution time for the sentence of the given input length in seconds on a pc with a 150mhz pentium processor and 32 megs of ramlength 5 10 15 20 30 40 50 sec 11 17 20 28 44 58 72 we also tested the system on the syntactically rather varied and complex input of figure 2 the average runtime for this input is 031 seconds which is comparable to the runtime reported above for the 20 word sentencewe conclude that the uniformity of the syntactic constructions found in the sentences used in the above test sequence does not influence the resultsthe complexity of the generation algorithm derives primarily from the tree traversals which must be performed twice when passing from dsynts to ssynts and from ssynts to the dmorphslet n be the length of the output string at each node each rule in the appropriate grammar must be checked against the subtree rooted at that nodethis tree matching is in the general case exponential in n however in fact it is dependent on two variables the maximal size of grammar rules in the grammar and the branching factor of the input representationpresumably because of deeper facts about language the grammar rules are quite smallthe current grammar does not have any rules with more than three nodesthis reduces the tree matching algorithm 
to polynomial in n furthermore while the branching factor of the input tree can in theory be n 1 in practice it will be much smallerfor example all the input trees used in the tests discussed above have branching factors of no more than 5we thus obtain defacto linear performance which is reflected in the numbers given abovethe system is fully operational runs on pc as well as on unix work stations and is currently used in an application we have developed as well as in several ongoing projects realpro is licensed free of charge to qualified academic institutions and is licensed for a fee to commercial sitesthe development of realpro was partially supported by usaf rome laboratory under contracts f3060293c0015 f3060294c0124 and f3060292c0163 and by darpa under contracts f306029520005 and f3060296c0220we are grateful to r kittredge t korelsky d mccullough a nasr e reiter and m white as well as to three anonymous reviewers for helpful comments about earlier drafts of this technical note andor about realpro
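The ASCII specification format for DSyntS input is described above only in outline: parentheses delimit the scope of a dependency and square brackets carry the features of a lexeme, and the concrete examples from the original paper are not reproduced in this text. The sketch below therefore parses one hypothetical reading of such a notation into nested nodes; the token syntax, the attribute:value separator and the omission of dependency-relation labels are simplifying assumptions, not RealPro's actual input language.

import re

TOKEN = re.compile(r"[()\[\]]|[^\s()\[\]]+")

def parse_dsynts(spec):
    tokens = TOKEN.findall(spec)
    pos = 0

    def parse_node():
        nonlocal pos
        node = {"lexeme": tokens[pos], "features": {}, "children": []}
        pos += 1
        if pos < len(tokens) and tokens[pos] == "[":      # feature block
            pos += 1
            while tokens[pos] != "]":
                attr, _, val = tokens[pos].partition(":")
                node["features"][attr] = val
                pos += 1
            pos += 1
        if pos < len(tokens) and tokens[pos] == "(":      # dependents in scope
            pos += 1
            while tokens[pos] != ")":
                node["children"].append(parse_node())
            pos += 1
        return node

    return parse_node()

# Illustrative input only, not an attested RealPro specification:
tree = parse_dsynts("LIKE [tense:pres] ( MARY WINE [article:def] )")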
A97-1039
A fast and portable realizer for text generation systems. We release a surface realizer, RealPro, intended as an off-the-shelf plug-in realizer. The RealPro surface realizer produces a surface linguistic utterance from a syntactic dependency structure.
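The RealPro runtime table quoted above appears to have lost its decimal points in extraction; assuming the timings are 0.11 to 0.72 seconds, which matches the 0.31 s figure reported as comparable to the 20-word sentence, a quick least-squares fit illustrates the claimed de facto linear behaviour. The decimal interpretation is an assumption; the fit itself is ordinary linear regression over the published lengths and times.

lengths = [5, 10, 15, 20, 30, 40, 50]                    # output length in words
seconds = [0.11, 0.17, 0.20, 0.28, 0.44, 0.58, 0.72]     # assumed decimal placement

n = len(lengths)
mean_x = sum(lengths) / n
mean_y = sum(seconds) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(lengths, seconds))
         / sum((x - mean_x) ** 2 for x in lengths))
intercept = mean_y - slope * mean_x
print(f"about {slope * 1000:.1f} ms per output word, {intercept:.3f} s fixed cost")
# prints roughly 14 ms per word plus a small constant, i.e. near-linear growth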
automatic extraction of subcategorization from corpora of the system that are new the extractor classifier and evaluator the grammar consists of 455 phrase structure rule schemata in the format accepted by the parser operators it is hallow in that no atof which thetempt is made to fully analyse unbounded dependencies however the distinction between arguments and adjuncts is expressed following xbar theory by chomskyadjunction to maximal projections of adjuncts as opposed to government of arguments more all analyses are rooted so the grammar assigns global shallow and often purious analyses to many sentences there are 29 distinct values for vsubcat and 10 for psubcat these are analysed in patterns along with specific closedclass lemmas of arguments such as suband so forth to classify patterns as evidence for one of the 160 subcategorization classes each of these classes can be parameterized for specific predicates by for example different prepositions or particles currently the coverage of this grammarthe proportion of sentences for which at least one analysis is foundis 79 when applied to the susanne corpus a 138k word treebanked and balanced subset of the brown corpus wide coverage is important since information is acquired only from successful parses the combined throughput of the parsing components on a sun ultrasparc 1140 is around 50 words per cpu second 22 the extractor classifier and evaluator the extractor takes as input the ranked analyses from the probabilistic parser it locates the subanalyses around the predicate finding the constituents identified as complements inside each subanalysis and the subject clause preceding it instances of passive constructions are recognized and treated specially the extractor returns the predicate the vsubcat value and just the heads of the complements the subcategorization classes recognized by the classifier were obtained by manually merging the classes exemplified in the comlex syntax and anlt dictionaries and adding around 30 classes found by manual inspection of unclassifiable patterns for corpus examples during development of the system these consisted of some extra patterns for phrasal verbs with complex complementation and with flexible ordering of the prepositionparticle some for nonpassivizable patterns with a surface direct object and some for rarer combinations of governed preposition and complementizer combinations the classifier filters out as unclassifiable around 15 of patterns found by the extractor when run on all the patternsets extracted from the susanne corpus this demonstrates the value of the classifier as a filter of spurious analyses as well as providing both translation between extracted patterns and two existing subcategorization dictionaries and a definition of the target subcategorization dictionary the evaluator builds entries by taking the patterns for a given predicate built from successful parses and records the number of observations of each subcategorization class patterns provide several types of information which can be used to rank or select between patterns in the patternset for a given sentence exemplifying an instance of a predicate such as the ranking of the parse from which it was extracted or the proportion of subanalyses supporting a specific pattern currently we simply select the pattern supported by the highest ranked parse however we are experimenting with alternative approaches the resulting set of putative classes for a predicate are filtered following brent 358 by hypothesis testing on binomial 
frequency data evaluating putative entries on binomial frequency data requires that we record the total number of patternsets n for a given predicate and the number of these patternsets containing a pattern supporting an entry for given class m these figures are straightforwardly computed from the output of the classifier however we also require an estimate of the probability that a pattern for class i will occur with a verb which is not a member of subcategorization class i brent proposes estimating these probabilities experimentally on the basis of the behaviour of the extractor we estimate this probability more directly by first extracting the number of verbs which are members of each class in the anlt dictionary and converting this to a probability of class membership by dividing by the total number of verbs in the dictionary and secondly by multiplying the complement of these probabilities by the probability of a pattern for class i defined as the number of patterns for i extracted from the susanne corpus divided by the total number of patterns so p the probability of verb v not of class i occurring with a pattern for class i is i mi i am i the probability that m or more occurrences of patterns for i will occur with a verb which is not a member of i given n occurrences of that verb setting a threshold of less than or equal to 005 yields a 95 or better confidence that a high enough proportion of patterns for i have observed for the verb to be in class 23 discussion our approach to acquiring subcategorization classes is predicated on the following assumptions most sentences will not allow the application of all possible rules of english complementation some sentences will be unambiguous even given indeterminacy of the provides a detailed explanation and justification for the use of this measure fact 5 of sentences in susanne are assigned only a single analysis by the grammar many incorrect analyses will yield patterns which are unclassifiable and are thus filtered out arguments of a specific verb will occur with greater frequency than adjuncts the patternset generator will incorrectly output for certain classes more often than others and even a highest ranked for i is only a probabilistic cue for membership of i so membership should only be inferred if there are enough occurrences of patterns for i in the data to outweigh the error probability for i this simple automated hybrid linguisticstatistical approach contrasts with the manual linguistic analysis of the comlex syntax lexicog be made automatically for example optional arguments are often understood or implied if missing others are syntactic tests involving diathesis alternation possibilities which require recognition that the ame argument defined usually by semantic class thematic role is occurring across argument positions we hope to exploit this information where possible at a later stage in the development of our approach however recognizing samesimilar arguments requires considerable quantities of lexical data or the ability to backoff to lexical semantic classes at the moment we exploit linguistic information about the syntactic type obligatoriness and position of arguments as well as the set of possible subcategorization classes and combine this with statistical inference based on the probability of class membership and the frequency and reliability of patterns for classes 3 experimental evaluation 31 lexicon evaluation method in order to test the accuracy of our system and to provide empirical feedback for further 
development we took the susanne sec and lob corpora of correct classes found in the dictionary entry however since there are disagreements between the dictionaries and there are classes found in the corpus data that are not contained in either dictionary we report results relative both to a manually merged entry from anlt and comlex and also for seven of the verbs to a manual analysis of the actual corpus data the latter analysis is necessary because precision and recall measures against the merged entry will still tend to yield inaccurate results as the system cannot acquire classes not exemplified in the data and may acquire classes incorrectly absent from the dictionaries we illustrate these problems with reference to there is overlap but not agreement between the comlex and anlt entries thus predict that occur with a sentential complement and dummy subject but only anlt predicts the possibility of a wh complement and only comlex predicts the presence of a ppto argument with the sentential complement one anlt entry covers two comlex entries given the different treatment of the relevant complements but the classifier keeps them distinct the corpus for examples of further classes we judge valid in which take a and infinitive complement as in seems to to be insane a passive participle as in depressed comparison illustrates the problem of errors of omission common to computational lexicons constructed manually and also from dictionaries all classes for exemplified in the corpus data but for example eight classes are not present so comparison only to the merged entry would give an unreasonably low estimate of recall lexicon evaluation figure 2 gives the raw results for the merged entries and corpus analysis on each verb it shows the of positives correct classes proby our system positives incorrect proposed by our system and negatives classes not proposed by our system as judged against the merged entry and for seven of the verbs against the corpus analysis it also shows in the final column the number of sentences from which classes were extracted dictionary corpus precision recall 657 766 355 434 figure 3 type precision and recall ranking accuracy ask 750 begin 1000 believe 667 because 1000 give 700 seem 750 swing 833 mean 814 figure 4 ranking accuracy of classes figure 3 gives the type precision and recall of our system recognition of subcategorization classes as evaluated against the merged dictionary entries and against the manually analysed corpus data the frequency distribution of classes is highly skewed for example for there are 107 instances of the most common class in the corpus data but only 6 instances in total of the least common four classes more generally for the manually analysed verbs almost 60 of the false negatives have only one or two exemplars each in the corpus citations none of them are returned by because the binomial filter always rejects classes hypothesised on the basis of such little evidence in figure 4 we estimate the accuracy with which our system ranks true positive classes against the correct ranking for the seven verbs whose corpus input was manually analysed we compute this measure by calculating the percentage of pairs of classes at positions st n m in the system ranking that are ordered the same in the correct ranking this gives us an estimate of the accuracy of the relative frequencies of classes output by the system for each of the seven verbs for which we undertook a corpus analysis we calculate the token recall of our system as the percentage of true 
positives in the corpus this gives us an estimate of the parsing performance that would result from providing a parser with entries built using the system shown in figure 5 further evaluation of the results for these seven verbs reveals that the filtering phase is the weak in the systeni there are only 13 negatives which the system failed to propose each exemplified in the data by a mean of 45 examples on the other there are 67 negatives by an mean of 71 examples which should ide 360 merged tp fp entry corpus tp fp data no of fn fn sentences ask 9 0 18 9 0 10 390 begin 4 1 7 4 1 7 311 believe 4 4 11 4 4 8 230 because 2 3 6 2 3 5 95 expect 6 5 3 223 find 5 7 15 645 give 5 2 11 5 2 5 639 help 6 3 8 223 like 3 2 7 228 move 4 3 9 217 produce 2 1 3 152 provide 3 2 6 217 seem 8 1 4 8 1 4 534 swing 4 0 10 4 0 8 45 totals 65 34 118 36 11 47 4149 figure 2 raw results for test of 14 verbs token recall ask 785 begin 738 believe 345 because 921 give 922 seem 847 swing 392 mean 809 figure 5 token recall have been accepted by the filter and 11 should have been rejected the performance of the filter for classes with less than 10 exemplars is around chance and a simple heuristic of accepting all classes with more than 10 exemplars would have produced broadly similar results for these verbs the filter may well be performing poorly because the probability of generating a subcategorization class for a given verb is often lower than the error probability for that class 33 parsing evaluation in addition to evaluating the acquired subcategorization information against existing lexical resources we have also evaluated the information in the context of an actual parsing system in particular we wanted to establish whether the subcategorization frequency information for individual verbs could be used to improve the accuracy of a parser that uses statistical techniques to rank analyses the experiment used the same probabilistic parser and tag sequence grammar as are present in the acquisition system although the experiment does not in any way rely on the mean recall precision crossings baseline lexicalised 100 707 723 093 714 729 figure 6 geig evaluation metrics for parser against susanne bracketings parsers or grammars being the same we randomly selected a test set of 250 incoverage sentences from the susanne treebank retagged with possibly multiple tags per word and measured the baseline accuracy of the unlexicalized parser on the sentences using the now standard parsevalgeig evaluation metrics of mean crossing brackets per sentence and bracket recall and precision and retrieved all citations of them in the lob corpus plus susanne with the 250 test sentences excluded we acquired subcategorization and associated frequency information from the citations in the process successfully parsing 380k words we then parsed the test set with each verb subcategorization possibility weighted by its raw frequency score and using the naive addone smoothing technique to allow for omitted possibilities the geig measures for the lexicalized parser show a 7 improvement in the crossing bracket score the existing test corpus this is not statisti carroll briscoe use the same test set although the baseline results reported here differ slightly due to differences in the mapping from parse trees to susannecompatible bracketings 361 significant at the 95 level p if the pattern of differences were maintained over a larger test set of 470 sentences it would be significant we expect that a more sophisticated smoothing technique a larger 
acquisition corpus and extensions to the system to deal with nominal and adjectival predicates would improve accuracy still further nevertheless this experiment demonstrates that lexicalizing a grammarparser with subcategorization frequencies can appreciably improve the accuracy of parse ranking 4 related work brent approach to acquiring subcategorization is based on a philosophy of only exploiting unambiguous and determinate information in unanalysed corpora he defines a number of lexical patterns which reliably cue one of five subcategorization classes brent does not report comprehensive results but for one class sentential complement verbs he achieves 96 precision and 76 recall at classifying individual tokens of 63 distinct verbs as exemplars or nonexemplars of this class he does not attempt to rank different classes for a given verb al utilise a pos tagged corpus and finitestate np parser to recognize and calculate the relative frequency of six subcategorization classes they report an accuracy rate of 83 at classifying 1565 classifiable tokens of 33 distinct verbs in running text and suggest that incorrect noun phrase boundary detection accounts for the majority of errors they report that for 32 verbs their system correctly predicts the most frequent class and for 30 verbs it correctly predicts the second most frequent class if there was one our system rankings include all classes for each verb from a total of 160 classes and average 814 correct manning conducts a larger experiment also using a pos tagged corpus and a finitestate np parser attempting to recognize sixteen distinct complementation patterns he reports that for a test sample of 200 tokens of 40 verbs in running text the acquired subcategorization dictionary listed the appropriate entry for 163 cases giving a token recall of 82 he also reports a comparison of acquired entries for verbs to the entries given in the advanced dictionary of current english 1989 on which his system achieves a precision of 90 and a recall of 43 his system averages 348 subentries less then half the number produced in our experiment it is not clear what level of evidence the performance of manning system is based on but the system was applied to 41 million words of text and the verbs are all common so it is likely that considerably more exemplars of each verb were available 5 conclusions and further work the experiment and comparison reported above suggests that our more comprehensive subcategorization class extractor is able both to assign classes to individual verbal predicates and also to rank them according to relative frequency with comparable accuracy to extant systems we have also demonstrated that a subcategorization dictionary built with the system can improve the accuracy of a probabilistic parser by an appreciable amount the system we have developed is straightforwardly extensible to nominal and adjectival predicates the existing grammar distinguishes nominal and adjectival arguments from adjuncts structurally so all that is required is extension of the classifier developing an analogous system for another language would be harder but not infeasible similar taggers and parsers have been developed for a number of languages but no extant subcategorization dictionaries exist to our knowledge therefore the lexical statistics we utilize for statistical filtering would need to be estimated perhaps using the technique described by brent however the entire approach to filtering needs improvement as evaluation of our results demonstrates that it 
is the weakest link in our current system our system needs further refinement to narrow some subcategorization classes for example to choose between differing control options with predicative complements it also needs supplementing with information about diathesis alternation possibilities and semantic selection preferences on argument heads grishman sterling poznanski sanfilippo resnik ribas and others have shown that it is possible to acquire selection preferences from parsed data our system already gathers head lemmas in patterns so any of these approaches could be applied in principle in future work we intend to extend the system in this direction the ability to recognize that argument slots of different subcategorization classes for the same predicate share semantic restrictionspreferences would assist recognition that the predicate undergoes specific alternations this in turn assisting inferences about control equi and raising predicate subcategorization is a key component of a lexical entry because most if not all recent syntactic theories project syntactic structure from the lexicontherefore a widecoverage parser utilizing such a lexicalist grammar must have access to an accurate and comprehensive dictionary encoding the number and category of a predicate arguments and ideally also information about control with predicative arguments semantic selection preferences on arguments and so forth to allow the recovery of the correct predicateargument structureif the parser uses statistical techniques to rank analyses it is also critical that the dictionary encode the relative frequency of distinct subcategorization classes for each predicateseveral substantial machinereadable subcategorization dictionaries exist for english either built largely automatically from machinereadable versions of conventional learners dictionaries or manually by linguists dictionary boguraev et al the comlex syntax dictionary grishman et alunfortunately neither approach can yield a genuinely accurate or comprehensive computational lexicon because both rest ultimately on the manual efforts of lexicographers linguists and are therefore prone to errors of omission and commission which are hard or impossible to detect automatically furthermore manual encoding is labour intensive and therefore it is costly to extend it to neologisms information not currently encoded or other languagesthese problems are compounded by the fact that predicate subcategorization is closely associated to lexical sense and the senses of a word change between corpora sublanguages andor subject domains in a recent experiment with a widecoverage parsing system utilizing a lexicalist grammatical framework briscoe carroll observed that half of parse failures on unseen test data were caused by inaccurate subcategorization information in the anlt dictionarythe close connection between sense and subcategorization and between subject domain and sense makes it likely that a fully accurate tatic subcategorization dictionary of a language is unattainable in any casemoreover although schabes and others have proposed lexicalized probabilistic grammars to improve the accuracy of parse ranking no widecoverage parser has yet been constructed incorporating probabilities of different subcategorizations for individual predicates because of the problems of accurately estimating themthese problems suggest that automatic construction or updating of subcategorization dictionaries from textual corpora is a more promising avenue to pursuepreliminary experiments 
acquiring a few verbal subcategorization classes have been reported by brent manning and ushioda et atin these experiments the maximum number of distinct subcategorization classes recognized is sixteen and only ushioda et at attempt to derive relative subcategorization frequency for individual predicateswe describe a new system capable of distinguishing 160 verbal subcategorization classesa superset of those found in the anlt and comlex syntax dictionariesthe classes also incorporate information about control of predicative arguments and alternations such as particle movement and extrapositionwe report an initial experiment which demonstrates that this system is capable of acquiring the subcategorization classes of verbs and the relative frequencies of these classes with comparable accuracy to the less ambitious extant systemswe achieve this performance by exploiting a more sophisticated robust statistical parser which yields complete though hallow parses a more comprehensive subcategorization class classifier and a priori estimates of the probability of membership of these classeswe also describe a smallscale experiment which demonstrates that subcategorization class frequency information for individual verbs can be used to improve parsing accuracythe system consists of the following six components which are applied in sequence to sentences containing a specific predicate in order to retrieve a set of subcategorization classes for that predicate for example building entries for attribute and given that one of the sentences in our data was the tagger and lemmatizer return is parsed successfully by the probabilistic lr parser and the ranked analyses are returnedthen the patternset extractor locates the subanalyses containing attribute and constructs a patternsetthe highest ranked analysis and pattern for this example are shown in figure 12patterns encode the value of the vsubcat feature from the vp rule and the head lemma of each argumentin the case of pp arguments the pattern also encodes the value of psubcat from the pp rule and the head lemma of its complementin the next stage of processing patterns are classified in this case giving the subcategorization class corresponding to transitive plus pp with nonfinite clausal complementthe system could be applied to corpus data by first sorting sentences into groups containing instances of a specified predicate but we use a different strategy since it is more efficient to tag lemmatize and parse a corpus just once extracting patternsets for all predicates in each sentence then to classify the patterns in all patternsets and finally to sort and recombine patternsets into sets of patternsets one set for each distinct predicate containing patternsets of just the patterns relevant to that predicatethe tagger lemmatizer grammar and parser have been described elsewhere so we provide only brief relevant details here concentrating on the description of the components of the system that are new the extractor classifier and evaluatorthe grammar consists of 455 phrase structure rule schemata in the format accepted by the parser operatorsit is hallow in that no atof which thetempt is made to fully analyse unbounded dependencieshowever the distinction between arguments and adjuncts is expressed following xbar theory by chomskyadjunction to maximal projections of adjuncts as opposed to government of arguments furthermore all analyses are rooted so the grammar assigns global shallow and often purious analyses to many sentencesthere are 29 distinct values for 
vsubcat and 10 for psubcat these are analysed in patterns along with specific closedclass head lemmas of arguments such as it whether and so forth to classify patterns as evidence for one of the 160 subcategorization classeseach of these classes can be parameterized for specific predicates by for example different prepositions or particlescurrently the coverage of this grammarthe proportion of sentences for which at least one analysis is foundis 79 when applied to the susanne corpus a 138k word treebanked and balanced subset of the brown corpuswide coverage is important since information is acquired only from successful parsesthe combined throughput of the parsing components on a sun ultrasparc 1140 is around 50 words per cpu secondthe extractor takes as input the ranked analyses from the probabilistic parserit locates the subanalyses around the predicate finding the constituents identified as complements inside each subanalysis and the subject clause preceding itinstances of passive constructions are recognized and treated speciallythe extractor returns the predicate the vsubcat value and just the heads of the complements the subcategorization classes recognized by the classifier were obtained by manually merging the classes exemplified in the comlex syntax and anlt dictionaries and adding around 30 classes found by manual inspection of unclassifiable patterns for corpus examples during development of the systemthese consisted of some extra patterns for phrasal verbs with complex complementation and with flexible ordering of the prepositionparticle some for nonpassivizable patterns with a surface direct object and some for rarer combinations of governed preposition and complementizer combinationsthe classifier filters out as unclassifiable around 15 of patterns found by the extractor when run on all the patternsets extracted from the susanne corpusthis demonstrates the value of the classifier as a filter of spurious analyses as well as providing both translation between extracted patterns and two existing subcategorization dictionaries and a definition of the target subcategorization dictionarythe evaluator builds entries by taking the patterns for a given predicate built from successful parses and records the number of observations of each subcategorization classpatterns provide several types of information which can be used to rank or select between patterns in the patternset for a given sentence exemplifying an instance of a predicate such as the ranking of the parse from which it was extracted or the proportion of subanalyses supporting a specific patterncurrently we simply select the pattern supported by the highest ranked parsehowever we are experimenting with alternative approachesthe resulting set of putative classes for a predicate are filtered following brent by hypothesis testing on binomial frequency dataevaluating putative entries on binomial frequency data requires that we record the total number of patternsets n for a given predicate and the number of these patternsets containing a pattern supporting an entry for given class m these figures are straightforwardly computed from the output of the classifier however we also require an estimate of the probability that a pattern for class i will occur with a verb which is not a member of subcategorization class i brent proposes estimating these probabilities experimentally on the basis of the behaviour of the extractorwe estimate this probability more directly by first extracting the number of verbs which are members of each 
class in the anlt dictionary and converting this to a probability of class membership by dividing by the total number of verbs in the dictionary and secondly by multiplying the complement of these probabilities by the probability of a pattern for class i defined as the number of patterns for i extracted from the susanne corpus divided by the total number of patternsso p the probability of verb v not of class i occurring with a pattern for class i is the probability of the event happening m or more times is thus p is the probability that m or more occurrences of patterns for i will occur with a verb which is not a member of i given n occurrences of that verbsetting a threshold of less than or equal to 005 yields a 95 or better confidence that a high enough proportion of patterns for i have been observed for the verb to be in class i3our approach to acquiring subcategorization classes is predicated on the following assumptions probabilistic cue for membership of i so membership should only be inferred if there are enough occurrences of patterns for i in the data to outweigh the error probability for ithis simple automated hybrid linguisticstatistical approach contrasts with the manual linguistic analysis of the comlex syntax lexicographers who propose five criteria and five heuristics for argumenthood and six criteria and two heuristics for adjuncthood culled mostly from the linguistics literaturemany of these are not exploitable automatically because they rest on semantic judgements which cannot be made automatically for example optional arguments are often understood or implied if missingothers are syntactic tests involving diathesis alternation possibilities which require recognition that the ame argument defined usually by semantic class thematic role is occurring across argument positionswe hope to exploit this information where possible at a later stage in the development of our approachhowever recognizing samesimilar arguments requires considerable quantities of lexical data or the ability to backoff to lexical semantic classesat the moment we exploit linguistic information about the syntactic type obligatoriness and position of arguments as well as the set of possible subcategorization classes and combine this with statistical inference based on the probability of class membership and the frequency and reliability of patterns for classesin order to test the accuracy of our system and to provide empirical feedback for further development we took the susanne sec and lob corpora a total of 12 million wordsand extracted all sentences containing an occurrence of one of fourteen verbs up to a maximum of 1000 citations of eachthese verbs listed in figure 2 were chosen at random subject to the constraint that they exhibited multiple complementation patternsthe sentences containing these verbs were tagged and parsed automatically and the extractor classifier and evaluator were applied to the resulting the binomial distribution gives the probability of an event with probability p happening exactly m times out of n attempts successful analysesthe citations from which entries were derived totaled approximately 70k wordsthe results were evaluated against a merged entry for these verbs from the anlt and comlex syntax dictionaries and also against a manual analysis of the corpus data for seven of the verbsthe process of evaluating the performance of the system relative to the dictionaries could in principle be reduced to an automated report of type precision and recall however since there are 
disagreements between the dictionaries and there are classes found in the corpus data that are not contained in either dictionary we report results relative both to a manually merged entry from anlt and comlex and also for seven of the verbs to a manual analysis of the actual corpus datathe latter analysis is necessary because precision and recall measures against the merged entry will still tend to yield inaccurate results as the system cannot acquire classes not exemplified in the data and may acquire classes incorrectly absent from the dictionarieswe illustrate these problems with reference to seem where there is overlap but not agreement between the comlex and anlt entriesthus both predict that seem will occur with a sentential complement and dummy subject but only anlt predicts the possibility of a wh complement and only comlex predicts the presence of a ppto argument with the sentential complementone anlt entry covers two comlex entries given the different treatment of the relevant complements but the classifier keeps them distinctthe corpus data for seem contains examples of further classes which we judge valid in which seem can take a pptol and infinitive complement as in he seems to me to be insane and a passive participle as in he seemed depressedthis comparison illustrates the problem of errors of omission common to computational lexicons constructed manually and also from machinereadable dictionariesall classes for seem are exemplified in the corpus data but for ask for example eight classes are not present so comparison only to the merged entry would give an unreasonably low estimate of recallfigure 2 gives the raw results for the merged entries and corpus analysis on each verbit shows the number of true positives correct classes proposed by our system false positives incorrect classes proposed by our system and false negatives correct classes not proposed by our system as judged against the merged entry and for seven of the verbs against the corpus analysisit also shows in the final column the number of sentences from which classes were extractedfigure 3 gives the type precision and recall of our system recognition of subcategorization classes as evaluated against the merged dictionary entries and against the manually analysed corpus data the frequency distribution of the classes is highly skewed for example for believe there are 107 instances of the most common class in the corpus data but only 6 instances in total of the least common four classesmore generally for the manually analysed verbs almost 60 of the false negatives have only one or two exemplars each in the corpus citationsnone of them are returned by the system because the binomial filter always rejects classes hypothesised on the basis of such little evidencein figure 4 we estimate the accuracy with which our system ranks true positive classes against the correct ranking for the seven verbs whose corpus input was manually analysedwe compute this measure by calculating the percentage of pairs of classes at positions st n m in the system ranking that are ordered the same in the correct rankingthis gives us an estimate of the accuracy of the relative frequencies of classes output by the systemfor each of the seven verbs for which we undertook a corpus analysis we calculate the token recall of our system as the percentage of true positives in the corpusthis gives us an estimate of the parsing performance that would result from providing a parser with entries built using the system shown in figure 5further evaluation 
of the results for these seven verbs reveals that the filtering phase is the weak link in the systenithere are only 13 true negatives which the system failed to propose each exemplified in the data by a mean of 45 exampleson the other hand there are 67 false negatives supported by an estimated mean of 71 examples which should ideally have been accepted by the filter and 11 false positives which should have been rejectedthe performance of the filter for classes with less than 10 exemplars is around chance and a simple heuristic of accepting all classes with more than 10 exemplars would have produced broadly similar results for these verbsthe filter may well be performing poorly because the probability of generating a subcategorization class for a given verb is often lower than the error probability for that classin addition to evaluating the acquired subcategorization information against existing lexical resources we have also evaluated the information in the context of an actual parsing systemin particular we wanted to establish whether the subcategorization frequency information for individual verbs could be used to improve the accuracy of a parser that uses statistical techniques to rank analysesthe experiment used the same probabilistic parser and tag sequence grammar as are present in the acquisition system although the experiment does not in any way rely on the parsers or grammars being the samewe randomly selected a test set of 250 incoverage sentences from the susanne treebank retagged with possibly multiple tags per word and measured the baseline accuracy of the unlexicalized parser on the sentences using the now standard parsevalgeig evaluation metrics of mean crossing brackets per sentence and bracket recall and precision see figure 65next we collected all words in the test corpus tagged as possibly being verbs and retrieved all citations of them in the lob corpus plus susanne with the 250 test sentences excludedwe acquired subcategorization and associated frequency information from the citations in the process successfully parsing 380k wordswe then parsed the test set with each verb subcategorization possibility weighted by its raw frequency score and using the naive addone smoothing technique to allow for omitted possibilitiesthe geig measures for the lexicalized parser show a 7 improvement in the crossing bracket score over the existing test corpus this is not statistically significant at the 95 level although if the pattern of differences were maintained over a larger test set of 470 sentences it would be significantwe expect that a more sophisticated smoothing technique a larger acquisition corpus and extensions to the system to deal with nominal and adjectival predicates would improve accuracy still furthernevertheless this experiment demonstrates that lexicalizing a grammarparser with subcategorization frequencies can appreciably improve the accuracy of parse rankingbrent approach to acquiring subcategorization is based on a philosophy of only exploiting unambiguous and determinate information in unanalysed corporahe defines a number of lexical patterns which reliably cue one of five subcategorization classesbrent does not report comprehensive results but for one class sentential complement verbs he achieves 96 precision and 76 recall at classifying individual tokens of 63 distinct verbs as exemplars or nonexemplars of this classhe does not attempt to rank different classes for a given verbushioda et al utilise a pos tagged corpus and finitestate np parser to recognize and 
calculate the relative frequency of six subcategorization classesthey report an accuracy rate of 83 at classifying 1565 classifiable tokens of 33 distinct verbs in running text and suggest that incorrect noun phrase boundary detection accounts for the majority of errorsthey report that for 32 verbs their system correctly predicts the most frequent class and for 30 verbs it correctly predicts the second most frequent class if there was oneour system rankings include all classes for each verb from a total of 160 classes and average 814 correctmanning conducts a larger experiment also using a pos tagged corpus and a finitestate np parser attempting to recognize sixteen distinct complementation patternshe reports that for a test sample of 200 tokens of 40 verbs in running text the acquired subcategorization dictionary listed the appropriate entry for 163 cases giving a token recall of 82 he also reports a comparison of acquired entries for the verbs to the entries given in the oxford advanced learner dictionary of current english on which his system achieves a precision of 90 and a recall of 43his system averages 348 subentries less then half the number produced in our experimentit is not clear what level of evidence the performance of manning system is based on but the system was applied to 41 million words of text and the verbs are all common so it is likely that considerably more exemplars of each verb were availablethe experiment and comparison reported above suggests that our more comprehensive subcategorization class extractor is able both to assign classes to individual verbal predicates and also to rank them according to relative frequency with comparable accuracy to extant systemswe have also demonstrated that a subcategorization dictionary built with the system can improve the accuracy of a probabilistic parser by an appreciable amountthe system we have developed is straightforwardly extensible to nominal and adjectival predicates the existing grammar distinguishes nominal and adjectival arguments from adjuncts structurally so all that is required is extension of the classifierdeveloping an analogous system for another language would be harder but not infeasible similar taggers and parsers have been developed for a number of languages but no extant subcategorization dictionaries exist to our knowledge therefore the lexical statistics we utilize for statistical filtering would need to be estimated perhaps using the technique described by brent however the entire approach to filtering needs improvement as evaluation of our results demonstrates that it is the weakest link in our current systemour system needs further refinement to narrow some subcategorization classes for example to choose between differing control options with predicative complementsit also needs supplementing with information about diathesis alternation possibilities and semantic selection preferences on argument headsgrishman sterling poznanski sanfilippo resnik ribas and others have shown that it is possible to acquire selection preferences from parsed dataour system already gathers head lemmas in patterns so any of these approaches could be applied in principlein future work we intend to extend the system in this directionthe ability to recognize that argument slots of different subcategorization classes for the same predicate share semantic restrictionspreferences would assist recognition that the predicate undergoes specific alternations this in turn assisting inferences about control equi and raising
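The binomial filter at the heart of the evaluator is described twice in the text above, but its formula is garbled in both places. Under the stated definitions, the error probability for class i combines the ANLT-derived probability of class membership with the SUSANNE-derived probability of a pattern for i, and a class is accepted for a verb seen n times, m of them with a pattern for i, when the probability of m or more such patterns arising by error is at most 0.05. The sketch below is a minimal reconstruction under those assumptions; the function names and argument layout are illustrative.

from math import comb

def error_probability(verbs_in_class_i, total_verbs, patterns_for_i, total_patterns):
    # p(a verb not of class i nevertheless occurs with a pattern for i)
    p_member = verbs_in_class_i / total_verbs      # estimated from the ANLT dictionary
    p_pattern = patterns_for_i / total_patterns    # estimated from SUSANNE patterns
    return (1.0 - p_member) * p_pattern

def binomial_tail(n, m, p):
    # probability of m or more occurrences out of n, each with probability p
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k) for k in range(m, n + 1))

def accept_class(n, m, p_error, threshold=0.05):
    # accept class i for the verb at 95% or better confidence
    return binomial_tail(n, m, p_error) <= threshold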
A97-1052
automatic extraction of subcategorization from corpora. we describe a novel technique and implemented system for constructing a subcategorization dictionary from textual corpora. each dictionary entry encodes the relative frequency of occurrence of a comprehensive set of subcategorization classes for english. an initial experiment on a sample of 14 verbs which exhibit multiple complementation patterns demonstrates that the technique achieves accuracy comparable to previous approaches which are all limited to a highly restricted set of subcategorization classes. we also demonstrate that a subcategorization dictionary built with the system improves the accuracy of a parser by an appreciable amount. we use a grammar and a sophisticated parsing tool for argument-adjunct distinction
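As a concrete illustration of what a dictionary entry encoding relative frequencies of subcategorization classes might look like, here is a small hedged sketch; the data structure and function are hypothetical, not the system's actual representation.

```python
from collections import Counter

def build_subcat_dictionary(observations):
    """observations: iterable of (verb, subcat_class) pairs emitted by the
    classification stage.  Returns verb -> {class: relative frequency}."""
    counts = {}
    for verb, frame in observations:
        counts.setdefault(verb, Counter())[frame] += 1
    return {
        verb: {frame: n / sum(frames.values()) for frame, n in frames.items()}
        for verb, frames in counts.items()
    }

# e.g. build_subcat_dictionary([("give", "NP_NP"), ("give", "NP_PP"), ("give", "NP_NP")])
# -> {"give": {"NP_NP": 0.666..., "NP_PP": 0.333...}}
```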
base noun phrase translation using web data and the them algorithm we consider here the problem of base noun phrase translation we propose a new method to perform the task for a given base np we first search its translation candidates from the web we next determine the possible translation from among the candidates using one of the two methods that we have developed in one method we employ an ensemble of nave bayesian classifiers constructed with the them algorithm in the other method we use tfidf vectors also constructed with the them algorithm experimental results indicate that the coverage and accuracy of our method are significantly better than those of the baseline methods relying on existing technologies we address here the problem of base np translation in which for a given base noun phrase in a source language we are to find out its possible translation in a target language we define a base np as a simple and nonrecursive noun phrasein many cases base nps represent holistic and nondivisible concepts and thus accurate translation of them from one language to another is extremely important in applications like machine translation cross language information retrieval and foreign language writing assistancein this paper we propose a new method for base np translation which contains two steps translation candidate collection and translation selectionin translation candidate collection for a given base np in the source language we look for its translation candidates in the target languageto do so we use a wordtoword translation dictionary and corpus data in the target language on the webin translation selection we determine the possible translation from among the candidateswe use nonparallel corpus data in the two languages on the web and employ one of the two methods which we have developedin the first method we view the problem as that of classification and employ an ensemble of nave bayesian classifiers constructed with the them algorithmwe will use themnbcensembleto denote this method hereafterin the second method we view the problem as that of calculating similarities between context vectors and use tfidf vectors also constructed with the them algorithmwe will use themtfidfto denote this methodexperimental results indicate that our method is very effective and the coverage and top 3 accuracy of translation at the final stage are 914 and 798 respectivelythe results are significantly better than those of the baseline methods relying on existing technologiesthe higher performance of our method can be attributed to the enormity of the web data used and the employment of the them algorithm21 translation with nonparallelcorpora a straightforward approach to word or phrase translation is to perform the task by using parallel bilingual corpora parallel corpora are however difficult to obtain in practiceto deal with this difficulty a number of methods have been proposed which make use of relatively easily obtainable nonparallel corpora within these methods it is usually assumed that a number of translation candidates for a word or phrase are given and the problem is focused on translation selectionall of the proposed methods manage to find out the translation of a given word or phrase on the basis of the linguistic phenomenon that the contexts of a translation tend to be similar to the contexts of the given word or phrasefung and yee for example proposed to represent the contexts of a word or phrase with a realvalued vector in which one element corresponds to one word in the contextsin 
translation selection they select the translation candidates whose context vectors are the closest to that of the given word or phrasesince the context vector of the word or phrase to be translated corresponds to words in the source language while the context vector of a translation candidate corresponds to words in the target language and further the words in the source language and those in the target language have a manytomany relationship it is necessary to accurately transform the context vector in the source language to a context vector in the target language before distance calculationthe vectortransformation problem was not however wellresolved previouslyfung and yee assumed that in a specific domain there is only onetoone mapping relationship between words in the two languagesthe assumption is reasonable in a specific domain but is too strict in the general domain in which we presume to perform translation herea straightforward extension of fung and yees assumption to the general domain is to restrict the manytomany relationship to that of manytoone mapping this approach however has a drawback of losing information in vector transformation as will be describedfor other methods using nonparallel corpora see also 22 translation using web dataweb is an extremely rich source of data for natural language processing not only in terms of data size but also in terms of data type recently a new trend arises in natural language processing which tries to bring some new breakthroughs to the field by effectively using web data nagata et al for example proposed to collect partial parallel corpus data on the web to create a translation dictionarythey observed that there are many partial parallel corpora between english and japanese on the web and most typically english translations of japanese terms are parenthesized and inserted immediately after the japanese terms in documents written in japaneseour method for base np translation comprises of two steps translation candidate collection and translation selectionin translation candidate collection we look for translation candidates of a given base npin translation selection we find out possible translation from the translation candidatesin this paper we confine ourselves to translation of nounnoun pairs from english to chinese our method however can be extended to translations of other types of base nps between other language pairs31 translation candidate collectionwe use heuristics for translation candidate collectionfigure 1 illustrates the process of collecting chinese translation candidates for an english base np information agewith the heuristics1input information age2consult englishchinese word translation dictionaryinformation age 3compositionally create translation candidates inchinese obtain the document frequencies of them 10000 10 0 5output candidates having nonzero documentfrequencies and the document frequencies 10000 10 figure 1translation candidate collection 32 translation selection themnbcensemble we view the translation selection problem as that of classification and employ themnbcensemble to perform the taskfor the ease of explanation we first describe the algorithm of using only themnbc and next extend it to that of using themnbcensemblebasic algorithm let e denote the base np to be translated and c the set of its translation candidates suppose that kc let c represent a random variable on c let e denote a set of words in english and c a set of words in chinesesuppose that ncme and let e represent a random variable on e and c a 
random variable on c figure 2 describes the algorithminput e c contexts containing e contexts containing all cc 1create a frequency vector 11 estimate with maximum likelihood estimation the prior probability 11 where 1them algorithm we define a relation between e and c as cer which represents the links in a translation dictionarywe further define we estimate the parameters of the distribution by using the expectation and maximization algorithm initially we set for all cc 1finally we calculate in this way we can transform the frequency vector in english ee ee cc ecpef ecpef cep ecpefcp cepcp cepcp ecp to select the candidates c whose posterior probabilities are the largest equation is based on bayesrule and the assumption that the data in d are independently generated from ccccp where 1is an additional parameter used to emphasize the prior informationif we ignore the first term in equation then the use of one themnbc turns out to select the candidate whose frequency vector is the closest to the transformed vector d in terms of kl divergence themnbcensemble to further improve performance we use an ensemble of themnbcs while the classifiers are constructed on the basis of the data in different contexts with different window sizesmore specifically we calculate where s1 where cdfcidf1ccc1cidfccidfc1 create a frequency vector 1cidfccidfc1cosctfidfctfidfthe frequency vectors of context words for information ageand are given in a and d in figure 5 respectivelyif for each english word we only retain the link connecting to the chinese translation with the largest frequency to establish a manytoone mapping and transform vector a from english to chinese we obtain vector b it turns out however that vector b is quite different from vector d although they should be similar to each otherwe will refer to this method as major translationhereafterwith them vector a in figure 5 is transformed into vector c which is much closer to vector d as expectedspecifically them can split the frequency of a word in english and distribute them into its translations in chinese in a theoretically sound way note that if we assume a manytoone mapping s i icp s cp 1 relationship then the use of them turns out to be equivalent to that of major translation35 combinationin order to further boost the performance of translation we propose to also use the translation method proposed in nagata et al specifically we combine our method with that of nagata et alby using a backoff strategyfigure 6 illustrates the process of collecting chinese translation candidates for an english base np information asymmetrywith nagata et als methodin the combination of the two methods we first use nagata et als method to perform translation if we cannot find translations we next use our methodwe will denote this strategy backoffwe conducted experiments on translation of the base nps from english to chinesewe extracted base nps from the encarta 1 english corpus using the tool developed by xun et althere were about 1 httpencartamsncomdefaultasp 3000 base nps extractedin the experiments weused the hit englishchinese word translation dictionary2 the dictionary contains about 76000 chinese words 60000 english words and 118000 translation linksas a web search engine we used google five translation experts evaluated the translation results by judging whether or not they were acceptablethe evaluations reported below are all based on their judgements41 basic experimentin the experiment we randomly selected 1000 base nps from the 3000 base npswe next used our 
method to perform translation on the 1000 phrasesin translation selection we employed themnbcensemble and themtfidftable 1best translation result for each method accuracy top 1 top 3 coverage themnbcensemble 617 803 prior 576 776 mtnbcensemble 599 781 themklensemble 459 723 themnbc 608 789 themtfidf 619 808 mttfidf 582 776 themtf 558 778 899 table 1 shows the results in terms of coverage and top n accuracyhere coverage is defined as the percentage of phrases which have translations selected while top n accuracy is defined as the percentage of phrases whose selected top n translations include correct translationsfor themnbcensemble we set the in to be 5 on the basis of our preliminary experimental resultsfor themtfidf we used the nonweb data described in section 44 to estimate idf values of wordswe used contexts with window sizes of 1 3 5 7 9 112 the dictionary is created by the harbin institute of technologya b c d figure 5example of frequency vector transformation 1input information asymmetry2search the english base np on web sites in chineseand obtain documents as follows information asymmetry 3find the most frequently occurring chinese phrasesimmediately before the brackets containing the english base np using a suffix tree 4output the chinese phrases and their documentfrequencies 5 5 figure 6nagata et als method figure 7translation results figure 7 shows the results of themnbcensemble and themtfidf in which for themnbcensemble window sizedenotes that of the largest within an ensembletable 1 summarizes the best results for each of thempriorand mttfidfare actually baseline methods relying on the existing technologiesin prior we select candidates whose prior probabilities are the largest equivalently document frequencies obtained in translation candidate collection are the largestin mttfidf we use tfidf vectors transformed with major translationour experimental results indicate that both themnbcensemble and themtfidf significantly outperform prior and mttfidf when appropriate window sizes are chosenthe pvalues of the sign tests are 000056 and 000133 for themnbcensemble 000002 and 000901 for themtfidf respectivelywe next removed each of the key components of themnbcensemble and used the remaining components as a variant of it to perform translation selectionthe key components are distance calculation by kl divergence them prior probability and ensemblethe variants thus respectively make use of the baseline method prior an ensemble of nave bayesian classifiers based on major translation an ensemble of thembased kl divergence calculations and themnbcfigure 7 and table 1 show the resultswe see that themnbcensemble outperforms all of the variants indicating that all the components within themnbcensemble play positive roleswe removed each of the key components of themtfidf and used the remaining components as a variant of it to perform translation selectionthe key components are idf value and themthe variants thus respectively make use of thembased frequency vectors the baseline method mttfidffigure 7 and table 1 show the resultswe see that themtfidf outperforms both variants indicating that all of the components within themtfidf are neededcomparing the results between mtnbcensemble and themnbcensemble and the results between mttfidf and themtfidf we see that the uses of the them algorithm can indeed help to improve translation accuraciestable 2sample of translation outputs base np translation calcium ion adventure tale lung cancer aircraft carrier adult literacy table 2 shows translations of 
five base nps as output by themnbcensemble in which the translations marked with were judged incorrect by human expertswe analyzed the reasons for incorrect translations and found that the incorrect translations were due to no existence of dictionary entry noncompositional translation ranking error 42 our method vs nagata et als methodtable 3translation results accuracy top 1 top 3 coverage our method 617 803 899 nagata et als 720 760 105 we next used nagata et als method to perform translationfrom table 3 we can see that the accuracy of nagata et als method is higher than that of our method but the coverage of it is lowerthe results indicate that our proposed backoff strategy for translation is justifiable43 combinationin the experiment we tested the backoff strategy table 4 shows the resultsthe backoff strategy table 4translation results accuracy top 1 top 3 coverage backoff 629 797 backoff 622 798 914 helps to further improve the results whether themnbcensemble or themtfidf is used44 web data vs nonweb datato test the effectiveness of the use of web data we conducted another experiment in which we performed translation by using nonweb datathe data comprised of the wall street journal corpus in english and the peoples daily corpus in chinese we followed the backoff strategy as in section 43 to translate the 1000 base npstable 5translation results accuracydata top 1 top 3 coverage web 629 797 914 nonweb 569 747 793 web 622 798 914 nonweb 515 714 785 the results in table 5 show that the use of web data can yield better results than nonuse of it although the sizes of the nonweb data we used were considerably large in practicefor nagata et als method we found that it was almost impossible to find partialparallel corpora in the nonweb datathis paper has proposed a new and effective method for base np translation by using web data and the them algorithmexperimental results show that it outperforms the baseline methods based on existing techniques mainly due to the employment of themexperimental results also show that the use of web data is more effective than nonuse of itfuture work includes further applying the proposed method to the translation of other types of base nps and between other language pairsacknowledgements we thank ming zhou changning huang jianfeng gao and ashley chang for many helpful discussions on this research projectwe also acknowledge shenjie li for help with program coding
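The vector-transformation step is the part of the method that benefits most from a concrete illustration: instead of mapping each English context word to a single "major translation", its frequency is split across all of its dictionary translations and the split is re-estimated with EM. The sketch below is a simplified reading of that idea rather than the paper's exact parameterisation; the dictionary format, the iteration count and the normalisation are assumptions.

```python
def em_transform(freq_en, dictionary, iterations=10):
    """Transform an English context-frequency vector into a Chinese one by
    splitting each English word's frequency over its dictionary translations,
    with the split re-estimated by EM.
    freq_en: {english_word: frequency}
    dictionary: {english_word: [chinese_translations]}"""
    chinese = {c for e in freq_en for c in dictionary.get(e, [])}
    if not chinese:
        return {}
    p_c = {c: 1.0 / len(chinese) for c in chinese}          # uniform start
    for _ in range(iterations):
        counts = dict.fromkeys(chinese, 0.0)
        for e, f in freq_en.items():
            trans = dictionary.get(e, [])
            z = sum(p_c[c] for c in trans)
            if z == 0.0:
                continue
            for c in trans:                                  # E-step: split f(e)
                counts[c] += f * p_c[c] / z
        total = sum(counts.values()) or 1.0
        p_c = {c: counts[c] / total for c in chinese}        # M-step
    out = dict.fromkeys(chinese, 0.0)                        # expected Chinese counts
    for e, f in freq_en.items():
        trans = dictionary.get(e, [])
        z = sum(p_c[c] for c in trans)
        if z:
            for c in trans:
                out[c] += f * p_c[c] / z
    return out
```

If each English word instead keeps only its single most frequent translation, the same loop degenerates into the "major translation" baseline discussed above.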
C02-1011
base noun phrase translation using web data and the EM algorithm. we consider here the problem of base noun phrase translation. we propose a new method to perform the task. for a given base np we first search its translation candidates from the web. we next determine the possible translation from among the candidates using one of the two methods that we have developed. in one method we employ an ensemble of naive bayesian classifiers constructed with the EM algorithm. in the other method we use tfidf vectors also constructed with the EM algorithm. experimental results indicate that the coverage and accuracy of our method are significantly better than those of the baseline methods relying on existing technologies. in our method translation candidates of a term are compositionally generated by concatenating the translations of the constituents of the term and are reranked by measuring contextual similarity against the source language term
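Since the second selection method ranks candidates by similarity between TF-IDF context vectors, a small sketch of that ranking step may help; the sparse-vector representation (plain dicts) and the top-3 cut-off are illustrative assumptions, and the EM-based construction of the vectors is assumed to have happened upstream.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(source_vec, candidate_vecs, top_n=3):
    """source_vec: TF-IDF context vector of the source base NP, already
    transformed into the target language; candidate_vecs: candidate -> its
    TF-IDF context vector.  Returns the top_n candidates by cosine."""
    ranked = sorted(candidate_vecs,
                    key=lambda c: cosine(source_vec, candidate_vecs[c]),
                    reverse=True)
    return ranked[:top_n]
```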
efficient support vector classifiers for named entity recognition named entity recognition is a task in whichproper nouns and numerical information are extracted from documents and are classified into cat egories such as person organization and date it is a key technology of information extraction and opendomain question answering first we showthat an ne recognizer based on support vector ma chines gives better scores than conventional systems however offtheshelf svm classifiers are too inefficient for this task therefore we present a method that makes the system substantially fasterthis approach can also be applied to other similar tasks such as chunking and partofspeech tagging we also present an svmbased feature selec tion method and an efficient training method named entity recognition is a task in whichproper nouns and numerical information in a docu ment are detected and classified into categories suchas person organization and dateit is a key technol ogy of information extraction and opendomain question answering we are building a trainable opendomain question answering system called saiqaiiin this paper we show that an ne recognizer based on support vector machines gives better scores thanconventional systemssvms have given high per formance in various classification tasks however it turned out that offtheshelf svm classifiers are too inefficient for ne recognitionthe recognizer runs at a rate of only 85 bytessec on an athlon 13 ghz linux pc while rulebased systems can process several kilobytes in a secondthe major reason is the inefficiency of svm classifiersthere are otherreports on the slowness of svm classifiersanother svmbased ne recognizer is 08 sentencessec on a pentium iii 933 mhz pcan svmbased partofspeech tagger is 20 tokenssec on an alpha 21164a 500 mhz processorit is difficult to use such slow systems in practical applicationsin this paper we present a method that makes the ne system substantially fasterthis method can also be applied to other tasks in natural languageprocessing such as chunking and pos tagginganother problem with svms is its incomprehensibil ityit is not clear which features are important or how they workthe above method is also useful for finding useless featureswe also mention a method to reduce training time11 support vector machinessuppose we have a set of training data for a two class problem where ffflfi is a feature vector of the ffi th sample in the training data and is the label forthe samplethe goal is to find a decision func tion that accurately predicts for unseen a nonlinear svm classifier gives a decision function sign for an input vector where 0 213 546879 6 here means is a member of a cer tain class and means is not a mem ber7 s are called support vectors and are repre sentatives of training examples is the numberof support vectorstherefore computational com plexity of is proportional to support vectorsand other constants are determined by solving a cer tain quadratic programming problem4687 is akernel that implicitly maps vectors into a higher di mensional spacetypical kernels use dot products 4687 acbed7 a polynomial kernel of degree fis given by bg hijkg l we can use vari mm m m n m m m m m m m m m n m o o o o o n o o o o o o o o o o o o m positive example o negative example n m n o support vectors figure 1 support vector machine ous kernels and the design of an appropriate kernel for a particular application is an important research issuefigure 1 shows a linearly separable casethe de cision hyperplane defined by prq 
separatespositive and negative examples by the largest mar ginthe solid line indicates the decision hyperplaneand two parallel dotted lines indicate the margin be tween positive and negative examplessince such aseparating hyperplane may not exist a positive pa rameter s is introduced to allow misclassificationssee vapnik 12 svmbased ne recognitionas far as we know the first svmbased ne system was proposed by yamada et al for japanesehis system is an extension of kudos chunking sys tem that gave the best performance at conll2000 shared tasksin theirsystem every word in a sentence is classified sequentially from the beginning or the end of a sen tencehowever since yamada has not compared it with other methods under the same conditions it is not clear whether his ne system is better or nothere we show that our svmbased ne system ismore accurate than conventional systemsour sys tem uses the viterbi search instead of sequential determinationfor training we use crl data which was prepared for irex it has about 19000 nes in 1174 articleswe also use additional data by isozaki both datasets are based on mainichi newspapers 1994 and 1995 cdromswe use irexs formal test data calledgeneral that has 1510 named entities in 71 ar ticles from mainichi newspaper of 1999systems are compared in terms of generals fmeasure 1httpcsnyueducsprojectsproteusirexwhich is the harmonic mean of recalland preci sionand is defined as followsrecall m precision m where m is the number of nes correctly extracted and classified by the systemwe developed an svmbased ne system by following our ne system based on maximum entropy modeling we sim ply replaced the me model with svm classifiersthe above datasets are processed by a morphological analyzer chasen 2212it tokenizes a sen tence into words and adds pos tagschasen uses about 90 pos tags such as commonnoun and locationnamesince most unknown words are proper nouns chasens parameters for unknownwords are modified for better resultsthen a char acter type tag is added to each wordit uses 17character types such as allkanji and small integersee isozaki for detailsnow japanese ne recognition is solved by theclassification of words for instance the words in president george herbert bush saidclinton is are classified as follows president other george personbegin her bert personmiddle bush personend said other clinton personsingle is otherin this way the first word of a persons name is labeled as personbeginthe last word is labeled as personendother words in the nameare personmiddleif a persons name is expressed by a single word it is labeled as person singleif a word does not belong to any namedentities it is labeled as othersince irex de fines eight ne classes words are classified into 33 categorieseach sample is represented by 15 features be because each word has three features and two preceding words and two succeeding words are also used for context dependencealthough infrequent features are usually removed to prevent overfitting we use all features because svms are robusteach sample is represented by a long binary vector ie a sequence of 0 and 1 for instance bushin the above example is represented by a 2httpchasenaistnaraacjp vector pygz_ gz a described belowonly 15 elements are 1bdcfe8ghji current word is not alicebdc klghme current word is bushbdc nghji current word is not charlie bdcfeopikpqpghme current pos is a proper noun bdcfeopinipghji current pos is not a verb bdc nqresre ghji previous word is not henrybdc nqreskghme previous word is herberthere we have to consider the 
following problemsfirst svms can solve only a twoclass problemtherefore we have to reduce the above multi class problem to a group of twoclass problemssecond we have to consider consistency among word classes in a sentencefor instance a word classified as personbegin should be followed by personmiddle or personendit impliesthat the system has to determine the best combina tions of word classes from numerous possibilitieshere we solve these problems by combining exist ing methodsthere are a few approaches to extend svms to cover t class problemshere we employ the oneclass versus all othersapproachthat is each clas sifier is trained to distinguish members of a class v from nonmembersin this method two or more classifiers may give to an unseen vector or no classifier may give one common way to avoid such situations is to compare you values and to choose the class index v of the largest you the consistency problem is solved by the viterbi searchsince svms do not output probabilities we use the svmsigmoid method that is we use a sigmoid function wxg jyzil g to map you to a probabilitylike valuethe output of the viterbi search is adjusted by a postprocessor for wrong word boundariesthe adjustment rules are also statistically determined 13 comparison of ne recognizerswe use a fixed value q9q fmeasures are not very sensitive to unless is too smallwhenwe used 1038986 training vectors generals f measure was 8964 for q and 9003 for 6q9q we employ the quadratic kernel because it gives the best resultspolynomial kernels of degree 1 2 and 3 resulted in 8303 8831 fmeasure rgdt me svm 0 20 40 60 80 100 120 crl data e 76 78 80 82 84 86 88 90 number of nes in training data figure 2 fmeasures of ne systems and 8704 respectively when we used 569994 training vectorsfigure 2 compares ne recognizers in terms ofgenerals fmeasuressvmin the figure in dicates fmeasures of our system trained by kudos tinysvm0073 with sq it attained 8504 when we used only crl datame indicates our me system and rgdtindicates a rulebasedmachine learning system according to this graph svmis better than the other sys temshowever svm classifiers are too slowfa mous svmlight 350 took 12 days to classify 569994 vectors derived from 2 mb documentsthat is it runs at only 19 bytessectinysvms classifier seems best optimized among publicly available svm toolkits but it still works at only 92 bytessecin this section we investigate the because of this in efficiency and propose a solutionall experiments are conducted for training data of 569994 vectorsthe total size of the original news articles was 2 mb and the number of nes was 39022according to the definition of a classifier has to process support vectors for each table 1 shows s for different word classesaccording to this table classi fication of one word requires s dot products with 228306 support vectors in 33 classifierstherefore the classifiers are very slowwe have never seensuch large s in svm literature on pattern recogni tionthe reason for the large s is word featuresinother domains such as character recognition dimen 3httpclaistnaraacjptakukusoftwaretinysvm sion is usually fixedhowever in the ne task increases monotonically with respect to the size of the training datasince svms learn combinations of features tends to be very largethis tendencywill hold for other tasks of natural language pro cessing toohere we focus on the quadratic kernel bg ig that yielded the best score in the above experimentssuppose gz_ gz a hasonly nonzero elementsthe dot prod uct of and 7 5 z_ z is given by fi 1 
gz z hence id7 w fi 0 1 gz z fi 0 1 gz z we can rewrite as followsfi 0 1 _ zgzm zgzfi 0 1 fi 0 1 rz bgzgzb_ where 1 3 z 1 3 5 z_ z 1 3 p8z prz b 1 3 z zb_ for binary vectors it can be simplified as 0 9l 1 _c z0 91 z b where z zm zy 0 5l 1 3 9z b 0 _ l 1 1 3 now can be given by summing up zfor every nonzero element gz and z b for every nonzero pair gzgzb accordingly we only need to add wjr z con stants to get therefore we can expect thismethod to be much faster than a nave implementa tion that computes tens of thousands of dot products at run timewe call this method xqktable 1 compares tinysvm and xqk in terms of cpu time taken to apply 33 classifiers to process the training dataclasses are sorted by small numbers in parentheses indicate the initializationtime for reading support vectors 7 and allocat ing memoryxqk requires a longer initialization time in order to prepare and for instancetinysvm took 1149026 seconds in to tal for applying others classifier to all vectors in the training dataits initialization phase took 213 seconds and all vectors in the training data were classified in 1148813 sec ondson the other hand xqk took 22528 secondsin total and its initialization phase took 17417 sec ondstherefore 569994 vectors were classified in5111 secondsthe initialization time can be disre garded because we can reuse the above coefficentsconsequently xqk is 2248 times faster than tinysvm for othertinysvm took 6 hours to process all the word classes whereas xqk took only 17 minutesxqk is 102 times faster than svmlight 350 which took 12 daysxqk makes the classifiers faster but mem ory requirement increases from 1 to 1 fl zr where is the num ber of nonzero elements in 7 therefore removalof useless features would be beneficialconven tional svms do not tell us how an individual feature works because weights are given not to features but to 4687 however the above weights clarify how a feature or a feature pair workswe can use this fact for feature selection after the trainingwe simplify by removing all features that satisfy 8zf rz bf przb kthe largest that does not change the number of misclassifications for the training data is found by using the binary searchfor each word classwe call this method xqkfsthis approx imation slightly degraded generals fmeasure from 8831 to 8803table 2 shows the reduction of features that ap pear in support vectorsclasses are sorted by the numbers of original featuresfor instance otherhas 56220 features in its support vectorsaccord ing to the binary search its performance did notchange even when the number of features was re duced to 21852 at kqqr9rtable 1 reduction of cpu time by xqk word class tinysvm xqk speed up svmlight other 64970 1148813 5111 2248 2998652 artifactmiddle 14171 137285 4132 332 666626 locationsingle 13019 120929 3824 316 610054 organizmiddle 12050 98739 3793 260 557082 total 228306 2175423 101920 213 10446631 table 2 reduction of features by xqkfs word class number of features number of nonzero weights seconds other 56220 21852 1512827 892228 4231 artififactmiddle 22090 4410 473923 164632 3047 locationsingle 17169 3382 366961 123808 2772 organizmiddle 17123 9959 372784 263695 3102 organizend 15214 3073 324514 112307 2687 total 307721 75455 6669664 2650681 76310 the total number of features was reduced by 75and that of weights was reduced by 60the ta ble also shows cpu time for classification by the selected featuresxqkfs is 285 times faster than tinysvmalthough the reduction of features is significant the reduction of cpu time is moderate because 
most of the reducedfeatures are infrequent oneshowever simple re duction of infrequent features without consideringweights damages the systems performancefor instance when we removed 5066 features that ap peared four times or less in the training data themodified classifier for organizationend misclassified 103 training examples whereas the origi nal classifier misclassified only 19 exampleson theother hand xqkfs removed 12141 features with out an increase in misclassifications for the training dataxqk can be easily extended to a more generalquadratic kernel bg vlv g and to nonbinary sparse vectorsxqkfs can be used to se lect useful features before training by other kernelsas mentioned above we conducted an experiment for the cubic kernel by using all featureswhen we trained the cubic kernel classifiers by us ing only features selected by xqkfs tinysvms classification time was reduced by 40 because was reduced by 38generals fmeasure was slightly improved from 8704 to 8710onthe other hand when we trained the cubic ker nel classifiers by using only features that appeared three times or more tinysvms classification time was reduced by only 14 and the fmeasure was slightly degraded to8685therefore we expect xqkfs to be use ful as a feature selection method for other kernels when such kernels give much better results than the quadratic kernelsince training of 33 classifiers also takes a longtime it is difficult to try various combinations of pa rameters and featureshere we present a solution for this problemin the training time calculation of bdr bdr bd for various s is dominantconventional systems save time by caching the resultsby analyzing tinysvms classifier we found that they can be calculated more efficientlyfor sparse vectors most svm classifiers use a sparse dot product algorithm that compares nonzero elements of and those of 7 to get bed7 in however is common to all dot products in bd7 bd 7 therefore we can implement a faster classifierthat calculates them concurrentlytinysvms clas sifier prepares a list fi2si z that contains all 7 s whose th coordinates are not zeroin addition counters for d7 p d7 are prepared because dot products of binary vectors are integersthen for each nonzero gz the counters are incremented for all 7 fi2si zby checking only members of fi2si z for nonzero gz the classifier is not bothered by fruitless cases gzq 8zyq orgzwq zyq therefore tinysvms clas sifier is faster than other classifiersthis method is applicable to any kernels based on dot productsfor the training phase we can build fi2si zthat contains all s whose th coordinates are notzerothen bd bd can be efficiently calculated because is commonthis i am provement is effective especially when the cache is small andor the training data is largewhen we used a 200 mb cache the improved system took only 13 hours for training by the crl data while tinysvm and svmlight took 30 hours and 46hours respectively for the same cache sizeal though we have examined other svm toolkits we could not find any system that uses this approach in the training phasethe above methods can also be applied to othertasks in natural language processing such as chunk ing and pos tagging because the quadratic kernels give good resultsutsuro et al report that a combination of two ne recognizers attained f 8407 butwrong word boundary cases are excludedour system attained 8504 and word boundaries are auto matically adjustedyamada also reports that fis bestalthough his sys tem attained f 837 for 5fold crossvalidation of the crl data our system 
attained 868since we followedisozakis implementation our system is different from yamadas system in the fol lowing points 1 adjustment of word boundaries 2chasens parameters for unknown words 3 char acter types 4 use of the viterbi searchfor efficient classification burges and scholkopf propose an approximation method that uses reduced set vectorsinstead of support vectorssince the size of the reduced set vectors is smaller than classifiers become more efficient but the computational cost to determine the vectors is verylargeosuna and girosi propose two meth odsthe first method approximates by support vector regression but this method is applicable onlywhen s is large enoughthe second method reformulates the training phaseour approach is sim pler than these methodsdowns et al try to reduce the number of support vectors by using linear dependencewe can also reduce the runtime complexity of a multiclass problem by cascading svms in the form of a binary tree or a directacyclic graph yamada and mat sumoto applied such a method to their nesystem and reduced its cpu time by 39this ap proach can be combined with our svm classifersne recognition can be regarded as a variablelength multiclass problemfor this kind of prob lem probabilitybased kernels are studied for more theoretically wellfounded methods our svmbased ne recognizer attained f 9003this is the best score as far as we knowsince it was too slow we made svms fasterthe improved classifier is 21 times faster than tinysvmand 102 times faster than svmlightthe i am proved training program is 23 times faster than tinysvm and 35 times faster than svmlightwe also presented an svmbased feature selectionmethod that removed 75 of featuresthese methods can also be applied to other tasks such as chunk ing and pos taggingacknowledgmentwe would like to thank yutaka sasaki for the training datawe thank members of knowledge pro cessing research group for valuable comments and discussionwe also thank shigeru katagiri and kenichiro ishii for their support
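The speed-up at the heart of this paper comes from expanding the quadratic kernel so that classification sums precomputed per-feature and per-feature-pair weights instead of taking dot products with every support vector. The sketch below assumes the common form K(x, z) = (x·z + 1)^2 over binary feature vectors represented as sets of feature ids, and alphas that already carry the label signs; it illustrates the expansion, not the system's implementation.

```python
from itertools import combinations

def expand_quadratic_svm(support_vectors, alphas, bias):
    """Expand an SVM with kernel K(x, z) = (x.z + 1)**2 over binary feature
    vectors (each given as a set of feature ids) into a constant plus
    per-feature and per-feature-pair weights, so classification no longer
    iterates over support vectors.  alphas include the label signs y_i."""
    w0 = sum(alphas)
    w1, w2 = {}, {}
    for a, sv in zip(alphas, support_vectors):
        for i in sv:
            w1[i] = w1.get(i, 0.0) + 3.0 * a          # coefficient of |X & SV|
        for pair in combinations(sorted(sv), 2):
            w2[pair] = w2.get(pair, 0.0) + 2.0 * a    # coefficient of shared pairs
    return w0, w1, w2, bias

def classify(active_features, w0, w1, w2, bias):
    """active_features: set of feature ids that are 1 in the input vector."""
    s = w0 + bias
    s += sum(w1.get(i, 0.0) for i in active_features)
    s += sum(w2.get(p, 0.0) for p in combinations(sorted(active_features), 2))
    return s
```

With this rewriting, classification cost scales with the number of active features (and their pairs) in the input rather than with the number of support vectors, which is where the reported speed-ups come from.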
C02-1054
efficient support vector classifiers for named entity recognition. named entity recognition is a task in which proper nouns and numerical information are extracted from documents and are classified into categories such as person organization and date. it is a key technology of information extraction and open-domain question answering. first we show that an ne recognizer based on support vector machines gives better scores than conventional systems. however off-the-shelf svm classifiers are too inefficient for this task. therefore we present a method that makes the system substantially faster. this approach can also be applied to other similar tasks such as chunking and part-of-speech tagging. we also present an svm-based feature selection method and an efficient training method. we propose a kernel expansion that is used to transform the d-degree polynomial kernel based classifier into a linear one with a modified decision function. we propose an xqk which can make their named-entity recognizer drastically fast
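One detail worth making concrete is how the one-class-versus-all-others scores are turned into something a Viterbi search over word-class sequences can consume: each margin is passed through a sigmoid to obtain a probability-like value, and the search enforces consistency constraints between adjacent tags. The sketch below carries several assumptions (the sigmoid slope, the log-space accumulation, the function names, and that the transition constraints admit at least one predecessor for every class); it is an illustration, not the paper's decoder.

```python
import math

def log_sigmoid(x):
    """Numerically safe log of the sigmoid used to map SVM margins to
    probability-like values."""
    return -math.log1p(math.exp(-x)) if x >= 0 else x - math.log1p(math.exp(x))

def viterbi_decode(score_seq, classes, allowed):
    """score_seq: one dict per word mapping each class to its SVM margin.
    allowed(prev, cur) encodes constraints such as 'PERSON-BEGIN must be
    followed by PERSON-MIDDLE or PERSON-END'; it is assumed to admit at
    least one predecessor for every class (e.g. OTHER can precede anything).
    Returns the highest-scoring consistent class sequence."""
    best = {c: (log_sigmoid(score_seq[0][c]), [c]) for c in classes}
    for scores in score_seq[1:]:
        nxt = {}
        for c in classes:
            choices = [(s + log_sigmoid(scores[c]), path + [c])
                       for p, (s, path) in best.items() if allowed(p, c)]
            if choices:
                nxt[c] = max(choices)
        best = nxt
    return max(best.values())[1]
```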
a graph model for unsupervised lexical acquisition this paper presents an unsupervised method forassembling semantic knowledge from a partof speech tagged corpus using graph algorithms the graph model is built by linking pairs of words which participate in particular syntacticrelationships we focus on the symmetric relationship between pairs of nouns which occur to gether in lists an incremental clusterbuilding algorithm using this part of the graph achieves82 accuracy at a lexical acquisition task evaluated against wordnet classes the model naturally realises domain and corpus specific am biguities as distinct components in the graph surrounding an ambiguous word semantic knowledge for particular domains isincreasingly important in nlpmany applications such as wordsense disambiguation in formation extraction and speech recognitionall require lexiconsthe coverage of handbuilt lexical resources such as wordnet has increased dramatically in re cent years but leaves several problems andchallengescoverage is poor in many criti cal rapidly changing domains such as current affairs medicine and technology where much time is still spent by human experts employed to recognise and classify new termsmostlanguages remain poorly covered in compari son with englishhandbuilt lexical resourceswhich cannot be automatically updated can of ten be simply misleadingfor example using wordnet to recognise that the word apple refers to a fruit or a tree is a grave error in the many situations where this word refers to a computer manufacturer a sense which wordnet does notcoverfor nlp to reach a wider class of appli cations in practice the ability to assemble andupdate appropriate semantic knowledge auto matically will be vitalthis paper describes a method for arranging semantic information into a graph where the nodes are words and the edges represent relationships be tween wordsthe paper is arranged as followssection 2 reviews previous work on semanticsimilarity and lexical acquisitionsection 3 de scribes how the graph model was built from the postagged british national corpussection 4 describes a new incremental algorithm used to build categories of words step by step from thegraph modelsection 5 demonstrates this algo rithm in action and evaluates the results againstwordnet classes obtaining stateoftheart re sultssection 6 describes how the graph modelcan be used to recognise when words are polysemous and to obtain groups of words represen tative of the different sensesmost work on automatic lexical acquisition has been based at some point on the notion of semantic similaritythe underlying claim is that words which are semantically similar occurwith similar distributions and in similar con texts the main results to date in the field of au tomatic lexical acquisition are concerned withextracting lists of words reckoned to belong to gether in a particular category such as vehicles or weapons roark and charniak de scribe a generic algorithmfor extracting suchlists of similar words using the notion of seman tic similarity as follows 1for a given category choose a smallset of exemplars 2count cooccurrence of words andseed words within a corpus these counts to select new seed words 4return to step 2 and iterate n times5use a figure of merit to rank wordsfor category membership and output a ranked list algorithms of this type were used by riloff and shepherd and roark and charniak reporting accuracies of 17 and 35 respectivelylike the algorithm we present in section 5 the similarity measure used in these cases was 
based on co occurrence in listsboth of these works evaluated their resultsby asking humans to judge whether items generated were appropriate members of the cate gories soughtriloff and shepherd also give some credit for related wordsone problem with these techniques is the danger of infectionsonce any incorrect or outofcategory word has been admitted theneighbours of this word are also likely to be ad mittedin section 4 we present an algorithmwhich goes some way towards reducing such in fectionsthe early results have been improved upon byriloff and jones where a mutual boot strappingapproach is used to extract words in particular semantic categories and expression patterns for recognising relationships betweenthese words for the purposes of information extractionthe accuracy achieved in this experiment is sometimes as high as 78 and is there fore comparable to the results reported in this paperanother way to obtain wordsenses directly from corpora is to use clustering algorithms on featurevectors clustering techniques can also be used to discriminate between different senses of an ambiguous worda general problem for such cluster ing techniques lies in the question of how many clusters one should have ie how many senses are appropriate for a particular word in a given domain lins approach to this problem isto build a similarity tree of words related to a target word different senses of duty can be discerned as different subtrees of this similarity treewepresent a new method for wordsense discrimi nation in section 6postagged corpus in this section we describe how a graph a collection of nodes and links was built to represent the relationships between nounsthemodel was built using the british national cor pus which is automatically tagged for parts of speechinitially grammatical relations between pairsof words were extractedthe relationships ex tracted were the following noun verb verb noun adjective nounnoun noun noun andor noun the last of these relationships often occurs when the pair of nouns is part of a listsince lists are usually comprised of objects which are similar in some way these relationships have been used to extract lists of nouns with similar properties in this paper we too focus on nouns cooccurring in liststhis is be because the noun andor noun relationship is the only symmetric relationship in our model andsymmetric relationships are much easier to ma nipulate than asymmetric onesour full graph contains many directed links between words of different parts of speechinitial experiments with this model show considerable promise but are at too early a stage to be reported upon yetthus the graph used in most of this paper repre sents only nounseach node represents a noun and two nodes have a link between them if they cooccur separated by the conjunctions and or or and each link is weighted according to the number of times the cooccurrence is observedvarious cutoff functions were used to deter mine how many times a relationship must be observed to be counted as a link in the grapha wellbehaved option was to take the top nneighbours of each word where n could be determined by the userin this way the link weighting scheme was reduced to a linkranking schemeone consequence of this decision was that links to more common words were preferred over links to rarer wordsthis decision may have effectively boosted precision at the expense of recall because the preferred links are to fairlycommon and more stable wordsre search is need to reveal theoretically motivatedor experimentally 
optimal techniques for select ing the importance to assign to each link the choices made in this area so far are often of an ad hoc naturethe graph used in the experiments described has 99454 nodes and 587475 linksthere were roughly 400000 different types tagged as nouns in the corpus so the graph model represents about one quarter of these nouns including most of the more common onesextracting categories of similar words in this section we describe a new algorithm for adding the most similar nodeto an existingcollection of nodes in a way which incremen tally builds a stable clusterwe rely entirelyupon the graph to deduce the relative importance of relationshipsin particular our algo rithm is designed to reduce socalled infections where the inclu sion of an outofcategory word which happens to cooccur with one of the category words can significantly distort the final listhere is the process we use to select and add the most similar nodeto a set of nodes definition 1 let a be a set of nodes and let n the neighbours of a be the nodes which are linked to any a a aanthe best new node is taken to be the node b na with the highest proportion of links to nmore precisely for each you na let the affinity between you and a be given by the ratio n n n the best new node b n a is the node which maximises this affinity scorethis algorithm has been built into an online demonstration where the user inputs a givenseed word and can then see the cluster of re lated words being gradually assembledthe algorithm is particularly effective atavoiding infections arising from spurious co occurrences and from ambiguityconsider forexample the graph built around the word ap ple in figure 6suppose that we start with the seedlist apple orange bananahowever many times the string apple and novelloccurs in the corpus the novell node will not be addedto this list because it does not have a link to or ange banana or any of their neighbours except for appleone way to summarise the effect of this decision is that the algorithm adds words to clusters depending on type frequency rather than token frequencythis avoids spurious links due to particular idioms rather than geniune semantic similarityin this section we give examples of lexical cat egories extracted by our method and evaluatethem against the corresponding classes in word net51 methodologyour methodology is as followsconsider an intuitive category of objects such as musical instrumentsdefine the wordnet classor wordnet categoryof musical instruments tobe the collection of synsets subsumed in word net by the musical instruments synsettake a protypical exampleof a musical instrument such as pianothe algorithm defined in gives a way of finding the n nodes deemed to be most closely related to the piano nodethesecan then be checked to see if they are members of the wordnet class of musical instru mentsthis method is easier to implement and less open to variation than human judgementswhile wordnet or any other lexical resource isnot a perfect arbiter it is hoped that this exper iment procedure is both reliable and repeatablethe ten classes of words chosen were crimes places tools vehicles musical instruments clothes diseases body parts academic subjects and foodstuffsthe classes were chosen beforethe experiment was carried out so that the re sults could not be massaged to only use thoseclasses which gave good results and and so were included for comparisonhaving chosen these classes 20 words were retrieved using asingle seedword chosen from the class in ques tionthis list of 
words clearly depends on the seed word chosenwhile we have tried to optimise this choice it depends on the corpus and thethe modelthe influence of semantic proto type theory is apparent in this process a link we would like to investigate in more detailit is possible to choose an optimal seed word for a particular category it should be possible to compare these optimal seed wordswith the prototypessuggested by psychologi cal experiments 52 resultsthe results for a list of ten classes and proto typical words are given in table 1words which are correct members of the classes sought arein roman type incorrect results are in italicsthe decision between correctness and in correctness was made on a strict basis for thesake of objectivity and to enable the repeata bility of the experiment words which are in wordnet were counted as correct results only if they are actual members of the wordnet class in questionthus brigandage is not regarded as a crime even though it is clearly an act ofwrongdoing orchestra is not regarded as a musical instrument because it is a collection of in struments rather than a single instrument etc the only exceptions we have made are the terms wynd and planetology whichare not in wordnet but are correct nonethe lessthese conditions are at least as stringent as those of previous experiments particularly those of riloff and shepherd who also give credit for words associated with but not belonging to a particular categorywith these conditions our algorithm re trieves only 36 incorrect terms out of a total of 200 giving an accuracy of 8253 analysisour results are an order of magnitude better than those reported by riloff and shepherd and roark and charniak whoreport average accuracies of 17 and 35 re spectivelysince the algorithms used are in many waysvery similar this improvement demands expla nationsome of the difference in accuracy can be at tributed to the corpora usedthe experiments in were performed on the 500000 word muc4 corpus and those of were performedusing muc4 and the wall street journal cor pus our model was built using the british national corpus on the other hand our modelwas built using only a partofspeech tagged cor pusthe high accuracy achieved thus questions the conclusion drawn by roark and charniak that parsing is invaluableour results clearly indicate that a large postagged corpusmay be much better for automatic lexical ac quisition than a small fullyparsed corpusthis claim could of course be tested by comparing techniques on the same corpusto evaluate the advantage of using pos infor mation we compared the graph model with asimilarity thesaurus generated using latent se mantic indexing a bagofwordsapproach on the samecorpusthe same number of nouns was re trieved for each class using the graph model and lsithe lsi similarity thesaurus obtained an accuracy of 31 much less than the graph models 82this is because lsi retrieves words which are related by context but are not in the same class for example the neighbours of piano found using lsi cosinesimilarity on the bnc corpus include words such as composermusic bach concerto and dance which are re lated but certainly not in the same semantic classthe incremental clustering algorithm of def inition works well at preventing infectionsclass seed word neighbours produced by graph model crimes murder crime theft arson importuning incest fraud larceny parricideburglary vandalism indecency violence offences abuse brig andage manslaughter pillage rape robbery assault lewdness places park path village lane 
viewfield church square road avenue garden castle wynd garage house chapel drive crescent home place cathedral street tools screwdriver chisel naville nail shoulder knife drill matchstick morgenthau gizmo hand knee elbow mallet penknife gallie leg arm sickle bolster hammer vehicle conveyance train tram car driver passengers coach lorry truck aeroplane coons plane trailer boat taxi pedestrians vans vehicles jeep bus buses helicopter musical instruments piano fortepiano orchestra marimba clarsach violin cizek viola oboeflute horn bassoon culbone mandolin clarinet equiluz contra bass saxophone guitar cello clothes shirt chapeaubras cardigan trousers breeches skirt jeans boots pair shoes blouse dress hat waistcoat jumper sweater coat cravat tie leggings diseases typhoid malaria aids polio cancer disease atelectasis illnesses cholerahiv deaths diphtheria infections hepatitis tuberculosis cirrho sis diptheria bronchitis pneumonia measles dysentery body parts stomach head hips thighs neck shoulders chest back eyes toes breasts knees feet face belly buttocks haws ankles waist legs academic subjectsphysics astrophysics philosophy humanities art religion science politics astronomy sociology chemistry history theology eco nomics literature maths anthropology culture mathematics geography planetology foodstuffs cake macaroons confectioneries cream rolls sandwiches croissant buns scones cheese biscuit drinks pastries tea danish butter lemonade bread chocolate coffee milk table 1 classes of similar words given by the graph modeland keeping clusters within one particular classthe notable exception is the tools class where the word hand appears to introduce infectionin conclusion it is clear that the graph modelcombined with the incremental clustering algo rithm of definition 1 performs better than mostprevious methods at the task of automatic lex ical acquisitionso far we have presented a graph model built upon noun cooccurrence which performs much better than previously reported methods at the task of automatic lexical acquisitionthis isan important task because assembling and tuning lexicons for specific nlp systems is increas ingly necessarywe now take a step furtherand present a simple method for not only as sembling words with similar meanings but for empirically recognising when a word has several meaningsrecognising and resolving ambiguity is an important task in semantic processingthe traditional word sense disambiguation problem addresses only the ambiguityresolution part of the problem compiling a suit able list of polysemous words and their possiblesenses is a task for which humans are tradition ally needed this makes traditional wsd an intensively supervised and costly processbreadth of cover age does not in itself solve this problem general lexical resources such as wordnet can provide too many senses many of which are rarely used in particular domains or corpora the graph model presented in this paper suggests a new method for recognising relevant polysemywe will need a small amount of termi nology from graph theory definition 2 let g be a graph where v is the set of vertices of g and e v v is the set of edges of g two nodes v1 vn are said to be connected if there exists a path v1 v2 vn1 vn such that e for 1 j n connectedness is an equivalence relationthe equivalence classes of the graph g un der this relation are called the components of g we are now in a position to define the senses of a word as represented by a particular graphdefinition 3 let g be a graph of words closely related to a 
seedword w and let g w be the subgraph which results from the removal of the seednode w the connected components of the subgraph g w are the senses of the word w with respect to the graph g as an illustrative example consider the localgraph generated for the word apple the re moval of the apple node results in three separate components which represent the different senses of apple fruit trees and computersdefinition 3 gives an extremely good model of the senses of apple found in the bncthe intuitive notion of ambiguity being pre sented is as followsan ambiguous word often connects otherwise unrelated areas of meaningdefinition 3 recognises the ambiguity of apple because this word is linked to both banana and novell words which otherwise have nothing to do with one anotherit is wellknown that any graph can be thought of as a collection of featurevectors forexample by taking the rowvectors in the adja cency matrix theremight therefore be fundamental similarities be tween our approach and methods which rely on similarities between featurevectorsextra motivation for this technique is pro vided by wordsense disambiguationthestandard method for this task is to use hand labelled data to train a learning algorithm which will often pick out particular words as bayesian classifiers which indicate one sense or the otherclearly the words in the different componentsin diagram 6 can potentially be used as classi fiers for just this purpose obviating the need fortimeconsuming human annotationthis tech nique will be assessed and evaluated in future experimentsdemonstrationan online version of the graph model and the in cremental clustering algorithm described in this paper are publicly available 1 for demonstrationpurposes and to allow users to observe the gen erality of our techniquesa sample output is included in figure 6acknowledgements the authors would like to thank the anonymous reviewers whose comments were a great help inmaking this paper more focussed any short comings remain entirely our own responsibilitythis research was supported in part by theresearch collaboration between the ntt communication science laboratories nippon tele graph and telephone corporation and cslistanford university and by ecnsf grant ist 199911438 for the muchmore project2 1httpinfomapstanfordedugraphs 2httpmuchmoredfkidefigure 1 automatically generated graph show ing the word apple and semantically related nouns
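To make Definition 3 concrete, below is a minimal sketch of sense discovery by deleting the seed node and taking connected components, assuming a networkx-style undirected co-occurrence graph. The node names and edges are hypothetical stand-ins for the noun list co-occurrences used in the paper, not data from the reported BNC experiments.

```python
# Sketch of Definition 3: the senses of a seed word are the connected
# components left after removing the seed node from its local graph.
import networkx as nx

def senses(graph: nx.Graph, seed: str):
    """Return the senses of `seed` w.r.t. `graph` (Definition 3)."""
    # local graph = seed plus its neighbours; copy() makes the view mutable
    local = graph.subgraph({seed, *graph.neighbors(seed)}).copy()
    local.remove_node(seed)
    return [set(component) for component in nx.connected_components(local)]

if __name__ == "__main__":
    g = nx.Graph()
    # hypothetical list co-occurrences around "apple"
    g.add_edges_from([
        ("apple", "banana"), ("apple", "pear"), ("banana", "pear"),
        ("apple", "novell"), ("apple", "microsoft"), ("novell", "microsoft"),
        ("apple", "oak"), ("apple", "willow"), ("oak", "willow"),
    ])
    for component in senses(g, "apple"):
        print(component)
```

On this toy graph the fruit, computer-company and tree neighbours of apple fall into three separate components, mirroring the apple example discussed above.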
C02-1114
a graph model for unsupervised lexical acquisition. this paper presents an unsupervised method for assembling semantic knowledge from a partofspeech tagged corpus using graph algorithms. the graph model is built by linking pairs of words which participate in particular syntactic relationships. we focus on the symmetric relationship between pairs of nouns which occur together in lists. an incremental clusterbuilding algorithm using this part of the graph achieves 82% accuracy at a lexical acquisition task evaluated against wordnet classes. the model naturally realises domain and corpus specific ambiguities as distinct components in the graph surrounding an ambiguous word. we try to find graph regions that are more connected internally than externally
concept discovery from text broadcoverage lexical resources such as wordnet are extremely useful however they often include many rare senses while missing domainspecific senses we present a clustering algorithm called cbc that automatically discovers concepts from text it initially discovers a set of tight clusters called committees that are well scattered in the similarity space the centroid of the members of a committee is used as the feature vector of the cluster we proceed by assigning elements to their most similar cluster evaluating cluster quality has always been a difficult task we present a new evaluation methodology that is based on the editing distance between output clusters and classes extracted from wordnet our experiments show that cbc outperforms several wellknown clustering algorithms in cluster quality broadcoverage lexical resources such as wordnet are extremely useful in applications such as word sense disambiguation and question answering however they often include many rare senses while missing domainspecific sensesfor example in wordnet the words dog computer and company all have a sense that is a hyponym of personsuch rare senses make it difficult for a coreference resolution system to use wordnet to enforce the constraint that personal pronouns must refer to a personon the other hand wordnet misses the userinterface object sense of the word dialog one way to deal with these problems is to use a clustering algorithm to automatically induce semantic classes many clustering algorithms represent a cluster by the centroid of all of its members or by a representative element when averaging over all elements in a cluster the centroid of a cluster may be unduly influenced by elements that only marginally belong to the cluster or by elements that also belong to other clustersfor example when clustering words we can use the contexts of the words as features and group together the words that tend to appear in similar contextsfor instance yous state names can be clustered this way because they tend to appear in the following contexts ___ appellate court campaign in ___ ___ capital governor of ___ ___ driver license illegal in ___ ___ outlaws sthprimary in ___ ___ sales tax senator for ___ if we create a centroid of all the state names the centroid will also contain features such as ___ airport archbishop of ___ ___ business district fly to ___ ___ mayor mayor of ___ ___ subway outskirts of ___ because some of the state names are also names of citiesusing a single representative from a cluster may be problematic too because each individual element has its own idiosyncrasies that may not be shared by other members of the clusterin this paper we propose a clustering algo rithm cbc in which the centroid of a cluster is constructed by averaging the feature vectors of a subset of the cluster membersthe subset is viewed as a committee that determines which other elements belong to the clusterby carefully choosing committee members the features of the centroid tend to be the more typical features of the target classfor example our system chose the following committee members to compute the centroid of the state cluster illinois michigan minnesota iowa wisconsin indiana nebraska and vermontas a result the centroid contains only features like those in list a evaluating clustering results is a very difficult taskwe introduce a new evaluation methodol ogy that is based on the editing distance between output clusters and classes extracted from wordnet clustering algorithms are generally 
categorized as hierarchical and partitionalin hierarchical agglomerative algorithms clusters are constructed by iteratively merging the most similar clustersthese algorithms differ in how they compute cluster similarityin singlelink clustering the similarity between two clusters is the similarity between their most similar members while completelink clustering uses the similarity between their least similar membersaveragelink clustering computes this similarity as the average similarity between all pairs of elements across clustersthe complexity of these algorithms is o where n is the number of elements to be clustered chameleon is a hierarchical algorithm that employs dynamic modeling to improve clustering quality when merging two clusters one might consider the sum of the similarities between pairs of elements across the clusters a drawback of this approach is that the existence of a single pair of very similar elements might unduly because the merger of two clustersan alternative considers the number of pairs of elements whose similarity exceeds a certain threshold however this may cause undesirable mergers when there are a large number of pairs whose similarities barely exceed the thresholdchameleon clustering combines the two approacheskmeans clustering is often used on large data sets since its complexity is linear in n the number of elements to be clusteredkmeans is a family of partitional clustering algorithms that iteratively assigns each element to one of k clusters according to the centroid closest to it and recomputes the centroid of each cluster as the average of the clusters elementskmeans has complexity o and is efficient for many clustering tasksbecause the initial centroids are randomly selected the resulting clusters vary in qualitysome sets of initial centroids lead to poor convergence rates or poor cluster qualitybisecting kmeans a variation of kmeans begins with a set containing one large cluster consisting of every element and iteratively picks the largest cluster in the set splits it into two clusters and replaces it by the split clusterssplitting a cluster consists of applying the basic kmeans algorithm times with k2 and keeping the split that has the highest average element centroid similarityhybrid clustering algorithms combine hierarchical and partitional algorithms in an attempt to have the high quality of hierarchical algorithms with the efficiency of partitional algorithmsbuckshot addresses the problem of randomly selecting initial centroids in kmeans by combining it with averagelink clusteringbuckshot first applies averagelink to a random sample of n elements to generate k clustersit then uses the centroids of the clusters as the initial k centroids of kmeans clusteringthe sample size counterbalances the quadratic running time of averagelink to make buckshot efficient othe parameters k and t are usually considered to be small numbersfollowing we represent each word by a feature vectoreach feature corresponds to a context in which the word occursfor example threaten with __is a contextif the word handgun occurred in this context the context is a feature of handgunthe value of the feature is the pointwise mutual information between the feature and the wordlet c be a context and fc be the frequency count of a word w occurring in context c the pointwise mutual information between c and w is defined as n jf n wf n wf cw j c i i c mi where n i j i jf is the total frequency counts of all words and their contextsa well known problem with mutual information is that it 
is biased towards infrequent wordsfeatureswe therefore multiplied miwc with a discounting factor 11 i j ci i j ci c c jfwfmin jfwfmin wf wf we compute the similarity between two words wi and wj using the cosine coefficient of their mutual information vectors c cw c cw c cwcw ji ji ji mimi mimi wwsim 22cbc consists of three phasesin phase i we compute each elements topk similar elementsin our experiments we used k 20in phase ii we construct a collection of tight clusters where the elements of each cluster form a committeethe algorithm tries to form as many committees as possible on the condition that each newly formed committee is not very similar to any existing committeeif the condition is violated the committee is simply discardedin the final phase of the algorithm each element is assigned to its most similar cluster41phase i find topsimilar elementscomputing the complete similarity matrix between pairs of elements is obviously quadratichowever one can dramatically reduce the running time by taking advantage of the fact that the feature vector is sparseby indexing the features one can retrieve the set of elements that have a given featureto compute the top similar words of a word w we first sort ws features according to their mutual information with w we only compute pairwise similarities between w and the words that share a high mutual information feature with w 42phase ii find committeesthe second phase of the clustering algorithm recursively finds tight clusters scattered in the similarity spacein each recursive step the algorithm finds a set of tight clusters called committees and identifies residue elements that are not covered by any committeewe say a committee covers an element if the elements similarity to the centroid of the committee exceeds some high similarity thresholdthe algorithm then recursively attempts to find more committees among the residue elementsthe output of the algorithm is the union of all committees found in each recursive stepthe details of phase ii are presented in figure 1in step 1 the score reflects a preference for bigger and tighter clustersstep 2 gives preference to higher quality clusters in step 3 where a cluster is only kept if its similarity to all previously kept clusters is below a fixed thresholdin our experiments we set 1 035input a list of elements e to be clustered a similarity database s from phase i thresh olds 1 and 2step 1 for each element e e cluster the top similar elements of e from s using averagelink clusteringfor each cluster discovered c compute the following score c avgsim where c is the number of elements in c and avgsim is the average pairwise simi larity between elements in c store the highestscoring cluster in a list l step 2 sort the clusters in l in descending order of their scoresstep 3 let c be a list of committees initially emptyfor each cluster c l in sorted order compute the centroid of c by averaging the frequency vectors of its elements and computing the mutual information vector of the centroid in the same way as we did for individual elementsif cs similarity to the centroid of each committee previously added to c is be low a threshold 1 add c to c step 4 if c is empty we are done and return c step 5 for each element e e if es similarity to every committee in c is below threshold 2 add e to a list of resi dues r step 6 if r is empty we are done and return c otherwise return the union of c and the output of a recursive call to phase ii us ing the same input except replacing e with r output a list of committeesfigure 
1phase ii of cbcstep 4 terminates the recursion if no committee is found in the previous stepthe residue elements are identified in step 5 and if no residues are found the algorithm terminates otherwise we recursively apply the algorithm to the residue elementseach committee that is discovered in this phase defines one of the final output clusters of the algorithm43phase iii assign elements to clustersin phase iii every element is assigned to the cluster containing the committee to which it is most similarthis phase resembles kmeans in that every element is assigned to its closest centroidunlike kmeans the number of clusters is not fixed and the centroids do not change many cluster evaluation schemes have been proposedthey generally fall under two categories comparing cluster outputs with manually generated answer keys or embedding the clusters in an application and using its evaluation measurean example of the first approach considers the average entropy of the clusters which measures the purity of the clusters however maximum purity is trivially achieved when each element forms its own clusteran example of the second approach evaluates the clusters by using them to smooth probability distributions like the entropy scheme we assume that there is an answer key that defines how the elements are supposed to be clusteredlet c be a set of clusters and a be the answer keywe define the editing distance dist as the number of operations required to make c consistent with a we say that c is consistent with a if there is a one to one mapping between clusters in c and the classes in a such that for each cluster c in c all elements of c belong to the same class in a we allow two editing operations merge two clusters and move an element from one cluster to anotherlet b be the baseline clustering where each element is its own clusterwe define the quality of a set of clusters c as follows abdist acdist 1suppose the goal is to construct a clustering consistent with the answer keythis measure can be interpreted as the percentage of operations saved by starting from c versus starting from the baselinewe aim to construct a clustering consistent with a as opposed to a clustering identical to a because some senses in a may not exist in the corpus used to generate c in our experiments we extract answer classes from wordnetthe word dog belongs to both the person and animal classeshowever in the newspaper corpus the person sense of dog is at best extremely rarethere is no reason to expect a clustering algorithm to discover this sense of dogthe baseline distance dist is exactly the number of elements to be clusteredwe made the assumption that each element belongs to exactly one clusterthe transforma tion procedure is as follows 1suppose there are m classes in the answerkeywe start with a list of m empty sets each of which is labeled with a class in the answer key2for each cluster merge it with the setwhose class has the largest number of elements in the cluster 3if an element is in a set whose class is notthe same as one of the elements classes move the element to a set where it be longsdist is the number of operations performed using the above transformation rules on c a b e c d e a c d b e b a c d e a b c d e a b c d e figure 2an example of applying the transformation rules to three clustersa the classes in the answer key b the clusters to be transformed c the sets used to reconstruct the classes d the sets after three merge operations e the sets after one move operation figure 2 shows an examplein d the cluster 
containing e could have been merged with either set the total number of operations is 4we generated clusters from a news corpus using cbc and compared them with classes extracted from wordnet 61test datato extract classes from wordnet we first estimate the probability of a random word belonging to a subhierarchy we use the frequency counts of synsets in the semcor corpus to estimate the probability of a subhierarchysince semcor is a fairly small corpus the frequency counts of the synsets in the lower part of the wordnet hierarchy are very sparsewe smooth the probabilities by assuming that all siblings are equally likely given the parenta class is then defined as the maximal subhierarchy with probability less than a threshold we used minipar 1 a broad coverage english parser to parse about 1gb of newspaper text from the trec collection at a speed of about 500 wordssecond on a piii750 with 512mb memorywe collected the frequency counts of the grammatical relationships output by minipar and used them to compute the pointwise mutual information values from section 3the test set is constructed by intersecting the words in wordnet with the nouns in the corpus whose total mutual information with all of its contexts exceeds a threshold m since wordnet has a low coverage of proper names we removed all capitalized nounswe constructed two test sets s13403 consisting of 13403 words and s3566 consisting of 3566 words we then removed from the answer classes the words that did not occur in the test setstable 1 summa rizes the test setsthe sizes of the wordnet classes vary a lotfor s13403 there are 99 classes that contain three words or less and the largest class contains 3246 wordsfor s3566 78 classes have three or less words and the largest class contains 1181 words1available at wwwcsualbertacalindekminiparhtm62cluster evaluationwe clustered the test sets using cbc and the clustering algorithms of section 2 and applied the evaluation methodology from the previous sectiontable 2 shows the resultsthe columns are our editing distance based evaluation measuretest set s3566 has a higher score for all algorithms because it has a higher number of average features per word than s13403for the kmeans and buckshot algorithms we set the number of clusters to 250 and the maximum number of iterations to 8we used a sample size of 2000 for buckshotfor the bisecting kmeans algorithm we applied the basic kmeans algorithm twice with a maximum of 8 iterations per splitour implementation of chameleon was unable to complete clustering s13403 in reasonable time due to its time complexitytable 2 shows that kmeans buckshot and averagelink have very similar performancecbc outperforms all other algorithms on both data sets63manual inspectionlet c be a cluster and wn be the wordnet class that has the largest intersection with c the precision of c is defined as table 1a description of the test sets in our experimentsdata set total words m average of features total classes s13403 13403 250 7408 202 s3566 3566 3500 22183 150 data set total words m averagefeatures per word 13403 250 7408 3566 3500 22183 table 2cluster quality of several clustering algorithms on the test setsalgorithm s13403 s3566 cbc 6095 6582 kmeans 5670 6248 buckshot 5626 6315 bisecting kmeans 4344 6110 chameleon na 6082 averagelink 5626 6262 completelink 4980 6029 singlelink 2000 3174 c cwnc cprecision cbc discovered 943 clusterswe sorted them according to their precisiontable 3 shows five of the clusters evenly distributed according to their precision ranking along 
with their top15 features with highest mutualinformationthe words in the clusters are listed in descending order of their similarity to the cluster centroidfor each cluster c we also include wnthe underlined words are in wnthe first cluster is clearly a cluster of firearms and the second one is of pestsin wordnet the word pest is curiously only under the person hierarchythe words stopwatch and houseplant do not belong to the clusters but they have low similarity to their cluster centroidthe third cluster represents some kind of controlin wordnet the legal power sense of jurisdiction is not a hyponym of social control as are supervision oversight and governancethe fourth cluster is about mixturesthe words blend and mix as the event of mixing are present in wordnet but not as the result of mixingthe last cluster is about consumershere is the consumer class in wordnet 15 addict alcoholic big spender buyer client concertgoer consumer customer cutter diner drinker drug addict drug user drunk eater feeder fungi head heroin addict home buyer junkie junky lush nonsmoker patron policy holder purchaser reader regular shopper smoker spender subscriber sucker taker user vegetarian wearer in our cluster only the word client belongs to wordnets consumer classthe cluster is ranked very low because wordnet failed to consider words like patient tenant and renter as consumerstable 3 shows that even the lowest ranking cbc clusters are fairly coherentthe features associated with each cluster can be used to classify previously unseen words into one or more existing clusterstable 4 shows the clusters containing the word cell that are discovered by various clustering algorithms from s13403the underlined words represent the words that belong to the cell class in wordnetthe cbc cluster corresponds almost exactly to wordnets cell classkmeans and buckshot produced fairly coherent clustersthe cluster constructed by bisecting kmeans is obviously of inferior qualitythis is consistent with the fact that bisecting kmeans has a much lower score on s13403 compared to cbc k means and buckshottable 3five of the 943 clusters discovered by cbc from s13403 along with their features with top15 highest mutual information and the wordnet classes that have the largest intersection with each clusterrank members top15 features wn 1 handgun revolver shotgun pistol rifle machine gun sawedoff shotgun submachine gun gun automatic pistol automatic rifle firearm carbine ammunition magnum cartridge automatic stopwatch __ blast barrel of __ brandish __ fire __ point __ pull out __ __ discharge __ fire __ go off arm with __ fire with __ kill with __ open fire with __ shoot with __ threaten with __ artifact artifact 236 whitefly pest aphid fruit fly termite mosquito cockroach flea beetle killer bee maggot predator mite houseplant cricket __ control __ infestation __ larvae __ population infestation of __ specie of __ swarm of __ attract __ breed __ eat __ eradicate __ feed on __ get rid of __ repel __ ward off __ animal animate being beast brute creature fauna 471 supervision discipline oversight control governance decision making jurisdiction breakdown in __ lack of __ loss of __ assume __ exercise __ exert __ maintain __ retain __ seize __ tighten __ bring under __ operate under __ place under __ put under __ remain under __ act human action human activity 706 blend mix mixture combination juxtaposition combine amalgam sprinkle synthesis hybrid melange dip in __ marinate in __ pour in __ stir in __ use in __ add to __ pour __ stir __ curious 
__ eclectic __ ethnic __ odd __ potent __ unique __ unusual __ group grouping 941 employee client patient applicant tenant individual participant renter volunteer recipient caller internee enrollee giver benefit for __ care for __ housing for __ benefit to __ service to __ filed by __ paid by __ use by __ provide for __ require for give to __ offer to __ provide to __ disgruntled __ indigent __ workerwe presented a clustering algorithm cbc for automatically discovering concepts from textit can handle a large number of elements a large number of output clusters and a large sparse feature spaceit discovers clusters using well scattered tight clusters called committeesin our experiments we showed that cbc outperforms several well known hierarchical partitional and hybrid clustering algorithms in cluster qualityfor example in one experiment cbc outperforms kmeans by 425by comparing the cbc clusters with wordnet classes we not only find errors in cbc but also oversights in wordnetevaluating cluster quality has always been a difficult taskwe presented a new evaluation methodology that is based on the editing distance between output clusters and classes extracted from wordnet acknowledgements the authors wish to thank the reviewers for their helpful commentsthis research was partly supported by natural sciences and engineering research council of canada grant ogp121338 and scholarship pgsb207797
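The similarity formulas in the extracted text above are garbled; the sketch below spells out one standard reading — pointwise mutual information weighted by a discounting factor, compared with the cosine coefficient — together with the Phase III step of assigning each element to its most similar committee centroid. The counts and committee structures are assumptions supplied by the caller, and for brevity the centroid here averages the PMI vectors directly, whereas the paper averages frequency vectors and then recomputes mutual information.

```python
# Sketch of the CBC feature representation (discounted PMI), the cosine
# similarity measure, and the Phase III assignment step.
import math
from collections import defaultdict

def pmi_vectors(counts):
    """counts: {(word, context): frequency} -> {word: {context: discounted PMI}}."""
    n = sum(counts.values())
    word_tot, ctx_tot = defaultdict(float), defaultdict(float)
    for (w, c), f in counts.items():
        word_tot[w] += f
        ctx_tot[c] += f
    vecs = defaultdict(dict)
    for (w, c), f in counts.items():
        pmi = math.log((f / n) / ((word_tot[w] / n) * (ctx_tot[c] / n)))
        # discounting factor that penalises infrequent words/contexts
        low = min(word_tot[w], ctx_tot[c])
        vecs[w][c] = pmi * (f / (f + 1)) * (low / (low + 1))
    return vecs

def cosine(u, v):
    """Cosine coefficient of two sparse vectors held as dicts."""
    num = sum(u[c] * v[c] for c in set(u) & set(v))
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def centroid(members, vecs):
    """Average the committee members' vectors (simplified centroid)."""
    cent = defaultdict(float)
    for m in members:
        for c, val in vecs[m].items():
            cent[c] += val / len(members)
    return cent

def assign(elements, committees, vecs):
    """Phase III: each element goes to the committee with the most similar centroid."""
    cents = {name: centroid(members, vecs) for name, members in committees.items()}
    return {e: max(cents, key=lambda name: cosine(vecs[e], cents[name]))
            for e in elements}
```

Keeping the vectors as sparse dictionaries mirrors the observation made in Phase I above that the feature vectors are sparse, which is what makes the top-k similarity computation tractable.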
C02-1144
concept discovery from text. broadcoverage lexical resources such as wordnet are extremely useful. however they often include many rare senses while missing domainspecific senses. we present a clustering algorithm called cbc that automatically discovers concepts from text. it initially discovers a set of tight clusters called committees that are well scattered in the similarity space. the centroid of the members of a committee is used as the feature vector of the cluster. we proceed by assigning elements to their most similar cluster. evaluating cluster quality has always been a difficult task. we present a new evaluation methodology that is based on the editing distance between output clusters and classes extracted from wordnet. our experiments show that cbc outperforms several wellknown clustering algorithms in cluster quality. mutual information is an information-theoretic measure and has been used in our method for clustering words
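A small sketch of the editing-distance evaluation restated in this summary, following the transformation procedure described in the paper: each output cluster is merged into the answer-key class it overlaps most (one merge operation per cluster), misplaced elements are then moved, and quality is one minus the ratio of these operations to the baseline of singleton clusters. The toy answer key and clustering below are hypothetical.

```python
# Sketch of the editing-distance cluster evaluation described above.
def editing_distance(clusters, answer):
    """clusters: list of sets of elements; answer: dict element -> class label."""
    ops, placed = 0, {}
    for cluster in clusters:
        # merge the cluster with the set whose class covers most of its elements
        best = max({answer[e] for e in cluster},
                   key=lambda cls: sum(answer[e] == cls for e in cluster))
        ops += 1                      # one merge operation per cluster
        for e in cluster:
            placed[e] = best
    ops += sum(placed[e] != answer[e] for e in placed)   # move operations
    return ops

def quality(clusters, answer):
    baseline = len(answer)            # dist(B, A) = number of elements
    return 1.0 - editing_distance(clusters, answer) / baseline

# toy example: two answer classes and an imperfect clustering
key = {"a": 1, "c": 1, "d": 1, "b": 2, "e": 2}
out = [{"a", "b"}, {"c", "d", "e"}]
print(editing_distance(out, key), quality(out, key))   # 4 operations, quality 0.2
```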
building a largescale annotated chinese corpus in this paper we address issues related to building a largescale chinese corpus we try to answer four questions how to speed up annotation how to maintain high annotation quality for what purposes is the corpus applicable and finally what future work we anticipate the penn chinese treebank is an ongoing project with its objective being to create a segmented chinese corpus annotated with pos tags and syntactic bracketsthe first installment of the project consists of xinhua newswire between the years 1994 and 1998 totaling 100000 words fully segmented postagged and syntactically bracketed and it has been released to the public via the penn linguistic data consortium the preliminary results of this phase of the project have been reported in xia et al currently the second installment of the project the 400000word ctbii is being developed and is expected to be completed early in the year 2003ctbii will follow the standards set up in the segmentation pos tagging and bracketing guidelines and it will use articles from peoples daily hong kong newswire and material translated into chinese from other languages in addition to the xinhua newswire used in ctbi in an effort to diversify the sourcesthe availability of ctbi changed our approach to ctbii considerablydue to the existence of ctbi we were able to train new automatic chinese language processing tools which crucially use annotated corpora as training materialthese tools are then used for preprocessing in the development of the ctbiiwe also developed tools to control the quality of the corpusin this paper we will address three issues in the development of the chinese treebank annotation speed annotation accuracy and usability of the corpusspecifically we attempt to answer four questions how do we speed up the annotation process how do we maintain high quality ie annotation accuracy and interannotator consistency during the annotation process and for what purposes is the corpus applicable and what are our future plansalthough we will touch upon linguistic problems that are specific to chinese we believe these issues are general enough for the development of any single language corpus1 annotation speedthere are three main factors that affect the annotation speed annotatorsbackground guideline design and more importantly the availability of preprocessing toolswe will discuss how each of these three factors affects annotation speed11 annotator backgroundeven with the best sets of guidelines it is important that annotators have received considerable training in linguistics particularly in syntaxin both the segmentationpos tagging phase and the syntactic bracketing phase understanding the structure of the sentences is essential for correct annotation with reasonable speedfor example
C02-1145
building a largescale annotated chinese corpus. in this paper we address issues related to building a largescale chinese corpus. we try to answer four questions: how to speed up annotation, how to maintain high annotation quality, for what purposes is the corpus applicable, and finally what future work we anticipate
learning question classifiers in order to respond correctly to a free form factual question given a large collection of texts one needs to un derstand the question to a level that allows determiningsome of the constraints the question imposes on a pos sible answer these constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer this paper presents a machine learning approach toquestion classification we learn a hierarchical classifier that is guided by a layered semantic hierarchy of answer types and eventually classifies questions into finegrained classes we show accurate results on a large col lection of freeform questions used in trec 10 opendomain question answering and storycomprehension have become important directions in natural language pro cessingquestion answering is a retrieval task morechallenging than common search engine tasks be because its purpose is to find an accurate and conciseanswer to a question rather than a relevant docu mentthe difficulty is more acute in tasks such as story comprehension in which the target text is less likely to overlap with the text in the questionsfor this reason advanced natural language techniques rather than simple key term extraction are neededone of the important stages in this process is analyz ing the question to a degree that allows determining the typeof the sought after answerin the treccompetition participants are requested to build a system which given a set of en glish questions can automatically extract answers of no more than 50 bytes from a5gigabyte document libraryparticipants have re research supported by nsf grants iis9801638 and itr iis 0085836 and an onr muri awardalized that locating an answer accurately hinges on first filtering out a wide range of candidates based on some categorization of answer typesthis work develops a machine learning approach to question classification our goal is to categorize questions into different semantic classes that impose constraints on potential answers so that they can be utilized in later stages of the question answeringprocessfor example when considering the question q what canadian city has the largest popula tion the hope is to classify this question as havinganswer type city implying that only candidate an swers that are cities need considerationbased on the snow learning architecture we develop a hierarchical classifier that is guided by a lay ered semantic hierarchy of answer types and is able to classify questions into finegrained classeswesuggest that it is useful to consider this classifica tion task as a multilabel classification and find that it is possible to achieve good classification results despite the fact that the number of dif ferent labels used is fairly large 50we observe thatlocal features are not sufficient to support this accu racy and that inducing semantic features is crucial for good performancethe paper is organized as follows sec2 presents the question classification problem sec3 discusses the learning issues involved in qc and presents ourlearning approach sec4 describes our experimen tal studywe define question classification here to be the task that given a question maps it to one of k classes which provide a semantic constraint on the soughtafter answer1the intension is that this 1we do not address questions like do you have a light which calls for an action but rather only factual whquestionsclassification potentially with other constraints 
on the answer will be used by a downstream process which selects a correct answer from among several candidatesa question classification module in a question an swering system has two main requirementsfirst it provides constraints on the answer types that allow further processing to precisely locate and verify theanswersecond it provides information that downstream processes may use in determining answer se lection strategies that may be answer type specificrather than uniformfor example given the ques tion who was the first woman killed in the vietnam warwe do not want to test every noun phrase in a document to see whether it provides an answerat the very least we would like to know that the target of this question is a person thereby reducingthe space of possible answers significantlythe fol lowing examples taken from the trec 10 question collection exhibit several aspects of this pointq what is a prismidentifying that the target of this question is a definition strategies that are specific fordefinitions may be use fulsimilarly in q why is the sun yellowidentifying that this question asks for a reason may lead to using a specific strategy for reasonsthe above examples indicate that given that dif ferent answer types may be searched using different strategies a good classification module may helpthe question answering taskmoreover determin ing the specific semantic type of the answer couldalso be beneficial in locating the answer and veri fying itfor example in the next two questions knowing that the targets are a city or country willbe more useful than just knowing that they are loca tionsq what canadian city has the largest populationq which country gave new york the statue of libertyhowever confined by the huge amount of man ual work needed for constructing a classifier for a complicated taxonomy of questions most questionanswering systems can only perform a coarse clas sification for no more than 20 classesas a result existing approaches as in have adopted a small set of simple answer entitytypes which consisted of the classes person location organization date quantity duration lin ear measurethe rules used in the classification were of the following forms if a query starts with who or whom type personif a query starts with where type locationif a query contains which or what the head noun phrase determines the class as for what x questionswhile the rules used have large coverage and rea sonable accuracy they are not sufficient to supportfinegrained classificationone difficulty in supporting finegrained classification is the need to ex tract from the questions finer features that require syntactic and semantic analysis of questions and possibly many of themthe approach we adoptedis a multilevel learning approach some of our fea tures rely on finer analysis of the questions that are outcomes of learned classifiers the qc module then applies learning with these as input features21 classification standardearlier works have suggested various standards of classifying questionswendy lehnerts conceptual taxonomy for example proposesabout 13 conceptual classes including causal antecedent goal orientation enablement causal consequent verification disjunctive and so onhow ever in the context of factual questions that are of interest to us here conceptual categories do notseem to be helpful instead our goal is to se mantically classify questions as in earlier work on trec the key difference though is that we attempt todo that with a significantly finer taxonomy of answer types the hope is that 
with the semantic an swer types as input one can easily locate answercandidates given a reasonably accurate named en tity recognizer for documents22 question hierarchywe define a twolayered taxonomy which repre sents a natural semantic classification for typicalanswers in the trec taskthe hierarchy con tains 6 coarse classes and 50 fine classes table 1 showsthe distribution of these classes in the 500 ques tions of trec 10each coarse class contains anonoverlapping set of fine classesthe motiva tion behind adding a level of coarse classes is that of compatibility with previous works definitions andcomprehensibilitywe also hoped that a hierarchi cal classifier would have a performance advantage over a multiclass classifier this point however is not fully supported by our experimentsclass class abbrev9 description 7 abb 1 manner 2 exp 8 reason 6 entity 94 human 65 animal 16 group 6 body 2 individual 55 color 10 title 1 creative 0 description 3 currency 6 location 81 dismed2 city 18 event 2 country 3 food 4 mountain 3 instrument 1 other 50 lang 2 state 7 letter 0 numeric 113 other 12 code 0 plant 5 count 9 product 4 date 47 religion 0 distance 16 sport 1 money 3 substance 15 order 0 symbol 0 other 12 technique 1 period 8 term 7 percent 3 vehicle 4 speed 6 word 0 temp 5 description 138 size 0 definition 123 weight 4 table 1 the distribution of 500 trec 10 questions over the question hierarchycoarse classes are followed by their fine class refinements23 the ambiguity problemone difficulty in the question classification task is that there is no completely clear boundary between classestherefore the classification of a specific question can be quite ambiguousconsider 1what is bipolar disorder2what do bats eatquestion 1 could belong to definition or dis ease medicine question 2 could belong to foodplant or animal and question 3 could be a numeric value or a definitionit is hard to catego rize those questions into one single class and it islikely that mistakes will be introduced in the down stream process if we do soto avoid this problemwe allow our classifiers to assign multiple class la bels for a single questionthis method is better than only allowing one label because we can apply all the classes in the later precessing steps without any loss3 learning a question classifierusing machine learning methods for question clas sification is advantageous over manual methods forseveral reasonsthe construction of a manual clas sifier for questions is a tedious task that requiresthe analysis of a large number of questionsmore over mapping questions into fine classes requiresthe use of lexical items and there fore an explicit representation of the mapping may be very largeon the other hand in our learning approach one can define only a small number of typesof features which are then expanded in adatadriven way to a potentially large number of features relying on the abil ity of the learning process to handle itit is hard to imagine writing explicitly a classifier that depends on thousands or more featuresfinally a learnedclassifier is more flexible to reconstruct than a man ual one because it can be trained on a new taxonomy in a very short timeone way to exhibit the difficulty in manually con structing a classifier is to consider reformulations of a question what tourist attractions are there in reimswhat are the names of the tourist attractions in reimswhat do most tourists visit in reimswhat attracts tourists to reimswhat is worth seeing in reimsall these reformulations target the same answertype 
locationhowever different words and syntactic structures make it difficult for a manual clas sifier based on a small set of rules to generalize well and map all these to the same answer typegood learning methods with appropriate features on the other hand may not suffer from the fact that the number of potential features is so large and would gen eralize and classify these cases correctly31 a hierarchical classifierquestion classification is a multiclass classificationa question can be mapped to one of 50 pos sible classes our learned classifier is based on the snow learning architecture 2 where in order to allow the classifier to output more than one class label wemap the classifiers output activation into a condi tional probability of the class labels and threshold itthe question classifier makes use of a sequence of two simple classifiers each utilizing the winnow algorithm within snowthe first classifies questions into coarse classes and the second into fineclasses a feature extractor automatically extracts the same features for each clas sifierthe second classifier depends on the first in2freely available at httpl2rcsuiuceducogcompcc softwarehtml abbr entitydeschumanlocnum abbr entity entity human entity locnum desc coarse classifier fine classifier abbexp ind plant date abb animal food plantfoodplant indgroupfood plant city statedefinition reasonmap coarse classes to fine classes c0 c1 c2 c3 abbdef animalfood all possible subsets of c0 wih size 5 all possible subsets of c2 with size 5 figure 1 the hierarchical classifier that its candidate labels are generated by expanding the set of retained coarse classes from the first into a set of fine classes this set is then treated as the confusion set for the second classifierfigure 1 shows the basic structure of the hierar chical classifierduring either the training or the testing stage a question is processed along one path topdown to get classifiedthe initial confusion set of any question is c 0 fc 1 c 2 c n g the set of all the coarse classesthe coarse classifier determines a set of preferred labels c 1 coarse classifier c 1 c 0 so that jc 1 j 5then each coarse class label in c 1 is expanded to a fixed set of fine classesdetermined by the class hierarchythat is sup pose the coarse class c i is mapped into the set c i ff i1 f i2 f i am g of fine classes then c 2 s c i 2c 1 c i the fine classifier determines a set ofpreferred labels c 3 fine classifier so that c 3 c 2 and jc 3 j 5c 1 and c 3are the ul timate outputs from the whole classifier which are used in our evaluation32 feature spaceeach question is analyzed and represented as a listof features to be treated as a training or test exam ple for learningwe use several types of features and investigate below their contribution to the qc accuracythe primitive feature types extracted for eachquestion include words pos tags chunks named entitieshead chunks and semantically related words over these primitive features we use a set of operators to composemore complex features such as conjunctive and relational features as in a simple script that describes the typesof features used is written and the features themselves are ex tracted in a data driven wayonly activefeatures are listed in our representation so that despite the large number of potential features the size of each example is smallamong the 6 primitive feature types pos tags chunks and head chunks are syntactic features while named entities and semantically related words are semantic featurespos tags are extracted using a snowbased 
pos tagger chunks are extracted using a previously learned classifier the named entity classifier isalso learned and makes use of the same technol ogy developed for the chunker the related wordsensors were constructed semi automaticallymost question classes have a semantically related word listfeatures will be extracted for this class ifa word in a question belongs to the listfor exam ple when away which belongs to a list of words semantically related to the class distance occurs inthe sentence the sensor rel will be ac tivewe note that the features from these sensors are different from those achieved using named entitysince they support more general semantic catego rizationand include nouns verbs adjectives rather than just named entitiesfor the sake of the experimental comparison wedefine six feature sets each of which is an incre mental combination of the primitive feature typesthat is feature set 1 contains word features feature set 2 contains featurescomposed of words and pos tags and so on the fi nal feature set feature set 6 contains all the feature types and is the only one that containsthe related words liststhe classifiers will be experimented with different feature sets to test the influ ence of different featuresoverall there are about 200 000 features in the feature space of relworddue to the generation of complex features over sim ple feature typesfor each question up to a couple of hundreds of them are active33 decision modelfor both the coarse and fine classifiers the same decision model is used to choose class labels for a questiongiven a confusion set and a question snow outputs a density over the classes derived from the activation of each classafter ranking the classes in the decreasing order of density values we have the possible class labels c fc 1 c 2 c n g with their densities p fp 1 p 2 p n g as dis cussed earlier for each question we output the first k classes c 1 c 2 c kwhere k satis fies k min 5 t is a threshold value in 01if we treat p i as the probability that a question belongs to class i the decision model yields a reasonable probabilistic interpretationwe use t 095 in the experimentswe designed two experiments to test the accuracy ofour classifier on trec questionsthe first experi ment evaluates the contribution of different featuretypes to the quality of the classificationour hi erarchical classifier is trained and tested using oneof the six feature sets defined in sect32 in the second experiment we evaluate the advantage we get from the hierarchical clas sifierwe construct a multiclass classifier only for fine classesthis flat classifier takes all fine classes as its initial confusion set and classifies a questioninto fine classes directlyits parameters and deci sion model are the same as those of the hierarchicaloneby comparing this flat classifier with our hi erarchical classifier in classifying fine classes we hope to know whether the hierarchical classifier hasany advantage in performance in addition to the ad vantages it might have in downstream processing and comprehensibility41 datadata are collected from four sources 4500 english questions published by usc about 500 manually constructed questions for a few rare classes 894 trec 8 and trec 9 questions and also 500 questions from trec 10 which serves as our test set3these questions were manually labeled accord ing to our question hierarchyalthough we allow multiple labels for one question in our classifiers in our labeling for simplicity we assigned exactly 3the annotated data and experimental 
results are available from httpl2rcsuiuceducogcompone label to each questionour annotators were requested to choose the most suitable class accord ing to their own understandingthis methodology might cause slight problems in training when the labels are ambiguous since some questions are not treated as positive examples for possible classes as they should bein training we divide the 5500 questions from the first three sources randomly into 5 training sets of 1000 2000 3000 4000 and 5500 questionsall 500 trec 10 questions are used as the test set42 evaluationin this paper we count the number of correctly clas sified questions by two different precision standards p 1 and p 5 suppose k ilabels are output for the i th question and are ranked in a decreasing order according to their density valueswe define i ij f 1 if the correct label of the ith question is output in rank j 0 otherwise then p 1 p m i1 i i1 m and p 5 p m i1 p k i j1 i ij m where m is the total number of test examplesp 1corresponds to the usual defini tion of precision which allows only one label for each question while p 5 allows multiple labelsp 5reflects the accuracy of our classifier with respect to later stages in a question answering sys temas the results below show although questionclasses are still ambiguous few mistakes are intro duced by our classifier in this step43 experimental resultsperformance of the hierarchical classifier table 2 shows the p 5precision of the hierarchi cal classifier when trained on 5500 examples andtested on the 500 trec 10 questionsthe re sults are quite encouraging question classification is shown to be solved effectively using machine learning techniquesit also shows the contribution of the feature sets we definedoverall we get a9880 precision for coarse classes with all the fea tures and 95 for the fine classesp 5 word pos chunk ne head relword coarse 9200 9660 9700 9700 9780 9880 fine 8600 8660 8760 8860 8940 9500table 2 classification results of the hierarchical clas sifier on 500 trec 10 questionstraining is done on 5500 questionscolumns show the performance for difference feature sets and rows show the precision forcoarse and fine classes respall the results are evalu ated using p 5 inspecting the data carefully we can observe the significant contribution of the features constructed based on semantically related words sensorsit is interesting to observe that this improvement is even more significant for fine classesno train test p 1 p 5 1 1000 500 8380 9560 2 2000 500 8480 9640 3 3000 500 9100 9800 4 4000 500 9080 9800 table 3 classification accuracy for coarse classes ondifferent training sets using the feature set relwordre sults are evaluated using p 1 and p 5 notrain test p 1 p 5 1 1000 500 7100 8380 2 2000 500 7780 8820 3 3000 500 7980 9060 4 4000 500 8000 9120table 4 classification accuracy for fine classes on different training sets using the feature set relwordre sults are evaluated using p 1 and p 5 tables 3 and 4 show the p 1 and p 5 accuracyof the hierarchical classifier on training sets of dif ferent sizes and exhibit the learning curve for this problemwe note that the average numbers of labels out put by the coarse and fine classifiers are 154 and 205 resp which shows the decision model is accurate as well as efficientcomparison of the hierarchical and the flat classifier the flat classifier consists of one classifier which isalmost the same as the fine classifier in the hierar chical case except that its initial confusion set is the whole set of fine classesour original 
hope was that the hierarchical classifier would have a better performance given that its fine classifier only needs to deal with a smaller confusion sethowever it turns out that there is a tradeoff between this factor and the inaccuracy albeit small of the coarse levelpredictionas the results show there is no perfor mance advantage for using a level of coarse classes and the semantically appealing coarse classes do not contribute to better performancefigure 2 give some more intuition on the flat vs hierarchical issuewe define the tendency of class i to be confused with class j as follows d ij err ij 2 where err ij is the number ofquestions in class i that are misclassified as belong p 1 word pos chunk ne head relword h 7760 7820 7740 7880 7880 8420 f 5240 7720 7700 7840 7680 8400 p 5 word pos chunk ne head relword h 8600 8660 8760 8860 8940 9500 f 8320 8680 8660 8840 8980 9560 table 5 comparing accuracy of the hierarchical and flat classifiers on 500 trec 10 question training is done on 5500 questionsresults are shown for different feature sets using p 1 and p 5 fine classes 150 fi ne c la ss es 1 5 0 2 24 28 32 37 50 2 24 28 32 37 50 figure 2 the grayscale map of the matrix dnnthe color of the small box in position denotes d ij thelarger d ij is the darker the color is the dotted lines separate the 6 coarse classesing to class j and n i n jare the numbers of ques tions in class i and j respfigure 2 is a grayscale map of the matrix dnndnn is so sparse that most parts of the graph areblankwe can see that there is no good cluster ing of fine classes mistakes within a coarse classwhich explains intuitively why the hierarchical clas sifier with an additional level coarse classes does not work much better44 discussion and exampleswe have shown that the overall accuracy of our clas sifier is satisfactoryindeed all the reformulation questions that we exemplified in sec3 have been correctly classifiednevertheless it is constructive to consider some cases in which the classifier failsbelow are some examples misclassified by the hier archical classifierwhat french ruler was defeated at the battle of water loothe correct label is individual but the classifier failing to relate the word rulerto a person since it was not in any semantic list outputs eventwhat is the speed hummingbirds fly the correct label is speed but the classifier outputs animalour feature sensors fail to determine that the focus of the question is speedthis example illustrates the necessity of identifying the question focus by analyzing syntactic structureswhat do you call a professional map drawer the classifier returns other entities instead ofequivalent termin this case both classes are ac ceptablethe ambiguity causes the classifier not to output equivalent term as the first choicethis paper presents a machine learning approach to question classificationwe developed a hierarchicalclassifier that is guided by a layered semantic hierarchy of answers types and used it to classify questions into finegrained classesour experimental re sults prove that the question classification problemcan be solved quite accurately using a learning ap proach and exhibit the benefits of features based on semantic analysisin future work we plan to investigate further the application of deeper semantic analysis to feature extraction automate the generation of thesemantic features and develop a better understand ing to some of the learning issues involved in thedifference between a flat and a hierarchical classi fier
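The decision rule in the extracted text above is garbled; the sketch below assumes the natural reading — output the highest-density classes until their cumulative density reaches t = 0.95, capped at five labels — and wires it into the two-stage coarse-to-fine procedure described in the paper. The SNoW classifiers themselves are not reproduced: they appear as placeholder callables supplied by the caller, and the coarse-to-fine mapping shown is only an illustrative fragment of the hierarchy in Table 1.

```python
# Sketch of the hierarchical question-classification decision procedure:
# a coarse pass restricts the confusion set of the fine pass, and both
# passes output multiple labels via a thresholded density ranking.
COARSE_TO_FINE = {                       # fragment of the class hierarchy (assumed)
    "LOC": ["city", "country", "state", "mountain", "other_loc"],
    "NUM": ["count", "date", "money", "distance", "other_num"],
    "HUM": ["individual", "group", "title", "description_hum"],
}

def select_labels(densities, t=0.95, max_labels=5):
    """Keep top-ranked labels until their cumulative density reaches t (at most 5)."""
    total = sum(densities.values()) or 1.0
    kept, mass = [], 0.0
    for label in sorted(densities, key=densities.get, reverse=True):
        kept.append(label)
        mass += densities[label] / total
        if mass >= t or len(kept) == max_labels:
            break
    return kept

def classify(question, coarse_clf, fine_clf, t=0.95):
    coarse = select_labels(coarse_clf(question, list(COARSE_TO_FINE)), t)
    candidates = [f for c in coarse for f in COARSE_TO_FINE[c]]
    fine = select_labels(fine_clf(question, candidates), t)
    return coarse, fine

if __name__ == "__main__":
    # dummy stand-ins for the learned SNoW classifiers
    coarse = lambda q, labels: {"LOC": 0.7, "NUM": 0.2, "HUM": 0.1}
    fine = lambda q, labels: {l: 1.0 / len(labels) for l in labels}
    print(classify("what canadian city has the largest population", coarse, fine))
```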
C02-1150
learning question classifiers. in order to respond correctly to a free form factual question given a large collection of texts, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. these constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer. this paper presents a machine learning approach to question classification. we learn a hierarchical classifier that is guided by a layered semantic hierarchy of answer types and eventually classifies questions into finegrained classes. we show accurate results on a large collection of freeform questions used in trec 10. we assign one of fifty possible types to a question based on features present in the question. we have developed a machine learning approach which uses the snow learning architecture
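As a small illustration of the "related words" sensors used among the question features above, the sketch below fires a Rel(class) feature whenever a question word appears on a class-specific word list. The lists shown are tiny hypothetical stand-ins for the semi-automatically constructed ones described in the paper.

```python
# Sketch of the RelWord sensor features: a feature is active for a class
# if some word of the question belongs to that class's related-word list.
RELATED = {                                   # hypothetical word lists
    "distance": {"far", "away", "miles", "length"},
    "food": {"eat", "cook", "menu", "edible"},
    "date": {"year", "born", "founded", "when"},
}

def relword_features(question: str):
    tokens = set(question.lower().split())
    return {f"Rel({cls})" for cls, words in RELATED.items() if tokens & words}

print(relword_features("how far away is the moon"))   # {'Rel(distance)'}
```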
deterministic dependency parsing of english text this paper presents a deterministic dependency parser based on memorybased learning which parses english text in linear time when trainedand evaluated on the wall street journal sec tion of the penn treebank the parser achieves a maximum attachment score of 871 unlikemost previous systems the parser produces la beled dependency graphs using as arc labels a combination of bracket labels and grammaticalrole labels taken from the penn treebank ii annotation scheme the best overall accuracy ob tained for identifying both the correct head and the correct arc label is 860 when restricted to grammatical role labels and 844 for the maximum set there has been a steadily increasing interest in syntactic parsing based on dependency analysis in re cent yearsone important reason seems to be thatdependency parsing offers a good compromise be tween the conflicting demands of analysis depth on the one hand and robustness and efficiency on the otherthus whereas a complete dependency structure provides a fully disambiguated analysisof a sentence this analysis is typically less complex than in frameworks based on constituent analysis and can therefore often be computed determin istically with reasonable accuracydeterministicmethods for dependency parsing have now been ap plied to a variety of languages including japanese english turkish and swedish for english the interest in dependency parsing has been weaker than for other languagesto some extent this can probably be explained by the strong tradition of constituent analysis in angloamerican linguistics but this trend has been reinforced by the fact that the major treebank of american englishthe penn treebank is anno tated primarily with constituent analysison the other hand the best available parsers trained on thepenn treebank those of collins and charniak use statistical models for disambigua tion that make crucial use of dependency relationsmoreover the deterministic dependency parser of yamada and matsumoto when trained on the penn treebank gives a dependency accuracy that is almost as good as that of collins and charniak the parser described in this paper is similar to that of yamada and matsumoto in that it uses a deterministic parsing algorithm in combination with a classifier induced from a treebankhowever there are also important differences between the twoapproachesfirst of all whereas yamada and matsumoto employs a strict bottomup algorithm with multiple passes over the input the present parser uses the algorithmproposed in nivre which combines bottom up and topdown processing in a single pass in order to achieve incrementalitythis also means that the time complexity of the algorithm used here is linearin the size of the input while the algorithm of ya mada and matsumoto is quadratic in the worst caseanother difference is that yamada and matsumoto use support vector machines whilewe instead rely on memorybased learning most importantly however the parser presented in this paper constructs labeled dependency graphs ie dependency graphs where arcs are labeled with dependency typesas far as we know this makesit different from all previous systems for dependency parsing applied to the penn treebank althoughthere are systems that extract labeled grammatical relations based on shallow parsing eg buchholz the fact that we are working with labeled dependency graphs is also one of the motivations for choosing memorybased learning over sup port vector machines since we require a multiclass classifiereven 
though it is possible to use svmfor multiclass classification this can get cumber some when the number of classes is large the classification problem only in volves three classesthe parsing methodology investigated here haspreviously been applied to swedish where promis ing results were obtained with a relatively smalltreebank resulting in an attachment score of 847 and a labeled accuracy of 806 1 however since there are no comparable resultsavailable for swedish it is difficult to assess the significance of these findings which is one of the reasons why we want to apply the method to a bench mark corpus such as the the penn treebank even though the annotation in this corpus is not ideal for labeled dependency parsingthe paper is structured as followssection 2 describes the parsing algorithm while section 3 ex plains how memorybased learning is used to guidethe parserexperimental results are reported in sec tion 4 and conclusions are stated in section 5in dependency parsing the goal of the parsing pro cess is to construct a labeled dependency graph of the kind depicted in figure 1in formal terms we define dependency graphs as follows 1let are r1 rm be the set of permissibledependency types 2a dependency graph for a string of words w w1wn is a labeled directed graph d where w is the set of nodes ie word tokens in the input string a is a set of labeled arcs for every wj w there is at most one arc a1the attachment score only considers whether a word is as signed the correct head the labeled accuracy score in additionrequires that it is assigned the correct dependency type cfsec tion 4acyclic projective and connectedfor a more detailed discussion of dependency graphs and wellformedness conditions the reader is referred to nivre the parsing algorithm used here was first de fined for unlabeled dependency parsing in nivre and subsequently extended to labeled graphsin nivre et al parser configurations are rep resented by triples s ia where s is the stack i is the list of input tokens and a is the arc relation for the dependency graphgiven an input string w the parser is initial ized to nilw 2 and terminates when it reaches a configuration snil athe input string w is accepted if the de pendency graph d given at termination is wellformed otherwise w is rejectedgiven an arbitrary configuration of the parser there are four possible transitions to the next configuration 1leftarc in a configuration tsnia ifthere is no arc a extend a with and pop the stack giving the configu ration snia2rightarc in a configuration tsnia ifthere is no arc a extend a with and push n onto the stack giving the configuration ntsiais an arc a pop the stack giving the configuration sian onto the stack giving the configuration nsia2we use nil to denote the empty list and aa to denote a list with head a and tail a thpos tdep tlpos tldep tpos tlex trdep trpos nlpos nldep npos nlex l1pos l2pos l3pos t top of the stack n next input token tl leftmost dependent of t tr rightmost dependent of t nl leftmost dependent of n li next plus i input token xlex word form of x xpos partofspeech of x xdep dependency type of x figure 2 parser state featuresafter initialization the parser is guaranteed to ter minate after at most 2n transitions given an input string of length n moreover the parser always constructs a dependency graph that isacyclic and projectivethis means that the depen dency graph given at termination is wellformed if and only if it is connected otherwise it is a set of connected components each of which is a wellformed 
dependency graph for a substring of the original inputthe transition system defined above is nondeterministic in itself since several transitions can often be applied in a given configurationto con struct deterministic parsers based on this systemwe use classifiers trained on treebank data in or der to predict the next transition given the current configuration of the parserin this way our approach can be seen as a form ofhistorybased parsing in the experiments reported here we use memorybased learning to train our classifiers3 memorybased learningmemorybased learning and problem solving is based on two fundamental principles learning is thesimple storage of experiences in memory and solv ing a new problem is achieved by reusing solutionsfrom similar previously solved problems it is inspired by the nearest neighborapproach in statistical pattern recognition and arti ficial intelligence as well as the analogical modeling approach in linguisticsin machine learning terms it can be characterized as a lazy learning method since it defers processing of input un til needed and processes input by combining stored data memorybased learning has been successfully applied to a number of problems in natural languageprocessing such as graphemetophoneme conver sion partofspeech tagging prepositionalphraseattachment and base noun phrase chunking previous work on memorybased learning for deterministic parsing includes veenstra and daelemans and nivre et al for the experiments reported in this paper we have used the software package timbl which provides a vari ety of metrics algorithms and extra functions on top of the classical k nearest neighbor classification kernel such as value distance metrics and distance weighted class voting the function we want to approximate is a map ping f from configurations to parser actions where each action consists of a transition and a dependency type f config lararesh here config is the set of all configurations and r is the set of dependency typesin order to make theproblem tractable we approximate f with a func tion fwhose domain is a finite space of parser states which are abstractions over configurationsfor this purpose we define a number of features that can be used to define different models of parser statefigure 2 illustrates the features that are used to define parser states in the present studythe two central elements in any configuration are the token on top of the stack and the next input token the tokens which may be connected by a de pendency arc in the next configurationfor these tokens we consider both the word form and the partofspeech as assigned by an automatic partofspeech tagger ina preprocessing phasenext we consider a selection of dependencies that may be present in the cur rent arc relation namely those linking t to its head and its leftmost and rightmost dependent and that linking n to its leftmost dependent 3 considering both the dependency type and the partofspeech of the head or depen dentfinally we use a lookahead of three tokens considering only their partsofspeechwe have experimented with two different statemodels one that incorporates all the features depicted in figure 2 and one that ex cludes the partsofspeech of th tl tr nl models similar to model 2 have been found towork well for datasets with a rich annotation of de pendency types such as the swedish dependency treebank derived from einarsson where the extra partofspeech features are largely redundant model 1 can be expected towork better for datasets with less informative dependency 
annotation such as dependency trees ex tracted from the penn treebank where the extra partofspeech features may compensate for the lack of information in arc labelsthe learning algorithm used is the ib1 algorithm with k 5 ie classification basedon 5 nearest neighbors4 distances are measured us ing the modified value difference metric for instances with a frequency of at least 3 and classifica tion is based on distance weighted class voting with inverse distance weighting thesesettings are the result of extensive experiments partially reported in nivre et al for more infor mation about the different parameters and settings see daelemans et al 4 experimentsthe data set used for experimental evaluation is the standard data set from the wall street journal section of the penn treebank with sections 221 3given the parsing algorithm n can never have a head or a right dependent in the current configuration4in timbl the value of k in fact refers to k nearest dis tances rather than k nearest neighbors which means that evenwith k 1 the nearest neighbor set can contain several instances that are equally distant to the test instancethis is dif ferent from the original ib1 algorithm as described in aha et alused for training and section 23 for testing the data has been converted to dependency trees using head rules we are grateful to ya mada and matsumoto for letting us use their rule set which is a slight modification of the rules used bycollins this permits us to make exact com parisons with the parser of yamada and matsumoto but also the parsers of collins and charniak which are evaluated on the same data set in yamada and matsumoto one problem that we had to face is that the standard conversion of phrase structure trees to de pendency trees gives unlabeled dependency trees whereas our parser requires labeled treessince the annotation scheme of the penn treebank does notinclude dependency types there is no straightfor ward way to derive such labelswe have therefore experimented with two different sets of labels none of which corresponds to dependency types in a strict sensethe first set consists of the function tags forgrammatical roles according to the penn ii annota tion guidelines we call this set gthe second set consists of the ordinary bracket la bels combined with function tags for grammatical roles giving composite labels such as npsbj we call this set b we assign labels to arcs by letting each word that heads aphrase p in the original phrase structure have its in coming edge labeled with the label of p in both sets we also includea default label dep for arcs that would not other wise get a labelthis gives a total of 7 labels in the g set and 50 labels in the b setfigure 1 shows a converted dependency tree using the b labels in the corresponding tree with g labels npsbj would be replaced by sbj advp and vp by depwe use the following metrics for evaluation 1unlabeled attachment score the proportion of words that are assigned the correct head 2labeled attachment score the proportion of words that are assigned the correct head and dependency type 3dependency accuracy the proportionof nonroot words that are assigned the correct head 4root accuracy the proportion of rootwords that are analyzed as such sentences whose unlabeled dependency structure is completely correct all metrics except cm are calculated as meanscores per word and punctuation tokens are con sistently excludedtable 1 shows the attachment score both unla beled and labeled for the two different state models with the two different 
label setsfirst of all we see that model 1 gives better accuracy than model 2 with the smaller label set g which confirms our expectations that the added partofspeech featuresare helpful when the dependency labels are less informativeconversely we see that model 2 outper forms model 1 with the larger label set b which is consistent with the hypothesis that partofspeech features become redundant as dependency labels get more informativeit is interesting to note that this effect holds even in the case where the dependencylabels are mostly derived from phrase structure cate gorieswe can also see that the unlabeled attachment score improves for both models when the set of dependency labels is extendedon the other hand the labeled attachment score drops but it must beremembered that these scores are not really comparable since the number of classes in the classifi cation problem increases from 7 to 50 as we move from the g set to the b settherefore we have also included the labeled attachment score restricted to the g set for the parser using the b set and wesee then that the attachment score improves espe cially for model 2table 2 shows the dependency accuracy root accuracy and complete match scores for our best parser in comparison with collins charniak and yamada and matsumoto 5 it is clear that with respect to unlabeled accuracy our parser does not quite reach stateoftheart performance evenif we limit the competition to deterministic meth ods such as that of yamada and matsumoto we believe that there are mainly three reasons for thisfirst of all the partofspeech tagger used for preprocessing in our experiments has a loweraccuracy than the one used by yamada and mat sumoto although this is not a very interesting explanation it undoubtedly accounts for part of the differencesecondly since 5the information in the first three rows is taken directly from yamada and matsumoto our parser makes crucial use of dependency type in formation in predicting the next action of the parser it is very likely that it suffers from the lack of realdependency labels in the converted treebankindi rect support for this assumption can be gained fromprevious experiments with swedish data where al most the same accuracy has been achieved with a treebank whichis much smaller but which contains proper depen dency annotation a third important factor is the relatively low rootaccuracy of our parser which may reflect a weak ness in the onepass parsing strategy with respect tothe global structure of complex sentencesit is note worthy that our parser has lower root accuracy than dependency accuracy whereas the inverse holds for all the other parsersthe problem becomes even more visible when we consider the dependency and root accuracy for sentences of different lengths as shown in table 3here we see that for really short sentences root accuracy is indeedhigher than dependency accuracy but while depen dency accuracy degrades gracefully with sentence length the root accuracy drops more drastically this may be taken to suggest that some kind of preprocessing in the form of clausing may help to improve overall accuracyturning finally to the assessment of labeled de pendency accuracy we are not aware of any strictlycomparable results for the given data set but buch holz reports a labeled accuracy of 726 for the assignment of grammatical relations using a cascade of memorybased processorsthis can be compared with a labeled attachment score of 844 for model 2 with our b set which is of about the same size as the set used by 
buchholz although the labels are not the samein another study blaheta and charniak report an fmeasure of 989 for the assignment of penn treebank grammatical role labels to phrases that were correctly parsed by the parser described in charniak if null labels areexcluded the fscore drops to 957the corre sponding fmeasures for our best parser are 990 and 947for the larger b set our best parser achieves an fmeasure of 969 which can be compared with 970 for a similar set of labels incollins 6 although none of the previous re sults on labeling accuracy is strictly comparable to ours it nevertheless seems fair to conclude that the 6this fmeasure is based on the recall and precision figures reported in figure 715 in collins model 1 model 2 g b bg g b bg uas 864 867 858 871 las 853 840 855 846 844 860 table 1 parsing accuracy attachment score da ra cm charniak 921 952 452 collins 915 952 433 yamada matsumoto 903 916 384 nivre scholz 873 843 304 table 2 comparison with related work labeling accuracy of the present parser is close to the state of the art even if its capacity to derive correct structures is not5 conclusionthis paper has explored the application of a data driven dependency parser to english text using data from the penn treebankthe parser is deterministic and uses a lineartime parsing algorithm guided bymemorybased classifiers to construct labeled de pendency structures incrementally in one pass over the inputgiven the difficulty of extracting labeled dependencies from a phrase structure treebank with limited functional annotation the accuracy attainedis fairly respectableand although the structural ac curacy falls short of the best available parsers the labeling accuracy appears to be competitivethe most important weakness is the limited ac curacy in identifying the root node of a sentence especially for longer sentenceswe conjecture that an improvement in this area could lead to a boost in overall performanceanother important issue to investigate further is the influence of different kinds of arc labels and in particular labels that are based on a proper dependency grammarin thefuture we therefore want to perform more experi ments with genuine dependency treebanks like the prague dependency treebank and the danish dependency treebank we also want to apply dependencybased evaluation schemes such as the ones proposed by lin and carroll et al acknowledgements the work presented in this paper has been supportedby a grant from the swedish research council the memorybased classifiers used in the experiments have been constructed using thetilburg memorybased learner the conversion of the penn tree bank to dependency trees has been performed using head rules kindly provided by hiroyasu yamada and yuji matsumoto
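The parsing algorithm described in this paper can be summarised compactly in code. The following is a minimal sketch of the four transitions (Left-Arc, Right-Arc, Reduce, Shift) and the classifier-guided loop, based only on the textual description above; the configuration representation and the `next_transition` interface are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the four transitions of the deterministic dependency
# parser described above. A configuration is (stack, input, arcs), where
# arcs is a set of (head, dep_type, dependent) triples.

def has_head(w, arcs):
    return any(d == w for (_, _, d) in arcs)

def left_arc(stack, inp, arcs, r):
    # (t|S, n|I, A) -> (S, n|I, A + {(n, r, t)})  if t has no head yet
    t, n = stack[-1], inp[0]
    assert not has_head(t, arcs)
    return stack[:-1], inp, arcs | {(n, r, t)}

def right_arc(stack, inp, arcs, r):
    # (t|S, n|I, A) -> (n|t|S, I, A + {(t, r, n)})  if n has no head yet
    t, n = stack[-1], inp[0]
    assert not has_head(n, arcs)
    return stack + [n], inp[1:], arcs | {(t, r, n)}

def reduce_(stack, inp, arcs):
    # (t|S, I, A) -> (S, I, A)  if t already has a head
    assert has_head(stack[-1], arcs)
    return stack[:-1], inp, arcs

def shift(stack, inp, arcs):
    # (S, n|I, A) -> (n|S, I, A)
    return stack + [inp[0]], inp[1:], arcs

def parse(words, next_transition):
    # next_transition(config) is the classifier-guided choice; the parser
    # starts from (nil, W, {}) and stops when the input list is empty.
    stack, inp, arcs = [], list(words), set()
    while inp:
        name, r = next_transition((stack, inp, arcs))
        if name == "la":
            stack, inp, arcs = left_arc(stack, inp, arcs, r)
        elif name == "ra":
            stack, inp, arcs = right_arc(stack, inp, arcs, r)
        elif name == "re":
            stack, inp, arcs = reduce_(stack, inp, arcs)
        else:
            stack, inp, arcs = shift(stack, inp, arcs)
    return arcs
```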
C04-1010
Deterministic dependency parsing of English text. This paper presents a deterministic dependency parser based on memory-based learning which parses English text in linear time. When trained and evaluated on the Wall Street Journal section of the Penn Treebank, the parser achieves a maximum attachment score of 87.1%. Unlike most previous systems, the parser produces labeled dependency graphs, using as arc labels a combination of bracket labels and grammatical role labels taken from the Penn Treebank II annotation scheme. The best overall accuracy obtained for identifying both the correct head and the correct arc label is 86.0%, when restricted to grammatical role labels, and 84.4% for the maximum set. We propose a variant of the model of Yamada and Matsumoto that reduces the complexity from the worst-case quadratic to linear. Our deterministic shift-reduce, classifier-based dependency parsing approach offers state-of-the-art accuracy with high efficiency, due to a greedy search strategy.
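As a companion to the transition sketch above, the following is a minimal sketch of how the Model 1 parser-state features listed in the paper (word form, part-of-speech and dependency type of the stack top and the next input token, selected dependents, and a three-token part-of-speech look-ahead) could be assembled for the memory-based classifier. The token and arc representations are assumptions made for illustration.

```python
# Sketch of the Model 1 parser-state features described above. Tokens are
# (index, form, pos) triples and arcs are (head_index, dep_type,
# dependent_index) triples; this representation is an assumption.

def _deps(i, arcs):
    return sorted(d for (h, _, d) in arcs if h == i)

def _dep_type(i, arcs):
    types = [r for (_, r, d) in arcs if d == i]
    return types[0] if types else None

def state_features(stack, inp, arcs, tokens):
    pos = lambda i: tokens[i][2] if i is not None else None
    form = lambda i: tokens[i][1] if i is not None else None
    t = stack[-1] if stack else None          # top of the stack
    n = inp[0] if inp else None               # next input token
    t_deps = _deps(t, arcs) if t is not None else []
    n_deps = _deps(n, arcs) if n is not None else []
    tl = t_deps[0] if t_deps else None        # leftmost dependent of t
    tr = t_deps[-1] if t_deps else None       # rightmost dependent of t
    nl = n_deps[0] if n_deps else None        # leftmost dependent of n
    th = next((h for (h, _, d) in arcs if d == t), None)   # head of t
    look = [pos(i) for i in inp[1:4]] + [None] * (3 - len(inp[1:4]))
    return (pos(th), _dep_type(t, arcs), pos(tl), _dep_type(tl, arcs),
            pos(t), form(t), _dep_type(tr, arcs), pos(tr),
            pos(nl), _dep_type(nl, arcs), pos(n), form(n), *look)
```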
Efficient parsing of highly ambiguous context-free grammars with bit vectors. An efficient bit-vector-based CKY-style parser for context-free parsing is presented. The parser computes a compact parse forest representation of the complete set of possible analyses for large treebank grammars and long input sentences. The parser uses bit-vector operations to parallelise the basic parsing operations. The parser is particularly useful when all analyses are needed, rather than just the most probable one. Large context-free grammars extracted from treebanks achieve high coverage and accuracy, but they are difficult to parse with because of their massive ambiguity. The application of standard chart-parsing techniques often fails due to excessive memory and runtime requirements. Treebank grammars are mostly used as probabilistic grammars, and users are usually only interested in the best analysis, the Viterbi parse. To speed up Viterbi parsing, sophisticated search strategies have been developed which find the most probable analysis without examining the whole set of possible analyses. These methods reduce the number of generated edges, but increase the amount of time needed for each edge. The parser described in this paper follows a contrary approach: instead of reducing the number of edges, it minimises the costs of building edges in terms of memory and runtime. The new parser, called BitPar, is based on a bit-vector implementation of the well-known Cocke-Younger-Kasami (CKY) algorithm. It builds a compact parse forest representation of all analyses in two steps. In the first step, a CKY-style recogniser fills the chart with constituents. In the second step, the parse forest is built top-down from the chart. Viterbi parses are computed in four steps. Again, the first step is a CKY recogniser, which is followed by a top-down filtering of the chart, the bottom-up computation of the Viterbi probabilities, and the top-down extraction of the best parse. The rest of the paper is organised as follows: Section 2 explains the transformation of the grammar to Chomsky normal form. The following sections describe the recogniser algorithm, improvements of the recogniser by means of bit-vector operations, and the generation of parse forests and Viterbi parses. Section 7 discusses the advantages of the new architecture, Section 8 describes experimental results, and Section 9 summarises the paper. The CKY algorithm requires a grammar in Chomsky normal form, where the right-hand side of each rule either consists of two nonterminals or a single terminal symbol. BitPar uses a modified version of the CKY algorithm that also allows chain rules (rules with a single nonterminal on the right-hand side). BitPar expects that the input grammar is already epsilon-free and that terminal symbols only occur in unary rules. Rules with more than two nonterminals on the right-hand side are split into binary rules by applying a transformation algorithm proposed by Andreas Eisele. It is a greedy algorithm which tries to minimise the number of binarised rules by combining frequently co-occurring symbols first. The algorithm consists of the following two steps, which are iterated until all rules are either binary or unary: 1. Compute the frequencies of the pairs of neighboring symbols on the right-hand sides of rules. The rule A
C04-1024
Efficient parsing of highly ambiguous context-free grammars with bit vectors. An efficient bit-vector-based CKY-style parser for context-free parsing is presented. The parser computes a compact parse forest representation of the complete set of possible analyses for large treebank grammars and long input sentences. The parser uses bit-vector operations to parallelise the basic parsing operations. The parser is particularly useful when all analyses are needed, rather than just the most probable one. We apply the Viterbi algorithm, exploiting its ability to deal with highly ambiguous grammars.
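The greedy grammar binarisation described in this paper can be made concrete. Below is a minimal sketch under the stated description: repeatedly count neighbouring symbol pairs on the right-hand sides of super-binary rules and replace the most frequent pair with a fresh nonterminal, until every rule is unary or binary. The rule representation and the naming of the new symbols are assumptions, and the probability bookkeeping a real transformation would need is omitted.

```python
# Sketch of the greedy binarisation step described above: iterate until
# all rules are unary or binary, each time collapsing the most frequent
# neighbouring symbol pair into a new nonterminal.
from collections import Counter


def binarise(rules):
    # rules: list of (lhs, [rhs symbols]); terminals occur only in unary rules
    rules = [(lhs, list(rhs)) for lhs, rhs in rules]
    while True:
        pairs = Counter()
        for _, rhs in rules:
            if len(rhs) > 2:
                for a, b in zip(rhs, rhs[1:]):
                    pairs[(a, b)] += 1
        if not pairs:
            return rules                     # every rule is unary or binary
        (a, b), _ = pairs.most_common(1)[0]  # most frequent neighbouring pair
        new_sym = f"<{a}+{b}>"               # illustrative naming scheme
        rules.append((new_sym, [a, b]))      # new binary rule for the pair
        for i, (lhs, rhs) in enumerate(rules):
            if len(rhs) > 2:
                rules[i] = (lhs, _replace_pair(rhs, a, b, new_sym))


def _replace_pair(rhs, a, b, new_sym):
    out, i = [], 0
    while i < len(rhs):
        if i + 1 < len(rhs) and rhs[i] == a and rhs[i + 1] == b:
            out.append(new_sym)
            i += 2
        else:
            out.append(rhs[i])
            i += 1
    return out
```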
the importance of supertagging for widecoverage ccg parsing this paper describes the role of supertagging in a widecoverage ccg parser which uses a loglinear model to select an analysis the supertagger reduces the derivation space over which model estimation is performed reducing the space required for discriminative training it also dramatically increases the speed of the parser we show that large increases in speedcan be obtained by tightly integrating the su pertagger with the ccg grammar and parserthis is the first work we are aware of to success fully integrate a supertagger with a full parser which uses an automatically extracted grammarwe also further reduce the derivation space us ing constraints on category combination the result is an accurate widecoverage ccg parserwhich is an order of magnitude faster than comparable systems for other linguistically moti vated formalisms lexicalised grammar formalisms such as lexicalized tree adjoining grammar and com binatory categorial grammar assign one or more syntactic structures to each word in a sentencewhich are then manipulated by the parsersupertag ging was introduced for ltag as a way of increasingparsing efficiency by reducing the number of struc tures assigned to each word supertagging has more recently been applied to ccg supertagging accuracy is relatively high for man ually constructed ltags however for ltags extracted automati cally from the penn treebank performance is much lower in fact performance for such grammars is below that needed for successful integration into a full parser in this paper we demonstratethat ccg supertagging accuracy is not only sufficient for accurate and robust parsing using an auto matically extracted grammar but also offers several practical advantagesour widecoverage ccg parser uses a loglinear model to select an analysisthe model paramaters are estimated using a discriminative method that isone which requires all incorrect parses for a sentence as well as the correct parsesince an auto matically extracted ccg grammar can produce anextremely large number of parses the use of a su pertagger is crucial in limiting the total number of parses for the training data to a computationally manageable numberthe supertagger is also crucial for increasing thespeed of the parserwe show that spectacular in creases in speed can be obtained without affectingaccuracy or coverage by tightly integrating the su pertagger with the ccg grammar and parserto achieve maximum speed the supertagger initially assigns only a small number of ccg categories toeach word and the parser only requests more cate gories from the supertagger if it cannot provide an analysiswe also demonstrate how extra constraints on the category combinations and the application of beam search using the parsing model can further increase parsing speedthis is the first work we are aware of to succes fully integrate a supertagger with a full parser which uses a lexicalised grammar automatically extractedfrom the penn treebankwe also report signifi cantly higher parsing speeds on newspaper text than any previously reported for a full widecoverage parserour results confirm that widecoverage ccg parsing is feasible for many largescale nlp tasksparsing using ccg can be viewed as a twostage process first assign lexical categories to the wordsin the sentence and then combine the categories to gether using ccgs combinatory rules1 the first stage can be accomplished by simply assigning to each word all categories from the words entry in the lexicon 1see steedman 
for an introduction to ccg and see clark et al and hockenmaier for an introduction to widecoverage parsing using ccgthe wsj is a publication that i enjoy reading npn n np npn n np np figure 1 example sentence with ccg lexical categories frequency cat types cat tokens in sentences in 221 cat tokens in sentences in 00 cutoff 221 not in cat set with missing cat 00 not in cat set with missing cat 1 1 225 0 0 12 12 10 409 1 933 1 712 79 69 table 1 statistics for the lexical category setan alternative is to use a statistical tagging approach to assign one or more categoriesa statisti cal model can be used to determine the most likelycategories given the words contextthe advantage of this supertagging approach is that the number of categories assigned to each word can be re duced with a correspondingly massive reduction in the number of derivationsbangalore and joshi use a standard markov model tagger to assign ltag elementarytrees to wordshere we use the maximum en tropy models described in curran and clark an advantage of the maximum entropy approachis that it is easy to encode a wide range of poten tially useful information as features for exampleclark has shown that pos tags provide use ful information for supertaggingthe next section describes the set of lexical categories used by our supertagger and parser21 the lexical category setthe set of lexical categories is obtained from ccgbank a corpus of ccg normalform deriva tions derived semiautomatically from the penntreebankfollowing clark we apply a fre quency cutoff to the training set only using thosecategories which appear at least 10 times in sections 221figure 1 gives an example sentence su pertagged with the correct ccg lexical categoriestable 1 gives the number of different category types and shows the coverage on training anddevelopment data the table also gives statistics for the com plete set containing every lexical category type inccgbank2 these figures show that using a fre quency cutoff can significantly reduce the size of the category set with only a small loss in coverage2the numbers differ slightly from those reported in clark since a newer version of ccgbank is being used hereclark compares the size of grammarsextracted from ccgbank with automatically extracted ltagsthe grammars of chen and vijay shanker contain between 2000 and 9000 tree frames depending on the parameters used inthe extraction process significantly more elemen tary structures than the number of lexical categories derived from ccgbankwe hypothesise this is a key factor in the higher accuracy for supertaggingusing a ccg grammar compared with an automati cally extracted ltag22 the tagging modelthe supertagger uses probabilities p where y is a lexical category and x is a contextthe conditional probabilities have the following loglinear form p 1 ze i i fi where fi is a feature i is the corresponding weight and z is a normalisation constantthe context is a 5word window surrounding the target wordfeatures are defined for each word in the window and for the pos tag of each wordcurran and clark describes the model and explains how gen eralised iterative scaling together with a gaussian prior for smoothing can be used to set the weightsthe supertagger in curran and clark finds the single most probable category sequence given the sentence and uses additional features defined in terms of the previously assigned categoriestheperword accuracy is between 91 and 92 on un seen data in ccgbank however clark shows this is not high enough for integration into a parser since 
the large number of incorrect categories results in a significant loss in coverageclark shows how the models in can be used to define a multitagger which can assign more than one category to a wordfor each word inthe sentence the multitagger assigns all those cat cats acc sent acc sent word acc acc 01 14 970 626 964 574 0075 15 974 659 968 606 005 17 978 702 973 644 001 29 985 784 982 742 001k100 35 989 836 986 789 0 219 991 848 990 830 table 2 supertagger accuracy on section 00 egories whose probability according to is within some factor of the highest probability category for the wordwe follow clark in ignoring the featuresbased on the previously assigned categories there fore every tagging decision is local and the viterbi algorithm is not requiredthis simple approach has the advantage of being very efficient and we findthat it is accurate enough to enable highly accu rate parsinghowever a method which used theforwardbackward algorithm to sum over all possi ble sequences or some other method which took into account category sequence information may well improve the resultsfor words seen at least k times in the trainingdata the tagger can only assign categories appear ing in the words entry in the tag dictionaryeachentry in the tag dictionary is a list of all the cate gories seen with that word in the training datafor words seen less than k times we use an alternative dictionary based on the words pos tag the tagger can only assign categories that have been seen with the pos tag in the training dataa value of k 20was used in this work and sections 221 of ccg bank were used as training datatable 2 gives the perword accuracy on sec tion 00 for various values of together with the average number of categories per wordthe sent acc column gives the precentage of sentences whose words are all supertagged correctlythe figures for 001k100 correspond to a value of 100 for thetag dictionary parameter k the set of categories as signed to a word is considered correct if it contains the correct categorythe table gives results for gold standard pos tags and in the final 2 columns for pos tags automatically assigned by the curran andclark taggerthe drop in accuracy is ex pected given the importance of pos tags as featuresthe figures for 0 are obtained by assigning all categories to a word from the words entry in the tag dictionaryfor words which appear less than 20 times in the training data the dictionary based on the words pos tag is usedthe table demonstrates the significant reduction in the average number of categories that can be achieved through the use of a supertaggerto give one example the number of categories in the tag dictionarys entry for the wordis is 45 however in the sentence mr vinken is chairman of elsevier nv the dutch publishing group the supertag ger correctly assigns 1 category to is for 01 and 3 categories for 001the parser is described in detail in clark and curran it takes pos tagged sentences as input with each word assigned a set of lexical categoriesa packed chart is used to efficiently represent all of the possible analyses for a sentence and the cky chart parsing algorithm described in steedman is used to build the chartclark and curran evaluate a number of loglinear parsing models for ccgin this paper weuse the normalform model which defines proba bilities with the conditional loglinear form in where y is a derivation and x is a sentencefeaturesare defined in terms of the local trees in the derivation including lexical head information and word word dependenciesthe 
normalform derivations in ccgbank provide the gold standard training datathe feature set we use is from the best performing normalform model in clark and curran for a given sentence the output of the parser is a dependency structure corresponding to the most probable derivation which can be found using theviterbi algorithmthe dependency relations are de fined in terms of the argument slots of ccg lexical categoriesclark et al and clark and curran give a detailed description of the dependency structures31 model estimationin clark and curran we describe a discrim inative method for estimating the parameters of a loglinear parsing modelthe estimation method maximises the following objective function l lg log m j1 p n i1 2i 22the data consists of sentences s 1 s m to gether with gold standard normalform derivations d1 dmlis the loglikelihood of model and gis a gaussian prior term used to avoid overfitting the objective function is optimised using lbfgs an iterative algorithm from the numerical optimisation lit eraturethe algorithm requires the gradient of the objective function and the value of the objective function at each iterationcalculation of these val ues requires all derivations for each sentence in the training datain clark and curran wedescribe efficient methods for performing the cal culations using packed chartshowever a very large amount of memory is still needed to store the packed charts for the complete training data even though the representation is very compact in clark and curran we report a memory usage of 30 gbto handle this we have developed a parallel implementation of the estimation algorithm which runs on a beowulf clusterthe need for large highperformance computing resources is a disadvantage of our earlier approachin the next section we show how use of the supertag ger combined with normalform constraints on thederivations can significantly reduce the memory re quirements for the model estimationsince the training data contains the correct lexicalcategories we ensure the correct category is as signed to each word when generating the packed charts for model estimationwhilst training theparser the supertagger can be thought of as supply ing a number of plausible but incorrect categoriesfor each word these together with the correct cat egories determine the parts of the parse space that are used in the estimation processwe would like to keep the packed charts as small as possible but not lose accuracy in the resulting parsersection 42discusses the use of various settings on the supertag gerthe next section describes how normalform constraints can further reduce the derivation space41 normalform constraintsas well as the supertagger we use two additional strategies for reducing the derivation spacethefirst following hockenmaier is to only al low categories to combine if the combination hasbeen seen in sections 221 of ccgbankfor exam ple npnp could combine with npnp accordingto ccgs combinatory rules but since this particular combination does not appear in ccgbank the parser does not allow itthe second strategy is to use eisners normal form constraints the constraints supertaggingparsing usage constraints disk memory 001 005 01 17 gb 31 gb ccgbank constraints 13 gb 23 gb eisner constraints 9 gb 16 gb 005 01 2 gb 4 gb table 3 space requirements for model training dataprevent any constituent which is the result of a forward composition serving as the primary functor in another forward composition or a forward applicationeis ner only deals with a grammar without 
typeraisingand so the constraints do not guarantee a normal form parse when using a grammar extracted from ccgbankhowever the constraints are still useful in restricting the derivation spaceas far as we are aware this is the first demonstration of the utility of such constraints for a widecoverage ccg parser42 results table 3 shows the effect of different supertagger set tings and the normalform constraints on the size of the packed charts used for model estimationthe disk usage is the space taken on disk by the chartsand the memory usage is the space taken in memory during the estimation processthe training sen tences are parsed using a number of nodes from a 64node beowulf cluster3 the time taken to parse the training sentences depends on the supertagging and parsing constraints and the number of nodes used but is typically around 30 minutesthe first row of the table corresponds to using the least restrictive value of 001 and reverting to 005 and finally 01 if the chart size exceeds some thresholdthe threshold was set at300000 nodes in the chartpacked charts are created for approximately 94 of the sentences in sec tions 221 of ccgbankthe coverage is not 100because for some sentences the parser cannot pro vide an analysis and some charts exceed the node limit even at the 01 levelthis strategy was used in our earlier work and as the table shows results in very large chartsnote that even with this relaxed setting on the su pertagger the number of categories assigned to each word is only around 3 on averagethis suggests that it is only through use of the supertagger that we are able to estimate a loglinear parsing model on all of the training data at all since without it the memory 3the figures in the table are estimates based on a sample of the nodes in the clusterrequirements would be far too great even for the entire 64node cluster4 the second row shows the reduction in size if the parser is only allowed to combine categorieswhich have combined in the training datathis sig nificantly reduces the number of categories created using the composition rules and also prevents thecreation of unlikely categories using rule combina tions not seen in ccgbankthe results show thatthe memory and disk usage are reduced by approx imately 25 using these constraintsthe third row shows a further reduction in size when using the eisner normalform constraintseven with the ccgbank rule constraints theparser still builds many nonnormalform derivations since ccgbank does contain cases of compo sition and typeraisingthe combination of the two types of normal form constraints reduces the memory requirements by 48 over the original approachin clark andcurran we show that the parsing model re sulting from training data generated in this way produces stateoftheart ccg dependency recovery 846 fscore over labelled dependenciesthe final row corresponds to a more restrictive setting on the supertagger in which a value of 005 is used initially and 01 is used if thenode limit is exceededthe two types of normal form constraints are also usedin clark and curran we show that using this more restrictive set ting has a small negative impact on the accuracy of the resulting parser however the memory requirement for training the model is now only 4 gb a reduction of 87 compared with the original approachthe previous section showed how to combine the supertagger and parser for the purpose of creating training data assuming the correct category for each word is knownin this section we describe our approach to tightly integrating 
the supertagger and parser for parsing unseen dataour previous approach to parsing unseen data wasto use the least restrictive setting of the supertagger which still allows a reasonable compromise be tween speed and accuracyour philosophy was to give the parser the greatest possibility of finding thecorrect parse by giving it as many categories as pos sible while still retaining reasonable efficiency4another possible solution would be to use sampling meth ods eg osborne supertaggingparsing time sents words constraints sec sec sec 00101 3 523 07 16 ccgbank constraints 1 181 20 46 eisner constraints 995 24 55 01001k100 608 39 90 ccgbank constraints 124 194 440 eisner constraints 100 240 546 parser beam 67 358 814 94 coverage 49 490 1 114 parser beam 46 522 1 186 oracle 18 1334 3 031 table 4 parse times for section 23 the problem with this approach is that for some sentences the number of categories in the chart still gets extremely large and so parsing is unacceptably slowhence we applied a limit to the number of categories in the chart as in the previous sectionand reverted to a more restrictive setting of the su pertagger if the limit was exceededwe first used a value of 001 and then reverted to 005 and finally 01in this paper we take the opposite approach westart with a very restrictive setting of the supertag ger and only assign more categories if the parser cannot find an analysis spanning the sentencein this way the parser interacts much more closely with the supertaggerin effect the parser is using the grammar to decide if the categories provided by thesupertagger are acceptable and if not the parser re quests more categoriesthe parser uses the 5 levels given in table 2 starting with 01 and moving through the levels to 001k100 the advantage of this approach is that parsing speeds are much higherwe also show that our new approach slightly increases parsing accuracy over the previous methodthis suggests that given our current parsing model it is better to rely largely on the supertagger to provide the correct categoriesrather than use the parsing model to select the cor rect categories from a very large derivation space51 results the results in this section are all using the best per forming normalform model in clark and curran which corresponds to row 3 in table 3all experiments were run on a 28 ghz intel xeon p4 with 2 gb ramtable 4 gives parse times for the 2401 sentences in section 23 of ccgbankthe final two columns give the number of sentences and the number of cats 01 first 001 first word parses parses 01 14 1689 884 0 00 0075 15 43 23 7 04 005 17 51 27 39 20 001 29 79 41 1816 951 001k100 35 33 17 33 17 no span 15 08 15 08 table 5 supertagger levels used on section 00words parsed per secondfor all of the figures re ported on section 23 unless stated otherwise the parser is able to provide an analysis for 985 of the sentencesthe parse times and speeds include the failed sentences but do not include the time takenby the supertagger however the supertagger is ex tremely efficient and takes less than 6 seconds to supertag section 23 most of which consists of load time for the maximum entropy modelthe first three rows correspond to our strategy ofearlier work by starting with the least restrictive set ting of the supertaggerthe first value of is 001 if the parser cannot find a spanning analysis this ischanged to 001k100 if the node limit is ex ceeded is changed to 005if the node limit is still exceeded is changed to 0075 and finally 01the second row has the ccgbank rule restriction 
applied and the third row the eisner normalform restrictionsthe next three rows correspond to our new strat egy of starting with the least restrictive setting of thesupertagger and moving through the set tings if the parser cannot find a spanning analysisthe table shows that the normalform constraints have a significant impact on the speed reducing theparse times for the old strategy by 72 and reduc ing the times for the new strategy by 84the new strategy also has a spectacular impact on the speed compared with the old strategy reducing the times by 83 without the normalform constraints and 90 with the constraintsthe 94 coverage row corresponds to using only the first two supertagging levels the parser ignores the sentence if it cannot get an analysis at the 005 levelthe percentage of sentences without an analysis is now 6 but the parser is extremely fastprocessing almost 50 sentences a secondthis configuration of the system would be useful for obtaining data for lexical knowledge acquisition for ex ample for which large amounts of data are requiredthe oracle row shows the parser speed when it is provided with only the correct lexical categoriesthe parser is extremely fast and in clark and cur ran we show that the fscore for labelled dependencies is almost 98this demonstratesthe large amount of information in the lexical categories and the potential for improving parser ac curacy and efficiency by improving the supertaggerfinally the first parser beam row corresponds to the parser using a beam search to further reduce thederivation spacethe beam search works by prun ing categories from the chart a category can only be part of a derivation if its beam score is within some factor of the highest scoring category forthat cell in the charthere we simply use the ex ponential of the inside score of a category as the beam score the inside score for a category c is the sum over all subderivations dominated by c of the weights of the features in those subderivations 5the value of that we use here reduces the accu racy of the parser on section 00 by a small amount but has a significant impacton parser speed reducing the parse times by a fur ther 33the final parser beam row combines thebeam search with the fast reduced coverage config uration of the parser producing speeds of over 50 sentences per secondtable 5 gives the percentage of sentences which are parsed at each supertagger level for both the new and old parsing strategiesthe results show that for the old approach most of the sentences areparsed using the least restrictive setting of the supertagger conversely for the new ap proach most of the sentences are parsed using the most restrictive setting as well as investigating parser efficiency we have also evaluated the accuracy of the parser onsection 00 of ccgbank using both parsing strate gies together with the normalform constraintsthenew strategy increases the fscore over labelled de pendencies by approximately 05 leading to the figures reported in clark and curran 52 comparison with other workthe only other work we are aware of to investigate the impact of supertagging on parsing efficiency is the work of sarkar et al for ltagsarkar etaldid find that ltag supertagging increased pars ing speed but at a significant cost in coverage only 1324 sentences out of a test set of 2250 received a parsethe parse times reported are also not as good as those reported here the time taken to parse the 2250 test sentences was over 5 hours5multiplying by an estimate of the outside score may i am prove the 
efficacy of the beamkaplan et al report high parsing speedsfor a deep parsing system which uses an lfg gram mar 19 sentences per second for 560 sentencesfrom section 23 of the penn treebankthey also re port speeds for the publicly available collins parser 28 sentences per second for the same setthe best speeds we have reported for the ccg parser are an order of magnitude fasterthis paper has shown that by tightly integrating a supertagger with a ccg parser very fast parse times can be achieved for penn treebank wsj textas far as we are aware the times reported here are an orderof magnitude faster than any reported for compara ble systems using linguistically motivated grammar formalismsthe techniques we have presented inthis paper increase the speed of the parser by a factor of 77this makes this parser suitable for large scale nlp tasksthe results also suggest that further improvements can be obtained by improving the supertagger which should be possible given the simple tag ging approach currently being usedthe novel parsing strategy of allowing the grammar to decide if the supertagging is likely to be cor rect suggests a number of interesting possibilitiesin particular we would like to investigate only re pairing those areas of the chart that are most likely to contain errors rather than parsing the sentence from scratch using a new set of lexical categoriesthis could further increase parsing effficiencyacknowledgements we would like to thank julia hockenmaier whosework creating the ccgbank made this research possible and mark steedman for his advice and guid ancethis research was supported by epsrc grant grm96889 and a commonwealth scholarship and a sydney university travelling scholarship to the second author
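The multitagging strategy at the heart of this paper can be written down compactly. The following is a minimal sketch of the beta cut-off and the tag-dictionary restriction as described in the text; the probability model `cat_probs` and the dictionary data structures are assumed interfaces, not the actual implementation.

```python
# Sketch of the CCG multitagger described above: assign to each word every
# category whose probability is within a factor beta of the most probable
# category for that word, restricting candidates to the word's tag-dictionary
# entry (or, for words seen fewer than k times, to a dictionary keyed on the
# word's POS tag). cat_probs is assumed to be the maximum-entropy model.

def multitag(words, pos_tags, cat_probs, word_dict, pos_dict,
             word_counts, beta=0.1, k=20):
    assigned = []
    for i, (w, p) in enumerate(zip(words, pos_tags)):
        if word_counts.get(w, 0) >= k:
            candidates = word_dict.get(w, set())
        else:
            candidates = pos_dict.get(p, set())
        probs = {c: cat_probs(c, words, pos_tags, i) for c in candidates}
        if not probs:
            assigned.append(set())
            continue
        best = max(probs.values())
        assigned.append({c for c, pr in probs.items() if pr >= beta * best})
    return assigned
```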
C04-1041
The importance of supertagging for wide-coverage CCG parsing. This paper describes the role of supertagging in a wide-coverage CCG parser which uses a log-linear model to select an analysis. The supertagger reduces the derivation space over which model estimation is performed, reducing the space required for discriminative training. It also dramatically increases the speed of the parser. We show that large increases in speed can be obtained by tightly integrating the supertagger with the CCG grammar and parser. This is the first work we are aware of to successfully integrate a supertagger with a full parser which uses an automatically extracted grammar. We also further reduce the derivation space using constraints on category combination. The result is an accurate wide-coverage CCG parser which is an order of magnitude faster than comparable systems for other linguistically motivated formalisms. Our scores give an indication of how supertagging accuracy corresponds to overall dependency recovery. We describe two log-linear parsing models for CCG: a normal-form derivation model and a dependency model. The CCG parsing consists of two phases: first the supertagger assigns the most probable categories to each word, and then the small number of combinatory rules, plus the type-changing and punctuation rules, are used with the CKY algorithm to build a packed chart. We propose a method for integrating the supertagger with the parser: initially a small number of categories is assigned to each word, and more categories are requested if the parser cannot find a spanning analysis.
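A minimal sketch of this adaptive supertagger-parser integration follows: start with the most restrictive supertagger setting and back off to a less restrictive level only when the parser cannot find a spanning analysis. The `supertag()` and `parse()` interfaces are assumptions rather than the real API; the levels follow the (beta, tag-dictionary cutoff k) values reported in the paper.

```python
# Sketch of the tight supertagger-parser integration described above.
# parse() is assumed to return a derivation, or None when there is no
# spanning analysis at the current supertagging level.

LEVELS = [(0.1, 20), (0.075, 20), (0.05, 20), (0.01, 20), (0.01, 100)]

def parse_with_backoff(sentence, supertag, parse, levels=LEVELS):
    for beta, k in levels:
        categories = supertag(sentence, beta=beta, k=k)
        derivation = parse(sentence, categories)
        if derivation is not None:
            return derivation
    return None  # no analysis even at the least restrictive level
```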
unsupervised construction of large paraphrase corpora exploiting massively parallel news sources we investigate unsupervised techniques for acquiring monolingual sentencelevel paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of webbased news sources two techniques are employed simple string edit distance and a heuristic strategy that pairs initial sentences from different news stories in the same cluster we evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation results show that edit distance data is cleaner and more easilyaligned than the heuristic data with an overall alignment error rate of 1158 on a similarlyextracted test set on test data extracted by the heuristic strategy however performance of the two training sets is similar with aers of 132 and 147 respectively analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase the summary sentences while less readily alignable retain more of the nontrivial alternations that are of greatest interest learning paraphrase relationships the importance of learning to manipulate monolingual paraphrase relationships for applications like summarization search and dialog has been highlighted by a number of recent efforts while several different learning methods have been applied to this problem all share a need for large amounts of data in the form of pairs or sets of strings that are likely to exhibit lexical andor structural paraphrase alternationsone approach1 1 an alternative approach involves identifying anchor pointspairs of words linked in a known wayand collecting the strings that intervenesince our interest is in that has been successfully used is edit distance a measure of similarity between stringsthe assumption is that strings separated by a small edit distance will tend to be similar in meaning the leading indicators measure the economythe leading index measures the economylee barzilay for example use multi sequence alignment to build a corpus of paraphrases involving terrorist actstheir goal is to extract sentential templates that can be used in highprecision generation of paraphrase alter nations within a limited domainour goal here is rather different our interest lies in constructing a monolingual broaddomain corpus of pairwise aligned sentencessuch data would be amenable to conventional statistical machine translation techniques 2 in what follows we compare two strategies for unsupervised construction of such a corpus one employing string similarity and the other associating sentences that may overlap very little at the string levelwe measure the relative utility of the two derived monolingual corpora in the context of word alignment techniques developed originally for bilingual textwe show that although the edit distance corpus is wellsuited as training data for the alignment algorithms currently used in smt it is an incomplete source of information about paraphrase relations which exhibit many of the characteristics of comparable bilingual corpora or free translationsmany of the more complex alternations that characterize monolingual paraphrase such as largescale lexical alternations and constituent reorderings are not readily learning sentence level paraphrases including major constituent reorganizations we do not address this approach here2 barzilay mckeown consider the possibility of using smt 
machinery but reject the idea because of the noisy comparable nature of their datasetcaptured by edit distance techniques which conflate semantic similarity with formal similaritywe conclude that paraphrase research would benefit by identifying richer data sources and developing appropriate learning techniquesour two paraphrase datasets are distilled from a corpus of news articles gathered from thousands of news sources over an extended periodwhile the idea of exploiting multiple news reports for paraphrase acquisition is not new previous efforts have been restricted to at most two news sourcesour work represents what we believe to be the first attempt to exploit the explosion of news coverage on the web where a single event can generate scores or hundreds of different articles within a brief period of timesome of these articles represent minor rewrites of an original ap or reuters story while others represent truly distinct descriptions of the same basic factsthe massive redundancy of information conveyed with widely varying surface strings is a resource begging to be exploitedfigure 1 shows the flow of our data collection processwe begin with sets of preclustered urls which point to news articles on the web representing thousands of different news sourcesthe clustering algorithm takes into account the full text of each news article in addition to temporal cues to produce a set of topically and temporally related articlesour method is believed to be independent of the specific clustering technology usedthe story text is isolated from a sea of advertisements and other miscellaneous text through use of a supervised hmmaltogether we collected 11162 clusters in an 8 month period assembling 177095 articles with an average of 158 articles per clusterthe clusters are generally coherent in topic and focusdiscrete events like disasters business announcements and deaths tend to yield tightly focused clusters while ongoing stories like the sars crisis tend to produce less focused clusterswhile exact duplicate articles are filtered out of the clusters many slightlyrewritten variants remain21 extracting sentential paraphrasestwo separate techniques were employed to extract likely pairs of sentential paraphrases from these clustersthe first used string edit distance counting the number of lexical deletions and insertions needed to transform one string into anotherthe second relied on a discoursebased heuristic specific to the news genre to identify likely paraphrase pairs even when they have little superficial similaritya simple edit distance metric was used to identify pairs of sentences within a cluster that are similar at the string levelfirst each sentence was normalized to lower case and paired with every other sentence in the clusterpairings that were identical or differing only by punctuation were rejected as were those where the shorter sentence in the pair was less than two thirds the length of the longer this latter constraint in effect placing an upper bound on edit distance relative to the length of the sentencepairs that had been seen before in either order were also rejectedfiltered in this way our dataset yields 139k non identical sentence pairs at a levenshtein distance of n 123 mean levenshtein distance was 517 and mean sentence length was 186 wordswe will refer to this dataset as l12311 first sentences the second extraction technique was specifically intended to capture paraphrases which might contain very different sets of content words word order and so onsuch pairs are typically 
used to illustrate the phenomenon of paraphrase but precisely because their surface dissimilarity renders automatic discovery difficult they have generally not been the focus of previous computational approachesin order to automatically identify sentence pairs of this type we have attempted to take advantage of some of the unique characteristics of the datasetthe topical clustering is sufficiently precise to ensure that in general articles in the same cluster overlap significantly in overall semantic contenteven so any arbitrary pair of sentences from different articles within a cluster is unlikely to exhibit a paraphrase relationship the phix174 genome is short and compactthis is a robust new step that allows us to make much larger piecesto isolate just those sentence pairs that represent likely paraphrases without requiring significant string similarity we exploited a common journalistic convention the first sentence or two of 3a maximum levenshtein distance of 12 was selected for the purposes of this paper on the basis of experiments with corpora extracted at various edit distancesa newspaper article typically summarize its contentone might reasonably expect therefore that initial sentences from one article in a cluster will be paraphrases of the initial sentences in other articles in that clusterthis heuristic turns out to be a powerful one often correctly associating sentences that are very different at the string level in only 14 days us researchers have created an artificial bacteriaeating virus from synthetic genesan artificial bacteriaeating virus has been made from synthetic genes in the record time of just two weeksalso consider the following example in which related words are obscured by different parts of speech chosun ilbo one of south korea leading newspapers said north korea had finished developing a new ballistic missile last year and was planning to deploy itthe chosun ilbo said development of the new missile with a range of up to number kilometres had been completed and deployment was imminenta corpus was produced by extracting the first two sentences of each article then pairing these across documents within each clusterwe will refer to this collection as the f2 corpusthe combination of the firsttwo sentences heuristic plus topical article clusters allows us to take advantage of metainformation implicit in our corpus since clustering exploits lexical information from the entire document not just the few sentences that are our focusthe assumption that two first sentences are semantically related is thus based in part on linguistic information that is external to the sentences themselvessometimes however the strategy of pairing sentences based on their cluster and position goes astraythis would lead us to posit a paraphrase relationship where there is none terence hope should have spent most of yesterday in hospital performing brain surgerya leading brain surgeon has been suspended from work following a dispute over a bowl of soupto prevent too high an incidence of unrelated sentences one stringbased heuristic filter was found useful a pair is discarded if the sentences do not share at least 3 words of 4 charactersthis constraint succeeds in filtering out many unrelated pairs although it can sometimes be too restrictive excluding completely legitimate paraphrases there was no chance it would endanger our planet astronomers saidnasa emphasized that there was never danger of a collisionan additional filter ensured that the word count of the shorter sentence is at least 
onehalf that of the longer sentencegiven the relatively long sentences in our corpus these filters allowed us to maintain a degree of semantic relatedness between sentencesaccordingly the dataset encompasses many paraphrases that would have been excluded under a more stringent editdistance threshold for example the following nonparaphrase pair that contain an element of paraphrase a staggering number million americans have been victims of identity theft in the last five years according to federal trade commission survey out this weekin the last year alone number million people have had their identity purloinednevertheless even after filtering in these ways a significant amount of unfiltered noise remains in the f2 corpus which consisted of 214k sentence pairsout of a sample of 448 heldout sentence pairs 118 were rated by two independent human evaluators as sentencelevel paraphrases while 151 were rated as partial paraphrasesthe remaining 40 were assessed as news article clusters urls download urls isolate content sentence separate textual content of articles select and filter first sentence pairs approximately parallel monolingual corpus figure 1data collection unrelated4 thus although the f2 data set is nominally larger than the l12 data set when the noise factor is taken into account the actual number of full paraphrase sentences in this data set is estimated to be in the region of 56k sentences with a further estimated 72k sentences containing some paraphrase material that might be a potential source of alignmentsome of these relations captured in this data can be complexthe following pair for example would be unlikely to pass muster on edit distance grounds but nonetheless contains an inversion of deep semantic roles employing different lexical itemsthe hartford courant reported day that tony bryant said two friends were the killersa lawyer for skakel says there is a claim that the murder was carried out by two friends of one of skakel school classmates tony bryanthe f2 data also retains pairs like the following that involve both highlevel semantic alternations and long distance dependencies two men who robbed a jeweller shop to raise funds for the bali bombings were each jailed for number years by indonesian courts todayan indonesian court today sentenced two men to number years in prison for helping finance last year terrorist bombings in bali by robbing a jewelry storethese examples do not by any means exhaust the inventory of complex paraphrase types that are commonly encountered in the f2 datawe encounter among other things polarity alternations including those involving long distance dependencies and a variety of distributed paraphrases with alignments spanning widely separated elements32 word error alignment ratean objective scoring function was needed to compare the relative success of the two data collection strategies sketched in 211 and 212which technique produces more dataare the types of data significantly different in character or utilityin order to address such questions we used word alignment error rate a metric borrowed from the field of statistical machine translation aer measures how accurately an automatic algorithm can align words in corpus of parallel sentence pairs with a human 4 this contrasts with 167 pairs assessed as unrelated in a 10000 pair sampling of the l12 datatagged corpus of alignments serving as the gold standardparaphrase data is of course monolingual but otherwise the task is very similar to the mt alignment problem posing the same issues with 
onetomany manytomany and onemanyto null word mappingsour a priori assumption was that the lower the aer for a corpus the more likely it would be to yield learnable information about paraphrase alternationswe closely followed the evaluation standards established in melamed and och ney following och neys methodology two annotators each created an initial annotation for each dataset subcategorizing alignments as either sure or possible differences were then highlighted and the annotators were asked to review these casesfinally we combined the two annotations into a single gold standard in the following manner if both annotators agreed that an alignment should be sure then the alignment was marked as sure in the goldstandard otherwise the alignment was marked as possibleto compute precision recall and alignment error rate for the twin datasets we used exactly the formulae listed in och ney let a be the set of alignments in the comparison s be the set of sure alignments in the gold standard and p be the union of the sure and possible alignments in the gold standardthen we have precision a pa recall s sa aer sa sapa we held out a set of news clusters from our training data and randomly extracted two sets of sentence pairs for blind evaluationthe first is a set of 250 sentence pairs extracted on the basis of an edit distance of 5 n 20 arbitrarily chosen to allow a range of reasonably divergent candidate pairsthese sentence pairs were checked by an independent human evaluator to ensure that they contained paraphrases before they were tagged for alignmentsthe second set comprised 116 sentence pairs randomly selected from the set of firsttwo sentence pairsthese were likewise hand vetted by independent human evaluatorsafter an initial training pass and refinement of the linking specification interrater agreement measured in terms of aer5 was 931 for the edit distance test set versus 837 for the f2 test set suggestive of the greater variability in the latter data set33 data alignmenteach corpus was used as input to the word alignment algorithms available in giza giza is a freely available implementation of ibm models 15 and the hmm alignment along with various improvements and modifications motivated by experimentation by och ney giza accepts as input a corpus of sentence pairs and produces as output a viterbi alignment of that corpus as well as the parameters for the model that produced those alignmentswhile these models have proven effective at the word alignment task there are significant practical limitations in their outputmost fundamentally all alignments have either zero or one connection to each target wordhence they are unable to produce the manyto many alignments required to identify correspondences with idioms and other phrasal chunksto mitigate this limitation on final mappings we follow the approach of och we align once in the forward direction and again in the backward directionthese alignments can subsequently be recombined in a variety of ways 5 the formula for aer given here and in och ney is intended to compare an automatic alignment against a gold standard alignmenthowever when comparing one human against another both comparison and reference distinguish between sure and possible linksbecause the aer is asymmetric we have presented the average of the directional aerssuch as union to maximize recall or intersection to maximize precisionoch also documents a method for heuristically recombining the unidirectional alignments intended to balance precision and recallin our experience many 
alignment errors are present in one side but not the other hence this recombination also serves to filter noise from the processtable 1 shows the results of training translation models on data extracted by both methods and then tested on the blind datathe best overall performance irrespective of test data type is achieved by the l12 training set with an 1158 overall aer on the 250 sentence pair edit distance test set the f2 training data is probably too sparse and with 40 unrelated sentence pairs too noisy to achieve equally good results nevertheless the gap between the results for the two training data types is dramatically narrower on the f2 test datathe nearly comparable numbers for the two training data sets at 132 and 147 respectively suggest that the l12 training corpus provides no substantive advantage over the f2 data when tested on the more complex test datathis is particularly striking given the noise inherent in the f2 training datato explore some of the differences between the training sets we handexamined a random sample of sentence pairs from each corpus typethe most common paraphrase alternations that we observed fell into the following broad categories elaboration sentence pairs can differ in total information content with an added word phrase or clause in one sentence that has no training data type l12 f2 l12 f2 test data type 250 edit dist 250 edit dist 116 f2 heuristic 116 f2 heuristic precision 8746 8644 8507 8416 recall 8952 8264 8870 8655 aer 1158 1541 1324 1471 identical word precision 8936 8879 9292 9341 identical word recall 8950 8310 9349 9247 identical word aer 1057 1414 680 706 nonidentical word precision 7699 7186 6054 5369 nonidentical word recall 9022 6957 5950 5041 nonidentical word aer 2088 2857 3981 4746 table 1precision recall and alignment error rates for f2 and l12 counterpart in the other phrasal an entire group of words in one sentence alternates with one word or a phrase in the othersome are noncompositional idioms others involve different phrasing spelling britishamerican sources system atically differ in spellings of common words other variants also appear synonymy sentence pairs differ only in one or two words suggesting an editors hand in modifying a single source sentenceanaphora a full np in one sentence corresponds to an anaphor in the other cases of np anaphora are also common in the data but in quantifying paraphrase types we restricted our attention to the simpler case of pronominal anaphorareordering words phrases or entire constituents occur in different order in two related sentences either because of major syntactic differences or more local pragmatic choices these categories do not cover all possible alternations between pairs of paraphrased sentences moreover categories often overlap in the same sequence of wordsit is common for example to find instances of clausal reordering combined with synonymyfigure 2 shows a handaligned paraphrase pair taken from the f2 datathis pair displays one spelling alternation one reordering and one example of elaboration to quantify the differences between l12 and f2 we randomly chose 100 sentence pairs from each dataset and counted the number of times each phenomenon was encountereda given sentence pair might exhibit multiple instances of a single phenomenon such as two phrasal paraphrase changes or two synonym replacementsin this case all instances were countedlowerfrequency changes that fell outside of the above categories were not tallied for example the presence or absence of a definite article in 
figure 2 was ignoredafter summing all alternations in each sentence pair we calculated the average number of occurrences of each paraphrase type in each data setthe results are shown in table 2several major differences stand out between the two data setsfirst the f2 data is less parallel as evidenced by the higher percentage of elaborations found in those sentence pairsloss of parallelism however is offset by greater diversity of paraphrase types encountered in the f2 dataphrasal alternations are more than 4x more common and reorderings occur over 20x more frequentlythus while string difference methods may produce relatively clean training data this is achieved at the cost of filtering out common paraphrase relationshipsedit distance identifies sentence pairs that exhibit lexical and short phrasal alternations that can be aligned with considerable successgiven a large dataset and a wellmotivated clustering of documents useful datasets can be gleaned even without resorting to more sophisticated techniques figure 2sample humanaligned paraphrase l12 f2 elaboration 083 13 phrasal 014 069 spelling 012 001 synonym 018 025 anaphora 01 013 reordering 002 041 table 2mean number of instances of paraphrase phenomena per sentence however there is a disparity between the kinds of paraphrase alternations that we need to be able to align and those that we can already align well using current smt techniquesbased solely on the criterion of word aer the l12 data would seem to be superior to the f2 data as a source of paraphrase knowledgehand evaluation though indicates that many of the phenomena that we are interested in learning may be absent from this l12 datastring edit distance extraction techniques involve assumptions about the data that are inadequate but achieve high precisiontechniques like our f2 extraction strategies appear to extract a more diverse variety of data but yield more noisewe believe that an approach with the strengths of both methods would lead to significant improvement in paraphrase identification and generationin the near term however the relatively similar performances of f2 and l12trained models on the f2 test data suggest that with further refinements this more complex type of data can achieve good resultsmore data will surely helpone focus of future work is to build a classifier to predict whether two sentences are related through paraphrasefeatures might include edit distance temporaltopical clustering information information about crossdocument discourse structure relative sentence length and synonymy informationwe believe that this work has potential impact on the fields of summarization information retrieval and question answeringour ultimate goal is to apply current smt techniques to the problems of paraphrase recognition and generationwe feel that this is a natural extension of the body of recent developments in smt perhaps explorations in monolingual data may have a reciprocal impactthe field of smt long focused on closely aligned data is only now beginning to address the kinds of problems immediately encountered in monolingual paraphrase algorithms to address these phenomena will be equally applicable to both fieldsof course a broaddomain smtinfluenced paraphrase solution will require very large corpora of sentential paraphrasesin this paper we have described just one example of a class of data extraction techniques that we hope will scale to this taskacknowledgements we are grateful to the mo corstonoliver jeff stevenson and amy muia of the butler hill group for their 
work in annotating the data used in the experiments. We have also benefited from discussions with Ken Church, Mark Johnson, Daniel Marcu, and Franz Och. We remain, however, responsible for all content.
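To make the L_12 extraction concrete, here is a minimal Python sketch of the word-level edit-distance pairing described above, assuming sentences have already been grouped into clusters. The thresholds (distance between 1 and 12, shorter sentence at least two-thirds the length of the longer) follow the paper; the function names and whitespace tokenization are illustrative assumptions, and cross-cluster duplicate filtering is omitted.

```python
from itertools import combinations


def word_edit_distance(a, b):
    """Word-level edit distance counting insertions and deletions only
    (equivalent to len(a) + len(b) - 2 * LCS(a, b))."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                curr[j] = prev[j - 1]
            else:
                curr[j] = min(prev[j] + 1, curr[j - 1] + 1)
        prev = curr
    return prev[n]


def extract_l12_pairs(cluster_sentences, min_dist=1, max_dist=12, min_len_ratio=2 / 3):
    """Pair every sentence in a cluster with every other sentence; keep
    non-identical pairs whose edit distance falls within [min_dist, max_dist]
    and whose shorter member is at least min_len_ratio of the longer."""
    pairs = []
    for s1, s2 in combinations(cluster_sentences, 2):
        t1, t2 = s1.lower().split(), s2.lower().split()
        if t1 == t2:
            continue
        if min(len(t1), len(t2)) < min_len_ratio * max(len(t1), len(t2)):
            continue
        d = word_edit_distance(t1, t2)
        if min_dist <= d <= max_dist:
            pairs.append((s1, s2, d))
    return pairs
```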
C04-1051
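The alignment error rate used in the evaluation can be computed directly from an automatic alignment A and the gold sure/possible link sets S and P, following the Och and Ney formulation quoted above: precision = |A ∩ P| / |A|, recall = |A ∩ S| / |S|, and AER = 1 − (|A ∩ S| + |A ∩ P|) / (|A| + |S|). The sketch below represents alignments as sets of (source index, target index) pairs; the function name and toy data are illustrative.

```python
def alignment_scores(a, sure, possible):
    """Precision, recall, and AER for a proposed alignment `a` against gold
    sure (S) and possible (P) links; links are (source_idx, target_idx) pairs."""
    a, s = set(a), set(sure)
    p = set(possible) | s          # by convention the sure links are also possible
    precision = len(a & p) / len(a) if a else 0.0
    recall = len(a & s) / len(s) if s else 0.0
    aer = 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s)) if (a or s) else 0.0
    return precision, recall, aer


# toy usage with made-up links
gold_sure = {(0, 0), (1, 1)}
gold_possible = {(0, 0), (1, 1), (2, 1)}
hypothesis = {(0, 0), (2, 1), (3, 2)}
print(alignment_scores(hypothesis, gold_sure, gold_possible))  # (0.667, 0.5, 0.4)
```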
Unsupervised Construction of Large Paraphrase Corpora: Exploiting Massively Parallel News Sources. We investigate unsupervised techniques for acquiring monolingual sentence-level paraphrases from a corpus of temporally and topically clustered news articles collected from thousands of web-based news sources. Two techniques are employed: simple string edit distance, and a heuristic strategy that pairs initial sentences from different news stories in the same cluster. We evaluate both datasets using a word alignment algorithm and a metric borrowed from machine translation. Results show that the edit distance data is cleaner and more easily aligned than the heuristic data, with an overall alignment error rate (AER) of 11.58% on a similarly extracted test set. On test data extracted by the heuristic strategy, however, performance of the two training sets is similar, with AERs of 13.2% and 14.7% respectively. Analysis of 100 pairs of sentences from each set reveals that the edit distance data lacks many of the complex lexical and syntactic alternations that characterize monolingual paraphrase; the summary sentences, while less readily alignable, retain more of the non-trivial alternations that are of greatest interest in learning paraphrase relationships. We introduce the Microsoft Research Paraphrase Corpus, using web-aggregated news stories to learn both sentence-level and word-level alignments.
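A sketch of the first-two-sentences (F2) pairing heuristic with the two string-based filters described in the paper (the sentences must share at least three words of four or more characters, and the shorter sentence must be at least half the length of the longer). The function names and whitespace tokenization are assumptions made for illustration.

```python
from itertools import combinations


def shares_content_words(t1, t2, min_shared=3, min_word_len=4):
    """Filter: the two token lists must share at least `min_shared` words of
    at least `min_word_len` characters."""
    w1 = {w for w in t1 if len(w) >= min_word_len}
    w2 = {w for w in t2 if len(w) >= min_word_len}
    return len(w1 & w2) >= min_shared


def extract_f2_pairs(cluster_articles):
    """cluster_articles: list of articles, each a list of sentence strings.
    Cross-pair the first two sentences of each article in the cluster and
    apply the word-overlap and length-ratio filters."""
    firsts = [article[:2] for article in cluster_articles]
    pairs = []
    for doc_a, doc_b in combinations(firsts, 2):
        for s1 in doc_a:
            for s2 in doc_b:
                t1, t2 = s1.lower().split(), s2.lower().split()
                if min(len(t1), len(t2)) < 0.5 * max(len(t1), len(t2)):
                    continue
                if not shares_content_words(t1, t2):
                    continue
                pairs.append((s1, s2))
    return pairs
```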
language model adaptation for statistical machine translation via structured query models we explore unsupervised language model adaptation techniques for statistical machine translation the hypotheses from the machine translation output are converted into queries at different levels of representation power and used to extract similar sentences from very large monolingual text collection specific language models are then build from the retrieved data and interpolated with a general background model experiments show significant improvements when translating with these adapted language models language models are applied in many natural language processing applications such as speech recognition and machine translation to encapsulate syntactic semantic and pragmatic informationfor systems which learn from given data we frequently observe a severe drop in performance when moving to a new genre or new domainin speech recognition a number of adaptation techniques have been developed to cope with this situationin statistical machine translation we have a similar situation ie estimate the model parameter from some data and use the system to translate sentences which may not be well covered by the training datatherefore the potential of adaptation techniques needs to be explored for machine translation applicationsstatistical machine translation is based on the noisy channel model where the translation hypothesis is searched over the space defined by a translation model and a target language statistical machine translation can be formulated as follows is the target language model and p is the translation modelthe argmax operation is the search which is done by the decoderin the current study we modify the target language model p to represent the test data better and thereby improve the translation quality list the following approaches to language model adaptation linear interpolation of a general and a domain specific model back off of domain specific probabilities with those of a specific model retrieval of documents pertinent to the new domain and training a language model online with those data maximum entropy minimum discrimination adaptation adaptation by linear transformation of vectors of bigram counts in a reduced space smoothing and adaptation in a dual space via latent semantic analysis modeling longterm semantic dependencies and trigger combinationsour approach can be characterized as unsupervised data augmentation by retrieval of relevant documents from large monolingual corpora and interpolation of the specific language model build from the retrieved data with a background language modelto be more specific the following steps are carried out to do the language model adaptationfirst a baseline statistical machine translation system using a large general language model is applied to generate initial translationsthen these translations hypotheses are reformulated as queries to retrieve similar sentences from a very large text collectiona small domain specific language model is build using the retrieved sentences and linearly interpolated with the background language modelthis new interpolated language model in applied in a second decoding run to produce the final translationsthere are a number of interesting questions pertaining to this approach which information can and should used to generate the queries the firstbest translation only or also translation alternativeshow should we construct the queries just as simple bagofwords or can we incorporate more structure to make them more 
powerfulhow many documents should be retrieved to build the specific language models and on what granularity should this be done ie what is a document in the information retrieval processthe paper is structured as follows section 2 outlines the sentence retrieval approach and three bagofwords query models are designed and explored structured query models are introduced in section 3in section 4 we present translation experiments are presented for the different queryfinally summary is given in section 5our language model adaptation is an unsupervised data augmentation approach guided by query modelsgiven a baseline statistical machine translation system the language model adaptation is done in several steps shown as follows generate a set of initial translation hypotheses h h1 hn for source sentences s using either the baseline mt system with the background language model or only the translation model use h to build query use query to retrieve relevant sentences from the large corpus build specific language models from retrieved sentences interpolate the specific language model with the background language retranslate sentences s with adapted language model figure1 adaptation algorithm the specific language model 1 the interpolation factor can be simply estimated using cross validation or a grid searchas an alternative to using translations for the baseline system we will also describe an approach which uses partial translations of the source sentence using the translation model onlyin this case no full translation needs to be carried out in the first step only information from the translation model is usedour approach focuses on query model building using different levels of knowledge representations from the hypothesis set or from the translation model itselfthe quality of the query models is crucial to the adapted language models performancethree bagofwords query models are proposed and explained in the following sections21 sentence retrieval processin our sentence retrieval process the standard tfidf term weighting scheme is usedthe queries are built from the translation hypotheseswe follow in considering each sentence in the monolingual corpus as a document as they have shown that this gives better results compared to retrieving entire news storiesboth the query and the sentences in the text corpus are converted into vectors by assigning a term weight to each wordthen the cosine similarity is calculated proportional to the inner product of the two vectorsall sentences are ranked according to their similarity with the query and the most similar sentences are used as the data for building the specific language modelin our experiments we use different numbers of similar sentences ranting from one to several thousand22 bagofwords query modelsdifferent query models are designed to guide the data augmentation efficientlywe first define bagofwordsmodels based on different levels of knowledge collected from the hypotheses of the statistical machine translation engine221 firstbest hypothesis as a query model the firstbest hypothesis is the viterbi path in the search space returned from the statistical machine translation decoderit is the optimal hypothesis the statistical machine translation system can generate using the given translation and language model and restricted by the applied pruning strategyignoring word order the hypothesis is converted into a bagofwords representation which is then used as a query in the querythirdly most of the hypotheses are only different from each other in one 
word or twothis means there is not so much noise and variance introduced in this query model223 translation model as a query model to fully leverage the available knowledge from the translation system the translation model can be used to guide the language model adaptation processas introduced in section 1 the translation model represents the full knowledge of translating words as it encodes all possible translations candidates for a given source sentencethus the query model based on the translation model has potential advantages over both 1tq and tnq to utilize the translation model all the ngrams from the source sentence are extracted and the corresponding candidate translations are collected from the translation modelthese are then converted into a bagofwords representation as follows in queries so that it is possible to model the syntactic and semantic information encoded in phrases ngrams and cooccurred word pairsthe inquery implementation is appliedso far 16 operators are defined in inquery to model word proximity four of these operators are used specially for our language model adaptation sum operator sum the terms or nodes are treated as having equal influence on the final retrieval resultthe belief values provided by the arguments of the sum are averaged to produce the belief value of the sum nodeweighted sum operator wsum the terms or nodes contribute unequally to the final result according to the weight associated with each it ordered distance operator n the terms must be found within n words of each other in the text in order to contribute to the document belief valuean ngram phrase can be modeled as an ordered distance operator with nn unordered distance operator uwn the terms contained must be found in any order within a window of n words in order for this operator to contribute to the belief value of the document32 structured query modelsgiven the representation power of the structured query language the top1 hypothesis topn best hypothesis list and the translation lattice can be converted into three structured query models respectivelyfor firstbest and nbest hypotheses we collect related target ngrams of a given source word according to the alignments generated in the viterbi decoding processwhile for the translation lattice similar to the construction of tmq we collect all the source ngrams and translate them into target ngramsin either case we get a set of target ngrams for each source wordthe structured query model for the whole source sentence is a collection of such subsets of target n grams 21 issssttttq vlvv is t v is a set of target ngrams for the source word is 311211 lll v gramiiigramiigramis ttttttt i in our experiments we consider up to trigram for better retrieval efficiency but higher order ngrams could be used as willthe second simplification is that every source word is equally important thus each ngram subset is t v will have an equal contribution to the final retrieval resultsthe last simplification is each ngram within the set of is t v has an equal weight ie we do not use the translation probabilities of the translation modelif the system is a phrasebased translation system we can encode the phrases using the ordered distance operator with n equals to the number of the words of that phrase which is denoted as the phrase operator in inquery implementationthe 2grams and 3grams can be encoded using this operator toothus our final structured query model is a sum operator over a set of nodeseach node corresponds to a source wordusually each source word has a 
number of translation candidates each node is a weighted sum over all translation candidates weighted by their frequency in the hypothesis setan example is shown below where phrase indicates the use of the ordered distance operator with varying n qsum wsum 1 american 1 phrase wsum wsum wsum 5 primary experiments are carried out on a standard statistical machine translation task defined in the nist evaluation in june 2002there are 878 test sentences in chinese and each sentence has four human translations as referencesnist score and bleu score of mteval version 9 are reported to evaluate the translation quality41 baseline translation systemour baseline system gives scores of 780 nist and 01952 bleu for top1 hypothesis which is comparable to the best results reported on this taskfor the baseline system we built a translation model using 284k parallel sentence pairs and a trigram language model from a 160 million words general english news text collectionthis lm is the background model to be adaptedwith the baseline system the nbest hypotheses list and the translation lattice are extracted to build the query modelsexperiments are carried out on the adapted language model using the three bagof words query models 1tq tnq and tmq and the corresponding structured query models42 data gigaword corporathe socalled gigaword corpora are very large english news text collectionsthere are four distinct international sources of english newswire afe agence france press english service apw associated press worldstream english service nyt the new york times newswire service xie the xinhua news agency english service table1 shows the size of each part in word countsafe apw nyt xie 170969k 539665k 914159k 131711k table1 number of words in the different gigaword corpora as the lemur toolkit could not handle the two large corpora we used only 200 million words from each of these two corporain the preprocessing all words are lowercased and punctuation is separatedthere is no explicit removal of stop words as they usually fade out by tfidf weights and our experiments showed not positive effects when removing stop words43 bagofwords query modelstable2 shows the size of 1tq tnq and tmq in terms of number of tokens in the 878 queries 1tq tnq tmq q 25861 231834 3412512 table2 query size in number of tokens as words occurring several times are reduced to wordfrequency pairs the size of the queries generated from the 100best translation lists is only 9 times as big as the queries generated from the firstbest translationsthe queries generated from the translation model contain many more translation alternatives summing up to almost 34 million tokensusing the lattices the whole information of the translation model is kept431 results for query 1tq in the first experiment we used the firstbest translations to generate the queriesfor each of the 4 corpora different numbers of similar sentences were retrieved to build specific language modelsfigure2 shows the language model adaptation after tuning the interpolation factor by a grid search over 01typically is around 0801bestnist scores 77500 78000 78500 79000 79500 80000 afe apw nyt xie top1 top10 top100 top1000 baseline 1bestbleuscores 01900 01920 01940 01960 01980 02000 02020 02040 afe apw nyt xie top1 top10 top100 top1000 baseline figure2 nist and bleu scores 1tq we see that each corpus gives an improvement over the baselinethe best nist score is 794 and the best bleu score is 02018both best scores are realized using top 100 relevant sentences corpus per source sentence 
mined from the afe432 results for query tnq figure3 shows the results for the query model tnq the best results are 799 nist score and 02022 bleu scorethese improvements are statistically significantboth scores are achieved at the same settings as those in 1tq ie using top 100 retrieved relevant sentences mined from the afe corpus100bestnistscores 77500 78000 78500 79000 79500 80000 afe apw nyt xie top1 top10 top100 top1000 baseline 100bestbleuscores 01900 01920 01940 01960 01980 02000 02020 02040 afe apw nyt xie top1 top10 top100 top1000 baseline figure3 nist and bleu scores from tnq using the translation alternatives to retrieve the data for language model adaptation gives an improvement over using the firstbest translation only for query constructionusing only one translation hypothesis to build an adapted language model has the tendency to reinforce that translation433 results for query tmq the third bagofwords query model uses all translation alternatives for source words and source phrasesfigure4 shows the results of this query model tmq the best results are 791 nist score and 01995 bleufor this query model best results were achieved using the top 1000 relevant sentences mined from the afe corpus per source sentencethe improvement is not as much as the other two query modelsthe reason is probably that all translation alternatives even wrong translations resulting from errors in the word and phrase alignment contribute alike to retrieve similar sentencesthereby an adapted language model is built which reinforces not only good translations but also bad translationsall the three query models showed improvements over the baseline system in terms of nist and bleu scoresthe best bagofwords query model is tnq built from the nbest listit provides a good balance between incorporating translation alternatives in the language model adaptation process and not reinforcing wrong translationslatticenistscores 77500 78000 78500 79000 79500 80000 afe apw nyt xie top1 top10 top100 top1000 baseline latticebleuscores 01900 01920 01940 01960 01980 02000 02020 02040 afe apw nyt xie top1 top10 top100 top1000 baseline figure4 nist and bleu scores from tmq 44 structured query modelsthe next series of experiments was done to study if using word order information in constructing the queries could help to generate more effective adapted language modelsby using the structured query language we converted the same firstbest hypothesis the 100best list and the translation lattice into structured query modelsresults are reported for the afe corpus only as this corpus gave best translation scoresfigure5 shows the results for all three structured query models built from the firstbest hypothesis the 100 best hypotheses list and translation lattice using these query models different numbers of most similar sentences ranging from 100 to 4000 where retrieved from the afe corpusthe given baseline results are the best results achieved from the corresponding bagofwords query modelsconsistent improvements were observed on nist and bleu scoresagain optimal interpolation factors to interpolate the specific language models with the background language model were used which typically were in the range of 06 07structured query models give most improvements when using more sentences for language model adaptationthe effect is more pronounced for bleu then for nist scorestructured querynistscores 77500 78000 78500 79000 79500 80000 80500 81000 81500 baseline top100 top500 top1000 top2000 top4000 1best 100best tmlattice structured 
querybleuscores 01920 01940 01960 01980 02000 02020 02040 02060 02080 baseline top100 top500 top1000 top2000 top4000 1best 100best tmlattice figure5 nist and bleu scores from the structured query models the really interesting result is that the structured query model tmq gives now the best translation resultsadding word order information to the queries obviously helps to reduce the noise in the retrieved data by selecting sentences which are closer to the good translations the best results using the adapted language models are nist score 812 for using the 2000 most similar sentences whereas bleu score goes up to 02068 when using 4000 sentences for language model adaptation45 exampletable3 shows translation examples for the 17th chinese sentence in the test setwe applied the baseline system the bagofword query model and the structured query model using afe corpusref the police has already blockade the scene of the explosionbase at present the police had cordoned off the explosionhyp1 at present police have sealed off the explosionhyp2 currently police have blockade on the scene of the explosiontable3 translation examples 46 oracle experiment finally we run an oracle experiments to see how much improvement could be achieved if we only selected better data for the specific language modelswe converted the four available reference translations into structured query models and retrieved the top 4000 relevant sentences from afe corpus for each source sentenceusing these language models interpolated with the background language model gave a nist score of 867 and a bleu score of 02228this result indicates that there is room for further improvements using this language model adaptation techniquethe oracle experiment suggests that better initial translations lead to better language models and thereby better 2nd iteration translationsthis lead to the question if we can iterate the retrieval process several times to get further improvement or if the observed improvement results form using for translations which have more diversity than the translations in an nbest liston the other side the oracle experiment also shows that the optimally expected improvement is limited by the translation model and decoding algorithm used in the current smt systemin this paper we studied language model adaptation for statistical machine translationextracting sentences most similar to the initial translations building specific language models for each sentence to be translated and interpolating those with the background language models gives significant improvement in translation qualityusing structured query models which capture word order information leads to better results that plain bag of words modelsthe results obtained suggest a number of extensions of this work the first question is if more data to retrieve similar sentences from will result in even better translation qualitya second interesting question is if the translation probabilities can be incorporated into the queriesthis might be especially useful for structured query models generated from the translation lattices
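A minimal sketch of the retrieval step described above: every sentence of the monolingual collection is treated as a document, the query and the sentences are mapped to TF-IDF vectors, and the top-k sentences by cosine similarity are returned as material for the sentence-specific language model. This plain-Python implementation is illustrative only; the experiments used the Lemur toolkit.

```python
import math
from collections import Counter


def build_tfidf_index(sentences):
    """Treat each sentence of the monolingual collection as a document and
    return its TF-IDF vector plus the IDF table."""
    docs = [Counter(s.lower().split()) for s in sentences]
    df = Counter()
    for d in docs:
        df.update(d.keys())
    n = len(docs)
    idf = {w: math.log(n / df[w]) for w in df}
    vectors = [{w: tf * idf[w] for w, tf in d.items()} for d in docs]
    return vectors, idf


def cosine(u, v):
    num = sum(weight * v[w] for w, weight in u.items() if w in v)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0


def retrieve_similar_sentences(query_tokens, sentences, vectors, idf, k=100):
    """Rank the corpus sentences by cosine similarity to a bag-of-words query
    built from the MT hypotheses and return the top k."""
    q = Counter(w.lower() for w in query_tokens)
    q_vec = {w: tf * idf.get(w, 0.0) for w, tf in q.items()}
    scored = sorted(((cosine(q_vec, v), s) for v, s in zip(vectors, sentences)), reverse=True)
    return [s for _, s in scored[:k]]
```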
C04-1059
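The adapted model is a linear interpolation of the sentence-specific language model with the background model, P_adapted(w | h) = λ·P_specific(w | h) + (1 − λ)·P_background(w | h), with λ found by a grid search (the experiments report values around 0.6–0.8). The sketch below assumes both models expose a conditional-probability method; the class and method names are hypothetical, and for simplicity λ is tuned on held-out log-likelihood rather than on translation quality as in the paper.

```python
import math


class InterpolatedLM:
    """Linear interpolation of a sentence-specific LM with the background LM."""

    def __init__(self, specific_lm, background_lm, lam):
        self.specific, self.background, self.lam = specific_lm, background_lm, lam

    def prob(self, word, history):
        # hypothetical interface: both wrapped models expose prob(word, history)
        return (self.lam * self.specific.prob(word, history)
                + (1.0 - self.lam) * self.background.prob(word, history))


def grid_search_lambda(specific_lm, background_lm, dev_sentences, step=0.1):
    """Pick the interpolation weight that maximizes held-out log-likelihood
    (the paper instead tunes it against translation quality)."""
    best_lam, best_ll = 0.0, float("-inf")
    steps = int(round(1.0 / step))
    for i in range(steps + 1):
        lam = i * step
        lm = InterpolatedLM(specific_lm, background_lm, lam)
        ll = 0.0
        for sent in dev_sentences:
            words = sent.split()
            for j, w in enumerate(words):
                history = tuple(words[max(0, j - 2):j])   # trigram history
                ll += math.log(max(lm.prob(w, history), 1e-12))
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
```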
Language Model Adaptation for Statistical Machine Translation via Structured Query Models. We explore unsupervised language model adaptation techniques for statistical machine translation. The hypotheses from the machine translation output are converted into queries at different levels of representation power and used to extract similar sentences from a very large monolingual text collection. Specific language models are then built from the retrieved data and interpolated with a general background model. Experiments show significant improvements when translating with these adapted language models. We apply a slightly different sentence-level strategy to language model adaptation: first generating an n-best list with a baseline system, then finding similar sentences in a monolingual target-language corpus. In other words, we construct specific language models by converting initial SMT hypotheses into queries and retrieving similar sentences from large monolingual corpora.
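One way to assemble a structured query from the per-source-word sets of candidate target n-grams is sketched below, following the operator inventory described in the paper: a sum over source positions, a weighted sum over each word's candidate n-grams weighted by their frequency in the hypothesis set, and an ordered phrase operator for multi-word n-grams. The emitted textual syntax is illustrative rather than exact InQuery syntax.

```python
from collections import Counter


def ngram_node(ngram):
    """Render one candidate: a plain term for unigrams, an ordered phrase
    operator for longer n-grams."""
    words = ngram.split()
    return words[0] if len(words) == 1 else "#phrase( " + " ".join(words) + " )"


def build_structured_query(candidates_per_source_word):
    """candidates_per_source_word: one list of candidate target n-grams per
    source word, with repeats reflecting frequency in the n-best list."""
    nodes = []
    for candidates in candidates_per_source_word:
        counts = Counter(candidates)
        weighted = " ".join(f"{freq} {ngram_node(ng)}" for ng, freq in counts.items())
        nodes.append(f"#wsum( {weighted} )")
    return "#sum( " + " ".join(nodes) + " )"


# toy usage: two source words, the second with two candidate translations
print(build_structured_query([["american"], ["primary", "primary election", "primary"]]))
```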
orange a method for evaluating automatic evaluation metrics for machine translation comparisons of automatic evaluation metrics for machine translation are usually conducted on corpus level using correlation statistics such as pearsons product moment correlation coefficient or spearmans rank order correlation coefficient between human scores and automatic scores however such comparisons rely on human judgments of translation qualities such as adequacy and fluency unfortunately these judgments are often inconsistent and very expensive to acquire in this paper we introduce a new evaluation method orange for evaluating automatic machine translation evaluation metrics automatically without extra human involvement other than using a set of reference translations we also show the results of comparing several existing automatic metrics and three new automatic metrics using orange to automatically evaluate machine translations the machine translation community recently adopted an ngram cooccurrence scoring procedure bleu a similar metric nist used by nist in a couple of machine translation evaluations in the past two years is based on bleuthe main idea of bleu is to measure the translation closeness between a candidate translation and a set of reference translations with a numerical metricalthough the idea of using objective functions to automatically evaluate machine translation quality is not new the success of bleu prompts a lot of interests in developing better automatic evaluation metricsfor example akiba et al proposed a metric called red based on edit distances over a set of multiple referencesnieen et al calculated the length normalized edit distance called word error rate between a candidate and multiple reference translationsleusch et al proposed a related measure called position independent word error rate that did not consider word position ie using bagofwords insteadturian et al introduced general text matcher based on accuracy measures such as recall precision and fmeasurewith so many different automatic metrics available it is necessary to have a common and objective way to evaluate these metricscomparison of automatic evaluation metrics are usually conducted on corpus level using correlation analysis between human scores and automatic scores such as bleu nist wer and perhowever the performance of automatic metrics in terms of human vs system correlation analysis is not stable across different evaluation settingsfor example table 1 shows the pearsons linear correlation coefficient analysis of 8 machine translation systems from 2003 nist chinese english machine translation evaluationthe pearsoncorrelation coefficients are computed according to different automatic evaluation methods vs human assigned adequacy and fluencybleu1 4 and 12 are bleu with maximum ngram lengths of 1 4 and 12 respectivelygtm10 20 and 30 are gtm with exponents of 10 20 and 30 respectively95 confidence intervals are estimated using bootstrap resampling from the bleu group we found that shorter bleu has better adequacy correlation while longer bleu has better fluency correlationgtm with smaller exponent has better adequacy correlation and gtm with larger exponent has better fluency correlationnist is very good in adequacy correlation but not as good as gtm30 in fluency correlationbased on these observations we are not able to conclude which metric is the best because it depends on the manual evaluation criteriathis results also indicate that high correlation between human and automatic scores in both adequacy and 
fluency cannot always been achieved at the same timethe best performing metrics in fluency according to table 1 are bleu12 and gtm30 however many metrics are statistically equivalent to them when we factor in the 95 confidence intervalsfor example even per is as good as bleu12 in adequacyone reason for this might be due to data sparseness since only 8 systems are availablethe other potential problem for correlation analysis of human vs automatic framework is that high corpuslevel correlation might not translate to high sentencelevel correlationhowever high sentencelevel correlation is often an important property that machine translation researchers look forfor example candidate translations shorter than 12 words would have zero bleu12 score but bleu12 has the best correlation with human judgment in fluency as shown in table 1in order to evaluate the ever increasing number of automatic evaluation metrics for machine translation objectively efficiently and reliably we introduce a new evaluation method orangewe describe orange in details in section 2 and briefly introduce three new automatic metrics that will be used in comparisons in section 3the results of comparing several existing automatic metrics and the three new automatic metrics using orange will be presented in section 4we conclude this paper and discuss future directions in section 5intuitively a good evaluation metric should give higher score to a good translation than a bad onetherefore a good translation should be ranked higher than a bad translation based their scoresone basic assumption of all automatic evaluation metrics for machine translation is that reference translations are good translations and the more a machine translation is similar to its reference translations the betterwe adopt this assumption and add one more assumption that automatic translations are usually worst than their reference translationstherefore reference translations should be ranked higher than machine translations on average if a good automatic evaluation metric is usedbased on these assumptions we propose a new automatic evaluation method for evaluation of automatic machine translation metrics as follows given a source sentence its machine translations and its reference translations we compute the average rank of the reference translations within the combined machine and reference translation listfor example a statistical machine translation system such as isis altemp smt system can generate a list of nbest alternative translations given a source sentencewe compute the automatic scores for the nbest translations and their reference translationswe then rank these translations calculate the average rank of the references in the nbest list and compute the ratio of the average reference rank to the length of the nbest listwe call this ratio orange and the smaller the ratio is the better the automatic metric is there are several advantages of the proposed orange evaluation method no extra human involvement orange uses the existing human references but not human evaluationsapplicable on sentencelevel diagnostic error analysis on sentencelevel is naturally providedthis is a feature that many machine translation researchers look formany existing data points every sentence is a data point instead of every system for example there are 919 sentences vs 8 systems in the 2003 nist chineseenglish machine translation evaluationonly one objective function to optimize minimize a single orange score instead of maximize pearsons correlation coefficients between 
automatic scores and human judgments in adequacy fluency or other quality metricsa natural fit to the existing statistical machine translation framework a metric that ranks a good translation high in an n best list could be easily integrated in a minimal error rate statistical machine translation training framework the overall system performance in terms of 1 oracles refer to the reference translations used inthe evaluation proceduremethod pearson 95l 95you pearson 95l 95you bleu1 086 083 089 081 075 086 bleu4 077 072 081 086 081 090 bleu12 066 060 072 087 076 093 nist 089 086 092 081 075 087 wer 047 041 053 069 062 075 per 067 062 072 079 074 085 gtm10 082 079 085 073 066 079 gtm20 077 073 081 086 081 090 gtm30 074 070 078 087 081 091 adequacy fluency table 1pearson correlation analysis of 8 machine translation systems in 2003 nist chineseenglish machine translation evaluationgenerating more human like translations should also be improvedbefore we demonstrate how to use orange to evaluate automatic metrics we briefly introduce three new metrics in the next sectionrougel and rouges are described in details in lin and och since these two metrics are relatively new we provide short summaries of them in section 31 and section 33 respectivelyrougew an extension of rougel is new and is explained in details in section 3231 rougel longest common subsequence given two sequences x and y the longest common subsequence of x and y is a common subsequence with maximum length to apply lcs in machine translation evaluation we view a translation as a sequence of wordsthe intuition is that the longer the lcs of two translations is the more similar the two translations arewe propose using lcsbased fmeasure to estimate the similarity between two translations x of length m and y of length n assuming x is a reference translation and y is a candidate translation as follows rlcs m yxlcs plcs n yxlcs flcs lcslcs lcslcs pr pr 2 2 1 where lcs is the length of a longest common subsequence of x and y and plcsrlcs when flcsrlcs__flcsplcswe call the lcs based fmeasure ie equation 3 rougelnotice that rougel is 1 when x y since lcs m or n while rougel is zero when lcs 0 ie there is nothing in common between x and y one advantage of using lcs is that it does not require consecutive matches but insequence matches that reflect sentence level word order as n gramsthe other advantage is that it automatically includes longest insequence common ngrams therefore no predefined ngram length is necessaryby only awarding credit to insequence unigram matches rougel also captures sentence level structure in a natural wayconsider the following example s1police killed the gunman s2police kill the gunman s3the gunman kill police using s1 as the reference translation s2 has a rougel score of 34 075 and s3 has a rouge l score of 24 05 with 1therefore s2 is better than s3 according to rougelthis example illustrated that rougel can work reliably at sentence levelhowever lcs suffers one disadvantage it only counts the main in sequence words therefore other alternative lcses and shorter sequences are not reflected in the final scorein the next section we introduce rougew32 rougew weighted longest commonsubsequence lcs has many nice properties as we have described in the previous sectionsunfortunately the basic lcs also has a problem that it does not differentiate lcses of different spatial relations within their embedding sequencesfor example given a reference sequence x and two candidate sequences y1 and y2 as follows x a b c d e f g y1 a b c 
d h i k y2 a h b k c i d y1 and y2 have the same rougel scorehowever in this case y1 should be the better choice than y2 because y1 has consecutive matchesto improve the basic lcs method we can simply remember the length of consecutive matches encountered so far to a regular two dimensional dynamic program table computing lcswe call this weighted lcs and use k to indicate the length of the current consecutive matches ending at words xi and yjgiven two sentences x and y the recurrent relations can be written as follows if xi yj then the length of consecutive matches at position i1 and j1 k w c c f f remember the length of consecutive matches at position i j w k1 otherwise if c c then c c w 0 no match at i j else c c w 0 no match at i j wlcs c where c is the dynamic programming table 0 f f for any positive integers x and y in other words consecutive matches are awarded more scores than nonconsecutive matchesfor example fk when k 0 and 0this function charges a gap penalty of for each nonconsecutive ngram sequencesanother possible function family is the polynomial family of the form kwhere 1however in order to normalize the final rougew score we also prefer to have a function that has a close form inverse functionfor example fk2 has a close form inverse function f 1k12fmeasure based on wlcs can be computed as follows given two sequences x of length m and y of length n rwlcs pwlcs fwlcs wlcswlcs wlcswlcs pr pr 2 2 1 f 1 is the inverse function of f we call the wlcsbased fmeasure ie equation 6 rouge w using equation 6 and fk2 as the weighting function the rougew scores for sequences y1 and y2 are 0571 and 0286 respectivelytherefore y1 would be ranked higher than y2 using wlcswe use the polynomial function of the form kin the experiments described in section 4 with the weighting factor varying from 11 to 20 with 01 incrementrougew is the same as rougel when is set to 1in the next section we introduce the skipbigram cooccurrence statistics33 rouges skipbigram cooccurrencestatistics skipbigram is any pair of words in their sentence order allowing for arbitrary gapsskipbigram cooccurrence statistics measure the overlap of skip bigrams between a candidate translation and a set of reference translationsusing the example given in section 31 s1police killed the gunman s2police kill the gunman s3the gunman kill police s4the gunman police killed each sentence has c2 6 skipbigramsfor example s1 has the following skipbigrams given translations x of length m and y of length n assuming x is a reference translation and y is a candidate translation we compute skipbigram based fmeasure as follows rskip2 2 pskip2 2 fskip2 2 2 2 22 2 1 where skip2 is the number of skipbigram matches between x and y pskip2rskip2 when fskip2rskip2__fskip2pskip2 and c is the combination functionwe call the skipbigram based fmeasure ie equation 9 rougesusing equation 9 with 1 and s1 as the reference s2s rouges score is 05 s3 is 0167 and s4 is 0333therefore s2 is better than s3 and s4 and s4 is better than s3one advantage of skipbigram vs bleu is that it does not require consecutive matches but is still sensitive to word ordercomparing skipbigram with lcs skipbigram counts all inorder matching word pairs while lcs only counts one longest common subsequencewe can limit the maximum skip distance between two inorder words to control the admission of a skipbigramwe use skip distances of 1 to 9 with increment of 1 and without any skip distance constraint in the next section we present the evaluations of bleu nist per wer rougel rougew and 
rouges using the orange evaluation method described in section 22 combinations c 4 6comparing automatic evaluation metrics using the orange evaluation method is straightforwardto simulate real world scenario we use nbest lists from isis stateoftheart statistical machine translation system altemp and the 2002 nist chineseenglish evaluation corpus asthe test corpusthere are 878 source sentences in chinese and 4 sets of reference translations provided by ldc3for exploration study we generate 1024best list using altemp for 872 source sentencesaltemp generates less than 1024 alternative translations for 6 out of the 878 source 3 linguistic data consortium prepared these manualtranslations as part of the darpas tides projectsentencesthese 6 source sentences are excluded from the 1024best setin order to compute bleu at sentence level we apply the following smoothing technique add one count to the ngram hit and total n gram count for n 1therefore for candidate translations with less than n words they can still get a positive smoothed bleu score from shorter ngram matches however if nothing matches then they will get zero scoreswe call the smoothed bleu bleusfor each candidate translation in the 1024best list and each reference we compute the following scores 1bleus1 to 92 nist per and wer3rougel4rougew with weight ranging from 11to 20 with increment of 01ranging from 0 to 9 and without any skip distance limit we compute the average score of the references and then rank the candidate translations and the references according to these automatic scoresthe orange score for each metric is calculated as the average rank of the average reference score over the whole corpus divided by the length of the nbest list plus 1assuming the length of the nbest list is n and the size of the corpus is s we compute orange as follows orange 1 rank is the average rank of source sentence is reference translations in nbest list i table 2 shows the results for bleus1 to 9to assess the reliability of the results 95 confidence intervals of average rank of the oracles are method orange average rank 95cil 95ciyou bleus1 3539 363 337 387 bleus2 2551 261 239 283 bleus3 2374 243 221 267 bleus4 2313 237 215 258 bleus5 2313 237 215 260 bleus6 2291 235 211 257 bleus7 2298 236 213 258 bleus8 2320 238 214 261 bleus9 2356 241 218 265 table 2orange scores for bleus1 to 9method pearson 95l 95you pearson 95l 95you bleus1 087 084 090 083 077 088 bleus2 084 081 087 085 080 090 bleus3 080 076 084 087 082 091 bleus4 076 072 080 088 083 092 bleus5 073 069 078 088 083 091 bleus6 070 065 075 087 082 091 bleus7 065 060 070 085 080 089 bleus8 058 052 064 082 076 086 bleus9 050 044 057 076 070 082 adequacy fluency table 3pearson correlation analysis bleus1 to 9 vs adequacy and fluency of 8 machine translation systems in 2003 nist chineseenglish machine translation evaluationmethod orange average rank 95cil 95ciyou rougel 2056 211 190 234 rougew11 2045 210 189 232 rougew12 2047 210 186 230 rougew13 2069 212 188 234 rougew14 2091 214 191 238 rougew15 2117 217 196 241 rougew16 2147 220 199 242 rougew17 2172 223 200 245 rougew18 2188 224 204 246 rougew19 2204 226 203 249 rougew20 2225 228 206 250 table 4orange scores for rougel and rougew11 to 20method orange average rank 95cil 95ciyou rouges0 2515 258 234 280 rouges1 2244 230 209 253 rouges2 2038 209 186 231 rouges3 1981 203 183 226 rouges4 1966 202 177 224 rouges5 1995 204 184 226 rouges6 2032 208 187 230 rouges7 2077 213 191 236 rouges8 2142 220 198 242 rouges9 2192 225 204 247 rouges 2743 281 259 
304 table 5orange scores for rouges1 to 9 and rougesestimated using bootstrap resampling according to table 2 bleus6 is the best performer among all bleuses but it is statistically equivalent to bleus3 4 5 7 8 and 9 with 95 of confidencetable 3 shows pearsons correlation coefficient for bleus1 to 9 over 8 participants in 2003 nist chineseenglish machine translation evaluationaccording to table 3 we find that shorter bleus has better correlation with adequacyhowever correlation with fluency increases when longer n gram is considered but decreases after bleus5there is no consensus winner that achieves best correlation with adequacy and fluency at the same timeso which version of bleus should we usea reasonable answer is that if we would like to optimize for adequacy then choose bleus1 however if we would like to optimize for fluency then choose bleus4 or bleus5according to table 2 we know that bleus6 on average places reference translations at rank 235 in a 1024best list machine translations that is significantly better than bleus1 and bleus2therefore we have better chance of finding more humanlike translations on the top of an nbest list by choosing bleus6 instead of bleus2to design automatic metrics better than bleus6 we can carry out error analysis over the machine translations that are ranked higher than their referencesbased on the results of error analysis promising modifications can be identifiedthis indicates that the orange evaluation method provides a natural automatic evaluation metric development cycletable 4 shows the orange scores for rougel and rougew11 to 20rougew 11 does have better orange score but it is equivalent to other rougew variants and rougeltable 5 lists performance of different rouges variantsrouges4 is the best performer but is only significantly better than rouges0 rouges1 rouges9 and rougesthe relatively worse performance of rouges might to due to spurious matches such as the theor the oftable 6 summarizes the performance of 7 different metricsrouges4 is the best with an orange score of 1966 that is statistically equivalent to rougel and rouge w11 and is significantly better than bleus6 nist per and weramong them per is the worstto examine the length effect of nbest lists on the relative performance of automatic metrics we use the altemp smt system to generate a 16384 best list and compute orange scores for bleus4 per wer rougel rougew12 and rouges4only 474 source sentences that have more than 16384 alternative translations are used in this experimenttable 7 shows the resultsit confirms that when we extend the length of the n best list to 16 times the size of the 1024best the relative performance of each automatic evaluation metric group stays the samerouges4 is still the best performerfigure 1 shows the trend of orange scores for these metrics over nbest list of n from 1 to 16384 with length increment of 64it is clear that relative performance of these metrics stay the same over the entire range5 conclusionin this paper we introduce a new automatic evaluation method orange to evaluate automatic evaluation metrics for machine translationswe showed that the new method can be easily implemented and integrated with existing statistical machine translation frameworksorange assumes a good automatic evaluation metric should assign high scores to good translations and assign low scores to bad translationsusing reference translations as examples of good translations we measure the quality of an automatic evaluation metric based on the average rank of the references 
within a list of alternative machine translations comparing with traditional approaches that require human judgments on adequacy or fluency orange requires no extra human involvement other than the availability of reference translations it also streamlines the process of design and error analysis for developing new automatic metrics using orange we have only one parameter ie orange itself to optimize vs two in correlation analysis using human assigned adequacy and fluency by examining the rank position of the references we can easily identify the confusion set of the references and propose new features to improve automatic metrics

table 6 summary of orange scores for 7 automatic evaluation metrics
method      orange  average rank  95 ci lower  95 ci upper
bleus6      2291    235           211          257
nist        2970    304           280          328
per         3684    378           350          403
wer         2390    245           222          268
rougel      2056    211           190          234
rougew11    2045    210           189          232
rouges4     1966    202           177          224

table 7 summary of orange scores for 6 automatic evaluation metrics
method      orange  average rank  95 ci lower  95 ci upper
bleus4      1827    2993          2607         3474
per         2895    4744          4245         5292
wer         1936    3172          2748         3639
rougel      1622    2657          2259         3072
rougew12    1587    2600          2216         2989
rouges4     1492    2444          2028         2860

one caveat of the orange method is what if machine translations are as good as reference translations to rule out this scenario we can sample instances where machine translations are ranked higher than human translations we then check the portion of the cases where machine translations are as good as the human translations if the portion is small then the orange method can be confidently applied we conjecture that this is the case for the currently available machine translation systems however we plan to conduct the sampling procedure to verify this is indeed the case
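To make the ORANGE computation described above concrete, here is a minimal Python sketch. It assumes the per-sentence metric scores for the n-best candidates and for the reference translations have already been computed; how the references themselves are scored (for example leave-one-out against the other references) and how ties are broken are simplifying assumptions, not the authors' implementation.

# a minimal sketch of the orange computation described above; the data layout,
# how the references themselves are scored, and the tie-breaking rule are
# illustrative assumptions rather than the authors' implementation.

def orange_score(nbest_scores, reference_scores):
    """nbest_scores: one list of metric scores per source sentence, covering
    its n-best candidate translations.
    reference_scores: a parallel list holding the metric scores of that
    sentence's reference translations.
    returns the orange score (lower means the metric ranks references higher)."""
    total = 0.0
    for cand_scores, ref_scores in zip(nbest_scores, reference_scores):
        avg_ref = sum(ref_scores) / len(ref_scores)
        # rank of the average reference score within the n-best list
        # (rank 1 means it outscores every candidate)
        rank = 1 + sum(1 for s in cand_scores if s > avg_ref)
        total += rank / (len(cand_scores) + 1)
    return total / len(nbest_scores)

Lower ORANGE values indicate that the metric places the reference translations nearer the top of the n-best list, which is the behaviour the method rewards.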
C04-1072
orange a method for evaluating automatic evaluation metrics for machine translation comparisons of automatic evaluation metrics for machine translation are usually conducted at the corpus level using correlation statistics such as pearsons product moment correlation coefficient or spearmans rank order correlation coefficient between human scores and automatic scores however such comparisons rely on human judgments of translation qualities such as adequacy and fluency unfortunately these judgments are often inconsistent and very expensive to acquire in this paper we introduce a new evaluation method orange for evaluating automatic machine translation evaluation metrics automatically without extra human involvement other than using a set of reference translations we also show the results of comparing several existing automatic metrics and three new automatic metrics using orange bleu is smoothed and it considers only matching up to bigrams because this has higher correlations with human judgments than when higherordered ngrams are included smoothed persentence bleu was used as a similarity metric
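The add-one smoothed sentence-level BLEU (BLEU-S) referred to above can be sketched as follows. The clipping, brevity-penalty and closest-reference-length details beyond the smoothing rule are standard BLEU assumptions rather than a transcription of the authors' code.

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_s(candidate, references, max_n=4):
    """add-one smoothed sentence-level bleu: for n > 1 one count is added to
    both the n-gram hit count and the total n-gram count, so a candidate with
    fewer than n words can still receive a positive score from shorter
    n-gram matches, while a candidate with no matches at all scores zero."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand = ngrams(candidate, n)
        total = sum(cand.values())
        hits = sum(min(c, max(ngrams(ref, n).get(g, 0) for ref in references))
                   for g, c in cand.items())
        if n > 1:              # the smoothing rule described above
            hits += 1
            total += 1
        if hits == 0 or total == 0:
            return 0.0
        log_prec += math.log(hits / total)
    # standard brevity penalty against the closest reference length
    ref_len = min((abs(len(r) - len(candidate)), len(r)) for r in references)[1]
    bp = 1.0 if len(candidate) > ref_len else math.exp(1 - ref_len / max(len(candidate), 1))
    return bp * math.exp(log_prec / max_n)

Setting max_n between 1 and 9 corresponds to the BLEUS1 to BLEUS9 variants compared in the tables above.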
partofspeech tagging in context we present a new hmm tagger that exploits context on both sides of a word to be tagged and evaluate it in both the unsupervised and supervised case along the way we present the first comprehensive comparison of unsupervised methods for partofspeech tagging noting that published results to date have not been comparable across corpora or lexicons observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms we present a method of hmm training that improves accuracy when training of lexical probabilities is unstable finally we show how this new tagger achieves stateoftheart results in a supervised nontraining intensive framework the empiricist revolution in computational linguistics has dramatically shifted the accepted boundary between what kinds of knowledge are best supplied by humans and what kinds are best learned from data with much of the human supplied knowledge now being in the form of annotations of dataas we look to the future we expect that relatively unsupervised methods will grow in applicability reducing the need for expensive human annotation of datawith respect to partofspeech tagging we believe that the way forward from the relatively small number of languages for which we can currently identify parts of speech in context with reasonable accuracy will make use of unsupervised methods that require only an untagged corpus and a lexicon of words and their possible parts of speechwe believe this based on the fact that such lexicons exist for many more languages than extensive humantagged training corpora exist forunsupervised partofspeech tagging as defined above has been attempted using a variety of learning algorithms while this makes unsupervised partofspeech tagging a relatively wellstudied problem published results to date have not been comparable with respect to the training and test data used or the lexicons which have been made available to the learnersin this paper we provide the first comprehensive comparison of methods for unsupervised partof speech taggingin addition we explore two new ideas for improving tagging accuracyfirst we explore an hmm approach to tagging that uses context on both sides of the word to be tagged inspired by previous work on building bidirectionality into graphical models second we describe a method for sequential unsupervised training of tag sequence and lexical probabilities in an hmm which we observe leads to improved accuracy over simultaneous training with certain types of modelsin section 2 we provide a brief description of the methods we evaluate and review published resultssection 3 describes the contextualized variation on hmm tagging that we have exploredin section 4 we provide a direct comparison of several unsupervised partofspeech taggers which is followed by section 5 in which we present a new method for training with suboptimal lexiconsin section 6 we revisit our new approach to hmm tagging this time in the supervised frameworka common formulation of an unsupervised partof speech tagger takes the form of a hidden markov model where the states correspond to partofspeech tags ti and words wi are emitted each time a state is visitedthe training of hmmbased taggers involves estimating lexical probabilities p and tag sequence probabilities pthe ultimate goal of hmm training is to find the model that maximizes the probability of a given training text which can be done easily using the forwardbackward or baumwelch algorithm these model probabilities are 
then used in conjunction with the viterbi algorithm to find the most probable sequence of partofspeech tags for a given sentencewhen estimating tag sequence probabilities an hmm tagger such as that described in merialdo typically takes into account a history consisting of the previous two tags eg we compute pkupiec describes a modified trigram hmm tagger in which he computes word classes for which lexical probabilities are then estimated instead of computing probabilities for individual wordswords contained within the same equivalence classes are those which possess the same set of possible parts of speechanother highlyaccurate method for partof speech tagging from unlabelled data is brills unsupervised transformationbased learner derived from his supervised transformationbased tagger utbl uses information from the distribution of unambiguously tagged data to make informed labeling decisions in ambiguous contextsin contrast to the hmm taggers previously described which make use of contextual information coming from the left side only utbl considers both left and right contextsreported tagging accuracies for these methods range from 87 to 96 but are not directly comparablekupiecs hmm classbased tagger when trained on a sample of 440000 words of the original brown corpus obtained a test set accuracy of 957brill assessed his utbl tagger using 350000 words of the brown corpus for training and found that 96 of words in a separate 200000word test set could be tagged correctlyfurthermore he reported test set accuracy of 951 for the utbl tagger trained on 120000 words of penn treebank and tested on a separate test set of 200000 words taken from the same corpusfinally using 1 million words from the associated press for training merialdos trigram tagger was reported to have an accuracy of 866this tagger was assessed using a tag set other than that which is employed by the penn treebankunfortunately none of these results can be directly compared to the others as they have used different randomized and irreproducible splits of training and test data different tag sets or different corpora altogetherthe hmm taggers we have discussed so far are similar in that they use condition only on left context when estimating probabilities of tag sequencesrecently toutanova et al presented a supervised conditional markov model partofspeech tagger which exploited information coming from both left and right contextsaccuracy on the penn treebank using two tags to the left as features in addition to the current tag was 9610when using tag to the left and tag to the right as features in addition to the current tag accuracy improved to 9655lafferty et al also compared the accuracies of several supervised partofspeech tagging models while examining the effect of directionality in graphical modelsusing a 50 50 traintest split of the penn treebank to assess hmms maximum entropy markov models and conditional random fields they found that crfs which make use of observation features from both the past and future outperformed hmms which in turn outperformed memmsin a traditional hmm tagger the probability of transitioning into a state representing tag ti is computed based on the previous two tags ti1 and ti 2 and the probability of a word wi is conditioned only on the current tag tithis formulation ignores dependencies that may exist between a word and the partofspeech tags of the words which precede and follow itfor example verbs which subcategorize strongly for a particular partof speech but can also be tagged as nouns or 
pronouns may benefit from modeling dependencies on future tagsto model this relationship we now estimate the probability of a word wi based on tags ti1 and ti1this change in structure which we will call a contextualized hmm is depicted in figure 1this type of structure is analogous to contextdependent phone models used in acoustic modeling for speech recognition 31 model definitionin order to build both left and rightcontext into an hmm partofspeech tagger we reformulate the figure 1 graphical structure of traditional hmm tagger and contextualized hmm tagger trigram hmm model traditionally described as n i iiiiiiiii twtwtpttwtwwptwp 1 111111 41 corpora and lexicon constructionfor our comparison of unsupervised tagging methods we implemented the hmm taggers described in merialdo and kupiec as well as the utbl tagger described in brill we also implemented a version of the contextualized hmm using the type of word classes utilized in the kupiec modelthe algorithms were trained and tested using version 3 of the penn treebank using the training development and test split described in collins and also employed by toutanova et al in testing their supervised tagging algorithmspecifically we allocated sections 00 18 for training 1921 for development and 2224 for testingto avoid the problem of unknown words each learner was provided with a lexicon constructed from tagged versions of the full treebankwe did not begin with any estimates of the likelihoods of tags for words but only the knowledge of what tags are possible for each word in the lexicon ie something we could obtain from a manuallyconstructed dictionary42 the effect of lexicon construction ontagging accuracy to our surprise we found initial tag accuracies of all methods using the full lexicon extracted from the penn treebank to be significantly lower than previously reportedwe discovered this was due to several factorsone issue we noticed which impacted tagging accuracy was that of a frequently occurring word thevb lynesesnnp ofin powdernnp springsnnp gannp havevbp filedvbn suitnn inin georgiannp statenn courtnn againstin stuartnnp jamesnnp 1none allegingvbg fraudnn lastjj weeknn cbsnnp incnnp cancelledvbd thennp peoplennp nextnnp doornnp asym discountedvbn ratenn figure 2 manuallytagged examples being mistagged during treebank construction as shown in the example in figure 2asince we are not starting out with any known estimates for probabilities of tags given a word the learner considers this tag to be just as likely as the words other more probable possibilitiesin another more frequently occurring scenario human annotators have chosen to tag all words in multi word names such as titles with the propernoun tag nnp this has the effect of adding noise to the set of tags for many closedclass wordsfinally we noticed that a certain number of frequently occurring words are sometimes labeled with infrequently occurring tags as exemplified in figure 2cin the case of the hmm taggers where we begin with uniform estimates of both the state transition probabilities and the lexical probabilities the learner finds it difficult to distinguish between more and less probable tag assignmentswe later discovered that previous implementations of utbl involved limiting which possible part of speech assignments were placed into the lexicon1 which was not explicitly detailed in the published reportswe then simulated in a similar fashion the construction of higher quality lexicons by using relative frequencies of tags for each word from the tagged treebank to 
limit allowable wordtag assignmentsthat is tags that appeared the tag of a particular word less than x of the time were omitted from the set of possible tags for that wordwe varied this threshold until accuracy did not significantly change on our set of heldout datathe effect of thresholding tags based on relative frequency in the training set is shown for our set of partofspeech taggers in the curve in figure 3as shown in table 1 the elimination of noisy possible partofspeech assignments raised accuracy back into the realm of previously published resultsthe best test set accuracies for the learners in the class of hmm taggers are 1 eric brill personal communication 070 075 080 085 090 095 100 0 01 02 03 threshold ta g a cc you ra c y merialdo trigram contextual trigram kupiec trigram utbl figure 3 the effect of lexicon construction on unsupervised partofspeech taggers 060 065 070 075 080 085 090 095 100 0 1 2 3 4 5 iteration ta g a cc you ra cy contextual trigram kupiec trigram merialdo trigram figure 4 test accuracy of hmms using optimzed lexicons plotted against the number of training iterations in figure 4lexicons while placing informed limitations on the tags that can be included in a lexicon can dramatically improve results it is dependent on some form of supervision either from manually tagged data or by a human editor who postfilters an automatically constructed listin the interest of being as unsupervised as possible we sought to find a way to cope with the noisy aspects of the unfiltered lexicon described in the previous sectionwe suspected that in order to better control the training of lexical probabilities having a stable model of state transition probabilities would be of helpwe stabilized this model in two waysunfiltered lexicon optimized lexicon merialdo hmm 719 939 contextualized hmm 769 940 kupiec hmm 771 959 utbl 772 959 contextualized hmm with classes 772 959 table 1 tag accuracy of unsupervised pos taggers 51 using unambiguous tag sequences toinitialize contextual probabilities first we used our unfiltered lexicon along with our tagged corpus to extract nonambiguous tag sequencesspecifically we looked for trigrams in which all words contained at most one possible partofspeech tagwe then used these ngrams and their counts to bias the initial estimates of state transitions in the hmm taggersthis approach is similar to that described in ratnaparhki who used unambiguous phrasal attachments to train an unsupervised prepositional phrase attachment model52 hmm model training revisedsecond we revised the training paradigm for hmms in which lexical and transition probabilities are typically estimated simultaneouslywe decided to train the transition model probabilities first keeping the lexical probabilities constant and uniformusing the estimates initially biased by the method previously mentioned we train the transition model until it reaches convergence on a heldout setwe then use this model keeping it fixed to train the lexical probabilities until they eventually converge on heldout data53 resultswe implemented this technique for the kupiec merialdo and contextualized hmm taggersfrom our training data we were able to extract data for on the order of 10000 unique unambiguous tag sequences which were then be used for better initializing the state transition probabilitiesas shown in table 2 this method improved tagging accuracy of the merialdo and contextual taggers over traditional simultaneous hmm training reducing error by 04 in the case of merialdo and 07 for the contextual 
hmm partofspeech taggerhmm tagger simultaneous model training sequential model training merialdo 939 943 contextualized 940 947 kupiec 959 959 table 2 effects of hmm training on tagger accuracy in this paradigm tagging accuracy of the kupiec hmm did not changeas one more way to assess the potential benefit from using left and right context in an hmm tagger we tested our tagging model in the supervised framework using the same sections of the treebank previously allocated for unsupervised training development and testingin addition to comparing against a baseline tagger which always chooses a words most frequent tag we implemented and trained a version of a standard hmm trigram taggerfor further comparison we evaluated these part of speech taggers against toutanova et als supervised dependencynetwork based tagger which currently achieves the highest accuracy on this dataset to datethe best result for this tagger at 9724 makes use of both lexical and tag features coming from the left and right sides of the targetwe also chose to examine this taggers results when using only as feature templates which represents the same amount of context built into our contextualized taggeras shown in table 3 incorporating more context into an hmm when estimating lexical probabilities improved accuracy from 9587 to 9659 relatively reducing error rate by 174with the contextualized tagger we witness a small improvement in accuracy over the current state of the art when using the same amount of contextit is important to note that this accuracy can be obtained without the intensive training required by toutanova et als loglinear modelsthis result falls only slightly below the fullblown training intensive dependencybased conditional modelwe have presented a comprehensive evaluation of several methods for unsupervised partofspeech tagging comparing several variations of hidden markov model taggers and unsupervised transformationbased learning using the same corpus and same lexiconswe discovered that the supervised tagger test accuracy baseline 9219 standard hmm 9587 contextualized hmm 9659 dependency using lr tag features 9655 dependency best feature set 9724 table 3 comparison of supervised taggers quality of the lexicon made available to unsupervised learner made the greatest difference to tagging accuracyfiltering the possible partof speech assignments contained in a basic lexicon automatically constructed from the commonly used penn treebank improved results by as much as 22this finding highlights the importance of the need for clean dictionaries whether they are constructed by hand or automatically when we seek to be fully unsupervisedin addition we presented a variation on hmm model training in which the tag sequence and lexical probabilities are estimated in sequencethis helped stabilize training when estimation of lexical probabilities can be noisyfinally we experimented with using left and right context in the estimation of lexical probabilities which we refer to as a contextualized hmmwithout supervision this new hmm structure improved results slightly compared to a simple trigram tagger as described in merialdo which takes into account only the current tag in predicting the lexical itemwith supervision this model achieves state of the art results without the lengthy training procedure involved in other high performing modelsin the future we will consider making an increase the contextsize which helped toutanova et al the authors wish to thank gideon mann for performing some initial experiments with a 
publicly available implementation of utbl and eric brill for discussions regarding his work on unsupervised transformation based learning
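A small sketch of the lexicon-filtering step that the experiments above found so important: tags that account for less than a threshold fraction of a word's occurrences in the tagged data are dropped from that word's lexicon entry. The data layout and the default threshold value are illustrative assumptions.

from collections import Counter, defaultdict

def build_filtered_lexicon(tagged_corpus, threshold=0.1):
    """tagged_corpus: iterable of (word, tag) pairs from the tagged treebank.
    returns {word: set of allowed tags}, keeping only tags that account for
    at least `threshold` of the word's occurrences."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    lexicon = {}
    for word, tag_counts in counts.items():
        total = sum(tag_counts.values())
        allowed = {t for t, c in tag_counts.items() if c / total >= threshold}
        # never leave a word without an entry: fall back to its most frequent tag
        lexicon[word] = allowed or {tag_counts.most_common(1)[0][0]}
    return lexicon

In the experiments above the threshold was tuned on held-out data; the unsupervised learners then see only the resulting word-to-tag-set mapping, not the underlying frequencies.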
C04-1080
partofspeech tagging in context we present a new hmm tagger that exploits context on both sides of a word to be tagged and evaluate it in both the unsupervised and supervised case along the way we present the first comprehensive comparison of unsupervised methods for partofspeech tagging noting that published results to date have not been comparable across corpora or lexicons observing that the quality of the lexicon greatly impacts the accuracy that can be achieved by the algorithms we present a method of hmm training that improves accuracy when training of lexical probabilities is unstable finally we show how this new tagger achieves stateoftheart results in a supervised nontraining intensive framework while replicating earlier experiments we discover that performance was highly dependent on cleaning tag dictionaries using statistics gleaned from the tokens we show that the expectation maximization algorithm for bitag hmms is efficient and quite effective for acquiring accurate pos taggers given only a lexicon and certain favorable conditions we observe that earlier unsupervised hmm em results were artificially high due to use of optimized lexicons in which only frequent enough analyses of each word were kept
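The contextualized emission model summarized above, which conditions a word on the tags of its neighbours rather than on its own tag, can be estimated from tagged data in the supervised case roughly as follows. The sentence-boundary markers and the absence of smoothing are simplifications, not the paper's exact implementation.

from collections import Counter, defaultdict

def train_contextual_emissions(tagged_sentences):
    """tagged_sentences: list of sentences, each a list of (word, tag) pairs.
    returns probs[(prev_tag, next_tag)][word], an estimate of
    P(word | t_{i-1}, t_{i+1}) as used by the contextualized hmm."""
    counts = defaultdict(Counter)
    for sent in tagged_sentences:
        padded = [(None, "<s>")] + list(sent) + [(None, "</s>")]
        for i in range(1, len(padded) - 1):
            word = padded[i][0]
            context = (padded[i - 1][1], padded[i + 1][1])
            counts[context][word] += 1
    probs = {}
    for context, word_counts in counts.items():
        total = sum(word_counts.values())
        probs[context] = {w: c / total for w, c in word_counts.items()}
    return probs

A standard trigram tagger would instead estimate P(word | tag) from the same counts keyed only by the current tag, which is the difference the supervised comparison above is measuring.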
chinese segmentation and new word detection using conditional random fields chinese word segmentation is a difficult i am portant and widelystudied sequence modelingproblem this paper demonstrates the abil ity of linearchain conditional random fields to perform robust and accurate chinese word segmentation by providing a principled framework that easily supports the in tegration of domain knowledge in the form of multiple lexicons of characters and words we also present a probabilistic new word detection method which further improves performance our system is evaluated on four datasets usedin a recent comprehensive chinese word segmentation competition stateoftheart perfor mance is obtained unlike english and other western languages many asian languages such as chinese japanese and thai do not delimit words by whitespacewordsegmentation is therefore a key precursor for language processing tasks in these languagesfor chinese there has been significant research on find ing word boundaries in unsegmented sequences for a reviewun fortunately building a chinese word segmentation system is complicated by the fact that there is no standard definition of word boundaries in chineseapproaches to chinese segmentation fall roughly into two categories heuristic dictionarybased methods and statistical machine learning methodsin dictionarybased methods a predefined dictio nary is used along with handgenerated rules for segmenting input sequence howeverthese approaches have been limited by the impossibility of creating a lexicon that includes all possible chinese words and by the lack of robust statistical inference in the rulesmachine learning approaches are more desirable and have been successful in both unsupervised learning and supervised learning many current approaches suffer from either lackof exact inference over sequences or difficulty in incorporating domain knowledge effectively into seg mentationdomain knowledge is either not used used in a limited way or used in a complicated way spread across different componentsfor examplethe ngram generative language modeling based ap proach of teahan et al does not use domainknowledgegao et al uses classbased language for word segmentation where some word cat egory information can be incorporatedzhang et al use a hierarchical hidden markov model to incorporate lexical knowledgea recent advance in this area is xue in which the author uses a slidingwindow maximum entropy classifier to tag chinese characters into one of four position tags and then covert these tags into a segmentation using rulesmaximum entropy models give tremendousflexibility to incorporate arbitrary featureshow ever a traditional maximum entropy tagger as used in xue labels characters without consideringdependencies among the predicted segmentation labels that is inherent in the state transitions of finite state sequence modelslinearchain conditional random fields are models that address both issues aboveunlike heuristic methods they are principled probabilistic finite state models onwhich exact inference over sequences can be ef ficiently performedunlike generative ngram or hidden markov models they have the ability to straightforwardly combine rich domain knowledge for example in this paper in the form of multiple readilyavailable lexiconsfurthermore they arediscriminativelytrained and are often more accurate than generative models even with the same fea turesin their most general form crfs are arbitrary undirected graphical models trained to maximize the conditional probability of the 
desired outputs given the corresponding inputsin the linearchainspecial case we use here they can be roughly un derstood as discriminativelytrained hidden markovmodels with nextstate transition functions represented by exponential models and with great flexibility to viewthe observation sequence in terms of arbitrary over lapping features with longrange dependencies and at multiple levels of granularitythese beneficialproperties suggests that crfs are a promising ap proach for chinese word segmentationnew word detection is one of the most impor tant problems in chinese information processingmany machine learning approaches have been pro posed new word detection is normally considered as a separate process from segmentationhowever integrating them would benefit both seg mentation and new word detectioncrfs provide aconvenient framework for doing thisthey can pro duce not only a segmentation but also confidence in local segmentation decisions which can be usedto find new unfamiliar character sequences sur rounded by highconfidence segmentationsthus our new word detection is not a standalone process but an integral part of segmentationnewly detected words are reincorporated into our word lexiconand used to improve segmentationimproved seg mentation can then be further used to improve new word detectioncomparing chinese word segmentation accuracyacross systems can be difficult because many re search papers use different data sets and different groundrulessome published results claim 98 or99 segmentation precision and recall but these ei ther count only the words that occur in the lexicon or use unrealistically simple data lexicons that haveextremely small outofvocabulary rates short sentences or many numbersa recent chinese word segmentation competition has made compar isons easierthe competition provided four datasets with significantly different segmentation guidelines and consistent traintest splitsthe performance ofparticipating system varies significantly across different datasetsour system achieves top performance in two of the runs and a stateoftheart per formance on averagethis indicates that crfs are a viable model for robust chinese word segmentationconditional random fields are undirected graphical models trained to maximize a conditional probability a common specialcase graph structure is a linear chain which corresponds to a finite state machine and is suitablefor sequence labelinga linearchain crf with parameters 1 defines a conditional proba bility for a state sequence y y1yt given an input sequence x x1xt to be p 1zx exp where zx is the perinput normalization that makes the probability of all state sequences sum to onefk is a feature function which is of ten binaryvalued but can be realvalued and k is a learned weight associated with feature fkthe feature functions can measure any aspect of a statetransition yt1 yt and the entire observation se quence x centered at the current time step t for example one feature function might have value 1when yt1 is the state start yt is the state not start and xt is a word appearing in a lexicon of peoples first nameslarge positive values for kindicate a preference for such an event large nega tive values make the event unlikelythe most probable label sequence for an input x y argmaxy pcan be efficiently determined using the viterbi algorithm an n best list of labeling sequences can also be obtained using modi fied viterbi algorithm and a search the parameters can be estimated by maximum likelihoodmaximizing the conditional probabilityof a 
set of label sequences each given their cor responding input sequencesthe loglikelihood of training set i 1 m is written l i logp i logzxi traditional maximum entropy learning algorithms such as gis and iis canbe used to train crfshowever our implemen tation uses a quasinewton gradientclimber bfgs for optimization which has been shown to converge much faster the gradient of the likelihood is pk it fkt x t iyt pfk t crfs share many of the advantageous properties of standard maximum entropy classifiers including their convex likelihood function which guarantees that the learning procedure converges to the global maximum21 regularization in crfsto avoid overfitting loglikelihood is usually penalized by some prior distribution over the parametersa commonly used prior is a zeromean gaussianwith a gaussian prior loglikelihood is penal ized as followsl i logpk 2k 22k where 2k is the variance for feature dimension k the variance can be feature dependenthowever for simplicity constant variance is often used for all featureswe experiment an alternate version ofgaussian prior in which the variance is feature dependentwe bin features by frequency in the train ing set and let the features in the same bin share the same variancethe discounted value is set to be k dckme2 where ck is the count of features m is the bin size set by held out validation and dae is the ceiling functionsee peng and mccallum for more details and further experiments22 state transition featuresvarying statetransition structures with different markov order can be specified by different crf feature functions as determined by the number ofoutput labels y examined together in a feature func tionwe define four different state transition feature functions corresponding to different markov ordershigherorder features capture more longrange de pendencies but also because more data sparseness problems and require more memory for trainingthe best markov order for a particular application can be selected by heldout crossvalidation1firstorder here the inputs are examined inthe context of the current state onlythe feature functions are represented as fthere are no separate parameters for state tran sitions2firstordertransitions here we add parameters corresponding to state transitionsthe fea ture functions used are f fcontext of the current and previous statesfea ture function are represented as f4thirdorder here inputs are examined inthe context of the current and two previous statesfeature function are represented as fwe cast the segmentation problem as one of se quence tagging chinese characters that begin a new word are given the start tag and characters in the middle and at the end of words are given thenonstart tagthe task of segmenting new un segmented test data becomes a matter of assigning a sequence of tags to the input sequence of chinese charactersconditional random fields are configured as a linearchain for this purposeand tagging is performed using the viterbi algorithm to efficiently find the most likely label se quence for a given character sequence31 lexicon features as domain knowledgeone advantage of crfs is its flexibility in using arbitrary features of the inputto explore this advantage as well as the importance of domain knowledge we use many open features from external re sourcesto specifically evaluate the importance ofdomain knowledge beyond the training data we divide our features into two categories closed fea tures and open features the open features include a large word list a character list and additional topic or 
partofspeech character lexicons obtained from various sourcesthe closed features are obtained from training data alone by intersecting the character list obtainedfrom training data with corresponding open lexi consmany lexicons of chinese words and characters are available from the internet and other sourcesbesides the word list and character list our lexiconsinclude 24 lists of chinese words and characters obtained from several internet sites1 cleaned and augmented by a local native chinese speaker indepen dently of the competition datathe list of lexicons used in our experiments is shown in figure 132 feature conjunctionssince crfs are loglinear models feature conjunctions are required to form complex nonlinear de cision boundaries in the original feature spacewe 1httpwwwmandarintoolscom ftpxcinlinuxorgtwpubxcinlibtabe httpwwwgeocitiescomhao510wordlist noun verb adjective adverb auxiliary preposition number negative determiner function letter punctuation last name foreign name maybe lastname plural character pronoun unit character country name chinese place name organization name title suffix title prefix date figure 1 lexicons used in our experiments c2 second previous character in lexicon c1 previous character in lexicon c1 next character in lexicon c2 second next character in lexicon c0c1 current and next character in lexicon c1c0 current and previous character in lexicon c2c1 previous two characters in lexicon c1c0c1 previous current and next character in the lexicon figure 2 feature conjunctions used in experiments use feature conjunctions in both the open and closed tests as listed figure 2since no vocabulary list could ever be completenew word identification is an i am portant issue in chinese segmentationunknownwords because segmentation errors in that these outofvocabulary words in input text are often in correctly segmented into singlecharacter or otheroverlyshort words tradi tionally new word detection has been considered as a standalone processwe consider here new word detection as an integral part of segmentation aimingto improve both segmentation and new word detec tion detected new words are added to the word list lexicon in order to improve segmentation improved segmentation can potentially further improve new word detectionwe measure the performance ofnew word detection by its improvements on seg mentationgiven a word segmentation proposed by the crf we can compute a confidence in each segmentwe detect as new words those that are not in the existing word list yet are either highly confident segments or low confident segments that are surrounded by high confident wordsa confidence threshold of 09 is determined by crossvalidationsegment confidence is estimated using constrained forwardbackward the standard forwardbackward algorithm calculates zx the total like lihood of all label sequences y given a sequence x constrained forwardbackward algorithm calculates z x total likelihood of all paths passing through a constrained segment the confidence in this segment is then z x zx a real number between 0 and 1in order to increase recall of new words we consider not only the most likely segmen tation but the segmentations in the top n most likely segmentations and detect new words according to the above criteria in all n segmentationsmany errors can be corrected by new word detectionfor example person name hap pens four timesin the first pass of segmentation two of them are segmented correctly and the other two are mistakenly segmented as however is identified as a new word and 
added to the word list lexiconin the second pass of segmentation the other two mistakes are correctedto make a comprehensive evaluation we use allfour of the datasets from a recent chinese word segmentation bakeoff competition these datasets represent four different segmentation standardsa summary of the datasets is shown in table 1the standard bakeoff scoring program is used to calculate precision recall f1 and oov word recall51 experimental designsince ctb and pk are provided in the gb encod ing while as and hk use the big5 encoding we convert as and hk datasets to gb in order to make crosstrainingandtesting possiblenote that this conversion could potentially worsen performance slightly due to a few conversion errorswe use crossvalidation to choose markovorder and perform feature selectionthus each training set is randomly split80 used for training and theremaining 20 for validationand based on vali dation set performance choices are made for model structure prior and which word lexicons to includethe choices of prior and model structure shown in table 2 are used for our final testingwe conduct closed and open tests on all four datasetsthe closed tests use only material from the training data for the particular corpus being testedopen tests allows using other material such as lexicons from internetin open tests we use lexi cons obtained from various resources as described corpus abbrevencoding train words test words oov rate upenn chinese treebank ctb gb 250k 40k 181 beijing university pk gb 11m 17k 69 hong kong city you hk big 5 240k 35k 71 academia sinica as big 5 58m 12k 22 table 1 datasets statistics binsize m markov order ctb 10 firstorder transitions pk 15 firstorder transitions hk 1 firstorder as 15 firstorder transitions table 2 optimal prior and markov order setting in section 31in addition we conduct crossdataset tests in which we train on one dataset and test on other datasets52 overall resultsfinal results of crf based segmentation with newword detection are summarized in table 3the up per part of the table contains the closed test results and the lower part contains the open test resultseach entry is the performance of the given metric on the test setclosed precision recall f1 roov ctb 0828 0870 0849 0550 pk 0935 0947 0941 0660 hk 0917 0940 0928 0531 as 0950 0962 0956 0292 open precision recall f1 roov ctb 0889 0898 0894 0619 pk 0941 0952 0946 0676 hk 0944 0948 0946 0629 as 0953 0961 0957 0403 table 3 overall results of crf segmentation on closed and open tests to compare our results against other systems we summarize the competition results reported in in table 4xxc and xxo indicate the closed and open runs on datasetxx respectivelyentries contain the f1 perfor mance of each participating site on different runs with the best performance in boldour results are in the last rowcolumn siteaverage is the averagef1 performance over the datasets on which a site re ported resultscolumn ouraverage is the average f1 performance of our system over the same datasetscomparing performance across systems is diffi cult since none of those systems reported results on all eight datasets nevertheless several observations could be made from table 4first no single system achieved best results in all testsonly one site achieved two best runs with an av erage of 918 over 6 runss01 is one of the best segmentation systems in mainland china we also achieve two best runs with a comparable average of 919 over the same 6 runs and a 927 average over all the 8 runssecond performance varies significantly 
across dif ferent datasets indicating that the four datasets havedifferent characteristics and use very different seg mentation guidelineswe also notice that the worstresults were obtained on ctb dataset for all systemsthis is due to significant inconsistent segmen tation in training and testing we verify this by another testwe randomly split the training data into 80 training and 20testing and run the experiments for 3 times result ing in a testing f1 of 9713third consider a comparison of our results with site s12 who use a slidingwindow maximum entropy model they participated in two datasets with an average of 938our average over the same two runs is 942this gives some empirical evidenceof the advantages of linearchain crfs over sliding window maximum entropy models however this comparison still requires further investigation sincethere are many factors that could affect the performance such as different features used in both sys temsto further study the robustness of our approach to segmentation we perform crosstestingthat is training on one dataset and testing on other datasetstable 5 summarizes these results in which the rows are the training datasets and the columns are thetesting datasetsnot surprisingly cross testing re sults are worse than the results using the same asc aso ctbc ctbo hkc hko pkc pko siteaverage ouraverage s01 938 881 881 901 951 953 918 919 s02 874 912 893 872 s03 872 829 886 925 878 936 s04 939 937 938 944 s05 942 732 894 856 915 s06 945 829 924 924 906 919 s07 940 940 946 s08 904 956 936 938 934 940 s09 961 946 954 949 s10 831 901 947 959 910 908 s11 904 884 879 886 888 936 s12 959 916 938 942 956 957 849 894 928 946 941 946 927 table 4 comparisons against other systems the first column contains the 12 sites participating in bakeoff competition the second to the ninth columns contain their results on the 8 runs where a bold entry is the winner of that run column siteaverage contains the average performance of the site over the runs in which it participated where a bold entry indicates that this site performs better than our system column ouraverage is the average of our system over the same runs where a bolded entry indicates our system performs better than the other site the last row is the performance of our system over all the runs and the overall averagesource as training due to different segmentationpolicies with an exception on ctb where mod els trained on other datasets perform better than the model trained on ctb itselfthis is due to the data problem mentioned aboveoverall crfs perform robustly well across all datasetsfrom both table 3 and 5 we see as expectedimprovement from closed tests to open tests indicating the significant contribution of domain knowl edge lexiconsclosed ctb pk hk as ctb 0822 0810 0815 pk 0816 0824 0830 hk 0790 0807 0825 as 0890 0844 0864 open ctb pk hk as ctb 0863 0870 0894 pk 0852 0862 0871 hk 0861 0871 0889 as 0898 0867 0871 table 5 crossing test of crf segmentation 53 effects of new word detectiontable 6 shows the effect of new word detection on the closed testsan interesting observation is ctb pk hk as wo nwd 0792 0934 0916 0956 nwd 0849 0941 0928 0946 table 6 new word detection effects wo nwd is the results without new word detection and nwd is the results with new word detectionthat the improvement is monotonically related to the oov rate this is desirable because new word detection is most needed in situations that have high oov rateat low oov rate noisy new word detection can result in worse performance as seen in the as 
dataset54 error analysis and discussionseveral typical errors are observed in error anal ysisone typical error is caused by inconsistent segmentation labeling in the test setthis is mostnotorious in ctb datasetthe second most typical error is in new outofvocabulary words especially proper namesalthough our new word detec tion fixes many of these problems it is not effectiveenough to recognize proper names wellone solution to this problem could use a named entity ex tractor to recognize proper names this was found to be very helpful in wu one of the most attractive advantages of crfs is its the flexibility to easily incorporate arbitrary featureshere in the form domainknowledgeproviding lex iconshowever obtaining these lexicons is not a trivial matterthe quality of lexicons can affect the performance of crfs significantlyin addition compared to simple models like ngram language models another shortcomingof crfbased segmenters is that it requires signifi cantly longer training timehowever training is a onetime process and testing time is still linear in the length of the inputthe contribution of this paper is threefoldfirst we apply crfs to chinese word segmentation and find that they achieve stateofthe art performancesecond we propose a probabilistic new word de tection method that is integrated in segmentation and show it to improve segmentation performancethird as far as we are aware this is the first work to comprehensively evaluate on the four benchmark datasets making a solid baseline for future research on chinese word segmentationacknowledgmentsthis work was supported in part by the center for intelligent information retrieval in part by the cen tral intelligence agency the national security agencyand national science foundation under nsf grant iis 0326249 and in part by spawarsyscensd grant number n660010218903
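A minimal sketch of the character-tagging setup used above: segmented training text is converted into start/nonstart character labels, and each character position is described by lexicon-membership features in the spirit of figure 2. The feature names, the example words and the lexicon format are illustrative assumptions; the resulting per-character feature dictionaries could be fed to any linear-chain CRF toolkit.

def words_to_char_tags(words):
    """convert a segmented sentence (a list of words) into parallel lists of
    characters and start/nonstart tags, e.g. ['今天', '好'] ->
    (['今', '天', '好'], ['start', 'nonstart', 'start'])."""
    chars, tags = [], []
    for w in words:
        for i, ch in enumerate(w):
            chars.append(ch)
            tags.append("start" if i == 0 else "nonstart")
    return chars, tags

def char_features(chars, i, word_lexicon):
    """lexicon-membership features for character position i, modelled on the
    conjunctions in figure 2; feature names here are illustrative."""
    def get(j):
        return chars[j] if 0 <= j < len(chars) else ""
    return {
        "c0": get(i),
        "c-1_in_lex": get(i - 1) in word_lexicon,
        "c+1_in_lex": get(i + 1) in word_lexicon,
        "c-1c0_in_lex": get(i - 1) + get(i) in word_lexicon,
        "c0c+1_in_lex": get(i) + get(i + 1) in word_lexicon,
        "c-1c0c+1_in_lex": get(i - 1) + get(i) + get(i + 1) in word_lexicon,
    }

Closed-test runs would restrict word_lexicon to entries derived from the training data alone, while open-test runs would add the external lexicons listed in figure 1.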
C04-1081
chinese segmentation and new word detection using conditional random fields chinese word segmentation is a difficult important and widelystudied sequence modeling problem this paper demonstrates the ability of linearchain conditional random fields to perform robust and accurate chinese word segmentation by providing a principled framework that easily supports the integration of domain knowledge in the form of multiple lexicons of characters and words we also present a probabilistic new word detection method which further improves performance our system is evaluated on four datasets used in a recent comprehensive chinese word segmentation competition stateoftheart performance is obtained the superiority of crfs on chinese information processing was also demonstrated in word segmentation crf is a statistical sequence modeling framework introduced by lafferty et al and we use it for the chinese word segmentation task by treating word segmentation as a binary decision task we first use this framework for chinese word segmentation by treating it as a binary decision task such that each character is labeled either as the beginning of a word or the continuation of one we define the word segmentation problem as labeling each character as whether or not the previous character boundary of the current character is a word boundary
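The new-word detection rule described above can be sketched as follows. The segment confidence (the constrained forward-backward ratio Z'(x)/Z(x) under the CRF) is assumed to be supplied by the tagger, the 0.9 threshold is the value reported in the paper, and treating sentence edges as high-confidence neighbours is a simplification.

def detect_new_words(segments, confidences, lexicon, threshold=0.9):
    """segments: proposed words for one segmentation (or an n-best variant);
    confidences: parallel list of segment confidences in [0, 1];
    returns the set of out-of-lexicon segments flagged as new words."""
    new_words = set()
    for i, (seg, conf) in enumerate(zip(segments, confidences)):
        if seg in lexicon:
            continue
        # low-confidence segments count only if flanked by confident segments
        neighbours_high = ((i == 0 or confidences[i - 1] >= threshold) and
                           (i == len(segments) - 1 or confidences[i + 1] >= threshold))
        if conf >= threshold or neighbours_high:
            new_words.add(seg)
    return new_words

Detected words are added back to the word-list lexicon and the text is re-segmented, which is how the method couples segmentation and new-word detection.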
towards terascale semantic acquisition although vast amounts of textual data are freely available many nlp algorithms exploit only a minute percentage of it in this paper we study the challenges of working at the terascale we present an algorithm designed for the terascale for mining isa relations that achieves similar performance to a stateoftheart linguisticallyrich method we focus on the accuracy of these two systems as a func tion of processing time and corpus size the natural language processing com munity has recently seen a growth in corpusbased methodsalgorithms light in linguistic theories but rich in available training data have been successfully applied to several applications such as ma chine translation information extraction and question an swering in the last decade we have seen an explosion in the amount of available digital text resourcesit is estimated that the internet contains hundreds of terabytes of text data most of which is in an unstructured formatyet many nlp algorithms tap into only megabytes or gigabytes of this informationin this paper we make a step towards acquiring semantic knowledge from terabytes of datawe present an algorithm for extracting isa relations designed for the terascale and compare it to a state of the art method that employs deep analysis of text we show that by simply utilizing more data on this task we can achieve similar performance to a linguisticallyrich approachthe current state of the art co occurrence model requires an estimated 10 years just to parse a 1tb corpus instead of using a syntactically motivated cooccurrence ap proach as above our system uses lexicosyntactic rulesin particular it finds lexicopos patterns by making modifications to the basic edit distance algorithmonce these patterns have been learnt the algorithm for finding new isa relations runs in o where n is the number of sentencesin semantic hierarchies such as wordnet an isa relation between two words x and y represents a subordinate relationship many algorithms have recently been proposed to automatically mine isa relations between wordshere we focus on isa relations that are characterized by the questions whatwho is xfor example table 2 shows a sample of 10 isa relations discovered by the algorithms presented in this paperin this table we call azalea tiramisu and winona ryder in stances of the respective concepts flower dessert and actressthese kinds of isa relations would be useful for various purposes such as ontology con struction semantic information retrieval question answering etc the main contribution of this paper is a comparison of the quality of our patternbased and co occurrence models as a function of processing time and corpus sizealso the paper lays a foundation for terascale acquisition of knowledgewe will show that for very small or very large corpora or for situations where recall is valued over precision the patternbased approach is bestprevious approaches to extracting isa relations fall under two categories patternbased and co occurrencebased approaches21 patternbased approachesmarti hearst was the first to use a pat ternbased approach to extract hyponym relations from a raw corpusshe used an iterative process to semiautomatically learn patternshowever a corpus of 20mb words yielded only 400 examplesour patternbased algorithm is very similar to the one used by hearstshe uses seed examples to manually discover her patterns whearas we use a minimal edit distance algorithm to automatically discover the patterns771riloff and shepherd used a semi 
automatic method for discovering similar words using a few seed examples by using patternbased techniques and human supervisionberland and charniak used similar patternbased tech niques and other heuristics to extract meronymy relationsthey reported an accuracy of about 55 precision on a corpus of 100000 wordsgirju et al improved upon berland and charniak work using a machine learning filtermann and fleischman et al used part of speech patterns to extract a subset of hyponym relations involving proper nounsour patternbased algorithm differs from these approaches in two wayswe learn lexicopos patterns in an automatic wayalso the patterns are learned with the specific goal of scaling to the terascale 22 cooccurrencebased approachesthe second class of algorithms uses co occurrence statistics these systems mostly employ clustering algo rithms to group words according to their meanings in textassuming the distributional hypothesis words that occur in similar gram matical contexts are similar in meaningcurran and moens experimented with corpus size and complexity of proximity features in building automatic thesauricbc proposed by pantel and lin achieves high recall and precision in generating similarity lists of words discriminated by their meaning and senseshowever such clustering algorithms fail to name their classescaraballo was the first to use clustering for labeling isa relations using conjunction and apposition features to build noun clustersre cently pantel and ravichandran extended this approach by making use of all syntactic de pendency features for each nounmuch of the research discussed above takes a similar approach of searching text for simple sur face or lexicosyntactic patterns in a bottomup approachour cooccurrence model makes use of semantic classes like those generated by cbchyponyms are gen erated in a topdown approach by naming each group of words and assigning that name as a hypo nym of each word in the group the input to the extraction algorithm is a list of semantic classes in the form of clusters of words which may be generated from any sourcefor example following are two semantic classes discov ered by cbc peach pear pineapple apricot mango raspberry lemon cherry strawberry melon blueberry fig apple plum nectarine avocado grapefruit papaya banana cantaloupe cranberry blackberry lime orange tangerine phil donahue pat sajak arsenio hall geraldo rivera don imus larry king david letterman conan obrien rosie odonnell jenny jones sally jessy raph ael oprah winfrey jerry springer howard stern jay leno johnny carson the extraction algorithm first labels concepts and with fruit and host respectivelythen isa relationships are extracted such as apple is a fruit pear is a fruit and david letterman is a hostan instance such as pear is assigned a hypernym fruit not because it necessarily occurs in any par ticular syntactic relationship with the word fruit but because it belongs to the class of instances that doesthe labeling of semantic classes is performed in three phases as outlined below31 phase i in the first phase of the algorithm feature vec tors are extracted for each word that occurs in a semantic classeach feature corresponds to a grammatical context in which the word occursfor example catch __is a verbobject contextif the word wave occurred in this context then the con text is a feature of wavewe then construct a mutual information vector mi for each word e where mief is the pointwise mutual information between word e and context f which is defined as n c n c n c ef m j ej 
n i if ef mi 11 log table 2sample of 10 isa relationships discovered by our cooccurrence and patternbased systemscooccurrence system patternbased system word hypernym word hypernym azalea flower american airline bipolar disorder disease bobby bonds coach bordeaux wine radiation therapy cancer treatment flintstones television show tiramisu dessert salmon fish winona ryder actress table 1approximate processing time on a single pentium4 25 ghz machinetool 15 gb orpus 1 tb corpus pos tagger 2 days 125 days np chunker 3 days 214 days dependency parser 56 days 102 years syntactic parser 58 years 3884 years 772 where n is the number of elements to be clustered cef is the frequency count of word e in grammatical context f and n is the total frequency count of all features of all words32 phase iifollowing a committee for each semantic class is constructeda committee is a set of representative elements that unambi guously describe the members of a possible classfor example in one of our experiments the committees for semantic classes and from sec tion 3 were a peach pear pineapple apricot mango raspberry lemon blueberry b phil donahue pat sajak arsenio hall geraldo rivera don imus larry king david letterman 33 phase iiiby averaging the feature vectors of the commit tee members of a particular semantic class we obtain a grammatical template or signature for that classfor example figure 1 shows an excerpt of the grammatical signature for semantic class the vector is obtained by averaging the fea ture vectors of the words in the committee of this classthe vsubjnjokefeature indicates a sub jectverb relationship between the class and the verb joke while napponhostindicates an ap position relationship between the class and the noun hostthe two columns of numbers indicate the frequency and mutual information scoresto name a class we search its signature for cer tain relationships known to identify class labelsthese relationships automatically learned in include apposi tions nominal subjects such as relationships and like relationshipswe sum up the mutual information scores for each term that occurs in these rela tionships with a committee of a classthe highest scoring term is the name of the classthe syntactical cooccurrence approach has worstcase time complexity o where n is the number of words in the corpus and k is the feature space just to parse a 1 tb corpus this approach requires ap proximately 102 years we propose an algorithm for learning highly scalable lexicopos patternsgiven two sentences with their surface form and part of speech tags the algorithm finds the optimal lexicopos alignmentfor example consider the following 2 sentences 1 platinum is a precious metal2 molybdenum is a metalapplying a pos tagger gives the following output surface platinum is a precious metal pos nnp vbz dt jj nn surface molybdenum is a metal pos nnp vbz dt nn a very good pattern to generalize from the alignment of these two strings would be surface is a metal pos nnp we use the following notation to denote this alignment _nnp is a metal where _nnp represents the pos tag nnpto perform such alignments we introduce two wildcard operators skip and wildcard the skip operator represents 0 or 1 instance of any word while the wildcard operator represents exactly 1 instance of any word 41 algorithmwe present an algorithm for learning patterns at multiple levelsmultilevel representation is de fined as the different levels of a sentence such as the lexical level and pos levelconsider two strings a and b of lengths n and m re 
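Phases II and III can be summarised in a few lines: average the MI vectors of a class's committee members to obtain the class signature, then score every term that occurs with the committee in one of the naming relationships (apposition, nominal subject, such-as, like) and take the highest-scoring term as the label. The sketch below is an illustration of that procedure, not the authors' code; the "relation::term" feature encoding and the relation tags are invented placeholders.

```python
from collections import defaultdict

NAMING_RELATIONS = ("appo", "subj", "such_as", "like")   # assumed relation tags

def class_signature(committee, mi_vectors):
    """Average the MI feature vectors of the committee members (Phase III)."""
    sig = defaultdict(float)
    for word in committee:
        for feat, score in mi_vectors.get(word, {}).items():
            sig[feat] += score / len(committee)
    return sig

def label_class(signature):
    """Sum MI over features whose relation is known to identify class labels;
    the highest-scoring term is proposed as the class name."""
    totals = defaultdict(float)
    for feat, score in signature.items():
        rel, _, term = feat.partition("::")      # assumed "relation::term" feature format
        if rel in NAMING_RELATIONS and term:
            totals[term] += score
    return max(totals, key=totals.get) if totals else None
```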
spectivelylet a1 and a2 be the level 1 and level 2 representa tions for the string asimilarly let b1 and b2 be the level 1 and level 2 representa tions for the string bthe algorithm consists of two parts calculation of the minimal edit dis tance and retrieval of an optimal patternthe minimal edit distance algorithm calculates the number of edit operations required to change one string to another stringthe optimal pattern is retrieved by phil donahuepat sajakarsenio hall ngenn talk show 93 1177 television show 24 1130 tv show 25 1045 show 255 998 audience 23 780 joke 5 737 vsubjn joke 39 711 tape 10 709 poke 15 687 host 40 647 cohost 4 614 banter 3 600 interview 20 589 nappon host 127 1246 comedian 12 1102 king 13 949 star 6 747 figure 1excerpt of the grammatical signature for the television host class773 keeping track of the edit operations algorithm for calculating the minimal edit distance between two strings d000 for i 1 to n do di0 di10 cost for j 1 to m do d0j d0j1 cost for i 1 to n do for j 1 to m do dij min di1j cost dij1 cost print algorithm for optimal pattern retrieval i n j m while i 0 and j 0 if dij di1j cost print i i1 else if dij dij1 cost print j j1 else if a1i b1j print i i 1 j j 1 else if a2i b2j print i i 1 j j 1 else print i i 1 j j 1 we experimentally set cost 3 cost 3 cost 0 if a1ib1j 1 if a1ib1j a2ib2j 2 if a1ib1j a2ib2j 42 implementation and filteringthe above algorithm takes o time for every pair of strings of length at most y hence if there are x strings in the collection each string having at most length y the algorithm has time complexity o to extract all the patterns in the collectionapplying the above algorithm on a corpus of 3gb with 50 isa relationship seeds we obtain a set of 600 lexicoposfollowing are two of them 1 x_jjnnjjnnnnnn _cc y_jjjjnnjj nnsnnjjnnsnnnnjjnnjjnnnn eg caldera or lava lake2 x_nnpnnpnnpnnpnnpnnpnnpccnnp nnpvbnnnnnvbgnnnn _ _dt y_nninnnjjjjnnjjnnnninnnp nnpnnpnnnnjjnnjjnnnn eg leukemia the cancer of note that we store different pos variations of the anchors x and y as shown in example 1 the pos variations of the anchor x are the variations for anchor y are the reason is quite straightforward we need to determine the boundary of the anchors x and y and a reasonable way to delimit them would be to use pos informationall the patterns produced by the multilevel pattern learning algo rithm were generated from positive examplesfrom amongst these patterns we need to find the most important onesthis is a critical step because frequently occurring patterns have low precision whereas rarely occurring patterns have high preci sionfrom the information extraction point of view neither of these patterns is very usefulwe need to find patterns with relatively high occurrence and high precisionwe apply the log likeli hood principle to compute this scorethe top 15 patterns according to this metric are listed in table 3 some of these patterns are similar to the ones discovered by hearst while other patterns are similar to the ones used by fleischman et al 43 time complexityto extract hyponym relations we use a fixed number of patterns across a corpussince we treat each sentences independently from others the algorithm runs in linear time o over the corpus size where n is number of sentences in the corpusin this section we empirically compare the patternbased and cooccurrencebased models pre sented in section 3 and section 4the focus is on the precision and recall of the systems as a func tion of the corpus size51 experimental setupwe use a 15gb newspaper 
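The minimal-edit-distance and pattern-retrieval pseudocode above is badly garbled by extraction, so the following is a reconstruction rather than a faithful transcription: a standard dynamic program over the two levels (lexical and POS), with a traceback that emits the shared word, the shared POS tag (prefixed with "_"), or a skip symbol. The cost values (skip = 3; replace = 0 for a lexical match, 1 for a POS-only match, 2 otherwise) are read off the garbled listing and should be treated as approximate, and the skip symbol "*S*" is a placeholder for whatever notation the paper actually uses.

```python
SKIP, COST_SKIP = "*S*", 3

def replace_cost(a1i, a2i, b1j, b2j):
    if a1i == b1j:                 # words match at the lexical level
        return 0
    if a2i == b2j:                 # only the POS tags match
        return 1
    return 2                       # neither level matches

def align(a1, a2, b1, b2):
    """a1/b1: word sequences; a2/b2: their POS sequences. Returns a lexico-POS pattern."""
    n, m = len(a1), len(b1)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = d[i - 1][0] + COST_SKIP
    for j in range(1, m + 1):
        d[0][j] = d[0][j - 1] + COST_SKIP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + COST_SKIP,
                          d[i][j - 1] + COST_SKIP,
                          d[i - 1][j - 1] + replace_cost(a1[i-1], a2[i-1], b1[j-1], b2[j-1]))
    # traceback: emit the optimal pattern
    pattern, i, j = [], n, m
    while i > 0 and j > 0:
        if d[i][j] == d[i - 1][j - 1] + replace_cost(a1[i-1], a2[i-1], b1[j-1], b2[j-1]):
            if a1[i-1] == b1[j-1]:
                pattern.append(a1[i-1])            # keep the shared word
            elif a2[i-1] == b2[j-1]:
                pattern.append("_" + a2[i-1])      # generalise to the POS tag
            else:
                pattern.append(SKIP)
            i, j = i - 1, j - 1
        elif d[i][j] == d[i - 1][j] + COST_SKIP:
            pattern.append(SKIP); i -= 1
        else:
            pattern.append(SKIP); j -= 1
    pattern.extend([SKIP] * (i + j))               # any unmatched prefix is skipped
    return list(reversed(pattern))

print(align("platinum is a precious metal".split(), "NNP VBZ DT JJ NN".split(),
            "molybdenum is a metal".split(),        "NNP VBZ DT NN".split()))
# -> ['_NNP', 'is', 'a', '*S*', 'metal']
```

On the platinum/molybdenum example this yields the pattern "_NNP is a <skip> metal", matching the generalisation discussed in the text.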
corpus consisting of trec9 trec 2002 yahoonews 05gb ap newswire 2gb new york times 2gb reuters 08gb wall street journal 12gb and various online news website 15gbfor our experiments we extract from this corpus six data sets of differ ent sizes 15mb 15 mb 150 mb 15gb 6gb and 15gbfor the cooccurrence model we used minipar a broad coverage parser to parse each data setwe collected the frequency counts of the grammatical relationships output by minipar and used them to compute the pointwise mutual information vectors described in section 31for the patternbased approach we use brills pos tagger to tag each data set52 precisionwe performed a manual evaluation to estimate the precision of both systems on each datasetfor each dataset both systems extracted a set of isa table 3top 15 lexicosyntactic patterns discovered by our systemx or y x _dt y _ y like x and x y x _rb known as y _nn x and other y x y x y including x y or x y such as x y such as x x is a y x _rb called y y especially x 774relationshipssix sets were extracted for the patternbased approach and five sets for the co occurrence approach from each resulting set we then randomly se lected 50 words along with their top 3 highest ranking isa relationshipsfor example table 4 shows three randomly selected names for the pat ternbased system on the 15gb datasetfor each word we added to the list of hypernyms a human generated hypernym we also appended the wordnet hypernyms for each word each of the 11 random samples contained a maximum of 350 isa relationships to manually evaluate we presented each of the 11 random samples to two human judgesthe 50 randomly selected words together with the system human and wordnet generated isa relationships were ran domly orderedthat way there was no way for a judge to know the source of a relationship nor each systems ranking of the relationshipsfor each relationship we asked the judges to assign a score of correct partially correct or incorrectwe then computed the average precision of the system human and wordnet on each datasetwe also computed the percentage of times a correct rela tionship was found in the top 3 isa relationships of a word and the mean reciprocal rank for each word a system receives an mrr score of 1 m where m is the rank of the first name judged correcttable 5 shows the results comparing the two automatic systemstable 6 shows similar results for a more lenient evaluation where both correct and partially correct are judged correctfor small datasets the pattern based method achieves higher precision since the cooccurrence method requires a certain critical mass of statistics before it can extract useful class signatures on the other hand the patternbased approach has relatively constant precision since most of the isa relationships se lected by it are fired by a single patternonce the cooccurrence system reaches its critical mass it generates much more precise hypo nymsthe kappa statistics for our experiments were all in the range 078 085table 7 and table 8 compare the precision of the patternbased and cooccurrencebased methods with the human and wordnet hyponymsthe variation between the human and wordnet scores across both systems is mostly due to the relative cleanliness of the tokens in the cooccurrencebased system wordnet consistently generated higher precision relationships although both algorithms approach wordnet quality on 6gb furthermore wordnet only generated a hyponym 40 of the timethis is mostly due to the lack of proper noun coverage in wordneton the 6 gb corpus the 
cooccurrence approach took approximately 47 single pentium4 25 ghz processor days to complete whereas it took the patternbased approach only four days to complete on 6 gb and 10 days on 15 gb53 recallthe cooccurrence model has higher precision than the patternbased algorithm on most datasetstable 4isa relationships assigned to three randomly selected words random word human wordnet patternbased system sanwa bank bank none subsidiary lender bank mci worldcom inc telecommunications company none phone company competitor company cappuccino beverage none item food beverage table 5average precision top3 precision and mrr for both systems on each datasetpattern system cooccurrence system prec top3 mrr prec top3 mrr 15mb 387 410 410 43 80 73 15mb 391 430 415 146 320 243 150mb 406 460 455 511 730 670 15gb 404 390 390 567 880 777 6gb 463 520 497 649 900 788 15gb 559 540 520 too large to process table 6lenient average precision top3 precision and mrr for both systems on each datasetpattern system cooccurrence system prec top3 mrr prec top3 mrr 15mb 566 600 600 124 200 152 15mb 573 630 610 232 500 373 150mb 507 560 550 606 780 732 15gb 526 510 510 697 930 858 6gb 618 690 675 787 920 862 15gb 678 670 650 too large to process 775 however figure 2 shows that the patternbased approach extracts many more relationshipssemantic extraction tasks are notoriously diffi cult to evaluate for recallto approximate recall we defined a relative recall measure and conducted a question answering task of answering defi nition questions531 relative recall although it is impossible to know the number of isa relationships in any nontrivial corpus it is possible to compute the recall of a system relative to another systems recallthe recall of a system a ra is given by the following formula c c r aa where ca is the number of correct isa relation ships extracted by a and c is the total number of correct isa relationships in the corpuswe define relative recall of system a given system b rab as b a b a ba c c r r are using the precision estimates pa from the pre vious section we can estimate ca pa a where a is the total number of isa relationships discov ered by system a hence bp ap r b a ba figure 3 shows the relative recall of a pattern based approach relative to b cooccurrence modelbecause of sparse data the patternbased approach has much higher precision and recall than the cooccurrence approach on the small 15mb datasetin fact only on the 150mb dataset did the cooccurrence system have higher recallwith datasets larger than 150mb the co occurrence algorithm reduces its running time by filtering out grammatical relationships for words that occurred fewer than k 40 times and hence recall is affected 532 definition questions following fleischman et al we select the 50 definition questions from the trec2003 question setthese questions are of the form who is xand what is xfor each question we extract its respective instance look up their corresponding hyponyms from our isa table and present the corresponding hyponym as the answerwe compare the results of both our systems with wordnetwe extract at most the top 5 hyponyms provided by each systemwe manually evaluate the three systems and assign 3 classes correct partially correct or incorrect to each answerthis evaluation is different from the evaluation performed by the trec organizers for definition questionshowever by being consistent across all total number of isa relationships vs dataset 0 200000 400000 600000 800000 1000000 1200000 1400000 15mb 15mb 150mb 15gb 6gb 15gb 
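Since the relative-recall formulas above arrive garbled, here they are written out: R_A = C_A / C, R_{A|B} = R_A / R_B = C_A / C_B, and, estimating C_A as P_A·|A|, this reduces to (P_A·|A|) / (P_B·|B|). A small helper follows; the numbers in the example call are made up purely to show the computation.

```python
def relative_recall(precision_a, count_a, precision_b, count_b):
    """Recall of system A relative to system B:
       R_{A|B} = C_A / C_B  ~  (P_A * |A|) / (P_B * |B|),
    where |A|, |B| are the numbers of is-a relationships each system extracted
    and P_A, P_B are their estimated precisions."""
    return (precision_a * count_a) / (precision_b * count_b)

# pattern-based system A vs. co-occurrence system B on one dataset (made-up numbers)
print(relative_recall(0.559, 1_200_000, 0.649, 150_000))
```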
datasets to ta l is a re la tio n s hi ps s patternbased system cooccurrencebased system figure 2number of isa relationships extracted by the patternbased and cooccurrencebased approachestable 7average precision of the patternbased sys tem vs wordnet and human hyponymsprecision mrr patwnet human patwnet human 15mb 387 458 830 410 844 830 15mb 391 524 810 415 950 910 150mb 406 494 840 455 889 940 15gb 404 434 790 390 933 890 6gb 463 465 760 497 750 760 15gb 559 456 790 520 780 790 table 8average precision of the cooccurrence based system vs wordnet and human hyponymsprecision mrr coocc wnet human coocc wnet human 15mb 43 427 527 73 877 950 15mb 146 381 487 243 866 950 150mb 511 575 658 670 851 980 15gb 567 628 703 777 930 980 6gb 649 689 752 788 943 980 relative recall 000 100 200 300 400 500 600 700 15mb 15mb 150mb 15gb 6gb 15gb datesets re la tiv e re ca ll figure 3relative recall of the patternbased approach relative to the cooccurrence approach776 systems during the process these evaluations give an indication of the recall of the knowledge basewe measure the performance on the top 1 and the top 5 answers returned by each systemtable 9 and table 10 show the resultsthe corresponding scores for wordnet are 38 accuracy in both the top1 and top5 categories as seen in this experiment the results for both the patternbased and cooccurrencebased systems report very poor per formance for data sets up to 150 mbhowever there is an increase in performance for both systems on the 15 gb and larger datasetsthe per formance of the system in the top 5 category is much better than that of wordnet there is promise for increasing our system accuracy by re ranking the outputs of the top5 hypernymsthere is a long standing need for higher quality performance in nlp systemsit is possible that semantic resources richer than wordnet will en able them to break the current quality ceilingsboth statistical and symbolic nlp systems can make use of such semantic knowledgewith the increased size of the web more and more training data is becoming available and as banko and brill showed even rather simple learning algo rithms can perform well when given enough datain this light we see an interesting need to de velop fast robust and scalable methods to mine semantic information from the webthis paper compares and contrasts two methods for extracting isa relations from corporawe presented a novel patternbased algorithm scalable to the terascale which outperforms its more informed syntactical cooccurrence counterpart on very small and very large dataalbeit possible to successfully apply linguisti callylight but datarich approaches to some nlp applications merely reporting these results often fails to yield insights into the underlying theories of language at playour biggest challenge as we venture to the terascale is to use our new found wealth not only to build better systems but to i am prove our understanding of language
C04-1111
Towards terascale semantic acquisition.
Although vast amounts of textual data are freely available, many NLP algorithms exploit only a minute percentage of it. In this paper we study the challenges of working at the terascale. We present an algorithm designed for the terascale for mining is-a relations that achieves similar performance to a state-of-the-art linguistically-rich method. We focus on the accuracy of these two systems as a function of processing time and corpus size. We propose a similar, highly scalable approach based on an edit-distance technique to learn lexico-POS patterns, showing both good performance and efficiency. We extend is-a relation acquisition towards the terascale and automatically identify hypernym patterns by minimal edit distance. In the scenario of extracting is-a relations, we propose one pattern-based approach and compare it with a baseline syntactic distributional similarity method.
characterising measures of lexical distributional similarity this work investigates the variation in a words dis tributionally nearest neighbours with respect to the similarity measure used we identify one type ofvariation as being the relative frequency of the neighbour words with respect to the frequency of the target word we then demonstrate a threeway connec tion between relative frequency of similar words aconcept of distributional gnerality and the seman tic relation of hyponymy finally we consider theimpact that this has on one application of distributional similarity methods over recent years many natural language pro cessing techniques have been developedthat might benefit from knowledge of distribu tionally similar words ie words that occur in similar contextsfor example the sparse dataproblem can make it difficult to construct language models which predict combinations of lex ical eventssimilaritybased smoothing is an intuitivelyappealing approach to this problem where prob abilities of unseen cooccurrences are estimatedfrom probabilities of seen cooccurrences of dis tributionally similar eventsother potential applications apply the hy pothesised relationship betweendistributional similarity and semantic similar ity ie similarity in the meaning of words can be predicted from their distributional similarityone advantage of automatically generated the sauruses over largescale manually cre ated thesauruses such as wordnet is that they might be tailored to a partic ular genre or domainhowever due to the lack of a tight defini tion for the concept of distributional similarity and the broad range of potential applications alarge number of measures of distributional similarity have been proposed or adopted previous work on the evaluation of dis tributional similarity methods tends to either compare sets of distributionally similar words to a manually created semantic resource or be orientedtowards a particular task such as language mod elling the first approach is not ideal since it assumes that the goal of distributional similarity methods is topredict semantic similarity and that the semantic resource used is a valid gold standardfurther the second approach is clearly advanta geous when one wishes to apply distributional similarity methods in a particular application areahowever it is not at all obvious that oneuniversally best measure exists for all applica tions thus applying adistributional similarity technique to a new ap plication necessitates evaluating a large number of distributional similarity measures in addition to evaluating the new model or algorithmwe propose a shift in focus from attemptingto discover the overall best distributional sim ilarity measure to analysing the statistical and linguistic properties of sets of distributionally similar words returned by different measuresthis will make it possible to predict in advanceof any experimental evaluation which distributional similarity measures might be most appro priate for a particular applicationfurther we explore a problem faced by the automatic thesaurus generation community which is that distributional similarity methodsdo not seem to offer any obvious way to distinguish between the semantic relations of syn onymy antonymy and hyponymyprevious work on this problem involves identifying specific phrasal patterns within text eg xs and other ysis used as evidence that x is a hyponym of y our work explores the connection between relativefrequency distributional generality and seman tic generality with promising 
resultsthe rest of this paper is organised as followsin section 2 we present ten distributional simi larity measures that have been proposed for use in nlpin section 3 we analyse the variation in neighbour sets returned by these measuresin section 4 we take one fundamental statisticalproperty and analyse correla tion between this and the nearest neighbour setsgeneratedin section 5 we relate relative fre quency to a concept of distributional generalityand the semantic relation of hyponymyin sec tion 6 we consider the effects that this has on a potential application of distributional similarity techniques which is judging compositionality of collocationsin this section we introduce some basic con cepts and then discuss the ten distributional similarity measures used in this studythe cooccurrence types of a target word are the contexts c in which it occurs and these have associated frequencies which may be used to form probability estimatesin our work thecooccurrence types are always grammatical de pendency relationsfor example in sections 3 to 5 similarity between nouns is derived fromtheir cooccurrences with verbs in the direct object positionin section 6 similarity between verbs is derived from their subjects and objectsthe k nearest neighbours of a target word w are the k words for which similarity with w is greatestour use of the term similarity measure encompasses measures which should strictly bereferred to as distance divergence or dissimilar ity measuresan increase in distance correlates with a decrease in similarityhowever eithertype of measure can be used to find the k near est neighbours of a target wordtable 1 lists ten distributional similarity mea suresthe cosine measure returns the cosine of the angle between two vectorsthe jensenshannon divergence measure and the skew divergence measure are based on the kullbackleibler divergence measurethe kl divergenceor relative entropy d between two prob ability distribution functions p and q is defined as the inefficiency of assuming that the distribution is q when the true distribution is p d c p log p q however d if there are any con texts c for which p 0 and q 0thusthis measure cannot be used directly on maxi mum likelihood estimate probabilitiesone possible solution is to use the js diver gence measure which measures the cost of usingthe average distribution in place of each individual distributionanother is the skew diver gence measure which uses the p distribution tosmooth the q distributionthe value of the pa rameter controls the extent to which the kl divergence is approximatedwe use 099 since this provides a close approximation to the kl divergence and has been shown to provide good results in previous research the confusion probability is an estimate of the probability that one word can be substituted for anotherwords w1 and w2 are completely confusable if we are equally as likely to see w2 in a given context as we are to see w1 in that contextjaccards coefficient calculates the proportion of features be longing to either word that are shared by both wordsin the simplest case the features of a word are defined as the contexts in which it has been seen to occursimjami is a variant in which the features of a word are thosecontexts for which the pointwise mutual infor mation between the word and the context is positive where mi can be calculated usingi log p p the related dice coeffi cient is omitted here since it has been shown that dice and jaccards coefficients are monotonic in each otherlins measure is based on his 
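Several of the measures introduced here are easiest to understand as code over the maximum-likelihood context distributions P(c|w). The sketch below implements the KL divergence, the Jensen-Shannon divergence (the cost of using the average distribution in place of each individual one) and the α-skew divergence with α = 0.99 (the q distribution smoothed with p); the toy verb-object contexts in the example are invented.

```python
import math

def kl(p, q):
    """D(p || q) = sum_c p(c) * log(p(c) / q(c)); contexts with p(c)=0 contribute 0.
    Undefined (here: KeyError) when q(c)=0 for some c with p(c)>0, which is
    exactly why JS and skew divergence are used on MLE distributions."""
    return sum(pc * math.log(pc / q[c]) for c, pc in p.items() if pc > 0)

def js(p, q):
    """Cost of using the average distribution in place of p and q."""
    contexts = set(p) | set(q)
    m = {c: 0.5 * (p.get(c, 0.0) + q.get(c, 0.0)) for c in contexts}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def skew(p, q, alpha=0.99):
    """alpha-skew divergence: the q distribution is smoothed with p."""
    contexts = set(p) | set(q)
    smoothed = {c: alpha * q.get(c, 0.0) + (1 - alpha) * p.get(c, 0.0) for c in contexts}
    return kl(p, smoothed)

# p and q are MLE direct-object context distributions for two nouns (toy values)
p = {"eat_obj": 0.6, "grow_obj": 0.4}
q = {"eat_obj": 0.3, "grow_obj": 0.3, "sell_obj": 0.4}
print(js(p, q), skew(p, q))
```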
informationtheoretic similarity theorem whichstates the similarity between a and b is measured by the ratio between the amount of in formation needed to state the commonality of a and b and the information needed to fully describe what a and b arethe final three measures are settings in the additive mibased cooccurrence retrieval model we can measure the precisionand the recall of a potential neighbours re trieval of the cooccurrences of the target wordwhere the sets of required and retrieved co occurrences and f respectively are those cooccurrences for which mi is positiveneighbours with both high precision and high recall retrieval can be obtained by computing measure function cosine simcm c p p c p 2 c p 2 jensshandistjs 12 d where p p and q p skew dist d p where p p and q p confprobsimcp c p p p p jaccards simja f f f f where f c p 0 jaccmi simjami f f f f where f c i 0 lins simlin f f i f i f i where f c i 0 precision simp f f i f i where f c i 0 recall simr f f i f i where f c i 0 harmmean simhm 2simp simr simp simr where f c i 0 table 1 ten distributional similarity measures their harmonic mean we have described a number of ways of calculating distributional similaritywe now con sider whether there is substantial variation ina words distributionally nearest neighbours ac cording to the chosen measurewe do this by calculating the overlap between neighbour setsfor 2000 nouns generated using different mea sures from directobject data extracted from the british national corpus 31 experimental setupthe data from which sets of nearest neighbours are derived is directobject data for 2000 nouns extracted from the bnc using a robust accurate statistical parser for reasons of computational efficiencywe limit ourselves to 2000 nouns and directobject relation datagiven the goal of comparing neighbour sets generated by different mea sures we would not expect these restrictions to affect our findingsthe complete set of 2000 nouns is the union of two sets wshigh and wslow for which nouns were selected on the basis of frequency wshigh contains the 1000 most frequently occurring nouns and wslow contains the nouns ranked 30014000 by excludingmidfrequency nouns we obtain a clear sepa ration between high and low frequency nounsthe complete dataset consists of 1596798 cooccurrence tokens distributed over 331079 co occurrence typesfrom this data we computedthe similarity between every pair of nouns according to each distributional similarity mea surewe then generated ranked sets of nearest neighbours for each word and each measurefor a given word we compute the overlap between neighbour sets using a comparison tech nique adapted from lin given a word w each word win wscomp is assigned a rankscore of k rank if it is one of the k near est neighbours of w using measure m and zero otherwiseif ns is the vector of such scores for word w and measure m then theoverlap cns of two neigh bour sets is the cosine between the two vectors cns w rm1rm2 k i1 i2 the overlap score indicates the extent to which sets share members and the extent to whichthey are in the same orderto achieve an over lap score of 1 the sets must contain exactly the same items in exactly the same orderan overlap score of 0 is obtained if the sets do not contain any common itemsif two sets share roughly half their items and these shared items are dispersed throughout the sets in a roughlysimilar order we would expect the overlap be tween sets to be around 05cm js cp ja jami lin cm 10 069 053 033 026 028 032 js 069 10 081 046 048 049 055 053 081 
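The neighbour-set comparison described above reduces to a cosine between rank-score vectors. In the sketch below each of the k nearest neighbours gets a score that decreases with rank (k + 1 − rank is assumed here, since the garbled text could also be read as k − rank); all other words score zero. This is an illustration of the comparison technique, not the authors' code.

```python
import math

def rank_scores(neighbours, k):
    """neighbours: the k nearest neighbours of a word, best first.
    Returns the rank-score vector as a dict; unlisted words implicitly score 0."""
    return {n: k + 1 - r for r, n in enumerate(neighbours[:k], start=1)}

def overlap(ns1, ns2):
    """Cosine between two rank-score vectors: 1 = same items in the same order,
    0 = no items in common."""
    dot = sum(s * ns2.get(n, 0) for n, s in ns1.items())
    norm1 = math.sqrt(sum(s * s for s in ns1.values()))
    norm2 = math.sqrt(sum(s * s for s in ns2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

a = rank_scores(["pear", "apricot", "mango", "plum"], k=4)
b = rank_scores(["pear", "mango", "cherry", "plum"], k=4)
print(overlap(a, b))
```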
10 061 04 039 048 cp 033 046 061 10 024 020 029 ja 026 048 04 024 10 081 069 jami 028 049 039 020 081 10 081 lin 032 055 048 029 069 081 10 table 2 crosscomparison of first seven similarity measures in terms of mean overlap of neighbour sets and corresponding standard deviationsp r hm cm 018 031 030 js 019 055 051 008 074 041 cp 003 057 025 ja 036 038 074 jami 042 040 086 lin 046 052 095table 3 mean overlap scores for seven simi larity measures with precision recall and the harmonic mean in the amcrm32 resultstable 2 shows the mean overlap score between every pair of the first seven measures in table 1 calculated over wscomptable 3 shows the mean overlap score between each of these measures and precision recall and the harmonic mean inthe amcrmin both tables standard devia tions are given in brackets and boldface denotes the highest levels of overlap for each measurefor compactness each measure is denoted by its subscript from table 1although overlap between most pairs of measures is greater than expected if sets of 200 neighbours were generated randomly from wscomp and is not significantly greaterthan this at the 1 level there are substantial differences between the neighbour sets gen erated by different measuresfor example for many pairs neighbour sets do not appear to have even half their members in commonwe have seen that there is a large variation inneighbours selected by different similarity mea suresin this section we analyse how neighboursets vary with respect to one fundamental statis tical property word frequencyto do this we measure the bias in neighbour sets towards high frequency nouns and consider how this varies depending on whether the target noun is itself a high frequency noun or low frequency noun41 measuring biasif a measure is biased towards selecting high frequency words as neighbours then we would ex pect that neighbour sets for this measure wouldbe made up mainly of words from wshighfur ther the more biased the measure is the more highly ranked these high frequency words will tend to bein other words there will be highoverlap between neighbour sets generated con sidering all 2000 nouns as potential neighbours and neighbour sets generated considering just the nouns in wshigh as potential neighboursin the extreme case where all of a nouns k nearestneighbours are high frequency nouns the over lap with the high frequency noun neighbour set will be 1 and the overlap with the low frequency noun neighbour set will be 0the inverse is ofcourse true if a measure is biased towards se lecting low frequency words as neighboursif nswordset is the vector of neighbours for a given word w andsimilarity measure m and generated considering just the words in wordset as potential neigh bours then the overlap between two neighboursets can be computed using a cosine if chigh c and clow c then we compute the bias towards high frequency neighbours for word w us ing measure m as biashighm chigh chighclow the value of this normalised score lies in the range 01 where 1 indicates a neighbour set completely made up of high frequency words 0 indicates a neighbour set completely made up oflow frequency words and 05 indicates a neighbour set with no biases towards high or low fre quency wordsthis score is more informative than simply calculating the proportion of high high freqlow freqtarget nouns target nouns cm 090 087 js 094 070 098 090 cp 100 099 ja 099 021 jami 095 014 lin 085 038 p 012 004 r 099 098 hm 092 028 table 4 mean value of biashigh according to measure and frequency of target nounand 
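The bias score itself is a one-liner on top of the overlap function sketched earlier: compute the overlap of the unrestricted neighbour set with the high-frequency-only and low-frequency-only neighbour sets, then normalise. Returning 0.5 (no bias) when both overlaps are zero is an assumption for the degenerate case, not something stated in the text.

```python
def bias_high(ns_all, ns_high, ns_low):
    """ns_all: neighbour rank-scores computed over all candidate nouns;
    ns_high / ns_low: rank-scores computed restricting the candidates to the
    high- / low-frequency noun sets. Uses overlap() from the sketch above."""
    c_high = overlap(ns_all, ns_high)
    c_low = overlap(ns_all, ns_low)
    return c_high / (c_high + c_low) if (c_high + c_low) else 0.5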
low frequency words in each neighbour set because it weights the importance of neighbours by their rank in the setthus a large numberof high frequency words in the positions clos est to the target word is considered more biased than a large number of high frequency words distributed throughout the neighbour set42 resultstable 4 shows the mean value of the biashigh score for every measure calculated over the set of high frequency nouns and over the set of low frequency nounsthe standard deviations all lie in the range 002any deviation from 05 of greater than 00234 is significant at the 1 levelfor all measures and both sets of target nouns there appear to be strong tendencies toselect neighbours of particular frequenciesfurther there appears to be three classes of mea sures those that select high frequency nouns as neighbours regardless of the frequency of thetarget noun those that select low frequency nouns as neighbours regard less of the frequency of the target noun and those that select nouns of a similar frequency to the target noun this can also be considered in terms of distri butional generalityby definition recall preferswords that have occurred in more of the con texts that the target noun has regardless of whether it occurs in other contexts as well ie it prefers distributionally more general wordsthe probability of this being the case increasesas the frequency of the potential neighbour increases and so recall tends to select high fre quency wordsin contrast precision prefers words that have occurred in very few contextsthat the target word has not ie it prefers distributionally more specific wordsthe prob ability of this being the case increases as the frequency of the potential neighbour decreases and so precision tends to select low frequencywordsthe harmonic mean of precision and re call prefers words that have both high precision and high recallthe probability of this beingthe case is highest when the words are of sim ilar frequency and so the harmonic mean will tend to select words of a similar frequencyin this section we consider the observed fre quency effects from a semantic perspectivethe concept of distributional generality in troduced in the previous section has parallels with the linguistic relation of hyponymy where a hypernym is a semantically more general term and a hyponym is a semantically more specifictermfor example animal is an hypernym of dog and conversely dog is an hyponym of animalalthough one can obviously think of counterexamples we would generally expect that the more specific term dog can only be used in contexts where animal can be used and that the more general term animal might be used in all of the contexts where dogis used and possibly othersthus we might ex pect that distributional generality is correlated with semantic generality a word has high recalllow precision retrieval of its hyponymscooccurrences and high precisionlow recall re trieval of its hypernymscooccurrencesthus if n1 and n2 are related and p r we might expect that n2 is a hy ponym of n1 and vice versahowever having discussed a connection between frequency and distributional generality we might also expect to find that the frequency of the hypernymic term is greater than that of the hyponymictermin order to test these hypotheses we ex tracted all of the possible hyponymhypernym pairs from our list of 2000 nouns we then calculatedthe proportion for which the direction of the hy ponymy relation could be accurately predicted by the relative values of precision and recall andthe 
proportion for which the direction of the hy ponymy relation could be accurately predictedby relative frequencywe found that the direc tion of the hyponymy relation is correlated in the predicted direction with the precisionrecall 1there may be other concepts in the hypernym chain between dog and animal eg carnivore and mammalvalues in 71 of cases and correlated in the pre dicted direction with relative frequency in 70 of casesthis supports the idea of a threewaylinking between distributional generality rela tive frequency and semantic generalitywe now consider the impact that this has on a potential application of distributional similarity methodsin its most general sense a collocation is a habitual or lexicalised word combinationhow ever some collocations such as strong tea arecompositional ie their meaning can be determined from their constituents whereas oth ers such as hot dog are notboth types areimportant in language generation since a sys tem must choose between alternatives but onlynoncompositional ones are of interest in language understanding since only these colloca tions need to be listed in the dictionarybaldwin et al explore empiricalmodels of compositionality for nounnoun com pounds and verbparticle constructionsbased on the observation that compositional collocations tend to be hyponyms of their head constituent they propose a model which considers the semantic similarity between a collocation and its constituent wordsmccarthy et al also investigate sev eral tests for compositionality including one based on the observation that compositional collocations tend to be similar inmeaning to their constituent partsthey ex tract cooccurrence data for 111 phrasal verbs and their simplex constituents from the bnc using rasp and cal culate the value of simlin between each phrasal verb and its simplex constituentthe test simplexscore is used to rank the phrasal verbs according to their similarity with their simplexconstituentthis ranking is correlated with hu man judgements of the compositionality of the phrasal verbs using spearmans rank correlationcoefficientthe value obtained is dis appointing since it is not statistically significant2 however haspelmath notes that a compositional collocation is not just similar to one of its constituents it can be considered tobe a hyponym of its head constituentfor ex ample strong teais a type of teaand to2other tests for compositionality investigated by mc carthy et al do much bettermeasure rs p under h0 simlin 00525 02946 precision 0160 00475 recall 0219 00110 harmonic mean 0011 04562 table 5 correlation with compositionality for different similarity measures rip upis a way of rippingthus we hypothesised that a distributional measure which tends to select more generalterms as neighbours of the phrasal verb would do better than measures that tend to select more specific terms or measures that tend to select terms of a similar specificity table 5 shows the results of using different similarity measures with the simplexscore test and data of mccarthy et al we now see significant correlation between compositionality judgements and distributional similarity of thephrasal verb and its head constituentthe cor relation using the recall measure is significant at the 5 level thus we can conclude that if the simplex verb has high recall retrieval of the phrasal verbs cooccurrences then the phrasal is likely to be compositionalthe correlation score using the precision measure is negative since we would not expect the simplex verb to be a hyponym of the 
phrasal verb and thus ifthe simplex verb does have high precision re trieval of the phrasal verbs cooccurrences it is less likely to be compositionalfinally we obtained a very similar result by ranking phrasals according to their inverse relative frequency with their simplex constituent freq thus it would seem that the threeway connection betweendistributional generality hyponymy and rela tive frequency exists for verbs as well as nounswe have presented an analysis of a set of dis tributional similarity measureswe have seen that there is a large amount of variation in the neighbours selected by different measures andtherefore the choice of measure in a given appli cation is likely to be importantwe also identified one of the major axes ofvariation in neighbour sets as being the fre quency of the neighbours selected relative to the frequency of the target wordthere are threemajor classes of distributional similarity mea sures which can be characterised as 1 higher frequency selecting or high recall measures 2lower frequency selecting or high precision mea sures and 3 similar frequency selecting or high precision and recall measuresa word tends to have high recall similarity with its hyponyms and high precision similarity with its hypernymsfurther in the majority ofcases it tends to be more frequent than its hy ponyms and less frequent than its hypernymsthus there would seem to a three way corre lation between word frequency distributional generality and semantic generalitywe have considered the impact of these observations on a technique which uses a distributional similarity measure to determine compositionality of collocationswe saw that in this ap plication we achieve significantly better resultsusing a measure that tends to select higher frequency words as neighbours rather than a mea sure that tends to select neighbours of a similar frequency to the target wordthere are a variety of ways in which this workmight be extendedfirst we could use the ob servations about distributional generality andrelative frequency to aid the process of organising distributionally similar words into hierar chiessecond we could consider the impact of frequency characteristics in other applicationsthird for the general application of distribu tional similarity measures it would be usefulto find other characteristics by which distribu tional similarity measures might be classifiedacknowledgementsthis work was funded by a uk epsrc stu dentship to the first author uk epsrc project grs2640801 and uk epsrc project grn3649401 we would liketo thank adam kilgarriff and bill keller for use ful discussions
C04-1146
Characterising measures of lexical distributional similarity.
This work investigates the variation in a word's distributionally nearest neighbours with respect to the similarity measure used. We identify one type of variation as being the relative frequency of the neighbour words with respect to the frequency of the target word. We then demonstrate a three-way connection between the relative frequency of similar words, a concept of distributional generality, and the semantic relation of hyponymy. Finally, we consider the impact that this has on one application of distributional similarity methods. Abstracting from results for concrete test sets, we try to identify statistical and linguistic properties on which the performance of similarity metrics generally depends. We also found that frequency played a large role in determining the direction of entailment, with the more general term often occurring more frequently. We analysed the variation in a word's distributionally nearest neighbours with respect to a variety of similarity measures, and attempted to refine the distributional similarity goal to predict whether one term is a generalization/specification of the other.
Wide-coverage semantic representations from a CCG parser.
This paper shows how to construct semantic representations from the derivations produced by a wide-coverage CCG parser. Unlike the dependency structures returned by the parser itself, these can be used directly for semantic interpretation. We demonstrate that well-formed semantic representations can be produced for over 97% of the sentences in unseen WSJ text. We believe this is a major step towards wide-coverage semantic interpretation, one of the key objectives of the field of NLP.
The levels of accuracy and robustness recently achieved by statistical parsers (Charniak) have led to their use in a number of NLP applications, such as question answering, machine translation, sentence simplification, and a linguist's search engine. Such parsers typically return phrase-structure trees in the style of the Penn Treebank, but without traces and co-indexation. However, the usefulness of this output is limited, since the underlying meaning is difficult to reconstruct from such skeletal parse trees.
In this paper we demonstrate how a wide-coverage statistical parser using Combinatory Categorial Grammar (CCG) can be used to generate semantic representations. There are a number of advantages to using CCG for this task. First, CCG provides a surface compositional analysis of certain syntactic phenomena such as coordination and extraction, allowing the logical form to be obtained for such cases in a straightforward way. Second, CCG is a lexicalised grammar and only uses a small number of semantically transparent combinatory rules to combine CCG categories. Hence, providing a compositional semantics for CCG simply amounts to assigning semantic representations to the lexical entries and interpreting the combinatory rules. And third, there exist highly accurate, efficient and robust CCG parsers which can be used directly for this task.
The existing CCG parsers deliver predicate-argument structures, but not semantic representations that can be used for inference. The present paper seeks to extend one of these wide-coverage parsers by using it to build logical forms suitable for use in various NLP applications that require semantic interpretation. We show how to construct first-order representations from CCG derivations using the λ-calculus, and demonstrate that semantic representations can be produced for over 97% of the sentences in unseen WSJ text. The only other deep parser we are aware of to achieve such levels of robustness for the WSJ is Kaplan et al. The use of the λ-calculus is integral to our method; however, first-order representations are simply used as a proof of concept: we could have used DRSs or some other representation more tailored to the application in hand.
There is some existing work with a similar motivation to ours. Briscoe and Carroll generate underspecified semantic representations from their robust parser. Toutanova et al. and Kaplan et al. combine statistical methods with a linguistically motivated grammar formalism in an attempt to achieve levels of robustness and accuracy comparable to the Penn Treebank parsers. However, there is a key difference between these approaches and ours: in our approach, the creation of the semantic representations forms a completely …
(figure: example CCG derivation for the sentence "It could cost taxpayers 15 million to install and residents 1 million a year to maintain")
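To illustrate the kind of compositional semantics the paper relies on — semantic representations attached to lexical entries and combined by interpreting the combinatory rules — here is a toy sketch using native Python lambdas in place of a proper λ-calculus implementation. The tiny lexicon, the categories and the derivation are invented for illustration; they are not the parser's actual lexicon or output.

```python
# Toy sketch (not the authors' implementation): lexical entries paired with
# lambda-term semantics, combined by the CCG application rules.
lexicon = {
    # word: (category, semantics) -- invented entries for illustration
    "John":  ("NP",          "john"),
    "Mary":  ("NP",          "mary"),
    "loves": ("(S\\NP)/NP",  lambda y: lambda x: f"love({x},{y})"),
}

def forward_apply(fn, arg):
    """X/Y  Y  =>  X : apply the functor's semantics to the argument's."""
    return fn(arg)

def backward_apply(arg, fn):
    """Y  X\\Y  =>  X : apply the functor's semantics to the argument's."""
    return fn(arg)

# Derivation for "John loves Mary":
#   loves Mary        : (S\NP)/NP  NP  -> S\NP   (forward application)
#   John [loves Mary] : NP  S\NP       -> S      (backward application)
vp = forward_apply(lexicon["loves"][1], lexicon["Mary"][1])
s = backward_apply(lexicon["John"][1], vp)
print(s)   # love(john,mary)
```

A real implementation would also need to interpret composition, type-raising and coordination, which the combinatory rules of CCG cover.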
C04-1180
Wide-coverage semantic representations from a CCG parser.
This paper shows how to construct semantic representations from the derivations produced by a wide-coverage CCG parser. Unlike the dependency structures returned by the parser itself, these can be used directly for semantic interpretation. We demonstrate that well-formed semantic representations can be produced for over 97% of the sentences in unseen WSJ text. We believe this is a major step towards wide-coverage semantic interpretation, one of the key objectives of the field of NLP. We present an algorithm that learns CCG lexicons with semantics but requires fully specified CCG derivations in the training data. We consider the challenging problem of constructing broad-coverage semantic representations with CCG, but do not learn the lexicon.
semantic role labeling via integer linear programming inference we present a system for the semantic role la beling task the system combines a machine learning technique with an inference procedurebased on integer linear programming that supports the incorporation of linguistic and struc tural constraints into the decision process thesystem is tested on the data provided in conll 2004 shared task on semantic role labeling and achieves very competitive results semantic parsing of sentences is believed to be animportant task toward natural language understand ing and has immediate applications in tasks such information extraction and question answeringwe study semantic role labelingfor each verb in a sentence the goal is to identify all constituents that fill a semantic role and to determine their rolessuch as agent patient or instrument and their ad juncts such as locative temporal or mannerthe propbank project provides a large humanannotated corpus of semantic verbargument relationsspecifically we use the data provided in the conll2004 shared task of semanticrole labeling which consists of a portion of thepropbank corpus allowing us to compare the per formance of our approach with other systemsprevious approaches to the srl task have madeuse of a full syntactic parse of the sentence in or der to define argument boundaries and to determine the role labels in this work following the conll2004 shared task definition we assume thatthe srl system takes as input only partial syn tactic information and no external lexicosemantic knowledge basesspecifically we assume as input resources a partofspeech tagger a shallow parser that can process the input to the level of basedchunks and clauses and a namedentity recognizer we do not assume a full parse as inputsrl is a difficult task and one cannot expecthigh levels of performance from either purely man ual classifiers or purely learned classifiersrather supplemental linguistic information must be used to support and correct a learning systemso farmachine learning approaches to srl have incorpo rated linguistic information only implicitly via theclassifiersfeaturesthe key innovation in our ap proach is the development of a principled method tocombine machine learning techniques with linguistic and structural constraints by explicitly incorpo rating inference into the decision processin the machine learning part the system we present here is composed of two phasesfirst a set of argument candidates is produced using twolearned classifiersone to discover beginning po sitions and one to discover end positions of each argument typehopefully this phase discovers a small superset of all arguments in the sentence in a second learning phase the candi date arguments from the first phase are rescored using a classifier designed to determine argument type given a candidate argumentunfortunately it is difficult to utilize global prop erties of the sentence into the learning phaseshowever the inference level it is possible to incorporate the fact that the set of possible rolelabelings is restricted by both structural and lin guistic constraintsfor example arguments cannotstructurally overlap or given a predicate some ar gument structures are illegalthe overall decision problem must produce an outcome that consistent with these constraintswe encode the constraints aslinear inequalities and use integer linear programming as an inference procedure to make a final decision that is both consistent with the con straints and most likely according to the 
learningsystemalthough ilp is generally a computationally hard problem there are efficient implementations that can run on thousands of variables and constraintsin our experiments we used the commer cial ilp package and were able to process roughly twenty sentences per secondthe goal of the semanticrole labeling task is to dis cover the verbargument structure for a given input sentencefor example given a sentence i left my pearls to my daughterinlaw in my will the goal is to identify different arguments of the verb left which yields the outputa0 i v left a1 my pearls a2 to my daughter inlaw amloc in my willhere a0 represents the leaver a1 represents the thing left a2 represents the benefactor amloc is an adjunct indicating the location of the action and v determines the verbfollowing the definition of the propbank and conll2004 shared task there are six different types of arguments labelled as a0a5 and aathese labels have different semantics for each verbas specified in the propbank frame filesin addi tion there are also 13 types of adjuncts labelled as amxxx where xxx specifies the adjunct typein some cases an argument may span over differ ent parts of a sentence the label cxxx is used to specify the continuity of the arguments as shown in the example belowa1 the pearls a0 i v said ca1 were left to my daughterinlawmoreover in some cases an argument might be a relative pronoun that in fact refers to the actual agentoutside the clausein this case the actual agent is la beled as the appropriate argument type xxx while the relative pronoun is instead labeled as rxxxfor example a1 the pearls ra1 which a0 i v left a2 to my daughterinlaw are fakesee the details of the definition in kingsbury and palmer and carreras and marquez our semantic role labeling system consists of two phasesthe first phase finds a subset of arguments from all possible candidatesthe goal here is tofilter out as many as possible false argument candidates while still maintaining high recallthe sec ond phase focuses on identifying the types of thoseargument candidatessince the number of candi dates is much fewer the second phase is able to use slightly complicated features to facilitate learning a better classifierthis section first introduces the learning system we use and then describes how we learn the classifiers in these two phases31 snow learning architecturethe learning algorithm used is a variation of the winnow update rule incorporated in snow a multiclass classifier that is specifically tailored for large scale learningtaskssnow learns a sparse network of linear functions in which the targets arerepresented as linear functions over a common fea ture spaceit incorporates several improvements over the basic winnow multiplicative update rulein particular a regularization term is added which has the effect of trying to separate the data with a thick separator in the work presented here we use this regu larization with a fixed parameterexperimental evidence has shown that snow activations are monotonic with the confidence in the predictiontherefore it can provide a goodsource of probability estimationwe use soft max over the raw activation values as conditional probabilities and also the score of the targetspecifically suppose the number of classes is n and the raw activation values of class i is actithe posterior estimation for class i is derived by the following equationscore pi e acti 1jn eactj the score plays an important role in different placesfor example the first phase uses the scoresto decide which argument 
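The posterior estimate in the equation above is just a softmax over the raw activations. A small sketch follows; the max-subtraction for numerical stability is a standard implementation detail assumed here, not something the paper mentions.

```python
import math

def snow_posteriors(activations):
    """Softmax over raw SNoW activations, used as conditional probabilities / scores."""
    m = max(activations)                       # subtract max for numerical stability
    exps = [math.exp(a - m) for a in activations]
    z = sum(exps)
    return [e / z for e in exps]

# e.g. raw activations for the 43 begin/end classes of the first-phase classifiers
print(snow_posteriors([2.0, 0.5, -1.0]))
```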
candidates should be filtered outalso the scores output by the second phase classifier are used in the inference procedure to reason for the best global labeling32 first phase find argument candidatesthe first phase is to predict the argument candidates of a given sentence that correspond to the active verbunfortunately it turns out that it is difficult to predict the exact arguments accuratelytherefore the goal here is to output a superset of the correct arguments by filtering out unlikely candidatesspecifically we learn two classifiers one to de tect beginning argument locations and the otherto detect end argument locationseach multi class classifier makes predictions over fortythree classesthirtytwo argument types ten continuousargument types and one class to detect not begin ningnot endfeatures used for these classifiers are word feature includes the current word two words before and two words afterpartofspeech tag feature includes the pos tags of all words in a window of size twochunk feature includes the bio tags for chunks of all words in a window of size twopredicate lemma pos tag show the lemma form and pos tag of the active predicatevoice feature is the voice of the current predicatethis is extracted with a simple rule a verb is identified as passive if it follows a tobe verb in the same phrase chunk and its pos tag is vbn or it immediately follows a noun phraseposition feature describes if the current word is before or after the predicatechunk pattern encodes the sequence of chunks from the current words to the predicateclause tag indicates the boundary of clausesclause path feature is a path formed from a semiparsed tree containing only clauses and chunkseach clause is named with the chunk preceding itthe clause path is the path from predicate to target word in the semiparse treeclause position feature is the position of the target word relative to the predicate in the semiparse tree containing only clausesthereare four configurations target word and pred icate share the same parent target word parent is an ancestor of predicate predicate parent is an ancestor of target word or otherwisebecause each argument consists of a single be ginning and a single ending these classifiers can be used to construct a set of potential arguments although this phase identifies typed arguments the second phasewill rescore each phrase using phrasebased classifiers therefore the goal of the first phase is sim ply to identify nontyped phrase candidatesin thistask we achieves 9896 and 8865 recall on the training and the develop ment set respectivelybecause these are the onlycandidates passed to the second phase the final sys tem performance is upperbounded by 886533 second phase argument classificationthe second phase of our system assigns the final argument classes to of the argument can didates supplied from the first phaseagain thesnow learning architecture is used to train a multi class classifier to label each argument to one of the argument types plus a special classno argumenttraining examples are created from the argu ment candidates supplied from the first phase using the following features predicate lemma pos tag voice position clause path clause position chunk pattern same features as those in the first phaseword pos tag from the argument includ ing the firstlastand head1 word and tagnamed entity feature tells if the target argu ment is embeds overlaps or is embedded in a named entity with its typechunk tells if the target argument is embeds overlaps or is embedded in a chunk with its 
typelengths of the target argument in the numbers of words and chunks separatelyverb class feature is the class of the active predicate described in propbank framesphrase type uses simple heuristics to identify the target argument as vp pp or npsubcategorization describes the phrase structure around the predicatewe separate the clause where the predicate is in into three partsthe predicate chunk segments before and after the predicate and use the sequence of phrase types of these three segmentsbaseline features identified not in the main verb chunk as amneg and modal verb in the main verb chunk as ammodclause coverage describes how much of the local clause is covered by the target argumentchunk pattern length feature counts the num ber of patterns in the argumentconjunctions join every pair of the above fea tures as new featuresboundary words pos tag include twowordstags before and after the target argu mentbigrams are pairs of wordstags in the window from two words before the target to the first word of the target and also from the last word to two words after the argument1we use simple rules to first decide if a candidate phrase type is vp np or ppthe headword of an np phrase is the rightmost nounsimilarly the leftmost verbproposition of a vppp phrase is extracted as the headword sparse collocation picks one wordtag from the two words before the argument the first wordtag the last wordtag of the argument and one wordtag from the two words after the argument to join as featuresalthough the predictions of the secondphaseclassifier can be used directly the labels of argu ments in a sentence often violate some constraintstherefore we rely on the inference procedure to make the final predictionsideally if the learned classifiers are perfect arguments can be labeled correctly according to the classifierspredictionsin reality labels assigned to ar guments in a sentence often contradict each otherand violate the constraints arising from the struc tural and linguistic informationin order to resolve the conflicts we design an inference procedure thattakes the confidence scores of each individual argument given by the secondphase classifier as in put and outputs the best global assignment that also satisfies the constraintsin this section we firstintroduce the constraints and the inference prob lem in the semantic role labeling taskthen wedemonstrate how we apply integer linear program ming to reason for the global label assignment41 constraints over argument labelingformally the argument classifier attempts to assign labels to a set of arguments s1m indexed from 1 to m each argument si can take any label from a set of argument labels p and the indexed set of arguments can take a set of labels c1m pm if we assume that the classifier returns a score score corresponding to the likelihood ofseeing label ci for argument si then given a sentence the unaltered inference task is solved by max imizing the overall score of the arguments c1m argmax c1mpm score argmax c1mpm mi1 score in the presence of global constraints derived from linguistic information and structural considerationsour system seeks for a legitimate labeling that max imizes the scorespecifically it can be viewed asthe solution space is limited through the use of a filter function f that eliminates many argument labelings from considerationit is interesting to con trast this with previous work that filters individual phrases here we are concerned with global constraints as well as constraints on the argumentstherefore the final labeling 
becomes c1m argmax c1mf mi1 score the filter function used considers the following con straints 1arguments cannot cover the predicate exceptthose that contain only the verb or the verb and the following word2arguments cannot overlap with the clauses3if a predicate is outside a clause its argumentscannot be embedded in that clause4no overlapping or embedding arguments7if there is cv then there should be a sequenceof consecutive v a1 and cv patternfor ex ample when split is the verb in split it up the a1 argument is itand cv argument is up8if there is an rxxx argument then there hasto be an xxx argumentthat is if an ar gument is a reference to some other argument xxx then this referenced argument must exist in the sentence9if there is a cxxx argument then there hasto be an xxx argument in addition the c xxx argument must occur after xxxthis is stricter than the previous rule because the order of appearance also needs to be considered10given the predicate some argument classesare illegal this linguistic information can be found in propbank frameswe reformulate the constraints as linear equalities by introducing indicator variablesthe optimization problem is solved using ilp42 using integer linear programmingas discussed previously a collection of potential arguments is not necessarily a valid semantic label ing since it must satisfy all of the constraintsin this context inference is the process of finding the best valid semantic labels that satisfy all of the specified constraintswe take a similar approach that has been previously used for entityrelation recognition and model this inference procedure as solving an ilpan integer linear program is basically the same as a linear programthe cost function and the equality constraints are all linear in terms of thevariablesthe only difference in an ilp is the vari ables can only take integers as their valuesin our inference problem the variables are in fact binarya general binary integer programming problem can be stated as followsgiven a cost vector p d a set of variables z and cost matrices c1 t1 dc2 t2d where t1 and t2 are the numbers of inequality and equality constraints and d is the number of binary variablesthe ilp solution zis the vector that maximizes the cost function z argmax z01d p z subject to c1z b1 and c2z b2 where b1b2 d and for all z z z 0 1to solve the problem of equation 2 in this set ting we first reformulate the original cost functionmi1 score as a linear function over sev eral binary variables and then represent the filter function f using linear inequalities and equalitieswe set up a bijection from the semantic labeling to the variable set z this is done by setting z to a set of indicator variablesspecifically let zic si c be the indicator variable that represents whether or not the argument type c is assigned to si and let pic scoreequation 1 can then be written as an ilp cost function as argmax z01d mi1 pc1 piczic subject to pc1 zic 1 zic z which means that each argument can take only one typenote that this new constraint comes from thevariable transformation and is not one of the con straints used in the filter function f constraints 1 through 3 can be evaluated on a per argument basis the sake of efficiency arguments that violate these constraints are eliminated even before given the secondphase classifiernext we show how to transform the constraints in the filter function into the form of linear equalities over z and use them in this ilp settingconstraint 4 no overlapping or embedding if arguments sj1 sjk occupy the same 
word in asentence then this constraint restricts only one ar guments to be assigned to an argument typein other words k 1 arguments will be the special class null which means the argument candidate is not a legitimate argumentif the special class null is represented by the symbol then for every set ofsuch arguments the following linear equality repre sents this constraintk i1 zji k 1 constraint 5 no duplicate argument classeswithin the same sentence several types of argu ments cannot appear more than oncefor example a predicate can only take one a0this constraint can be represented using the following inequalitym i1 zia0 1 constraint 6 exactly one v argument for each verb there is one and has to be one v argumentwhich represents the active verbsimilarly this con straint can be represented by the following equalitym i1 ziv 1constraint 7 va1cv pattern this constraint is only useful when there are three consec utive candidate arguments in a sentencesuppose arguments sj1 sj2 sj3 are consecutiveif sj3 iscv then sj1 and sj2 have to be v and a1 respec tivelythis ifthen constraint can be represented by the following two linear inequalitieszj3cv zj1v and zj3cv zj2a1 constraint 8 rxxx arguments suppose the referenced argument type is a0 and the reference type is ra0the linear inequalities that represent this constraint are m 1 m mi1 zia0 zmra0 if there are reference argument pairs then the total number of inequalities needed is m constraint 9 cxxx arguments this constraint is similar to the reference argument constraintsthe difference is that the continued argu ment xxx has to occur before cxxxassumethat the argument pair is a0 and ca0 and argu ment sji appears before sjk if i k the linear inequalities that represent this constraint are m 2 m j1i1 zjia0 zmra0 constraint 10 illegal argument types given aspecific verb some argument types should never oc curfor example most verbs do not have arguments a5this constraint is represented by summing all the corresponding indicator variables to be 0m i1 zia5 0using ilp to solve this inference problem en joys several advantageslinear constraints are very general and are able to represent many types of constraintsprevious approaches usually relyon dynamic programming to resolve non over lappingembedding constraints when the data is sequential but are unable to han dle other constraintsthe ilp approach is flexibleenough to handle constraints regardless of the structure of the dataalthough solving an ilp problem is nphard with the help of todays commer cial numerical packages this problem can usually be solved very fast in practicefor instance it onlytakes about 10 minutes to solve the inference prob lem for 4305 sentences on a pentiumiii 800 mhz machine in our experimentsnote that ordinarysearch methods are not neces sarily faster than solving an ilp problem and do not guarantee the optimal solution5 experimental resultsthe system is evaluated on the data provided in the conll2004 semanticrole labeling shared task which consists of a portion of propbank corpusthe training set is extracted from treebank section 1518 the development setused in tuning parameters of the system from sec tion 20 and the test set from section 21we first compare this system with the basic taggerthat we have the cscl shallow parser from which is equivalent to us ing the scoring function from the first phase with only the nonoverlappingembedding constraintsin precrecf1 1stphase nonoverlap 7054 6150 6571 1stphase all const7097 6074 6546 2ndphase nonoverlap 6969 6475 6713 2ndphase all 
const7196 6493 6826 table 1 summary of experiments on the development setall results are for overall performanceprecision recall f1 without inference 8695 8724 8710 with inference 8803 8823 8813 table 2 results of second phase phrase prediction and inference assuming perfect boundary detection inthe first phaseinference improves performance by re stricting label sequences rather than restricting structuralproperties since the correct boundaries are givenall re sults are for overall performance on the development setaddition we evaluate the effectiveness of using only this constraint versus all constraints as in sec4table 1 shows how additional constraints over thestandard nonoverlapping constraints improve per formance on the development setthe argument scoring is chosen from either the first phase or the second phase and each is evaluated by considering simply the nonoverlappingembedding constraint or the full set of linguistic constraintsto make a fair comparison parameters were set separately to optimize performance when using the first phase resultsin general using all constraints increasesf1 by about 1 in this system but slightly de creases the performance when only the first phaseclassifier is usedalso using the twophase archi tecture improves both precision and recall and the enhancement reflected in f1 is about 25it is interesting to find out how well the secondphase classifier can perform given perfectly seg mented argumentsthis evaluates the quality of the argument classifier and also provides a conceptual upper boundtable 2 first shows the results without using inference pm the secondrow shows adding inference to the phrase classifica tion can further improve f1 by 1finally the overall result on the official test set is given in table 3note that the result here is not comparable with the best in this domain where the full parse tree is assumed givenfor a fair comparison our system was among the best at conll04 where the best system achieve a 6949 f1 scorewe show that linguistic information is useful for se mantic role labeling both in extracting features and dist precrecf1 overall 10000 7007 6307 6639 a0 2687 8113 7770 7938 a1 3573 7421 6302 6816 a2 744 5416 4104 4669 a3 156 4706 2667 3404 a4 052 7143 6000 6522 amadv 320 3936 3616 3769 amcau 051 4595 3469 3953 amdir 052 4250 3400 3778 amdis 222 5200 6714 5861 amext 015 4667 5000 4828 amloc 238 3347 3465 3405 ammnr 266 4519 3686 4060 ammod 351 9249 9496 9370 amneg 132 8592 9606 9071 ampnc 089 3279 2353 2740 amtmp 778 5977 5689 5830 ra0 166 8133 7673 7896 ra1 073 5882 5714 5797 ra2 009 10000 2222 3636 ramtmp 015 5455 4286 4800 table 3 results on the test setderiving hard constraints on the outputwe also demonstrate that it is possible to use integer linear programming to perform inference that incorporates a wide variety of hard constraints which would be difficult to incorporate using existing methodsin addition we provide further evidence supporting the use of scoring arguments over scoring argument boundaries for complex tasksin the future we planto use the full propbank corpus to see the improvement when more training data is providedin addition we would like to explore the possibility of integer linear programming approach using soft constraintsas more constraints are considered we ex pect the overall performance to improvewe thank xavier carreras and llus marquez for the data and scripts martha palmer and the anonymous referees for their useful comments amd for their equipment donation and dash optimization for the free 
academic use of their Xpress-MP software. This research is supported by NSF grants ITR-IIS-0085836, ITR-IIS-0085980 and IIS-9984168, EIA-0224453, and an ONR MURI award.
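To make the inference procedure above concrete, here is a minimal sketch of the argument-labeling ILP: the indicator-variable objective together with constraints 4 (no overlapping arguments), 5 (no duplicate core argument classes), 6 (exactly one V) and 8 (a reference label needs its referent). The open-source PuLP package stands in for the commercial solver used in the paper, and the label set, scores and overlap groups are illustrative placeholders rather than the authors' actual setup.

```python
import pulp

LABELS = ["A0", "A1", "A2", "AM-LOC", "R-A0", "V", "NULL"]  # illustrative subset

def srl_inference(scores, overlap_groups):
    """scores[i][c]: second-phase classifier score for giving label c to
    argument candidate i; overlap_groups: sets of candidate indices that
    cover the same word (used for constraint 4)."""
    m = len(scores)
    prob = pulp.LpProblem("srl_inference", pulp.LpMaximize)
    # z[i][j] = 1 iff candidate i is assigned label LABELS[j]
    z = [[pulp.LpVariable("z_%d_%d" % (i, j), cat="Binary")
          for j in range(len(LABELS))] for i in range(m)]
    # Objective: maximise the summed scores of the chosen labels.
    prob += pulp.lpSum(scores[i][LABELS[j]] * z[i][j]
                       for i in range(m) for j in range(len(LABELS)))
    # Each candidate takes exactly one label (possibly the null class).
    for i in range(m):
        prob += pulp.lpSum(z[i]) == 1
    # Constraint 5: no duplicate core argument classes, e.g. at most one A0.
    for j, c in enumerate(LABELS):
        if c in ("A0", "A1", "A2"):
            prob += pulp.lpSum(z[i][j] for i in range(m)) <= 1
    # Constraint 6: exactly one V argument per verb.
    v = LABELS.index("V")
    prob += pulp.lpSum(z[i][v] for i in range(m)) == 1
    # Constraint 4: of k candidates covering the same word, at least k-1
    # must be labelled NULL (no overlapping or embedding arguments).
    null = LABELS.index("NULL")
    for group in overlap_groups:
        prob += pulp.lpSum(z[i][null] for i in group) >= len(group) - 1
    # Constraint 8: an R-A0 label requires some candidate to be labelled A0.
    r, a0 = LABELS.index("R-A0"), LABELS.index("A0")
    for i in range(m):
        prob += pulp.lpSum(z[k][a0] for k in range(m)) >= z[i][r]
    prob.solve()
    return [LABELS[max(range(len(LABELS)), key=lambda j: z[i][j].value())]
            for i in range(m)]
```

The remaining constraints (1-3, 7, 9 and 10) can be added in exactly the same way, as further linear equalities and inequalities over the same indicator variables.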
C04-1197
Semantic role labeling via integer linear programming inference. We present a system for the semantic role labeling task. The system combines a machine learning technique with an inference procedure based on integer linear programming that supports the incorporation of linguistic and structural constraints into the decision process. The system is tested on the data provided in the CoNLL-2004 shared task on semantic role labeling and achieves very competitive results. We formulate SRL as a constituent-by-constituent tagging problem.
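The confidence estimates used throughout the system above come from a soft-max over the raw SNoW activations, score(c_i) = exp(act_i) / sum_j exp(act_j); the short sketch below spells this out, with made-up activation values in the usage comment.

```python
import math

def snow_confidence(activations):
    """Soft-max over raw per-class activations, turning them into the
    conditional probability estimates used as scores:
    score(c_i) = exp(act_i) / sum_j exp(act_j)."""
    exps = [math.exp(a) for a in activations]
    total = sum(exps)
    return [e / total for e in exps]

# illustrative values only:
# snow_confidence([2.0, 1.0, -1.0]) -> approximately [0.71, 0.26, 0.04]
```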
determining the sentiment of opinions identifying sentiments is a challenging problem we present a system that given a topic automatically finds the people who hold opinions about that topic and the sentiment of each opinion the system contains a module for determining word sentiment and another for combining sentiments within a sentence we experiment with various models of classifying and combining sentiment at word and sentence levels with promising results what is an opinionthe many opinions on opinions are reflected in a considerable literature recent computational work either focuses on sentence subjectivity concentrates just on explicit statements of evaluation such as of films or focuses on just one aspect of opinion eg on adjectiveswe wish to study opinion in general our work most closely resembles that of since an analytic definition of opinion is probably impossible anyway we will not summarize past discussion or try to define formally what is and what is not an opinionfor our purposes we describe an opinion as a quadruple topic holder claim sentiment in which the holder believes a claim about the topic and in many cases associates a sentiment such as good or bad with the belieffor example the following opinions contain claims but no sentiments i believe the world is flatthe gap is likely to go bankruptbin laden is hiding in pakistanwater always flushes anticlockwise in the southern hemispherelike yu and hatzivassiloglou we want to automatically identify sentiments which in this work we define as an explicit or implicit expression in text of the holders positive negative or neutral regard toward the claim about the topicsentiments always involve the holders emotions or desires and may be present explicitly or only implicitly i think that attacking iraq would put the us in a difficult position the us attack on iraq is wrong i like ike we should decrease our dependence on oil reps tom petri and william f goodling asserted that counting illegal aliens violates citizensbasic right to equal representation in this paper we address the following challenge problemgiven a topic and a set of texts about the topic find the sentiments expressed about the topic in each text and identify the people who hold each sentimentto avoid the problem of differentiating between shades of sentiments we simplify the problem to identify just expressions of positive negative or neutral sentiments together with their holdersin addition for sentences that do not express a sentiment but simply state that some sentiment exist return these sentences in a separate setfor example given the topic what should be done with medicarethe sentence after years of empty promises congress has rolled out two medicare prescription plans one from house republicans and the other from the democratic sentence pos tagger verbs nounsadjectives adjective senti ment classifier sentiment sentiment sentence sentiment classifier opinion region polarity holder holder finder named entity tagger sentence sentence texts topic sentiment sentiment sentiment v rbs verb senti ment classifier nouns noun senti ment classifier wordnet sentence figure 1 system architecturesens bob graham of florida and zell miller of georgiashould be returned in the separate setwe approach the problem in stages starting with words and moving on to sentenceswe take as unit sentiment carrier a single word and first classify each adjective verb and noun by its sentimentwe experimented with several classifier modelsbut combining sentiments requires additional care as 
table 1 showscalifornia supreme court agreed that the states new termlimit law was constitutionalcalifornia supreme court disagreed that the states new termlimit law was constitutionalcalifornia supreme court agreed that the states new termlimit law was unconstitutionalcalifornia supreme court disagreed that the states new termlimit law was unconstitutionaltable 1 combining sentimentsa sentence might even express opinions of different peoplewhen combining wordlevel sentiments we therefore first determine for each holder a relevant region within the sentence and then experiment with various models for combining word sentimentswe describe our models and algorithm in section 2 system experiments and discussion in section 3 and conclude in section 4given a topic and a set of texts the system operates in four stepsfirst it selects sentences that contain both the topic phrase and holder candidatesnext the holderbased regions of opinion are delimitedthen the sentence sentiment classifier calculates the polarity of all sentimentbearing words individuallyfinally the system combines them to produce the holders sentiment for the whole sentencefigure 1 shows the overall system architecturesection 21 describes the word sentiment classifier and section 22 describes the sentence sentiment classifier21 word sentiment classifier211 word classification models for word sentiment classification we developed two modelsthe basic approach is to assemble a small amount of seed words by hand sorted by polarity into two listspositive and negativeand then to grow this by adding words obtained from wordnet we assume synonyms of positive words are mostly positive and antonyms mostly negative eg the positive word goodhas synonyms virtuous honorable righteousand antonyms evil disreputable unrighteousantonyms of negative words are added to the positive list and synonyms to the negative oneto start the seed lists we selected verbs and adjectives adding nouns latersince adjectives and verbs are structured differently in wordnet we obtained from it synonyms and antonyms for adjectives but only synonyms for verbsfor each seed word we extracted from wordnet its expansions and added them back into the appropriate seed listsusing these expanded lists we extracted an additional cycle of words from wordnet to obtain finally 5880 positive adjectives 6233 negative adjectives 2840 positive verbs and 3239 negative verbshowever not all synonyms and antonyms could be used some had opposite sentiment or were neutralin addition some common words such as great strong take and getoccurred many times in both positive and negative categoriesthis indicated the need to develop a measure of strength of sentiment polarity to determine how strongly a word is positive and also how strongly it is negativethis would enable us to discard sentimentambiguous words but retain those with strengths over some thresholdarmed with such a measure we can also assign strength of sentiment polarity to as yet unseen wordsgiven a new word we use wordnet again to obtain a synonym set of the unseen word to determine how it interacts with our sentiment seed liststhat is we compute w is the unseen word and synn are the wordnet synonyms of w to compute equation we tried two different models of sentiment class c which is also a member of the synonym set of w and count is the total number of occurrences of fk in the synonym set of w p is the number of words in class c divided by the total number of words consideredthis model derives from document classificationwe used 
the synonym and antonym lists obtained from wordnet instead of learning word sets from a corpus since the former is simpler and does not require manually annotated data for trainingequation shows the second model for a word sentiment classifier of word w given a sentiment class c we count the occurrence of ws synonyms in the list of c the intuition is that the more synonyms occuring in c the more likely the word belongswe computed both positive and negative sentiment strengths for each word and compared their relative magnitudestable 2 shows several examples of the system output computed with equation in which represents positive category strength and negativethe word amusing for example was classified as carrying primarily positive sentiment and blameas primarily negativethe absolute value of each category represents the strength of its sentiment polarityfor instance afraidwith strength 099 represents strong negavitity while abysmalwith strength 061 represents weaker negativityabysmal negative 03811 06188 adequate positive 09999 00484e11 afraid negative 00212e04 09999 ailing negative 00467e8 09999 amusing positive 09999 00593e07 answerable positive 08655 01344 apprehensible positive 09999 00227e07 averse negative 00454e05 09999 blame negative 02530 07469 table 2 sample output of word sentiment classifier22 sentence sentiment classifieras shows in table 1 combining sentiments in a sentence can be trickywe are interested in the sentiments of the holder about the claimmanual analysis showed that such sentiments can be found most reliably close to the holder without either holder or topicclaim nearby as anchor points even humans sometimes have trouble reliably determining the source of a sentimentwe therefore included in the algorithm steps to identify the topic and any likely opinion holders near each holder we then identified a region in which sentiments would be considered any sentiments outside such a region we take to be of undetermined origin and ignore we then defined several models for combining the sentiments expressed within a region 221 holder identification we used bbns named entity tagger identifinder to identify potential holders of an opinionwe considered person and organization as the only possible opinion holdersfor sentences with more than one holder we chose the one closest to the topic phrase for simplicitythis is a very crude stepa more sophisticated approach would employ a parser to identify syntactic relationships between each holder and all dependent expressions of sentiment222 sentiment region lacking a parse of the sentence we were faced with a dilemma how large should a region bewe therefore defined the sentiment region in various ways and experimented with their effectiveness as reported in section 3window1 full sentence window2 words between holder and topic window3 window2 2 words window4 window2 to the end of sentence table 3 four variations of region size223 classification models we built three models to assign a sentiment category to a given sentence each combining the individual sentiments of sentimentbearing words as described above in a different waymodel 0 simply considers the polarities of the sentiments not the strengths model 0 the intuition here is something like negatives cancel one another outhere the system assigns the same sentiment to both the california supreme court agreed that the states new termlimit law was constitutionaland the california supreme court disagreed that the states new termlimit law was unconstitutionalfor this model we also 
included negation words such as not and never to reverse the sentiment polaritymodel 1 is the harmonic mean of the sentiment strengths in the region model 1 cwcp wcp cn scp ij n i i is the number of words in the region whose sentiment category is c if a region contains more and stronger positive than negative words the sentiment will be positivemodel 2 is the geometric mean model 2 cwcpif wcpscp ij n i i cn of section 211 compared against a baseline that randomly assigns a sentiment category to each word the system achieves lower agreement than humans but higher than the random processof the test data the algorithm classified 9307 of adjectives and 8327 of verbs as either positive and negativethe remainder of adjectives and verbs failed to be classified since they did not overlap with the synonym set of adjectives and verbsin table 5 the seed list included just a few manually selected seed words we decided to investigate the effect of more seed wordsafter collecting the annotated data we added half of it to the training set retaining the other half for the testas table 6 shows agreement of both adjectives and verbs with humans improvesrecall is also improvedadjective verb lenient agreement lenient agreement h1m h2m recall h1m h3m recall 7566 7788 9784 8120 7906 9323 table 6 results including manual data32 sentence sentiment classifier321 data 100 sentences were selected from the duc 2001 corpus with the topics illegal alien term limits gun control and naftatwo humans annotated the 100 sentences with three categories to measure the agreement between humans we used the kappa statistic the kappa value for the annotation task of 100 sentences was 091 which is considered to be reliable322 test on human annotated data we experimented on section 223s 3 models of sentiment classifiers using the 4 different window definitions and 4 variations of wordlevel classifiers since model 0 considers not probabilities of words but only their polarities the two word level classifier equations yield the same resultsconsequently model 0 has 8 combinations and models 1 and 2 have 16 eachto test the identification of opinion holder we first ran models with holders that were annotated by humans then ran the same models with the automatic holder finding strategiesthe results appear in figures 2 and 3the models are numbered as follows m0 through m4 represent 4 sentence classifier models table 5agreement between humans and systemadjective verb lenient agreement lenient agreement h1m h2m recall h1m h3m recall random selection 5935 5781 100 5902 5659 100 basic method 6837 6860 9307 7584 7272 8327 p1p2 and p3p4 represent the word classifier models in equation and equation with normalization and without normalization respectively03 04 05 06 07 08 09 m0p1 m0p3 m1p1 m1p2 m1p3 m1p4 m2p1 m2p2 m2p3 m2p4 ac cu ra cy window 1 window 2 window 3 window 4 03 04 05 06 07 08 09 m0p1 m0p3 m1p1 m1p2 m1p3 m1p4 m2p1 m2p2 m2p3 m2p4 ac cu rac y window 1 window 2 window 3 window 4 human 1 machine human 2 machine figure 2 results with manually annotated holder03 04 05 06 07 08 09 m0p1 m0p3 m1p1 m1p2 m1p3 m1p4 m2p1 m2p2 m2p3 m2p4 ac cu rac y window 1 window 2 window 3 window 4 03 04 05 06 07 08 09 m0p1 m0p3 m1p1 m1p2 m1p3 m1p4 m2p1 m2p2 m2p3 m2p4 ac cu rac y window 1 window 2 window 3 window 4 human 1 machine human 2 machine figure 3 results with automatic holder detectioncorrectness of an opinion is determined when the system finds both a correct holder and the appropriate sentiment within the sentencesince human1 classified 33 sentences 
positive and 33 negative random classification gives 33 out of 66 sentencessimilarly since human2 classified 29 positive and 34 negative random classification gives 34 out of 63 when the system blindly marks all sentences as negative and 29 out of 63 when it marks all as positivethe systems best model performed at 81 accuracy with the manually provided holder and at 67 accuracy with automatic holder detection33 problems331 word sentiment classification as mentioned some words have both strong positive and negative sentimentfor these words it is difficult to pick one sentiment category without considering contextsecond a unigram model is not sufficient common words without much sentiment alone can combine to produce reliable sentimentfor example in term limits really hit at democracysays prof fenno the common and multimeaning word hitwas used to express a negative point of view about term limitsif such combinations occur adjacently we can use bigrams or trigrams in the seed word listwhen they occur at a distance however it is more difficult to identify the sentiment correctly especially if one of the words falls outside the sentiment region332 sentence sentiment classification even in a single sentence a holder might express two different opinionsour system only detects the closest oneanother difficult problem is that the models cannot infer sentiments from facts in a sentenceshe thinks term limits will give women more opportunities in politicsexpresses a positive opinion about term limits but the absence of adjective verb and noun sentimentwords prevents a classificationalthough relatively easy task for people detecting an opinion holder is not simple eitheras a result our system sometimes picks a wrong holder when there are multiple plausible opinion holder candidates presentemploying a parser to delimit opinion regions and more accurately associate them with potential holders should help34 discussionwhich combination of models is bestthe best overall performance is provided by model 0apparently the mere presence of negative words is more important than sentiment strengthfor manually tagged holder and topic model 0 has the highest single performance though model 1 averages bestwhich is better a sentence or a regionwith manually identified topic and holder the region window4 performs better than other regionshow do scores differ from manual to automatic holder identificationtable 7 compares the average results with automatic holder identification to manually annotated holders in 40 different modelsaround 7 more sentences were misclassified by the automatic detection methodpositive negative total human1 5394 1667 7060 human2 4984 1714 6698 table 7 average difference between manual and automatic holder detectionhow does adding the neutral sentiment as a separate category affect the scoreit is very confusing even for humans to distinguish between a neutral opinion and non opinion bearing sentencesin previous research we built a sentence subjectivity classifierunfortunately in most cases it classifies neutral and weak sentiment sentences as nonopinion bearing sentencessentiment recognition is a challenging and difficult part of understanding opinionswe plan to extend our work to more difficult cases such as sentences with weakopinionbearing words or sentences with multiple opinions about a topicto improve identification of the holder we plan to use a parser to associate regions more reliably with holderswe plan to explore other learning techniques such as decision lists or svmsnonetheless as 
the experiments show, encouraging results can be obtained even with relatively simple models and only a small amount of manual seeding effort.
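As a concrete illustration of the sentence-level combination step discussed above, the sketch below gives one plausible reading of Models 0, 1 and 2: Model 0 looks only at polarities (negatives cancel each other out), while Models 1 and 2 compare harmonic and geometric means of the word strengths per class. It assumes the word-level classifier has already assigned each sentiment-bearing word in the holder's region a polarity in {-1, +1} and a strength in (0, 1]; all names and example values are illustrative, not the authors' implementation.

```python
import math

def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs) if xs else 0.0

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs)) if xs else 0.0

def classify_region(words, model=0):
    """words: (polarity, strength) pairs for the sentiment-bearing words
    inside the holder's region; returns 'positive', 'negative' or 'neutral'."""
    if not words:
        return "neutral"
    if model == 0:
        # Model 0: polarities only -- an odd number of negative words
        # flips the overall sentiment; strengths are ignored.
        product = 1
        for polarity, _ in words:
            product *= polarity
        return "positive" if product > 0 else "negative"
    # Models 1 and 2: compare per-class harmonic / geometric means
    # of the word strengths and pick the stronger class.
    mean = harmonic_mean if model == 1 else geometric_mean
    pos = mean([s for p, s in words if p > 0])
    neg = mean([s for p, s in words if p < 0])
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

# Two negative words, as in "disagreed ... unconstitutional", cancel out
# under Model 0:
# classify_region([(-1, 0.75), (-1, 0.62)], model=0) -> "positive"
```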
C04-1200
Determining the sentiment of opinions. Identifying sentiments is a challenging problem. We present a system that, given a topic, automatically finds the people who hold opinions about that topic and the sentiment of each opinion. The system contains a module for determining word sentiment and another for combining sentiments within a sentence. We experiment with various models of classifying and combining sentiment at word and sentence levels, with promising results. We try to determine the final sentiment orientation of a given sentence by combining the sentiment words within it. We start with two lists of positive and negative seed words and use WordNet synonyms and antonyms to expand them.
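The word-level module described in the summary above (seed lists expanded through WordNet, unseen words classified by how their synonym sets overlap each class) can be sketched as follows. NLTK's WordNet interface and the tiny seed lists are stand-ins chosen for illustration; the count-based overlap mirrors the intuition of the count-based classification model rather than reproducing the exact probabilistic formulation, and antonym handling (antonyms of one class feeding the other) is omitted for brevity.

```python
from nltk.corpus import wordnet as wn
# requires the WordNet data, e.g. nltk.download("wordnet")

# tiny illustrative seed lists; the paper's expanded lists hold thousands of words
POSITIVE_SEEDS = {"good", "virtuous", "honorable", "amusing", "adequate"}
NEGATIVE_SEEDS = {"bad", "evil", "disreputable", "abysmal", "afraid"}

def synonyms(word):
    """All WordNet lemma names appearing in any synset of the word."""
    return {lemma.name().lower().replace("_", " ")
            for synset in wn.synsets(word)
            for lemma in synset.lemmas()}

def expand(seeds):
    """One expansion cycle: add the WordNet synonyms of every seed word."""
    expanded = set(seeds)
    for word in seeds:
        expanded |= synonyms(word)
    return expanded

def word_sentiment(word, positive, negative):
    """Classify an unseen word by counting how many of its synonyms fall
    into each expanded class; returns (label, pos_count, neg_count)."""
    syns = synonyms(word) | {word}
    pos, neg = len(syns & positive), len(syns & negative)
    if pos > neg:
        return "positive", pos, neg
    if neg > pos:
        return "negative", pos, neg
    return "ambiguous", pos, neg

positive = expand(POSITIVE_SEEDS)
negative = expand(NEGATIVE_SEEDS)
print(word_sentiment("righteous", positive, negative))
```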
sentence compression beyond word deletion in this paper we generalise the sentence compression task rather than sim ply shorten a sentence by deleting words or constituents as in previous work we rewrite it using additional operations such as substitution reordering and insertion we present a new corpus that is suitedto our task and a discriminative treeto tree transduction model that can naturallyaccount for structural and lexical mis matches the model incorporates a novelgrammar extraction method uses a lan guage model for coherent output and canbe easily tuned to a wide range of compres sion specific loss functions automatic sentence compression can be broadly described as the task of creating a grammaticalsummary of a single sentence with minimal information lossit has recently attracted much attention in part because of its relevance to applicationsexamples include the generation of sub titles from spoken transcripts the display of text on small screens such as mobile phones or pdas and notably summarisation most prior work has focused on a specific instantiation of sentence compression namely word deletiongiven an input sentence of words w 1 w 2 w n a compression is formed by dropping any subset of these words some rights reservedand marcu 2002the simplification renders the task computationally feasible allowing efficient decoding using a dynamic program furthermore constraining the problemto word deletion affords substantial modeling flexibilityindeed a variety of models have been successfully developed for this task ranging from in stantiations of the noisychannel model to largemargin learn ing and integer linear programming however the simplification also renders the tasksomewhat artificialthere are many rewrite operations that could compress a sentence besides deletion including reordering substitution and inser tionin fact professional abstractors tend to use these operations to transform selected sentences from an article into the corresponding summary sentences therefore in this paper we consider sentence compression from a more general perspective and generate abstracts rather than extractsin this framework the goal is to find a summary of theoriginal sentence which is grammatical and conveys the most important information without necessarily using the same words in the same or derour task is related to but different from paraphrase extraction we must not only have access to paraphrases but also be able to combine them in order to generate new text while attempting to produce a shorter resulting stringquirk et al present an endtoend paraphrasing system inspired byphrasebased machine translation that can both ac quire paraphrases and use them to generate new stringshowever their model is limited to lexical substitution no reordering takes place and is 137 lacking the compression objectiveonce we move away from extractive compres sion we are faced with two problemsfirst wemust find an appropriate training set for our abstractive taskcompression corpora are not natu rally available and existing paraphrase corpora do not normally contain compressionsour second problem concerns the modeling task itselfideally our learning framework should handle structural mismatches and complex rewriting operationsin what follows we first present a new cor pus for abstractive compression which we created by having annotators compress sentences while rewriting thembesides obtaining useful data formodeling purposes we also demonstrate that ab stractive compression is a meaningful taskwe then 
present a treetotree transducer capable of transforming an input parse tree into a compressed parse treeour approach is based on synchronous tree substitution grammar a formalism that can account for structural mismatches and is trained discriminativelyspecifi cally we generalise the model of cohn and lapata to our abstractive taskwe present a noveltreetotree grammar extraction method which acquires paraphrases from bilingual corpora and ensure coherent output by including a ngram language model as a featurewe also develop a number of loss functions suited to the abstractive compression taskwe hope that some of the work described here might be of relevance to other gen eration tasks such as machine translation multidocument summarisation and text simplification a stumbling block to studying abstractive sentence compression is the lack of widely available corpora for training and testingprevious work has beenconducted almost exclusively on ziffdavis a cor pus derived automatically from document abstractpairs or on human authored corpora unfortunatelynone of these data sources are suited to our problem since they have been produced with a single rewriting operation namely word deletional though there is a greater supply of paraphrasing corpora such as the multipletranslation chinese corpus 1 and themicrosoft research paraphrase corpus they are also not ideal since they have not been created 1 available by the ldc catalog number ldc2002t01 isbn 1585632171with compression in mindthey contain amplerewriting operations however they do not explic itly target information lossfor the reasons just described we created our own corpuswe collected 30 newspaper articles from the british national corpus and the american news text corpus forwhich we obtained manual compressionsin or der to confirm that the task was feasible five of these documents were initially compressed by two annotators the annotators weregiven instructions that explained the task and defined sentence compression with the aid of examplesthey were asked to paraphrase while preserv ing the most important information and ensuring the compressed sentences remained grammaticalthey were encouraged to use any rewriting opera tions that seemed appropriate eg to delete words add new words substitute them or reorder themassessing interannotator agreement is notori ously difficult for paraphrasing tasks since there can be many valid outputs for a given inputalso our task is doubly subjective in deciding which information to remove from the sentence and how to rewrite itin default of an agreement measure that is well suited to the task and takes both decisions into account we assessedthem separatelywe first examined whether the annotators compressed at a similar levelthe com pression rate was 56 for one annotator and 54 for the other2 we also assessed whether theyagreed in their rewrites by measuring bleu the interannotator bleu score was 2379 compared with the source agreement bleu of only 1322both the compression rateand bleu score indicate that the task is welldefined and the compressions validthe remaining 25 documents were compressed by a single an notator to ensure consistencyall our experiments used the data from this annotator3table 1 illustrates some examples from our corpusas can be seen some sentences contain a single rewrite operationfor instance a pp is para phrased with a genitive a subordinate clause with a present participle a passive sentence with an active one however in most cases many rewrite decisions take place allat 
onceconsider sentence here the conjunc tion high winds and snowfalls is abbreviated to 2 the term compression raterefers to the percentage of words retained in the compression3 available from httphomepagesinfedacuk tcohnparaphrase138 1athe future of the nation is in your hands1bthe nations future is in your hands2aas he entered a polling booth in katutura he said2bentering a polling booth in katutura he said3amr usta was examined by dr raymond crockett a harley street physician specialising in kidney disease3bdr raymond crockett a harley street physician ex amined mr usta4ahigh winds and snowfalls have however grounded at a lower level the powerful us navy sea stallion helicopters used to transport the slabs4bbad weather however has grounded the helicopters transporting the slabs5ato experts in international law and relations the usaction demonstrates a breach by a major power of in ternational conventions5bexperts say the us are in breach of international con ventionstable 1 compression examples from our corpus sen tences are the source sentences the targetbad weather and the infinitive clause to transport to the present participle transportingnote that the prenominal modifiers us navy sea stallion and the verb used have been removedin sentence the verb say is added and the np a breach by amajor power of international conventions is para phrased by the sentence the us are in breach of international conventionsour work builds on the model developed by cohnand lapata they formulate sentence compression as a treetotree rewriting taska syn chronous tree substitution grammar licenses the space of all possible rewriteseach grammar rule is assigned a weight and these weights are learnt in discriminative trainingfor prediction a specialised generation algorithmfinds the best scoring compression using the grammar rulescohn and lapata apply this model to ex tractive compression with stateoftheart resultsthis model is appealing for our task for severalreasonsfirstly the synchronous grammar provides expressive power to model consistent syntactic effects such as reordering changes in nonterminal categories and lexical substitutionsec ondly it is discriminatively trained which allowsfor the incorporation of all manner of powerful featuresthirdly the learning framework can be tai lored to the task by choosing an appropriate loss functionin the following we describe their model in more detail with emphasis on the synchronous grammar the model structure and the predictionand training algorithmssection 4 presents our ex tensions and modificationsgrammar the grammar defines a space oftree pairs over uncompressed and compressed sen grammar rules s snp 1 vbd 2 np 3 np 1 vbd 2 np 3 s snp 1 vbd 2 np 3 np 3 was vbn 2 by np 1 np nphe himnp nphe henp nphe petervbd vbnsang sungnp npa song a songinput tree s np he np vp sang vbd np a dt song nn output trees s np he vp sang np a song s np him vp sang np a song s np peter vp sang np a song s np a song vp was vp sung pp by he s np a song vp was vp sung pp by him s np a song vp was vp sung pp by peter figure 1 example grammar and the output trees it licences for an input treethe numbered boxes in the rules denote linked variablespreterminal categories are not shown for the output trees for the sake of brevitytences which we refer to henceforth as the source and targetwe use the grammar to find the set of sister target sentences for a given source sentencefigure 1 shows a toy grammar and the set of possi ble target trees for the given source treeeach output tree is 
created by applying a se ries of grammar rules where each rule matches a fragment of the source and creates a fragment of the target treea rule in the grammar consists of a pair of elementary trees and a mapping between the variables in both treesa derivation is a sequence of rules yielding a target tree with no remaining variablescohn and lapata extract a stsg froma parsed wordaligned corpus of source and target sentencesspecifically they extract the mini mal set of synchronous rules which can describe each tree pairthese rules are minimal in the sensethat they cannot be made smaller while still honouring the wordalignmentdecoding the grammar allows us to search for all sister trees for a given treethe decoder maximises over this space y argmax ysx where ry here x is the source tree y is a derivation which produces the source tree s x and a target tree t 4and r is a gram mar rulethe function scores the derivation and 4 equation 1 optimises over derivations rather than target trees to allow tractable inference139 is defined in as a linear function over the rules usedeach rules score is an inner product between its feature vector and the model parame ters the feature functions are set by hand while the model parameters are learned in trainingthe maximisation problem in can be solved efficiently using a dynamic programderivations will have common substructures whenever they transduce the same source subtree into a target subtreethis is captured in a chart leading to an efficient bottomup algorithmthe asymptotic time complexity of this search is o where s is the number of source nodes andr is the number of rules matching a given nodetraining the model is trained using svm struct a large margin method for structured output problems this training method allows the use of a configurable loss function whichmeasures the extent to which the models predic tion y differs from the reference y centralto training is the search for a derivation which is both high scoring and has high loss compared to the gold standard5 this requires finding the maximiser of h in one of h s h m where the subscripts s and m denote slack and margin rescaling which are different formulations of the training problem and taskar et al for detailsthe search for the maximiser of h in requires the tracking of the loss valuethis can be achieved by extending the decoding algorithmsuch that the chart cells also store the loss param eters consequently this extension leads to a considerably higher time and space complexity compared to decodingfor example with precision loss the time complexity is o as each step must consider o pos sible loss parameter valuesin this section we present our extensions of cohnand lapatas modelthe latter was de signed with the simpler extractive compression in mind and cannot be readily applied to our task5 spurious ambiguity in the grammar means that there areoften many derivations linking the source and targetwe fol low cohn and lapata by choosing the derivation with the most rules which should provide good generalisationgrammar it is relatively straightforward to extract a grammar from our corpusthis grammar will contain many rules encoding deletions and structural transformations but there will be many unobserved paraphrases no matter how good the extraction method for this reason we ex tract a grammar from our abstractive corpus in the manner of cohn and lapata and augment it with a larger grammar obtained from a parallel bilingual corpuscrucially our second grammar will not contain com pression rules 
just paraphrasing oneswe leave itto the model to learn which rules serve the com pression objectiveour paraphrase grammar extraction method uses bilingual pivoting to learn paraphrases over syntax tree fragments ie stsg rulespivoting treats the paraphrasing problem as a twostage translation processsome english text is translated to a foreign language and then translated back into english p f pp where p is the probability of translating an english string e into a foreign string f and p the probability of translating the same for eign string into some other english string e wethus obtain englishenglish translation probabili ties p by marginalizing out the foreign textinstead of using strings we use elementary trees on the en glish side resulting in a monolingual stsgweobtain the elementary trees and foreign strings us ing the gkhm algorithm this takes as input a bilingual wordaligned corpus with trees on one side and finds the minimal set of tree fragments and their corresponding strings which is consistent with the word alignmentthis process is illustrated in figure 2 where the aligned pair on the left gives rise to the rules shown onthe rightnote that the english rules and for eign strings shown include variable indices where they have been generalisedwe estimate p and p from the set of treetostring rules and then then pivot each tree fragment to produce stsg rulesfigure 3 illustrates the process for the vp does not vp fragmentmodeling and decoding our grammar is much larger and noisier than a grammar extractedsolely for deletionbased compressionso in order to encourage coherence and inform lexical se 140 snp vp vbz does rb gohe not ne pasil va prp vp np he il prp go vavp vp vbz does rb not ne pas vp s np vp 1 2 1 1 1 2 figure 2 treetostring grammar extraction using the ghkm algorithm showing the aligned sentence pair and the resulting rules as tree fragments and their matching stringsthe boxed numbers denote variablesvp vbz does rb not ne pas vp n ne ne peut vp md will rb not vb vp vbp do rb not vb 1 1 1 1 1 1 1 figure 3 pivoting the vp does not vp fragmentlection we incorporate a ngram language model as a featurethis requires adapting the scor ing function in to allow features over target ngrams ry mt where m are the ngrams and is a new fea ture function over these ngrams sadly the scoring function in renders the chartbased search used for training and decoding intractablein order to provide sufficient context to the chart based algorithm we must also store in each chart cell the n 1 target tokens at the left and right edges of its yieldthis is equivalent to using as our grammar the intersection between the original grammar and the ngram lm and increases the decoding complexity to an infeasible ov wherel is the size of the lexiconwe adopt a popular approach in syntaxinspiredmachine translation to address this problem the idea is to use a beamsearch overthe intersection grammar coupled with the cube pruning heuristicthe beam limits the number ofitems in a given chart cell to a fixed constant re gardless of the number of possible lm contexts and nonterminal categoriescubepruning furtherlimits the number of items considered for inclu sion in the beam reducing the time complexity to a more manageable o where b is the beam sizewe refer the interested reader to chiang for detailstraining the extensions to the model in also necessitate changes in the training proce durerecall that training the basic model of cohn and lapata requires finding the maximiserof h in their model uses a chartbased al 
gorithm for this purposeas in decoding we also use a beam search for training thereby avoiding the exponential time complexity of exact searchthe beam search requires an estimate of the qual ity for incomplete derivationswe use the margin rescaling objective h m in and approximatethe loss using the current loss param eter values in each chart cellwe use a wide beam of 200 unique items or 500 items in total to reduce the impact of the approximationour loss functions are tailored to the task anddraw inspiration from metrics developed for ex tractive compression but also for summarisation and machine translationthey are based on the hamming distance over unordered bags of itemsthis measures the number of predicted items that did not appear in the reference along with a penalty for short output hamming fmax 0 where t and f are the number of true and falsepositives respectively when comparing the pre dicted target y with the reference y and l isthe length of the referencethe second term pe nalises short output as predicting very little or nothing would otherwise be unpenalisedwe have three hamming loss functions over 1 tokens 2 ngrams or 3 cfg productionstheselosses all operate on unordered bags and therefore might reward erroneous predictionsfor ex ample a permutation of the reference tokens has zero tokenlossthe cfg and ngram losses have overlapping items which encode a partial order and therefore are less affectedin addition we developed a fourth loss func tion to measure the edit distance between themodels prediction and the reference both as bags oftokensthis measures the number of insertionsand deletionsin contrast to the previous loss func tions this requires the true positive counts to be clipped to the number of occurrences of each type in the referencethe edit distance is given by edit p q 2 i min where p and q denote the number of target tokensin the predicted and reference derivation respec tively and p i and q i are the counts for type i 141 adjpnpsubject pp to np 1 part pp of np 1 advprbas well also adjpjjtoo little insufficient ss s 1 and s 2 s 2 and s 1 npnpdt 1 nn 2 dt 1 nn 2 npnpdt 1 nn 2 nn 2 table 2 sample grammar rules extracted from the training set pivoted set or generated from the source in this section we present our experimental set up for assessing the performance of our modelwe give details on the corpora and grammars we used model parameters and features 6 the baselineused for comparison with our approach and ex plain how our system output was evaluatedgrammar extraction our grammar usedrules extracted directly from our compression cor pus and a bilingual corpus theformer corpus was wordaligned using the berke ley aligner initialised with a lexicon of word identity mappings and parsed with bikels parserfrom this we extracted grammar rules following the technique described in cohn and lapata for the pivot grammarwe use the frenchenglish europarl v2 which con tains approximately 688k sentencesagain the corpus was aligned using the berkeley aligner and the english side was parsed with bikels parserweextracted treetostring rules using our implementation of the ghkm methodto ameliorate the effects of poor alignments on the grammar we re moved singleton rules before pivotingin addition to the two grammars described wescanned the source trees in the compression cor pus and included stsg rules to copy each cfg production or delete up to two of its childrenthis is illustrated in table 2 where the last two rules are derived from the cfg production npdt nn inthe source treeall 
trees are rooted with a distinguished top nonterminal which allows the ex plicit modelling of sentence spanning subtreesthese grammars each had 44199 7813 and 22555 ruleswe took their union resulting in 58281 unique rules and 13619 unique source elementary treesmodel parameters our model was trainedon 480 sentences 36 sentences were used for de velopment and 59 for testingwe used a variety of syntaxbased lexical and compressionspecific 6 the software and corpus can be downloaded from httphomepagesinfedacuktcohnparaphrasefor every rule origin of rule for each origin o log p o log p o log p o s are t are s are t r s t s t s t both s and t are preterminals and s t or s 6 t number of terminalsvariablesdropped variables ordering of variables as numbersnonterminals nonterminal sequence of vars identical after reordering preterminal or terminal sequences are identical numberidentity of commoninserteddropped terminals source is shorterlonger than target target is a compression of the source using deletes for every ngram log p table 3 the feature setrules were drawn from the training set bilingual pivoting and directly from the source treess andt are the source and target elementary trees in a rule the sub script r references the root nonterminal w are the terminals in the target treefeatures these are summarised in table 3we also use a trigram language model trained on the bnc using the sri language modeling toolkit with modified kneserney smoothingan important parameter in our modeling frame work is the choice of loss functionwe evaluatedthe loss functions presented in section 4 on the de velopment setwe ran our system for each of the four loss functions and asked two human judgesto rate the output on a scale of 1 to 5the ham ming loss over tokens performed best with a meanrating of 318 closely followed by the edit dis tance we chose the former over the latter as it is less coarsely approximated during searchbaseline there are no existing models thatcan be readily trained on our abstractive com pression datainstead we use cohn and lapatas extractive model as a baselinethe latter was trained on an extractive compression corpus drawn from the bnc and tunedto provide a similar compression rate to our sys temnote that their model is a strong baseline it performed significantly better than competitive approaches across a variety of compression corporaevaluation methodology sentence compres sion output is commonly evaluated by eliciting human judgmentsfollowing knight and marcu we asked participants to rate the grammati cality of the target compressions and howwell they preserved the most important information from the sourcein both cases they used a five pointrating scale where a high number indicates better performancewe randomly selected 30 sen tences from the test portion of our corpusthese 142 models grammaticality importance compr extract 310 243 825 abstract 338 285 792 gold 451 402 584table 4 mean ratings on compression output elicited by hu mans significantly different from the gold standard sig nificantly different from the baselinesentences were compressed automatically by our model and the baselinewe also included goldstandard compressionsour materials thus con sisted of 90 sourcetarget sentenceswe collected ratings from 22 unpaid volunteers all self reported native english speakersboth studies were conducted over the internet using a custom built web interfaceour results are summarised in table 4 where we show the mean ratings for our system the baseline and the gold standardwe first 
performed an analysis of variance to examine the effect of different system compres sionsthe anova revealed a reliable effect on both grammaticality and importance we next examined in more detail between system differencesposthoc tukey tests revealed that our abstractive model received significantlyhigher ratings than the baseline in terms of impor tance we conjecture that this is due to the synchronous grammar we employ which is larger and more expressive than the baselinein the extractive case a word sequence is eitherdeleted or retainedwe may however want to re tain the meaning of the sequence while rendering the sentence shorter and this is precisely what our model can achieve eg by allowing substitutionsas far as grammaticality is concerned our abstractive model is numerically better than the extrac tive baseline but the difference is not statistically significantnote that our model has to work a lotharder than the baseline to preserve grammatical ity since we allow arbitrary rewrites which maylead to agreement or tense mismatches and selec tional preference violationsthe scope for errors is greatly reduced when performing solely deletionsfinally both the abstractive and extractive out puts are perceived as significantly worse than the gold standard both in terms of grammaticalityand importance this is not surpris ing humanauthored compressions are more fluentand tend to omit genuinely superfluous informa tionthis is also mirrored in the compression ratesshown in table 4when compressing humans emo kurtz came from missouri and at the age of 14 hitch hiked to los angeles seeking top diving coachese kurtz came from missouri and at 14 hitchhiked to los angeles seeking top diving coachesa kurtz hitchhiked to los angeles seeking top diving coachesg kurtz came from missouri and at 14 hitchhiked to los angeles seeking diving coacheso the scheme was intended for people of poor or moderate meanse the scheme was intended for people of poor meansa the scheme was planned for poor peopleg the scheme was intended for the pooro he died last thursday at his home from complications following a fall said his wife author margo kurtze he died last at his home from complications following a fall said wife author margo kurtza his wife author margo kurtz died from complications after a declineg he died from complications following a fallo but a month ago she returned to britain taking the chil dren with here she returned to britain taking the childrena but she took the children with himg but she returned to britain with the childrentable 5 compression examples including human and systemoutput ploy not only linguistic but also world knowledge which is not accessible to our modelalthough thesystem can be forced to match the human compression rate the grammaticality and information con tent both suffermore sophisticated features could allow the system to narrow this gapwe next examined the output of our system inmore detail by recording the number of substitu tions deletions and insertions it performed on the test datadeletions accounted for 67 of rewrite operations substitutions for 27 and insertions for 6interestingly we observe a similar ratio in the human compressionshere deletions arealso the most common rewrite operation fol lowed by substitutions and insertions the ability to perform substitutions and insertions increases the compression potential of our system but can also result in drastic meaning changesin most cases the compressions produced byour system did not distort the meaning of the orig 
inalhumans are clearly better at this 965 of their compressions were meaning preservingwe illustrate example output of our system in table 5for comparison we also present the gold standard compressions and baseline outputin thefirst sentence the system rendered kurtz the sub ject of hitchhikedat the same time it deleted the verb and its adjunct from the first conjunct as well as the temporal modi fier at the age of 14 from the second conjunctthe second sentence shows some paraphrasing the verb intended is substituted with planned and 143 poor is now modifying people rather than meansin the third example our system applies multi ple rewritesit deletes last thursday at his homemoves wife author margo kurtz to the subject position and substitutes fall with declineunfortu nately the compressed sentence expresses a rather different meaning from the originalit is not margo kurtz who died but her husbandfinally our last sentence illustrates a counterintuitive substitution the pronoun her is rewritten as himthis is becausethey share the french translation lui and thus piv oting learns to replace the less common word her with himthis problem could be addressed by pivoting over multiple bitexts with different foreign languagespossible extensions and improvements to the current model are many and variedfirstly ashinted at above the model would benefit from extensive feature engineering including source con ditioned features and ngram features besides thelma richer grammar would also boost perfor mancethis could be found by pivoting over more bitexts in many foreign languages or making use of existing or paraphrase corporafinally we planto apply the model to other paraphrasing tasks in cluding fully abstractive document summarisation acknowledgements the authors acknowledge the support of epsrc special thanks to phil blunsom james clarke and miles osborne for their insightful suggestions
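For concreteness, the two best-performing losses above — the bag-of-tokens Hamming loss with its short-output penalty, and the bag edit distance — can be written compactly. The sketch below is ours (function names included); following the definitions in the text, only the edit loss clips true-positive counts to the reference counts.

```python
from collections import Counter

def hamming_loss(pred_tokens, ref_tokens):
    # f + max(0, l - (t + f)): predicted items absent from the reference,
    # plus a penalty when the output is shorter than the reference.
    pred, ref = Counter(pred_tokens), Counter(ref_tokens)
    t = sum(c for w, c in pred.items() if w in ref)  # true positives (by type, not clipped)
    f = sum(pred.values()) - t                       # false positives
    ref_len = sum(ref.values())
    return f + max(0, ref_len - (t + f))

def edit_loss(pred_tokens, ref_tokens):
    # Bag edit distance: |p| + |q| - 2 * sum_i min(p_i, q_i),
    # i.e. insertions plus deletions with clipped true-positive counts.
    pred, ref = Counter(pred_tokens), Counter(ref_tokens)
    overlap = sum(min(c, ref[w]) for w, c in pred.items())
    return sum(pred.values()) + sum(ref.values()) - 2 * overlap
```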
C08-1018
Sentence Compression Beyond Word Deletion. In this paper we generalise the sentence compression task: rather than simply shortening a sentence by deleting words or constituents, as in previous work, we rewrite it using additional operations such as substitution, reordering and insertion. We present a new corpus that is suited to our task, and a discriminative tree-to-tree transduction model that can naturally account for structural and lexical mismatches. The model incorporates a novel grammar extraction method, uses a language model for coherent output, and can be easily tuned to a wide range of compression-specific loss functions. Different from prior research, we achieve sentence compression using a combination of several operations, including word deletion, substitution, insertion and reordering, based on a statistical model, which is similar to our paraphrase generation process. We present a model that can both compress and paraphrase individual sentences, without, however, generating document-level summaries. Our abstractive method sheds more light on how people compress sentences, but does not always manage to outperform extractive methods. We expand the task to include substitutions, insertions and reorderings that are automatically learned from parallel texts. We propose the first abstractive compression method.
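The breakdown of rewrite operations reported for this system (deletions, substitutions, insertions) can be tallied for any source/compression pair by aligning the two token sequences. The sketch below uses difflib and is our own illustration rather than the procedure used in the paper.

```python
from difflib import SequenceMatcher

def rewrite_counts(source_tokens, target_tokens):
    # Tally word-level deletions, substitutions and insertions between a
    # source sentence and its compression via a longest-match alignment.
    counts = {"delete": 0, "substitute": 0, "insert": 0}
    matcher = SequenceMatcher(None, source_tokens, target_tokens)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "delete":
            counts["delete"] += i2 - i1
        elif op == "insert":
            counts["insert"] += j2 - j1
        elif op == "replace":
            counts["substitute"] += max(i2 - i1, j2 - j1)
    return counts

# Example pair from Table 5:
# rewrite_counts("the scheme was intended for people of poor or moderate means".split(),
#                "the scheme was planned for poor people".split())
```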
a classifierbased approach to preposition and determiner error correction in l2 english in this paper we present an approach to the automatic identification and correction ofpreposition and determiner errors in non native english writing we show that models of use for these parts of speech can be learned with an accuracy of 7006 and 9215 respectively on l1 text and present first results in an error detection task for l2 writing the field of research in natural language processing applications for l2 language is constantly growingthis is largely driven by the ex panding population of l2 english speakers whose varying levels of ability may require different types of nlp tools from those designed primarily for native speakers of the languagethese include applications for use by the individual and within instructional contextsamong the key tools are errorchecking applications focusing particularly on areas which learners find the most challengingprepositions and determiners are known to be oneof the most frequent sources of error for l2 en glish speakers a finding supported by our analysisof a small errortagged corpus we created there fore in developing a system for automatic error detection in l2 writing it seems desirable to focus on these problematic and very common parts of speech this paper gives a brief overview of the prob lems posed by these pos and of related workwe c 2008licensed under the creative commonsattributionnoncommercialshare alike 30 unported li cense some rights reservedthen present our proposed approach on both l1 and l2 data and discuss the results obtained so far21 prepositionsprepositions are challenging for learners because they can appear to have an idiosyncratic behaviour which does not follow any predictable pattern even across nearly identical contextsfor example we say i study in boston but i study at mit or he is independent of his parents but dependent on his sonas it is hard even for l1 speakers to articulatethe reasons for these differences it is not surprising that learners find it difficult to master preposi tions22 determinersdeterminers pose a somewhat different problem from prepositions as unlike them their choice is more dependent on the wider discourse contextthan on individual lexical itemsthe relation be tween a noun and a determiner is less strict than that between a verb or noun and a preposition the main factor in determiner choice being the specific properties of the nouns contextfor example wecan say boys like sport or the boys like sport depending on whether we are making a general state ment about all boys or referring to a specific groupequally both she ate an apple and she ate the ap ple are grammatically wellformed sentences butonly one may be appropriate in a given context de pending on whether the apple has been mentioned previouslytherefore here too it is very hard tocome up with clearcut rules predicting every pos sible kind of occurrence169although in the past there has been some research on determiner choice in l1 for applications such as generation and machine translation output work to date on automatic error detection in l2 writing hasbeen fairly limitedizumi et al train a maximum entropy classifier to recognise various er rors using contextual featuresthey report results for different error types but there is no breakdown of results byindividual poshan et al use a maxi mum entropy classifier to detect determiner errors achieving 83 accuracychodorow et al present an approach to preposition error detectionwhich also uses a model 
based on a maximum entropy classifier trained on a set of contextual fea tures together with a rulebased filterthey report 80 precision and 30 recallfinally gamon etal use a complex system including a decision tree and a language model for both preposi tion and determiner errors while yi et al propose a web countbased system to correct de terminer errors the work presented here displays some similar ities to the papers mentioned above in its use of a maximum entropy classifier and a set of featureshowever our feature set is more linguistically sophisticated in that it relies on a full syntactic analysis of the datait includes some semantic compo nents which we believe play a role in correct class assignmentdeterminers 41 feature setthe approach proposed in this paper is based on the belief that although it is difficult to formulatehard and fast rules for correct preposition and determiner usage there is enough underlying regularity of characteristic syntactic and semantic con texts to be able to predict usage to an acceptabledegree of accuracywe use a corpus of grammat ically correct english to train a maximum entropyclassifier on examples of correct usagethe classifier can therefore learn to associate a given preposition or determiner to particular contexts and re liably predict a class when presented with a novel instance of a context for one or the otherthe l1 source we use is the british national head noun applenumber singular noun type count named entityno wordnet category food plant prep modificationyes onobject of prepno adj modificationyes juicyadj grade superlative pos 3 vv dt jjs in dt nn table 1 determiner feature set for pick the juiciest apple on the treepos modified verb lexical item modified drivewordnet category motion subcat frame pp to pos of object noun object lexical item londonnamed entityyes type location pos 3 nnp vbd nnp grammatical relation iobj table 2 preposition feature set for john drove to londoncorpus as we believe this offers a represen tative sample of different text typeswe represent training and testing items as vectors of values for linguistically motivated contextual featuresour feature vectors include 18 feature categories for determiners and 13 for prepositions the main ones are illustrated in table 1 and table 2 respectivelyfurther determiner features note whether the nounis modified by a predeterminer possessive nu meral andor a relative clause and whether it ispart of a there is phraseadditional preposi tion features refer to the grade of any adjectives or adverbs modified and to whether the items modified are modified by more than one pp 1 in de felice and pulman we described some of the preprocessing required and offered some motivation for this approachas for ourchoice of features we aim to capture all the ele ments of a sentence which we believe to have an effect on preposition and determiner choice and which can be easily extracted automatically this is a key consideration as all the features derivedrely on automatic processing of the textgrammatical relations refer to raspstyle grammatical re lations between heads and complements in which the preposition occurs 170 author accuracy baseline 2694 gamon et al 08 6493 chodorow et al 07 6900 our model 7006 table 3 classifier performance on l1 prepositions 2006semantic word type information is takenfrom wordnet lexicographer classes 40 broad se mantic categories which all nouns and verbs in wordnet belong to 2 while the postags are from the penn treebank tagset we note the pos of three words either 
side of the target word 3 for eachoccurrence of a preposition or determiner in the corpus we obtain a feature vector consisting ofthe preposition or determiner and its context de scribed in terms of the features noted above51 prepositionsat the moment we restrict our analysis to the nine most frequent prepositions in the data at by for from in of on to and with to ensure a sufficient amount of data for trainingthis gives a training dataset comprising 8898359 instanceswe use a standard maximum entropy classifier 4 and donot omit any features although we plan to experiment with different feature combinations to deter mine if and how this would impact the classifiers performancebefore testing our model on learner data it is important to ascertain that it can correctlyassociate prepositions to a given context in gram matical welledited datawe therefore tested themodel on a section of the bnc not used in train ing section j our best result to date is 7006 accuracy table 3 relates our results to others reported in the literature on comparable tasksthe baseline refers to always choosing the most frequent option namely ofwe can see that our models performance com pares favourably to the best results in the literature although direct comparisons are hard to draw sincedifferent groups train and test on different preposi tion sets and on different types of data 7428 9047 to 2064 8599 8173 in 1768 6015 6760 for 801 5547 4378 on 654 5852 4581 with 603 5813 4633 at 472 5744 5212 by 469 6383 5651 from 386 5920 3207 table 4 l1 results individual prepositions onfurthermore it should be noted that gamon et al report more than one figure in their results as there are two components to their model one determining whether a preposition is needed and the other deciding what the preposition should bethe figure reported here refers to the latter taskas it is the most similar to the one we are evalu atingadditionally chodorow et al also discusssome modifications to their model which can in crease accuracy the result noted here is the one more directly comparable to our own approach511 further discussion to fully assess the models performance on the l1data it is important to consider factors such as performance on individual prepositions the relation ship between training dataset size and accuracy and the kinds of errors made by the modeltable 4 shows the classifiers performance on in dividual prepositions together with the size of their training datasetsat first glance a clear correlationappears between the amount of data seen in training and precision and recall as evidenced for ex ample by of or to for which the classifier achievesa very high scorein other cases however the cor relation is not so clearcutfor example by has one of the smallest data sets in training but higher scores than many of the other prepositions whilefor is notable for the opposite reason namely hav ing a large dataset but some of the lowest scoresthe absence of a definite relation between dataset size and performance suggests that theremight be a cline of learnabilityfor these prepo sitions different prepositionscontexts may be more or less uniquely identifiable or they mayhave more or fewer senses leading to less confusion for the classifierone simple way of verify ing the latter case is by looking at the number of senses assigned to the prepositions by a resource 171 target prep confused with at by for from in of on to with at xx 465 1082 295 3683 1946 917 1028 585 by 654 xx 850 258 4138 1944 541 1004 610 for 819 393 xx 191 2567 3612 560 
1129 728 from 619 414 672 xx 2698 2674 770 1645 507 in 716 928 1068 301 xx 4340 1092 896 659 of 395 200 1881 336 4021 xx 946 1477 743 on 549 385 866 229 3288 2792 xx 1220 671 to 977 382 1149 371 2486 2795 943 xx 895 with 366 443 1206 224 2808 2663 681 1610 xx table 5 confusion matrix for l1 data prepositions such as the oxford english dictionaryhowever we find no good correlation between the two as the preposition with the most senses is of and that with the fewest is from thus negating the idea that fewer senses make a preposition easierto learnthe reason may therefore be found else where eg in the lexical properties of the contextsa good picture of the models errors can be had by looking at the confusion matrix in table 5which reports for each preposition what the clas sifiers incorrect decision wasanalysis of these errors may establish whether they are related to thedataset size issue noted above or have a more lin guistically grounded explanationfrom the table the frequency effect appears evi dent in almost every case the three most frequentwrong choices are the three most frequent prepo sitions to of and in although interestingly not inthat order in usually being the first choiceconversely the less frequent prepositions are less of ten suggested as the classifiers choicethis effectprecludes the possibility at the moment of draw ing any linguistic conclusionsthese may only be gleaned by looking at the errors for the three more frequent prepositionswe see for example that there seems to be a strong relation between of and for the because of which is not immediately clear perhaps they both often occur within noun phrasesmore pre dictable is the confusion between to and from andbetween locative prepositions such as to and at al though the effect is less strong for other potentially confusable pairs such as in and at or ontable 6 gives some examples of instances where the classifiers chosen preposition differs from thatfound in the original textin most cases the clas sifiers suggestion is also grammatically correct classifier choice correct phrase demands of the sector demands forcondition for development condition oftravel to speed travel atlook at the usa look totable 6 examples of classifier errors on preposi tion l1 task author accuracy baseline 5983 han et al 06 8300 gamon et al 08 8607 turner and charniak 07 8674 our model 9215 table 7 classifier performance l1 determiners but the overall meaning of the phrases changes somewhatfor example while the demands of the sector are usually made by the sector itself the demands for the sector suggest that someoneelse may be making themthese are subtle dif ferences which it may be impossible to capture without a more sophisticated understanding of the wider contextthe example with travel on the other hand yields an ungrammatical resultwe assume thatthe classifier has acquired a very strong link be tween the lexical item travel and the preposition tothat directs it towards this choice this suggests that individual lexical items play an important role in preposition choice along with other more general syntactic and semantic properties of the context172 of training data precrecall a 961 7052 5350 the 2919 8517 9151 null 6120 9863 9879 table 8 l1 results individual determiners 52 determinersfor the determiner task we also consider only the three most frequent cases which gives us a training dataset consisting of 4043925 instanceswe achieve accuracy of 9215 on thel1 data as shown in table 7again the baseline refers to the most fre quent class 
nullthe best reported results to date on determiner selection are those in turner and charniak our model outperforms their ngram languagemodel approach by over 5since the two approaches are not tested on the same data this com parison is not conclusive but we are optimistic that there is a real difference in accuracy since the type of texts used are not dissimilaras in the case of the prepositions it is interesting to see whether this high performance is equally distributed across thethree classes this information is reported in ta ble 8here we can see that there is a very strongcorrelation between amount of data seen in training and precision and recallthe indefinite arti cles lower learnability and its lower frequency appears not to be peculiar to our data as it is also found by gamon et al among othersthe disparity in training is a reflection of the dis tribution of determiners in the english languageperhaps if this imbalance were addressed the model would more confidently learn contexts of use for a too which would be desirable in view of using this information for error correctionon theother hand this would create a distorted represen tation of the composition of english which maynot be what we want in a statistical model of lan guagewe plan to experiment with smaller scale more similar datasets to ascertain whether the issue is one of training size or of inherent difficulty in learning about the indefinite articles occurrencein looking at the confusion matrix for determin ers it is interesting to note that for theclassifiers mistakes involving a or the the erroneous choice is in the almost always the other de terminer rather than the null casethis suggeststhat the frequency effect is not so strong as to over target det confused with a the null a xx 9292 708 the 8066 xx 1934 null 1451 8549 xx table 9 confusion matrix for l1 determiners ride any true linguistic information the model has acquired otherwise the predominant choice wouldalways be the null caseon the contrary these results show that the model is indeed capable of distinguishing between contexts which require a determiner and those which do not but requires fur ther fine tuning to perform better in knowing which of the two determiner options to chooseperhaps the introduction of a discourse dimension might assist in this respectwe plan to experiment withsome simple heuristics for example given a se quence determiner noun has the noun appeared in the preceding few sentencesif so we might expect the to be the correct choice rather than a61 working with l2 textto evaluate the models performance on learner data we use a subsection of the cambridge learner corpus 5 we envisage our model toeventually be of assistance to learners in analysingtheir writing and identifying instances of preposi tion or determiner usage which do not correspond to what it has been trained to expect the more probable instance would be suggested as a more appropriate alternativein using nlp tools and techniques which have been developed with and for l1 language a loss of performance on l2 data is to be expectedthese methods usually expect grammatically wellformed input learner text is often ungrammatical misspelled and different in content and structure from typical l1 resources such as the wsj and the bnc62 prepositionsfor the preposition task we extract 2523 instances of preposition use from the clc and ask the classifier to mark them 5 the clc is a computerised database of contemporary written learner english it wasdeveloped jointly by cambridge esol 
and cambridge uni versity pressthe cambridge error coding system has been developed and applied manually to the data by cambridge university press173 instance type accuracy correct 667 incorrect 70table 10 accuracy on l2 data prepositionsac curacy on incorrect instances refers to the classifier successfully identifying the preposition in the text as not appropriate for that contextas correct or incorrectthe results from this taskare presented in table 10these first results sug gest that the model is fairly robust the accuracy rate on the correct data for example is not much lower than that on the l1 datain an application designed to assist learners it is important to aim to reduce the rate of false alarms cases where the original is correct but the model flags an error toa minimum so it is positive that this result is com paratively highaccuracy on error identification is at first glance even more encouraginghowever ifwe look at the suggestions the model makes to re place the erroneous preposition we find that theseare correct only 515 of the time greatly reduc ing its usefulness621 further discussion a first analysis of the classifiers decisions and itserrors points to various factors which could be i am pairing its performancespelling mistakes in theinput are one of the most immediate onesfor ex ample in the sentence i am franch responsable on the computer services the classifier is not able to suggest a correct alternative to the erroneous onsince it does not recognise the adjective as a misspelling of responsible it loses the information associated with this lexical feature which could po tentially determine the preposition choicea more complex problem arises when poor grammar in the input misleads the parser so thatthe information it gives for a sentence is incor rect especially as regards pp attachmentin this example i wold like following equipment to my speech computer modem socket and microphone the missing the leads the parser to treat following as a verb and believes it to be the verb to which the preposition is attachedit therefore suggests from as a correction which is a reasonable choice given the frequency of phrases such as to follow fromhowever this was not what the pp was meant to modify impaired performance from the parser could be a significant negative factor in the models performanceit would be interesting to test themodel on texts written by students of different lev els of proficiency as their grammar may be more errorfree and more likely to be parsed correctlyalternatively we could modify the parser so as to skip cases where it requires several attempts before producing a parse as these more challenging casescould be indicative of very poorly structured sentences in which misused prepositions are depen dent on more complex errorsa different kind of problem impacting our accu racy scores derives from those instances where theclassifier selects a preposition which can be cor rect in the given context but is not the correct one in that particular casein the example i received a beautiful present at my birthday the classifier identifies the presence of the error and suggests the grammatically and pragmatically appropriate correction forthe corpus annotators however indicate on as the correct choicesince we use their annotations as the benchmark against which to evaluate the model this instance is counted as the classifier being wrong because it disagrees with the annotatorsa better indication of the modelsperformance may be to independently judge its de cisions to avoid 
being subject to the annotatorsbiasfinally we are beginning to look at the rela tions between preposition errors and other types oferror such as verb choice and how these are anno tated in the dataan overview of the classifiers error patterns forthe data in this task shows that they are largely similar to those observed in the l1 datathis sug gests that the gap in performance between l1 and l2 is due more to the challenges posed by learner text than by inherent shortcomings in the model and therefore that the key to better performance is likely to lie in overcoming these problemsin future work we plan to use l2 data where someof the spelling errors and nonpreposition or deter miner errors have been corrected so that we can see which of the other errors are worth focussing on first63 determinersour work on determiner error correction is still in the early stageswe follow a similar procedure to the prepositions task selecting a number of both correct and incorrect instanceson the former accuracy is comparable to that on l1data 922the danger of false alarms then ap pears not to be as significant as for the prepositions 174 taskon the incorrect instances however accuracy is less than 10preliminary error analysis shows that the modelis successful at identifying cases of misused deter miner eg a for the or vice versa doing so in overtwothirds of caseshowever by far the most fre quent error type for determiners is not confusion between indefinite and definite article but omitting an article where one is neededat the moment themodel detects very few of these errors no doubt in fluenced by the preponderance of null cases seen in trainingfurthermore some of the issues raised earlier in discussing the application of nlp tools to l2 language hold for this task tooin addition to those though in this task more than for prepositions we believe that differences intext type between the training texts the bnc and the testing material learner essays has a sig nificant negative effect on the modelin this taskthe lexical items play a crucial role in class assign mentif the noun in question has not been seen in training the classifier may be unable to make an informed choicealthough the bnc comprises a wide variety of texts there may not be a sufficient number covering topics typical of learner essays such as business lettersor postcards to penpalsalso the bnc was created with material from almost 20 years ago and learners writing in contem porary english may use lexical items which are notvery frequently seen in the bnca clear exam ple of this discrepancy is the noun internet which requires the definite article in english but not inseveral other languages leading to countless sen tences such as i saw it in internet i booked it on internet and so onthis is one of the errors themodel never detects a fact which is not surpris ing when we consider that this noun occurs only four times in the whole of the training datait may be therefore necessary to consider using alternative sources of training data to overcome this problem and improve the classifiers performancein developing this model our first aim was not to create something which learns like a human butsomething that works in the best and most effi cient possible wayhowever it is interesting to see whether human learners and classifiers display similar patterns of errors in preposition choicethis information has twofold value as well as being of pedagogical assistance to instructors of en glish l2 were the classifier to display studentlike error patterns 
insights into error triggerscould be derived from the l2 pedagogical literature to improve the classifierthe analysis of the typesof errors made by human learners yields some insights which might be worthy of further investi gationa clear one is the confusion between the three locative and temporal prepositions at in and on this type of error is made often by both learners and the model on both types of data suggesting that perhaps further attentionto features might be necessary to improve discrim ination between these three prepositionsthere are also interesting divergencesfor ex ample a common source of confusion in learners is between by and from as in i like it becauseits from my favourite bandhowever this confu sion is not very frequent in the model a difference which could be explained either by the fact that as noted above performance on from is very low and so the classifier is unlikely to suggest it or that in training the contexts seen for by are sufficiently distinctive that the classifier is not misled like the learnersfinally a surprising difference comes from looking at what to is confused withthe model often suggests at where to would be correctthis is perhaps not entirely unusual as both can occur with locative complements and this similarity could be con fusing the classifierlearners however although they do make this kind of mistake are much more hampered by the confusion between for and to as in she was helpful for me or this is interesting for youin other words for learners it seems that the abstract use of this preposition its benefactive sense is much more problematic than the spatial sensewe can hypothesise that the classifier is less distracted by these cases because the effect of the lexical features is strongera more detailed discussion of the issues arising from the comparison of confusion pairs cannot be had herehowever in noting both divergences and similarities between the two learners human and machine we may be able to derive useful insights into the way the learning processes operate and what factors could be more or less important for them175this paper discussed a contextual feature based approach to the automatic acquisition of models of use for prepositions and determiners whichachieve an accuracy of 7006 and 9215 re spectively and showed how it can be applied to anerror correction task for l2 writing with promis ing early resultsthere are several directions that can be pursued to improve accuracy on both types of datathe classifier can be further finetuned to acquire more reliable models of use for the two poswe can also experiment with its confidencethresholds for example allowing it to make an other suggestion when its confidence in its first choice is lowfurthermore issues relating to the use of nlp tools with l2 data must be addressed such as factoring out spelling or other errors in the data and perhaps training on text types which are more similar to the clcin the longer term we also envisage mining the information implicit inour training data to create a lexical resource de scribing the statistical tendencies observedacknowledgementswe wish to thank stephen clark and laura rimell for stim ulating discussions and the anonymous reviewers for their helpful commentswe acknowledge cambridge university presss assistance in accessing the cambridge learner corpusdatarachele de felice was supported by an ahrc scholar ship for the duration of her studies
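The contextual features in Tables 1 and 2 map naturally onto a dictionary encoding. Below is a minimal sketch of training a maximum entropy classifier (multinomial logistic regression) on such features; scikit-learn stands in for whichever maxent implementation is actually used, and the feature names and training pairs are illustrative.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each instance pairs a context (categorical features, cf. Table 2) with the
# preposition observed in that context.
contexts = [
    {"verb": "drive", "verb_wn": "motion", "obj_noun": "London",
     "obj_ne_type": "location", "gr": "iobj", "pos-1": "VBD", "pos+1": "NNP"},
    {"verb": "study", "verb_wn": "cognition", "obj_noun": "Boston",
     "obj_ne_type": "location", "gr": "iobj", "pos-1": "VBP", "pos+1": "NNP"},
]
labels = ["to", "in"]

# Multinomial logistic regression is the standard maximum entropy model.
model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(contexts, labels)
print(model.predict([{"verb": "drive", "obj_ne_type": "location", "gr": "iobj"}]))
```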
C08-1022
A Classifier-Based Approach to Preposition and Determiner Error Correction in L2 English. In this paper we present an approach to the automatic identification and correction of preposition and determiner errors in non-native English writing. We show that models of use for these parts of speech can be learned with an accuracy of 70.06% and 92.15% respectively on L1 text, and present first results in an error detection task for L2 writing. In the context of automated preposition and determiner error correction in L2 English, we note that the process is often disrupted by misspellings.
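The per-preposition precision/recall figures and the confusion matrix reported in this work can be computed from gold and predicted labels with standard tools. A sketch with scikit-learn, using hypothetical label lists:

```python
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

PREPOSITIONS = ["at", "by", "for", "from", "in", "of", "on", "to", "with"]

gold = ["of", "to", "in", "of", "at"]       # hypothetical gold labels
predicted = ["of", "to", "of", "of", "in"]  # hypothetical classifier output

precision, recall, f1, support = precision_recall_fscore_support(
    gold, predicted, labels=PREPOSITIONS, zero_division=0)

# Rows: target preposition; columns: what it was confused with (cf. Table 5).
matrix = confusion_matrix(gold, predicted, labels=PREPOSITIONS)
for prep, row in zip(PREPOSITIONS, matrix):
    print(prep, row)
```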
estimation of conditional probabilities with decision trees and an application to finegrained pos tagging we present a hmm partofspeech tag ging method which is particularly suited for pos tagsets with a large number of finegrained tags it is based on three ideas splitting of the pos tags into attributevectors and decomposition of the contex tual pos probabilities of the hmm into aproduct of attribute probabilities esti mation of the contextual probabilities with decision trees and use of highorder hmms in experiments on german andczech data our tagger outperformed state oftheart pos taggers a hiddenmarkovmodel partofspeech tagger computes the most probable pos tag sequence t n 1 t 1 t n for a given word sequence w n 1 t n 1 argmax t n 1 pthe joint probability of the two sequences is de fined as the product of context probabilities and lexical probabilities over all pos tags p n i1 p context probp lexical probhmm taggers are fast and were successfully applied to a wide range of languages and training cor porac 2008licensed under the creative commonsattributionnoncommercialshare alike 30 unported li cense some rights reservedpos taggers are usually trained on corpora with between 50 and 150 different pos tagstagsets of this size contain little or no information aboutnumber gender case and similar morphosyntactic featuresfor languages with a rich morphol ogy such as german or czech more finegrained tagsets are often considered more appropriatetheadditional information may also help to disam biguate the part of speechwithout gender information for instance it is difficult for a tagger to correctly disambiguate the german sentence ist das realitatthe word das is ambiguous between an article and a demonstrativebecause of the lack of gender agreement between das and the noun realitat the article reading must be wrongthe german tiger treebank is an example of a corpus with a more finegrainedtagset large tagsets aggra vate sparse data problemsas an example take the german sentence das zu versteuernde einkommen sinkt this sentence should be tagged as shown in table 1das artdefnomsgneut zu partzu versteuernde adjaposnomsgneut einkommen nregnomsgneut sinkt vfinfull3sgpresind sympunsenttable 1 correct pos tags for the german sentence das zu versteuernde einkommen sinktunfortunately the pos trigram consisting of the tags of the first three words does not occurin the tiger corpusthe unsmoothed 777context probability of the third pos tag is there fore 0if the probability is smoothed with the backoff distribution p the most probable tag is adjaposaccsgfem rather thanadjaposnomsgneutthus the agreement be tween the article and the adjective is not checked anymorea closer inspection of the tiger corpus reveals that it actually contains all the information needed to completely disambiguate each component of the pos tag adjaposnomsgneut all words appearing after an article and the infinitive particle zu are at tributive adjectives all adjectives appearing after an article and a particle have the degree positive all adjectives appearing after a nominative article and a particle have nominative case all adjectives appearing after a singular arti cle and a particle are singular all adjectives appearing after a neuter article and a particle are neuter by decomposing the context probability of adjaposnomsgneut into a product of attribute probabilities p p p p p and selecting the relevant context attributes for the prediction of each attribute we obtain the following expression for the context probability p p p p p 
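Written out, with k denoting the order of the Markov model, the tagger computes

```latex
\hat{t}_1^{\,n} = \operatorname*{argmax}_{t_1^n} P(t_1^n \mid w_1^n),
\qquad
P(t_1^n, w_1^n) = \prod_{i=1}^{n}
  \underbrace{P\bigl(t_i \mid t_{i-k}^{\,i-1}\bigr)}_{\text{context probability}}
  \;\underbrace{P\bigl(w_i \mid t_i\bigr)}_{\text{lexical probability}} .
```

For the running example, one way to spell out the decomposed context probability — assigning to each factor the conditioning attributes suggested by the five observations above, with 0:, 1: and 2: marking the current, preceding and second-preceding tag — is

```latex
\begin{aligned}
P(\text{ADJA.Pos.Nom.Sg.Neut} \mid \text{1:PART.Zu},\, \text{2:ART.Def.Nom.Sg.Neut})
 =\;& P(\text{ADJA} \mid \text{1:PART.Zu},\, \text{2:ART}) \\
 \times\;& P(\text{Pos} \mid \text{0:ADJA},\, \text{1:PART},\, \text{2:ART}) \\
 \times\;& P(\text{Nom} \mid \text{0:ADJA},\, \text{1:PART},\, \text{2:ART.Nom}) \\
 \times\;& P(\text{Sg} \mid \text{0:ADJA},\, \text{1:PART},\, \text{2:ART.Sg}) \\
 \times\;& P(\text{Neut} \mid \text{0:ADJA},\, \text{1:PART},\, \text{2:ART.Neut}) .
\end{aligned}
```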
the conditional probability of each attribute is 1hence the context probability of the whole tag is also 1without having observed the given context it is possible to deduce that the observed pos tag is the only possible tag in this contextthese considerations motivate an hmm tagging approach which decomposes the pos tags into a set of simple attributes and uses decision trees toestimate the probability of each attributedecision trees are ideal for this task because the iden tification of relevant attribute combinations is at the heart of this methodthe backoff smoothing methods of traditional ngram pos taggers require an ordering of the reduced contexts which is not available herediscriminatively trained taggers on the other hand have difficulties to handle the huge number of features which are active at the same time if any possible combination of context attributes defines a separate featuredecision trees are normally used as classifiers ie they assign classes to objects which are represented as at tribute vectorsthe nonterminal nodes are labeledwith attribute tests the edges with the possible out comes of a test and the terminal nodes are labeled with classesan object is classified by evaluating the test of the top node on the object following the respective edge to a daughter node evaluating thetest of the daughter node and so on until a termi nal node is reached whose class is assigned to the objectdecision trees are turned into probability estimation trees by storing a probability for each pos sible class at the terminal nodes instead of a singleresult classfigure 1 shows a probability estima tion tree for the prediction of the probability of the nominative attribute of nouns21 induction of decision treesdecision trees are incrementally built by first selecting the test which splits the manually anno tated training sample into the most homogeneous subsets with respect to the classthis test which maximizes the information gain 1 wrtthe class is 1the information gain measures how much the test de creases the uncertainty about the classit is the difference between the entropy of the empirical distribution of the class variable in the training set and the weighted average entropy 778 2nreg p0571 p0938 p0999 0nname 1artnom 0nname 0nname p0948 p0998 1adjanom yes yes no noyes no yes no no yesfigure 1 probability estimation tree for the nomi native case of nounsthe test 1artnom checks if the preceding word is a nominative articleassigned to the top nodethe tree is recursivelyexpanded by selecting the best test for each sub set and so on until all objects of the current subsetbelong to the same classin a second step the decision tree may be pruned in order to avoid overfit ting to the training dataour tagger generates a predictor for each feature instead of using a single tree for the prediction of all possible values of a feature the tagger builds a separate decision tree for each valuethe motivation was that a tree which predicts a single value does notfragment the data with tests which are only relevant for the distinction of two other values 2 furthermore we observed that such twoclass decision trees require no optimization of the pruning threshold the tree induction algorithm only considers bi nary tests which check whether some particular attribute is present or notthe best test for each node is selected with the standard information gaincriterionthe recursive tree building process ter minates if the information gain is 0the decision tree is pruned with the pruning criterion described 
belowsince the tagger creates a separate tree for eachattribute the probabilities of a set of competing at tributes such as masculine feminine and neuter will not exactly sum up to 1to understand whyassume that there are three trees for the gender attributestwo of them consist of a single terminal node in the two subsetsthe weight of each subset is proportional to its size2we did not directly compare the two alternatives because the implementational effort required would have been too largewhich returns a probability of 03the third tree for neuter has one nonterminal and two terminalnodes returning a probability of 03 and 05 re spectivelythe sum of probabilities is therefore either 09 or 11 but never exactly 1this problem is solved by renormalizing the probabilitiesthe probability of an attribute is always conditioned on the respective base pos in order to make sure that the probabil ity of an attribute is 0 if it never appeared with the respective base posall context attributes other than the base pos are always used in combination with the base posa typical context attribute is 1artnomwhich states that the preceding tag is an article with the attribute nom1artis also a valid attribute specification but 1nomis notthe tagger further restricts the set of possible test attributes by requiring that some attribute ofthe pos tag at position ik must have been used be fore an attribute of the pos tag at position i may be examinedthis restriction improved the tagging accuracy for large contexts22 pruning criterionthe tagger applies 3the criticalvalue pruning strat egy proposed by a node ispruned if the information gain of the best test mul tiplied by the size of the data subsample is below a given thresholdto illustrate the pruning assume that d is the data of the current node with 50 positive and 25 negative elements and that d 1 and d 2 are the two subsets induced by the best testthe entropy of d is 23 log 2 23 13 log 2 13 092 the entropy ofd 1 is12 log 2 1212 log 2 12 1 and the entropy of d 2 is 67 log 2 67 17 log 2 17 059the information gain is therefore 092 011the resulting score is 75 011 825given a threshold of 6 the node is therefore not prunedwe experimented with prepruning as well as postpruning the performance of prepruning was slightly better and it was less dependent on the choice of the pruning thresholda threshold of 6 consistently produced optimal or near optimal results for prepruningthus prepruning with a threshold of 6 was used in the experimentsthe tagger treats dots in pos tag labels as attribute separatorsthe first attribute of a pos tag is the main categorythe number of additional attributes is fixed for each main categorythe additionalattributes are categoryspecificthe singular at tribute of a noun and an adjective pos tag are therefore two different attributes4each position in the pos tags of a given category corresponds to a featurethe attributes oc curring at a certain position constitute the value set of the featureour tagger is a hmm tagger which decomposes the context probabilities into a product of attribute probabilitiesthe probability of an attribute given the attributes of the preceding pos tags as well as the preceding attributes of the predicted pos tagis estimated with a decision tree as described be forethe probabilities at the terminal nodes of the decision trees are smoothed with the parent node probabilities the smoothing is implemented by adding the weighted class probabilities p p of theparent node to the frequencies f before normal izing them to 
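The critical-value pruning check from the worked example can be reproduced directly. In the sketch below, the 20/20 versus 30/5 split is one split consistent with the entropies quoted in the text; the function names are ours.

```python
from math import log2

def entropy(pos, neg):
    total = pos + neg
    return -sum((c / total) * log2(c / total) for c in (pos, neg) if c > 0)

def pruning_score(parent, children):
    # Information gain of the best test multiplied by the size of the data
    # at the node; the node is pruned if this falls below the threshold (6).
    n = sum(parent)
    weighted = sum((sum(child) / n) * entropy(*child) for child in children)
    return n * (entropy(*parent) - weighted)

# D = 50 positive / 25 negative, split into D1 = 20/20 and D2 = 30/5:
print(round(pruning_score((50, 25), [(20, 20), (30, 5)]), 1))  # ~8.2 > 6, so not pruned
```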
probabilities p f p p c fthe weight was fixed to 1 after a few experiments on development datathis smoothing strat egy is closely related to wittenbell smoothingthe probabilities are normalized by dividing them by the total probability of all attribute values of the respective feature the best tag sequence is computed with theviterbi algorithmthe main differences of our tag ger to a standard trigram tagger are that the order of the markov model is not fixed 4 this is the reason why the attribute tests in figure 1 used complex attributes such as artnom rather than nomand that the context probability p is internally computed as a product of attribute probabili tiesin order to increase the speed the tagger also applies a beamsearch strategy which prunes allsearch paths whose probability is below the prob ability of the best path times a thresholdwith a threshold of 10 3or lower the influence of prun ing on the tagging accuracy was negligible41 supplementary lexiconthe tagger may use an external lexicon which sup plies entries for additional words which are not found in the training corpus and additional tags for words which did occur in the training dataif anexternal lexicon is provided the lexical probabili ties are smoothed as follows the tagger computes the average tag probabilities of all words with the same set of possible pos tagsthe wittenbellmethod is then applied to smooth the lexical prob abilities with the average probabilitiesif the word w was observed with n different tags and f is the joint frequency of w and pos tag t and p is the average probability of t among words with the same set of possible tags as w then the smoothed probability of t given w is defined as follows p f np f nthe smoothed estimates of p are di vided by the prior probability p of the tag and used instead of p5 42 unknown wordsthe lexical probabilities of unknown words areobtained as follows the unknown words are di vided into four disjoint classes 6with numeric ex pressions words starting with an uppercase letter words starting with a lowercase letter and a fourthclass for the other wordsthe tagger builds a suf fix trie for each class of unknown words using the known word types from that classthe maximal length of the suffixes is 7the suffix tries are pruned until all suffixeshave a frequency of at least 5 and the information gain multiplied by the suffix frequency and di 5 p is equal to ppp and p is a constant if the tokenization is unambiguoustherefore dropping the factor p has no influence on the ranking of the different tag sequences6in earlier experiments we had used a much larger num ber of word classesdecreasing their number to 4 turned out to be better780 vided by the number of different pos tags is above a threshold of 1more precisely if t is the set of pos tags that occurred with suffix t is the size of the set t f is the frequency of suffix and p is the probability of pos tag t among the words with suffix then the following condition must hold f at a tt ap a log p a p 1 the pos probabilities are recursively smoothedwith the pos probabilities of shorter suffixes us ing wittenbell smoothingour tagger was first evaluated on data from thegerman tiger treebankthe results were com pared to those obtained with the tnt tagger and the svmtool which is based on support vec tor machines7 the training of the svmtool took more than a daytherefore it was not possible to optimize the parameters systematicallywe tookstandard features from a 5 word window and m4lrl training without optimization of the regular 
ization parameter c in a second experiment our tagger was also evaluated on the czech academic corpus 10 and compared to the tnt tag ger51 tiger corpusthe german tiger treebank contains over 888000 tokensit is annotated with pos tags from the coarsegrained stts tagsetand with additional features encoding informa tion about number gender case person degreetense and moodafter deleting problematic sentences and automatically correcting some easily detectable er rors 885707 tokens were leftthe first 80 were used as training data the first half of the rest as development data and the last 10 as test datasome of the 54 stts labels were mapped to new labels with dots which reduced the numberof main categories to 23examples are the nom inal pos tags nn and ne which were mapped tonreg and nnamesome lexically decidable dis tinctions missing in the tiger corpus have been 7 it was planned to include also the stanford tagger in this comparison but it was not possible to train it on the tiger dataautomatically addedexamples are the distinc tion between definite and indefinite articles and the distinction between hyphens slashes left andright parentheses quotation marks and other sym bols which the tiger treebank annotates with were annotated with thiscase prepositions which select gen itive case but also occur with dative case 8 were tagged with apprgen the more frequent ones of the remaining prepositions such as in werelexicalized the refined tagset alo dis tinguished between the auxiliaries sein haben andwerden and used lexicalized tags for the coor dinating conjunctions aber doch denn wie bis noch and als whose distribution differs from thedistribution of prototypical coordinating conjunc tions such as und or oder for evaluation purposes the refined tags are mapped back to the original tagsthis mapping is unambiguous8 in german the genitive case of arguments is more and more replaced by the dative781 tagger default refined reflexicon baseline 673 673 694 tnt 863 869 904 svmtool 866 866 2 tags 870 879 915 10 tags 876 885 922 table 2 tagging accuracies on development data in percentresults for 2 and for 10 preceding pos tags as context are reported for our tagger512 results table 2 summarizes the results obtained with different taggers and tagsets on the development datathe accuracy of a baseline tagger which chooses the most probable tag 9ignoring the context is 673 without and 694 with the supple mentary lexiconthe tnt tagger achieves 863 accuracy on the default tagseta tag is considered correct if allattributes are correctthe tagset refinement increases the accuracy by about 06 and the ex ternal lexicon by another 35the svmtool is slightly better than the tnttagger on the default tagset but shows little i am provement from the tagset refinementapparently the lexical features used by the svmtool encode most of the information of the tagset refinementwith a context of two preceding pos tags our tagger outper forms tnt by 07 on the default tagset by 1 on the refined tagset and by 11 on the refined tagset plus the additional lexicona larger context of up to 10 preceding pos tags further increased the accuracy by 06 06 and 07 respectivelydefault refined reflexicon tnt stts 9728 tnt tiger 9717 9726 9751 10 tags 9739 9757 9797 table 3 stts accuracies of the tnt tagger trained on the stts tagset the tnt tagger trained on the tiger tagset and our tagger trained on the tiger tagsetthese figures are considerably lower than eg the 967 accuracy reported in brants for the negra treebank which is annotated with 
stts tags without agreement featuresthis is to 9unknown words are tagged by choosing the most fre quent tag of words with the same capitalizationbe expected however because the stts tagset ismuch smallertable 3 shows the results of an eval uation based on the plain stts tagsetthe first result was obtained with tnt trained on tiger data which was mapped to stts beforethe second row contains the results for the tnt tagger when it is trained on the tiger data and the output ismapped to sttsthe third row gives the corre sponding figures for our tagger914915 916917 918919 92921 922923 2 3 4 5 6 7 8 9 10 figure 2 tagging accuracy on development data depending on context size figure 2 shows that the tagging accuracy tends to increase with the context sizethe best results are obtained with a context size of 10what type of information is relevant across a distance of ten wordsa good example is the decision tree for the attribute first person of finite verbs which looks for a first person pronoun at positions 1 through 10 in this ordersince german is a verbfinal language these tests clearly make sensetable 4 shows the performance on the test dataour tagger was used with a context size of 10the suffix length parameter of the tnt tagger was set to 6 without lexicon and to 3 with lexiconthese values were optimal on the development datatheaccuracy of our tagger is lower than on the devel opment datathis could be due to the higher rate of unknown words relative tothe tnt tagger however the accuracy is quite sim ilar for test and development datathe differences between the two taggers are significant10 tagger default refined reflexicon tnt 8345 8411 8914 our tagger 8500 8592 9107 table 4 tagging accuracies on test databy far the most frequent tagging error was the confusion of nominative and accusative caseif 10 726 sentences were better tagged by tnt 1450 sentences were better tagged by our taggerthe resulting score of a binomial test is below 0001782 this error is not counted the tagging accuracy on the development data rises from 9217 to 9427our tagger is quite fast although not as fast asthe tnt taggerwith a context size of 3 it annotates 7000 tokens per second on a com puter with an athlon x2 4600 cputhe training with a context size of 10 took about 4 minutes52 czech academic corpuswe also evaluated our tagger on the czech aca demic corpus which contains 652131 tokens and about 1200 different pos tagsthe data was divided into 80 training data 10 development data and 10 test data885 886 887 888 889 89 2 3 4 5 6 7 8 9 10 contextdata2figure 3 accuracy on development data depend ing on context sizethe best accuracy of our tagger on the develop ment set was 889 obtained with a context of 4 preceding pos tagsthe best accuracy of the tnt tagger was 882 with a maximal suffix length of 5the corresponding figures for the test data are8953 for our tagger and 8888 for the tnt tag gerthe difference is significantour tagger combines two ideas the decompositionof the probability of complex pos tags into a prod uct of feature probabilities and the estimation of the conditional probabilities with decision treesa similar idea was previously presented in kempe but apparently never applied againthe tagging accuracy reported by kempe was below that of a traditional trigram taggerunlike him we found that our tagging method outperformed stateoftheart pos taggers on finegrained pos tagging even if only a trigram context was usedschmid and marquez used decision trees for the estimation of contextual tag prob abilities but 
without a decomposition of the tagprobabilitymagerman applied probabilistic decision trees to parsing but not with a genera tive modelprovost domingos noted that well known decision tree induction algorithms such as c45 or cart fail to produce accurate probability esti matesthey proposed to grow the decision trees to their maximal size without pruning and to smooth the probability estimates with add1 smoothing ferri et al describe a more complex backoffsmoothing methodcontrary to them we applied pruning and found that some pruning gives better results than no pruning another difference is that we used n twoclass trees with normalization to predict the prob abilities of n classesthese twoclass trees can be pruned with a fixed pruning thresholdhence there is no need to put aside training data for parameter tuningan open question is whether the svmtool could outperform the presented tagger if the same decompo sition of pos tags and the same context size was usedwe think that this might be the case if the svm features are restricted to the set of relevant attribute combinations discovered by the decision tree but we doubt that it is possible to train thesvmtool without such a restriction given the difficul ties to train it with the standard context sizeczech pos tagging has been extensively stud ied in the past spoustov etal compared several pos taggers includ ing an ngram tagger and a discriminatively trained tagger and evaluated them on the praguedependency treebank morces tag ging accuracy was 9512 03 better than the ngram taggera hybrid system based on four different tagging methods reached an accuracy of 9568because of the different corpora used andthe different amounts of lexical information avail able a direct comparison to our results is difficultfurthermore our tagger uses no corpusspecific heuristics whereas morce eg is optimized for czech pos taggingthe german tagging results are to the best ofour knowledge the first published results for fine grained pos tagging with the tiger tagset783we presented a hmm pos tagger for finegrained tagsets which splits the pos tags into attributevectors and estimates the conditional probabilities of the attributes with decision treesin ex periments with german and czech corpora this method achieved a higher tagging accuracy than two stateoftheart generalpurpose pos taggers
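The central technique described above, splitting a complex POS tag into an attribute vector and decomposing p(tag | context) into a product of per-attribute probabilities estimated with pruned two-class decision trees, can be illustrated with a small sketch. This is not the authors' implementation: the dot-separated tag format, the one-hot context encoding, and the use of scikit-learn's DecisionTreeClassifier with cost-complexity pruning (standing in for the paper's own tree growing and pruning criterion) are assumptions made for the example, and the context here encodes only the attributes of the preceding tags rather than the paper's full conditioning context.

from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def split_tag(tag):
    # a complex tag such as "N.Reg.Nom.Sg.Neut" is treated as a vector of attributes
    return tag.split(".")

def context_features(prev_tags):
    # hypothetical context encoding: attribute values of the preceding tags, indexed by position
    feats = {}
    for i, tag in enumerate(prev_tags, start=1):
        for j, attr in enumerate(split_tag(tag)):
            feats[f"prev{i}_attr{j}={attr}"] = 1
    return feats

class AttributeModel:
    # n one-vs-rest two-class trees for one attribute, normalised into a
    # probability distribution over the attribute's n possible values
    def __init__(self):
        self.vec = DictVectorizer()
        self.trees = {}

    def fit(self, contexts, values):
        X = self.vec.fit_transform(contexts)
        for v in set(values):
            y = [1 if val == v else 0 for val in values]
            # ccp_alpha pruning stands in for the paper's pruning criterion
            self.trees[v] = DecisionTreeClassifier(ccp_alpha=1e-4).fit(X, y)
        return self

    def prob_dist(self, context):
        X = self.vec.transform([context])
        scores = {}
        for v, tree in self.trees.items():
            proba = tree.predict_proba(X)[0]
            classes = list(tree.classes_)
            scores[v] = proba[classes.index(1)] if 1 in classes else 0.0
        z = sum(scores.values()) or 1.0
        return {v: s / z for v, s in scores.items()}

def tag_probability(tag, attribute_models, prev_tags):
    # p(tag | context) decomposed into a product of per-attribute probabilities
    ctx = context_features(prev_tags)
    p = 1.0
    for model, attr in zip(attribute_models, split_tag(tag)):
        p *= model.prob_dist(ctx).get(attr, 1e-9)
    return p

Normalising n one-vs-rest two-class trees into a single distribution mirrors the paper's stated choice of n two-class trees with normalisation, which is what allows pruning with a fixed threshold instead of tuning it on held-out data.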
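The significance figures quoted for the TIGER test data above (726 sentences better tagged by TnT, 1450 by the decision-tree tagger, binomial test score below 0.001) can be checked with a standard two-sided binomial sign test. A minimal sketch, assuming SciPy 1.7 or later for binomtest:

from scipy.stats import binomtest

# sentences on which exactly one of the two taggers made fewer errors
better_tnt, better_ours = 726, 1450

# under the null hypothesis neither tagger is better, so each "win" is a fair coin flip
result = binomtest(better_ours, n=better_tnt + better_ours, p=0.5, alternative="two-sided")
print(result.pvalue)  # far below 0.001, consistent with the reported significance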
C08-1098
Estimation of Conditional Probabilities with Decision Trees and an Application to Fine-Grained POS Tagging. We present an HMM part-of-speech tagging method which is particularly suited for POS tagsets with a large number of fine-grained tags. It is based on three ideas: (1) splitting of the POS tags into attribute vectors and decomposition of the contextual POS probabilities of the HMM into a product of attribute probabilities, (2) estimation of the contextual probabilities with decision trees, and (3) use of high-order HMMs. In experiments on German and Czech data, our tagger outperformed state-of-the-art POS taggers. Our fine-grained tag set contains approximately 800 tags.
learning entailment rules for unary templates most work on unsupervised entailment rule acquisition focused on rules between templates with two variables ignoring unary rules entailment rules betweentemplates with a single variable in this paper we investigate two approaches for unsupervised learning of such rules and com pare the proposed methods with a binary rule learning method the results show that the learned unary rulesets outperform the binary ruleset in addition a novel directional similarity measure for learning entailment termed balancedinclusion is the best performing measure in many nlp applications such as question an swering and information extraction it is crucial to recognize whether a specific target meaning is inferred from a textfor example a qa system has to deduce that sco sued ibmis inferred from sco won a lawsuit against ibmto answer whom did sco suethis type of reasoning has been identified as a core semanticinference paradigm by the generic textual entail ment framework an important type of knowledge needed for such inference is entailment rulesan entailmentrule specifies a directional inference relation be tween two templates text patterns with variables such as x win lawsuit against y x sue y applying this rule by matching x win lawsuit against y in the above text allows a qa system to c 2008licensed under the creative commonsattributionnoncommercialshare alike 30 unported li cense some rights reservedinfer x sue y and identify ibm y s instantiation as the answer for the above questionentail ment rules capture linguistic and worldknowledge inferences and are used as an important building block within different applications egone reason for the limited performance of generic semantic inference systems is the lack of broadscale knowledgebases of entailment rules supervised learning of broad coverage rulesets is an arduous taskthis sparked intensive research on unsupervised acquisition of entailment rules egmost unsupervised entailment rule acquisitionmethods learn binary rules rules between tem plates with two variables ignoring unary rules rules between unary templates however a predicate quite of ten appears in the text with just a single variable where infer ence requires unary rules eg x take a napx sleepin this paper we focus on unsupervised learning of unary entailment rulestwo learning ap proaches are proposedin our main approach rules are learned by measuring how similar the variable instantiations of two templates in a corpusarein addition to adapting stateoftheart similar ity measures for unary rule learning we propose a new measure termed balancedinclusion which balances the notion of directionality in entailment with the common notion of symmetric semantic similarityin a second approach unary rules arederived from binary rules learned by stateofthe art binary rule learning methodswe tested the various unsupervised unary rule 849learning methods as well as a binary rule learn ing method on a test set derived from a standard ie benchmarkthis provides the first comparisonbetween the performance of unary and binary rule setsseveral results rise from our evaluation while most work on unsupervised learning ignored unary rules all tested unary methods outperformed the binary method it is better to learn unary rules directly than to derive them from a binary rulebase our proposed balancedinclusion measure outperformed all other tested methods interms of f1 measuremoreover only balancedinclusion improved f1 score over a baseline infer ence that does not use 
entailment rules at all this section reviews relevant distributional simi larity measures both symmetric and directional which were applied for either lexical similarity or unsupervised entailment rule learningdistributional similarity measures follow the distributional hypothesis which states that words that occur in the same contexts tend to have similar meanings various measures wereproposed in the literature for assessing such simi larity between two words you and v given a word q its set of features f q and feature weights w q for f f q a common symmetric similarity measure is lin similarity lin ff you f v w you w v ff you w you ff v w v where the weight of each feature is the pointwise mutual information between the word and the feature w q log pr pr weeds and weir proposed to measure thesymmetric similarity between two words by av eraging two directional scores the coverage of each words features by the otherthe coverage of you by v is measured by cover ff you f v w you ff you w you the average can be arithmetic or harmonic weedsa 1 2 cover cover weedsh 2 cover cover cover cover weeds et al also used pmi for feature weightsbinary rule learning algorithms adopted suchlexical similarity approaches for learning rules between templates where the features of each tem plate are its variable instantiations in a corpus such as xsco y ibmfor the example in section 1some works focused on learningrules from comparable corpora containing com parable documents such as different news articles from the same date on the same topic such corpora are highly informative for identifying variations of the same meaning since typically when variableinstantiations are shared across comparable docu ments the same predicates are describedhoweverit is hard to collect broadscale comparable cor pora as the majority of texts are noncomparablea complementary approach is learning from the abundant regular noncomparable corporayetin such corpora it is harder to recognize varia tions of the same predicatethe dirt algorithm learns nondirectional binary rules for templates that are paths in a depen dency parsetree between two noun variables x and y the similarity between two templates t and t is the geometric average dirt lin x lin y where lin xis the lin similarity between xs in stantiations of t and xs instantiations of t in a corpus some workstake the combination of the two variable instantiations in each template occurrence as a single complex feature eg xy scoibm and com pare between these complex features of t and t directional measures most rule learning meth ods apply a symmetric similarity measure between two templates viewing them as paraphrasing eachotherhowever entailment is in general a direc tional relationfor example x acquire y x own y and countersuit against x lawsuit against x propose a directional measure for learning hyponymy between twowords l r by giving more weight to the cov erage of the features of l by are weedsdcovercover when 1 this measure degenerates into cover termed precisionwith 850 precision we obtain a softversion of the inclusion hypothesis presented in which expects l to entail r if the importantfeatures of l appear also in r similarly the ledir algorithm identifies the entailment direction between two binary templates l and r which participate in a relation learned by dirt by measuring the proportion of instantiations of l that are covered by the instantiations of r as far as we know only and learn rules between unary tem plateshowever relies on comparable corpora for identifying 
paraphrasesand simply takes any two templates from comparable sentences that share a named entity instan tiation to be paraphrasessuch approach is notfeasible for noncomparable corpora where statis tical measurement is required learnsrules only between templates related by local dis course in addition their template structure islimited to only verbs and their direct syntactic ar guments which may yield incorrect rules eg forlight verbs to overcome this limitation we use a more expressive template struc ture31 motivationsmost unsupervised rule learning algorithms focused on learning binary entailment ruleshow ever using binary rules for inference is not enoughfirst a predicate that can have multiple arguments may still occur with only one of its argumentsfor example in the acquisition of tca was successful tcais the only argument of acqui sitionsecond some predicate expressions are unary by naturefor example modifiers such as the elected x or intransitive verbsin addition it appears more tractable to learn all variations for each argument of a predicate separately than to learn them for combinations of argument pairsfor these reasons it seems that unary rule learn ing should be addressed in addition to binary rule learningwe are further motivated by the fact thatsome works in ie found learn ing unary templates useful for recognizing relevant named entities though they did notattempt to learn generic knowledge bases of entail ment rulesthis paper investigates acquisition of unary entailment rules from regular noncomparable cor porawe first describe the structure of unarytemplates and then explore two conceivable approaches for learning unary rulesthe first ap proach directly assesses the relation between twogiven templates based on the similarity of their in stantiations in the corpusthe second approachwhich was also mentioned in derives unary rules from learned binary rules32 unary template structureto learn unary rules we first need to define theirstructurein this paper we work at the syntac tic representation leveltexts are represented by dependency parse trees and templates by parse subtreesgiven a dependency parse tree any subtree can be a candidate template setting some of its nodesas variables however the num ber of possible templates is exponential in the sizeof the sentencein the binary rule learning litera ture the main solution for exhaustively learning allrules between any pair of templates in a given corpus is to restrict the structure of templatestypi cally a template is restricted to be a path in a parse tree between two variable nodes following this approach we chose the structure of unary templates to be paths as well where oneend of the path is the templates variablehow ever paths with one variable have more expressive power than paths between two variables since the combination of two unary paths may generate a binary template that is not a pathfor example the combination of x call indictableand call y indictableis the template x call y indictable which is not a path between x and y for every noun node v in a parsed sentence we generate templates with v as a variable as follows 1traverse the path from v towards the root ofthe parse treewhenever a candidate pred icate is encountered the path from that node to v is taken as a templatewe stop when the first verb orclause boundary is encountered which typically represent the syn tactic boundary of a specific predicate851 2to enable templates with control verbs andlight verbs eg x help preventing xmake noise whenever a verb is 
encountered we generate templates that are paths between v and the verbs modifiers either ob jects prepositional complements or infinite or gerund verb forms 3to capture noun modifiers that act as predicates eg the losingx we extract template paths between v and each of its modifiers nouns or adjectives that are derived from a verbwe use the catvar database to identify verb derivations as an example for the procedure the templates extracted from the sentence the losing party played it safewith partyas the variable are losing x x playand x play safe33 direct learning of unary ruleswe applied the lexical similarity measures pre sented in section 2 for unary rule learningeach argument instantiation of template t in the corpus is taken as a feature f and the pmi between t and f is used for the features weightwe first adapteddirt for unary templates as well as its output filtering by ledirthe various weeds measures were also applied 1 symmetric arithmetic average symmetric harmonic average weighted arithmetic average and precisionafter initial analysis we found that given a right hand side template r symmetric measures such as lin generally tend to prefer relations l rin which l and r are related but do not necessarily participate in an entailment or equivalence relation eg the wrong rule kill x injure xon the other hand directional measures such as weeds precision tend to prefer directional rules inwhich the entailing template is infrequentif an in frequent template has common instantiations with another template the coverage of its features istypically high whether or not an entailment relation exists between the two templatesthis behav ior generates highscore incorrect rulesbased on this analysis we propose a new measure that balances the two behaviors termed 1we applied the best performing parameter values pre sented in and balancedinclusion binc identifies entail ing templates based on a directional measure but penalizes infrequent templates using a symmetric measure binc lin precision 34 deriving unary rules from binary rulesan alternative way to learn unary rules is to first learn binary entailment rules and then derive unary rules from themwe derive unary rules from a given binary rulebase in two stepsfirst for each binary rule we generate all possible unary rules that are part of that rule for example from x find solu tion to y x solve y we generate the unary rules x findx solve x find solutionxsolve solution to y solve y and find solu tion to y solve y the score of each generated rule is set to be the score of the original binary rulethe same unary rule can be derived from different binary rulesfor example hire y employ y is derived both from x hire y x them ploy y and hire y for z employ y for z having a different score from each original binary rulethe second step of the algorithm aggregates the different scores yielded for each derived rule to produce the final rule scorethree aggregationfunctions were tested sum aver age and maximum we want to evaluate learned unary and binary rule bases by their utility for nlp applications throughassessing the validity of inferences that are per formed in practice using the rule baseto perform such experiments we need a test set of seed templates which correspond to a set of target predicates and a corpus annotated with allargument mentions of each predicatethe evaluation assesses the correctness of all argument ex tractions which are obtained by matching in the corpus either the seed templates or templates that entail them according to the 
rulebase following we found the ace 2005 event training set 2useful for this pur posethis standard ie dataset includes 33 types of event predicates such as injure sue and divorce2 httpprojectsldcupenneduace 852all event mentions are annotated in the corpus in cluding the instantiated arguments of the predicateace guidelines specify for each event its possible arguments each associated with a semantic rolefor instance some of the injure event arguments are agent victim and timeto utilize the ace dataset for evaluating entail ment rule applications we manually represented each ace event predicate by unary seed templatesfor example the seed templates for injure are a injure injure v and injure in t we mapped each event role annotation to the corresponding seed template variable eg agentto a and victimto v in the above exampletemplatesare matched using a syntactic matcher that han dles simple morphosyntactic phenomena as in a rule application is considered correct if the matched argument is annotated by the corresponding ace rolefor testing binary rulebases we automatically generated binary seed templates from any twounary seeds that share the same predicatefor ex ample for injure the binary seeds a injure v a injure in t and injure v in t were automatically generated from the above unary seedswe performed two adaptations to the ace dataset to fit it better to our evaluation needsfirst our evaluation aims at assessing the correctness of inferring a specific target semantic meaning which is denoted by a specific predicate using rulesthus four events that correspond ambiguously tomultiple distinct predicates were ignoredfor instance the transfermoney event refers to both do nating and lending money and thus annotations ofthis event cannot be mapped to a specific seed tem platewe also omitted 3 events with less than 10mentions and were left with 26 events additionally we regard all entailing mentions under the textual entailment definition as correcthowever event mentions are annotated as correct in ace only if they explicitly describe the targeteventfor instance a divorce mention does entail a preceding marriage event but it does not ex plicitly describe it and thus it is not annotated as a marry eventto better utilize the ace dataset we considered for a target event the annotations of other events that entail it as being correct as wellwe note that each argument was considered sep aratelyfor example we marked a mention of a divorced person as entailing the marriage of that person but did not consider the place and time of the divorce act to be those of the marriage we implemented the unary rule learning algo rithms described in section 3 and the binary dirt algorithm we executed each method over the reuters rcv1 corpus 3 learning for each template r in the corpus the top100 rules in which r is entailed by another tem plate l l rall rules were learned in canonical form the rulebase learned by binary dirt was taken as the input for deriving unary rules from binary rulesthe performance of each acquired rulebase was measured for each ace eventwe measured the percentage of correct argument mentions extracted out of all correct argument mentions annotated for the event and out of all argument mentionsextracted for the event we also mea sured f1 their harmonic average and report macro average recall precision and f1 over the 26 event typesno threshold setting mechanism is suggested inthe literature for the scores of the different algo rithms especially since rules for different right hand side 
templates have different score rangesthus we follow common evaluation practice and test each learned ruleset by taking the top k rules for each seed template wherek ranges from 0 to 100whenk0 no rules are used and mentions are ex tracted only by direct matching of seed templatesour rule application setting provides a rather simplistic ie system it is thus useful for comparing different rulebasesthough the absolute extraction figures do not re flect the full potential of the rulesin secion 52 we analyze the fullsystems errors to isolate the rulescontribution to overall system performance51 resultsin this section we focus on the best performing variations of each algorithm type binary dirt unary dirt unary weeds harmonic binc and derivedaveragewe omitted the results of methods that were clearly inferior to others weedsa weedsd and weedsprecision did not increase 3 httpaboutreuterscomresearchandstandardscorpus 853recall over not using rules because rules with in frequent templates scored highest and arithmetic averaging could not balance well these high scores out of the methods for deriving unary rules from binary rulebases derivedaverage performed best filtering with ledir did not improve the performance of unary dirtfigure 1 presents recall precision and f1 of themethods for different cutoff pointsfirst we observe that even when matching only the seed tem plates unary seeds outperform the binary seeds in terms of both precision and recallthis surprising behavior is consistent through all rulecutoff points all unary learning algorithms per form better than binary dirt in all parametersthe inferior behavior of binary dirt is analyzed in section 52the graphs show that symmetric unary approaches substantially increase recall but dramati cally decrease precision already at the top 10 rulesas a result f1 only decreases for these methodslin similarity and weedsharmonic show similar behaviorsthey consistently outperform derivedaverageone reason for this is that incorrectunary rules may be derived even from correct bi nary rulesfor example from x gain seat on y elect x to y the incorrect unary rule x gainelectxis also generatedthis problem is less frequent when unary rules are directly scored based on their corpus statisticsthe directional measure of binc yields a more accurate rulebase as can be seen by the much slower precision reduction rate compared to theother algorithmsas a result it is the only algo rithm that improves over the f1 baseline of k0with the best cutoff point at k20bincs re call increases moderately compared to other unarylearning approaches but it is still substantially bet ter than not using rules we found that many of the correct mentions missed by binc but identified by other methods are due to occasional extractions of incorrect frequent rules such as partial templates this is reflected in the very low precision of the other methodson the other hand some correct rules were only learned by binc eg countersuit againstxx sueand x take wife x marrywhen only one argument is annotated for a specific event mention binary rules either miss that mention orextract both the correct argument and another in correct oneto neutralize this bias we also testedthe various methods only on event mentions an notated with two or more arguments and obtained similar results to those presented for all mentionsthis further emphasizes the general advantage of using unary rules over binary rules854 52 analysisbinarydirt we analyzed incorrect rules both for binarydirt and binc by randomly samplingfor 
each algorithm 200 rules that extracted incor rect mentionswe manually classified each rule l r as either correct the rule is valid insome contexts of the event but extracted some in correct mentions partial template l is only apart of a correct template that entails r for exam ple learning x decidex meetinstead of x decide to meet x meet incorrect other incorrect rules eg charge x convict xtable 1 summarizes the analysis and demonstrates two problems of binarydirtfirst rela tive to binc it tends to learn incorrect rules for high frequency templates and therefore extractedmany more incorrect mentions for the same num ber of incorrect rulessecond a large percentage of incorrect mentions extracted are due to partial templates at the rule lefthandsidesuch rules are leaned because many binary templates have a more complex structure than paths between argumentsas explained in section 32 the unary template structure we use is more expressive enabling to learn the correct rulesfor example binc learnedtake y into custody arrest y while binary dirt learned x take y x arrest y system level analysis we manually analyzedthe reasons for false positives and false negatives of binc at its best performing cutoff point by sampling 200 extractions of each typefrom the false positives analysis we see that 39 of the errors are due to incorrect rulesthe main reasons for learning such rules are those discussed in section 33 related templates that are not entailing infrequent templatesall learning methods suffer from these issuesas wasshown by our results binc provides a first step to wards reducing these problemsyet these issues require further researchapart from incorrectly learned rules incorrect template matching and context mismatch contribute together 46 of theerrorscontext mismatches occur when the entail ing template is matched in inappropriate contextsfor example slam x attack xshould not be applied when x is a ball only when it is a personthe ruleset net effect on system precision is better estimated by removing these errors and fixing the annotation errors which yields 72 precisionbinary dirt balanced inclusion correct 16 38 partial template 27 6 incorrect 157 156 total 200 200 table 1 rule type distribution of a sample of 200rules that extracted incorrect mentionsthe corre sponding numbers of incorrect mentions extracted by the sampled rules is shown in parenthesesreason mentions incorrect rule learned 390 context mismatch 270 match error 190 annotation problem 150 table 2 distribution of reasons for false positives by binc at k20reason mentions rule not learned 615 match error 250 discourse analysis needed 120 argument is predicative 15 table 3 distribution of reasons for false negatives by binc at k20table 3 presents the analysis of false negativesfirst we note that 12 of the arguments cannotbe extracted by rules alone due to necessary discourse analysisthus a recall upper bound for en tailment rules is 88many missed extractions aredue to rules that were not learned how ever 25 of the mentions were missed because of incorrect syntactic matching of correctly learned rulesby assuming correct matches in these cases we isolate the recall of the ruleset which yields 39 recallwe presented two approaches for unsupervised ac quisition of unary entailment rules from regular corporain the first approach rules are directly learned based on distributionalsimilarity measuresthe second approach de rives unary rules from a given rulebase of binary rulesunder the first approach we proposed a novel directional 
measure for scoring entailment rules termed balancedinclusionwe tested the different approaches utilizing a standard ie testset and compared them to binary rule learningour results suggest the advantage of learning unary rules unary rulebases perform 855 better than binary rules it is better to directly learn unary rules than to derive them from binary rulebasesin addition the balancedinclusion measure outperformed all other tested methodsin future work we plan to explore additional unary template structures and similarity scores and to improve rule application utilizing context matching methods such as acknowledgements this work was partially supported by isf grant 109505 the ist programme of the europeancommunity under the pascal network of ex cellence ist2002506778 and the negev project
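The similarity measures reviewed and used above can be written compactly over weighted feature vectors, here plain dictionaries mapping each argument-instantiation feature to its PMI weight (the PMI computation itself is omitted). The Lin, coverage and Weeds averages follow the definitions given in the text; the Balanced-Inclusion score is rendered as the geometric mean of the symmetric Lin score and the directional Weeds precision, which is one natural reading of the garbled formula above and should be treated as an assumption rather than a verbatim reproduction of the paper's definition.

import math

def lin(wu, wv):
    # symmetric Lin similarity between two weighted feature vectors (dict: feature -> PMI weight)
    shared = set(wu) & set(wv)
    num = sum(wu[f] + wv[f] for f in shared)
    den = sum(wu.values()) + sum(wv.values())
    return num / den if den else 0.0

def cover(wu, wv):
    # directional coverage of u's features by v; with u as the candidate left-hand side
    # this is the Weeds precision of the rule u -> v
    shared = set(wu) & set(wv)
    num = sum(wu[f] for f in shared)
    den = sum(wu.values())
    return num / den if den else 0.0

def weeds_arithmetic(wu, wv):
    return 0.5 * (cover(wu, wv) + cover(wv, wu))

def weeds_harmonic(wu, wv):
    a, b = cover(wu, wv), cover(wv, wu)
    return 2 * a * b / (a + b) if a + b else 0.0

def binc(wl, wr):
    # Balanced-Inclusion for the rule l -> r: assumed here to be the geometric mean of
    # the symmetric Lin score and the directional precision of l's features w.r.t. r,
    # so that infrequent left-hand sides with high coverage are penalised
    return math.sqrt(lin(wl, wr) * cover(wl, wr))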
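The second approach described above, deriving unary rules from a binary rule base and then aggregating the scores that each derived rule inherits from the different binary rules it came from, reduces to a small bookkeeping routine. The rule representation and the decomposition function are placeholders for illustration; the text reports that, among the aggregation functions tested, averaging (Derived-Average) performed best.

from collections import defaultdict

def derive_unary_rules(binary_rules, decompose, aggregate="avg"):
    # binary_rules: iterable of ((lhs, rhs), score) for binary templates
    # decompose:    function mapping the two sides of a binary rule to the unary rules
    #               contained in it, e.g. ("X find solution to Y", "X solve Y") ->
    #               [("X find", "X solve"), ("X find solution", "X solve"),
    #                ("solution to Y", "solve Y"), ("find solution to Y", "solve Y")]
    #               (the decomposition depends on the template structure and is not shown here)
    # aggregate:    "sum", "avg" or "max" over the inherited scores
    inherited = defaultdict(list)
    for (lhs, rhs), score in binary_rules:
        for unary_lhs, unary_rhs in decompose(lhs, rhs):
            inherited[(unary_lhs, unary_rhs)].append(score)

    if aggregate == "sum":
        agg = sum
    elif aggregate == "avg":
        agg = lambda xs: sum(xs) / len(xs)
    else:
        agg = max
    return {rule: agg(scores) for rule, scores in inherited.items()}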
C08-1107
Learning Entailment Rules for Unary Templates. Most work on unsupervised entailment rule acquisition focused on rules between templates with two variables, ignoring unary rules: entailment rules between templates with a single variable. In this paper we investigate two approaches for unsupervised learning of such rules and compare the proposed methods with a binary rule learning method. The results show that the learned unary rule-sets outperform the binary rule-set. In addition, a novel directional similarity measure for learning entailment, termed Balanced-Inclusion, is the best performing measure. We propose a unary template, which is defined as a template consisting of one argument slot and one predicate phrase. We use the distributional similarity of arguments to detect unary template entailment. Two approaches for unsupervised learning of unary rules are investigated. In a pivot approach for extracting paraphrase patterns from bilingual parallel corpora is presented, while in the quality of paraphrase extraction from parallel corpora is improved by requiring that phrases and their paraphrases have the same syntactic type. Our approach is different from theirs in many respects: their goal is paraphrase extraction, while we are extracting directional entailment rules; as textual resources for pattern extraction they use parallel corpora, while we rely on monolingual Wikipedia revisions; the paraphrases they extract are more similar to DIRT, while our approach allows focusing on the acquisition of rules for specific phenomena frequent in entailment pairs and not covered by other resources. We try identifying the entailment relation between lexical-syntactic templates using WeedsPrec, but observed that it tends to promote unreliable relations involving infrequent templates.
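The evaluation protocol used above to compare the unary and binary rule bases, matching the seed templates plus the top-k entailing templates per seed and macro-averaging precision, recall and F1 over the 26 ACE event types, can be sketched schematically. The data structures (events, rulebase, match) are placeholders for illustration and not the authors' code.

def prf(extracted, gold):
    # extracted / gold: sets of argument mentions, e.g. (document_id, span), for one event type
    tp = len(extracted & gold)
    p = tp / len(extracted) if extracted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def macro_eval(events, rulebase, match, k):
    # events:   dict event -> (seed_templates, gold_mentions)
    # rulebase: dict seed_template -> list of (entailing_template, score), sorted by score
    # match:    function returning the set of mentions extracted by matching a template
    # k:        rule cutoff per seed template; k = 0 matches only the seeds themselves
    ps, rs, fs = [], [], []
    for seeds, gold in events.values():
        extracted = set()
        for seed in seeds:
            extracted |= match(seed)
            for entailing, _score in rulebase.get(seed, [])[:k]:
                extracted |= match(entailing)
        p, r, f = prf(extracted, gold)
        ps.append(p)
        rs.append(r)
        fs.append(f)
    n = len(ps) or 1
    return sum(ps) / n, sum(rs) / n, sum(fs) / n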
the ups and downs of preposition error detection in esl writing in this paper we describe a methodologyfor detecting preposition errors in the writ ing of nonnative english speakers our system performs at 84 precision andclose to 19 recall on a large set of stu dent essays in addition we address the problem of annotation and evaluation inthis domain by showing how current ap proaches of using only one rater can skew system evaluation we present a sampling approach to circumvent some of the issuesthat complicate evaluation of error detec tion systems the longterm goal of our work is to develop asystem which detects errors in grammar and us age so that appropriate feedback can be given to nonnative english writers a large and growing segment of the worlds populationestimates arethat in china alone as many as 300 million people are currently studying english as a second lan guage usage errors involving prepositions are among the most common types seen in thewriting of nonnative english speakersfor ex ample reported error rates for english prepositions that were as high as 10 ina japanese learner corpuserrors can involve incorrect selection ex traneous use and omission what is responsiblefor making preposition usage so difficult for non native speakersc 2008licensed under the creative commonsattributionnoncommercialshare alike 30 unported li cense some rights reservedat least part of the difficulty seems to be due tothe great variety of linguistic functions that prepositions servewhen a preposition marks the argument of a predicate such as a verb an adjective or a noun preposition selection is con strained by the argument role that it marks thenoun which fills that role and the particular predi catemany english verbs also display alternations in which an argument is sometimes marked by a preposition and sometimes not when prepositions introduceadjuncts such as those of time or manner selec tion is constrained by the object of the preposition finally the selection of a preposition for a given context also depends upon the intended meaning of the writer with so many sources of variation in englishpreposition usage we wondered if the task of se lecting a preposition for a given context might prove challenging even for native speakersto investigate this possibility we randomly selected200 sentences from microsofts encarta encyclopedia and in each sentence we replaced a ran domly selected preposition with a blank linewe then asked two native english speakers to perform a cloze task by filling in the blank with the best preposition given the context provided by the rest of the sentenceour results showed only about75 agreement between the two raters and be tween each of our raters and encartathe presence of so much variability in prepo sition function and usage makes the task of thelearner a daunting oneit also poses special chal lenges for developing and evaluating an nlp error detection systemthis paper addresses both the 865 development and evaluation of such a systemfirst we describe a machine learning system that detects preposition errors in essays of esl writersto date there have been relatively few attempts to address preposition error detectionthough the sister task of detecting determiner errors has been the focus of more researchour system performs comparably with other leading sys temswe extend our previous work by experimenting with combination fea tures as well as features derived from the google ngram corpus and comlex second we discuss drawbacks in current meth ods of annotating esl 
data and evaluating errordetection systems which are not limited to prepo sition errorswhile the need for annotation by multiple raters has been well established in nlp tasks most previous work in error detection has surprisingly relied on only one raterto either create an annotated corpus of learner errors or to check the systems outputsome grammatical errors such as number disagreement be tween subject and verb no doubt show very highreliability but others such as usage errors involv ing prepositions or determiners are likely to be much less reliableour results show that relyingon one rater for system evaluation can be problem atic and we provide a sampling approach which can facilitate using multiple raters for this taskin the next section we describe a system that automatically detects errors involving incorrect preposition selection and extraneous preposition usage in sections 3 and 4 we discuss theproblem of relying on only one rater for exhaus tive annotation and show how multiple raters can be used more efficiently with a sampling approachfinally in section 5 we present an analysis of com mon preposition errors that nonnative speakers make21 modelwe have used a maximum entropy classi fier to build a model of correctpreposition usage for 34 common english prepo sitionsthe classifier was trained on 7 million preposition contexts extracted from parts of the metametrics lexile corpus that contain textbooks and other materials for high school studentseach context was represented by 25 features consisting of the words and partofspeech tags found in a local window of two positions around the preposition plus the head verb of the preceding verb phrase the head noun of the precedingnoun phrase and the head noun of the following noun phrase among othersin analyzing the contexts we used only tagging and heuris tic phrasechunking rather than parsing so as to avoid problems that a parser might encounter with illformed nonnative text 1 in test mode the classifier was given the context in which a preposition occurred and it returned a probability for each of the 34 prepositions22 other componentswhile the me classifier constitutes the core of thesystem it is only one of several processing com ponents that refines or blocks the systems outputsince the goal of an error detection system is to provide diagnostic feedback to a student typically a systems output is heavily constrained so that it minimizes false positives and thus does not mislead the writerpreprocessing filter a preprocessing pro gram skips over preposition contexts that contain spelling errorsclassifier performance is poor in such cases because the classifier was trained on welledited text ie without misspelled wordsinthe context of a diagnostic feedback and assess ment tool for writers a spell checker would first highlight the spelling errors and ask the writer tocorrect them before the system analyzed the prepo sitionspostprocessing filter after the me clas sifier has output a probability for each of the 34prepositions but before the system has made its fi nal decision a series of rulebased postprocessingfilters block what would otherwise be false posi tives that occur in specific contextsthe first filter prevents the classifier from marking as an error acase where the classifiers most probable preposi tion is an antonym of what the writer wrote such as withwithoutand fromtoin these cases resolution is dependent on the intent of the writerand thus is outside the scope of information cap 1 for an example of a common ungrammatical 
sentence from our corpus consider in consion for some reasonsmuseums particuraly known travel place get on many peo ple866 tured by the current feature setanother problem for the classifier involves differentiating between certain adjuncts and argumentsfor example in the sentence they described a part for a kid thesystems top choices were of and tothe benefac tive adjunct introduced by for is difficult for theclassifier to learn perhaps because it so freely occurs in many locations within a sentencea post processing filter prevents the system from marking as an error a prepositional phrase that begins with for and has an object headed by a human noun extraneous use filter to cover extraneous use errors we developed two rulebased filters 1 plural quantifier constructions to handle casessuch as some of peopleand 2 repeated prepo sitions where the writer accidentally repeated the same preposition two or more times such as canfind friends with withwe found that extrane ous use errors usually constituted up to 18 of all preposition errors and our extraneous use filters handle a quarter of that 18thresholding the final step for the preposi tion error detection system is a set of thresholds that allows the system to skip cases that are likely to result in false positivesone of these is wherethe topranked preposition and the writers prepo sition differ by less than a prespecified amountthis was also meant to avoid flagging cases where the systems preposition has a score only slightly higher than the writers preposition score such as my sister usually gets home around 300in these cases the systems and the writers prepositions both fit the context and it would be inappropriate to claimthe writers preposition was used incorrectlyanother system threshold requires that the probability of the writers preposition be lower than a pre specified value in order for it to be flagged as anerrorthe thresholds were set so as to strongly fa vor precision over recall due to the high number offalse positives that may arise if there is no thresh oldingthis is a tactic also used for determiner selection in and both thresholds were empirically set on a development corpus23 combination featuresme is an attractive choice of machine learning al gorithm for a problem as complex as preposition error detection in no small part because of theavailability of me implementations that can han dle many millions of training events and featureshowever one disadvantage of me is that it does not automatically model the interactions amongfeatures as some other approaches do such as sup port vector machines to overcome this we have experimented with aug menting our original feature set with combinationfeatureswhich represent richer contextual struc ture in the form of syntactic patternstable 1 illustrates the four com bination features used for the example context take our place in the linethe p denotes a preposition so npn denotes a syntactic context where the preposition is preceded and followed by a noun phrasewe use the preceding noun phrase and following head from the original feature set for the npn featurecolumn 3 shows one instantiation of combination featurescombowordfor the npn feature the corresponding comboword instantiation is place linesince placeis the pn and lineis thefhwe also experimented with using combinations of pos tags and wordtag com binations so for the example the combotag npn feature would be nnnn and the combowordtag npn feature would beplace nnline nn the intuition with the combotag features is that the 
comboword features have the potentialto be sparse and these capture more general pat terns of usagewe also experimented with other features such as augmenting the model with verbpreposition preferences derived from comlex and querying the google terabyte ngramcorpus with the same patterns used in the combina tion featuresthe comlexbased features did not improve the model and though the google ngram corpus represents much more information than our7 million event model its inclusion improved per formance only marginally24 evaluationin our initial evaluation of the system we col lected a corpus of 8269 preposition contextserrorannotated by two raters using the scheme de scribed in section 3 to serve as a gold standardin this study we focus on two of the three types of preposition errors using the incorrect preposition and using an extraneous prepositionwe compared 867 class components comboword features combotag features pn fh line nn npn pnfh placeline nnnn vpn pvpn takeline vbnn vnpn pvpnfh takeplaceline vbnnnn table 1 feature examples for take our place in the line different models the baseline model of 25 features and baseline with combination features addedtheprecision and recall for the top performing models are shown in table 2these results do not in clude the extraneous use filter this filter generally increased precision by as much as 2 and recall by as much as 5evaluation metrics in the tasks of determiner and preposition selection in wellformed nativetexts and the evaluation metric most commonly used is accuracyin these tasks one compares the systems output on a determiner or preposition to the gold standard of what the writeroriginally wrotehowever in the tasks of deter miner and preposition error detection precision and recall are better metrics to use because oneis only concerned with a subset of the preposi tions those used incorrectly as opposed to all of them in the selection taskin essence accuracy has the problem of distorting system performanceresults the baseline system performed at 798 precision and 117 recallnext we tested the differ ent combination models word tag wordtag andall threesurprisingly three of the four combina tion models tag wordtag all did not improve performance of the system when added to the model but using just the comboword features improved recall by 1we use the comboword model to test our sampling approach in section 4as a final test we tuned our training corpus of 7 million events by removing any contexts with unknown or misspelled words and then retrained the modelthis purgeresulted in a removal of nearly 200000 training eventswith this new training corpus the combotag feature showed the biggest improvement over the baseline withan improvement in both precision and re call to 821 and 141 respectively it should be noted that with the inclusion of the extraneous use filter performance of the combotag rose to 84 precision and close to 19 recallmodel precision recall baseline 798 117 comboword 798 128 combotag 821 141table 2 best system results on incorrect selec tion task 25 related workcurrently there are only a handful of approachesthat tackle the problem of preposition error detec tion in english learner textsused a language model and decision trees to de tect preposition and determiner errors in the clec corpus of learner essaystheir system performs at 79 precision however recall figures are not presented thus making comparison difficultin addition their eval uation differs from ours in that they also include errors of omission and their work 
focuses on the top twelve most frequent prepositions while ours has greater coverage with the top 34 and used an me ap proach to classify different grammatical errors in transcripts of japanese interviewsthey do not present performance of prepositions specifically but overall performance for the 13 error types they target reached 25 precision and 7 recall created a rule based approach to detecting preposition errors in swedish language learners and their system performed at 25 ac curacy used a language model to tackle the novel problem of prepositionselection in a dialogue corpuswhile their perfor mance results are quite high 88 precision and 868 78 recall it should be noted that their evaluation was on a small corpus with a highly constraineddomain and focused on a limited number of prepo sitions thus making direct comparison with our approach difficultalthough our recall figures may seem low es pecially when compared to other nlp tasks such as parsing and anaphora resolution this is really a reflection of how difficult the task is for example in the problem of preposition selection in native text a baseline using the most frequent preposition results in precision and recall of 26in addi tion the cloze tests presented earlier indicate thateven in wellformed text agreement between na tive speakers on preposition selection is only 75in texts written by nonnative speakers rater dis agreement increases as will be shown in the next sectionwhile developing an error detection system forprepositions is certainly challenging given the re sults from our work and others evaluation also poses a major challengeto date single human annotation has typically been the gold standard for grammatical error detection such as in the work of 2 another method for evaluation is verification where a human rater checks over a systems outputthe drawbacks of this approach are 1every time the system is changed a rater is needed to recheck the output and 2it is very hard to estimate recallwhat these two evaluation methods have in common is that they sidestep the issue of annotator reliabilityin this section we show how relying on only onerater can be problematic for difficult error detec tion tasks and in section 4 we propose a methodfor efficiently evaluat ing a system that does not require the amount ofeffort needed in the standard approach to annota tion31 annotationto create a goldstandard corpus of error annotations for system evaluation and also to deter mine whether multiple raters are better than one 2 had a small evaluation on 40 preposition contexts and it is unclear whether mul tiple annotators were usedwe trained two native english speakers with prior nlp annotation experience to annotate prepositionerrors in esl textthe training was very extensive both raters were trained on 2000 preposition contexts and the annotation manual was it eratively refined as necessaryto summarize the procedure the two raters were shown sentences randomly selected from student essays with each preposition highlighted in the sentencethey marked each context for gram mar and spelling errors and then judged whether the writer used an incorrect preposition a correct preposition or an extraneous prepositionfinally the raters suggested prepositions that would best fit the context even if there were no error 32 reliabilityeach rater judged approximately 18000 prepo sitions contexts with 18 sets of 100 contextsjudged by both raters for purposes of comput ing kappadespite the rigorous training regimen kappa ranged from 0411 to 0786 with 
an overall combined value of 0630of the prepositions that rater 1 judged to be errors rater 2 judged 302 to be acceptableconversely of the prepositions rater 2 judged to be erroneous rater 1 found 381 acceptablethe kappa of 0630 shows the difficulty of this task and also shows how two highly trained raters can produce very different judgmentsdetails on our annotation and human judgment experiments can be found in variability in ratersjudgments translates to variability of system evaluationfor instance in our previous work wefound that when our systems output was com pared to judgments of two different raters therewas a 10 difference in precision and a 5 differ ence in recallthese differences are problematicwhen evaluating a system as they highlight the potential to substantially over or underestimate per formancethe results from the previous section motivate theneed for a more refined evaluationthey sug gest that for certain error annotation tasks such as preposition usage it may not be appropriate to use only one rater and that if one uses multiple raters 869for error annotation there is the possibility of cre ating an adjudicated set or at least calculating the variability of the systems performancehoweverannotation with multiple raters has its own disadvantages as it is much more expensive and time consumingeven using one rater to produce a sizeable evaluation corpus of preposition errors is extremely costlyfor example if we assume that500 prepositions can be annotated in 4 hours us ing our annotation scheme and that the base rate for preposition errors is 10 then it would take atleast 80 hours for a rater to find and mark 1000 er rorsin this section we propose a more efficient annotation approach to circumvent this problem41 methodologyfigure 1 sampling approach examplethe sampling procedure outlined here is inspired by the one described in for the task of evaluating the usage of nouns verbs and adjectivesthe central idea is to skew the annotation corpus so that it contains a greater proportion of errorshere are the steps in the procedure 1process a test corpus of sentences so that eachpreposition in the corpus is labeled ok or errorby the system2divide the processed corpus into two subcorpora one consisting of the systems ok prepositions and the other of the systems errorprepositionsfor the hypotheticaldata in figure 1 the ok subcorpus con tains 90 of the prepositions and the errorsubcorpus contains the remaining 103randomly sample cases from each subcorpus and combine the samples into an an notation set that is given to a blindhuman raterwe generally use a higher sampling rate for the errorsubcorpus because we want to enrichthe annotation set with a larger proportion of errors than is found in the test corpus as a wholein figure 1 75 of the errorsubcorpus is sampled while only 16 of the ok subcorpus is sampled4for each case that the human rater judges tobe an error check to see which subcorpus itcame fromif it came from the ok sub corpus then the case is a miss if it came from the errorsubcorpus then the case is a hit if the rater judges a case to be a correct usage and it came from the errorsubcorpus then it is a false positive the sample from the errorsubcorpusforthe hypothetical data in figure 1 these val ues are 600750 080 for hits and 150750 020 for fpscalculate the proportion ofmisses in the sample from the ok sub corpusfor the hypothetical data this is 4501500 030 for misses6the values computed in step 5 are conditionalproportions based on the subcorporato calculate the 
overall proportions in the test cor pus it is necessary to multiply each value by the relative size of its subcorpusthis is shown in table 3 where the proportion ofhits in the errorsubcorpus is multiplied by the relative size of the errorsub corpus to produce an overall hit rate overall rates for fps and misses are calculated in a similar manner7using the values from step 6 calculate precision and recall these are shown in the last two rows of table 3estimated overall rates sample proportion subcorpus proportion hits 080 010 008 fp 020 010 002 misses 030 090 027 precision 008 080 recall 008 023 table 3 sampling calculations 870 this method is similar in spirit to active learning and which has been used to iteratively build up an annotated corpus but it differs fromactive learning applications in that there are no it erative loops between the system and the human annotatorin addition while our methodology is used for evaluating a system active learning is commonly used for training a system42 applicationnext we tested whether our proposed sampling approach provides good estimates of a sys tems performancefor this task we used the comboword model to separate a large corpusof student essays into the errorand ok sub corporathe original corpus totaled over 22000 prepositions which would normally take several weeks for two raters to double annotate and thenadjudicateafter the two subcorpora were propor tionally sampled this resulted in an annotation set of 752 preposition contexts which is substantially more manageable than the full corpuswe had both raters work together to make judgments for each prepositionit is important to note that while these are notthe exact same essays used in the previous evalua tion of 8269 preposition contexts they come from the same pool of student essays and were on the same topicsgiven these strong similarities we feel that one can compare scores between the two approachesthe precision and recall scores forboth approaches are shown in table 4 and are ex tremely similar thus suggesting that the samplingapproach can be used as an alternative to exhaus tive annotationprecision recall standard approach 80 12 sampling approach 79 14 table 4 sampling results it is important with the sampling approach to use appropriate sample sizes when drawing from the subcorpora because the accuracy of the estimatesof hits and misses will depend upon the propor tion of errors in each subcorpus as well as on the sample sizesthe ok subcorpus is expected to have even fewer errors than the overall base rate so it is especially important to have a relativelylarge sample from this subcorpusthe compari son study described above used an ok subcorpussample that was twice as large as the error subcorpus sample in short the sampling approach is intended to alleviate the burden on annotators when faced with the task of having to rate several thousand errors of a particular type in order to produce a sizeable error corpuson the other hand one advantage that exhaustive annotation has over the sampling method is that it makes possible the comparison of multiple systemswith the sampling approach one would have to resample and annotate for each system thus multiplying the work neededone aspect of automatic error detection that usu ally is underreported is an analysis of the errors that learners typically makethe obvious benefit of this analysis is that it can focus development of the systemfrom our annotated set of preposition errors we found that the most common prepositions that learners used 
incorrectly were in to and of the top ten prepositions accounted for 938 of all preposition errors in our learner corpusnext we ranked the common preposition con fusions the common mistakes made for each prepositionthe top ten most common confusions are listed in table 5 where null refers to cases where no preposition is licensed the most common of fenses were actually extraneous errors using to and of when no preposition was li censed accounted for 168 of all errorsit is interesting to note that the most common usage errors by learners overwhelmingly involved the ten most frequently occurring prepositions in native textthis suggests that our effort to handle the 34 most frequently occurring prepositions maybe overextended and that a system that is specifically trained and refined on the top ten preposi tions may provide better diagnostic feedback to a learnerthis paper has two contributions to the field of error detection in nonnative writingfirst we discussed a system that detects preposition errors with high precison and is competitive 871 writers prepraters prepfrequency to null 95 of null 73 in at 71 to for 46 in null 32 of for 31 in on 31 of in 29 at in 27 for to 25 table 5 common preposition confusions with other leading methodswe used an me approach augmented with combination features and a series of thresholdsthis system is currently incorporated in the criterion writing evaluationservicesecond we showed that the standard ap proach to evaluating nlp error detection systems can greatly skew system results when the annotation is done by only one raterhowever one reason why a single rater is commonly used is that building a corpus of learner errors can be extremely costly and time consumingto address this efficiency issue we presented a sampling approach that produces results comparable to exhaustive annotationthis makes using multiple raters possible since less time is required to assess the systems performancewhile the work presented here has focused on prepositions the arguments against using only one rater and for using a sampling approach generalize to other error types such as determiners and collocationsacknowledgements we would first like to thank our two annotators sarah ohls and waverly vanwinkle for their hours of hard workwe wouldalso like to acknowledge the three anonymous reviewers and derrick higgins for their helpful com ments and feedback
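The sampling calculation walked through above (steps 5 to 7 and the hypothetical figures of Table 3) reduces to a few lines of arithmetic. The sketch below reproduces the worked example, a 10% "error" sub-corpus and a 90% "OK" sub-corpus with sample proportions of 0.80 hits, 0.20 false positives and 0.30 misses, and recovers the 80% precision and roughly 23% recall quoted there.

def sampled_precision_recall(hit_rate_err, fp_rate_err, miss_rate_ok, err_share, ok_share):
    # hit_rate_err, fp_rate_err: proportions of hits / false positives in the rated sample
    #                            drawn from the system's "error" sub-corpus
    # miss_rate_ok:              proportion of misses in the rated sample drawn from the
    #                            system's "OK" sub-corpus
    # err_share, ok_share:       relative sizes of the two sub-corpora in the test corpus
    hits = hit_rate_err * err_share      # overall hit rate
    fps = fp_rate_err * err_share        # overall false-positive rate
    misses = miss_rate_ok * ok_share     # overall miss rate
    precision = hits / (hits + fps)
    recall = hits / (hits + misses)
    return precision, recall

print(sampled_precision_recall(0.80, 0.20, 0.30, err_share=0.10, ok_share=0.90))
# -> (0.8, 0.228...), i.e. the 80% precision and about 23% recall of the worked example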
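The thresholding step described for the maximum-entropy classifier, which skips a case when the top-ranked and the writer's preposition fit the context about equally well and flags an error only when the writer's preposition is sufficiently improbable, amounts to a simple decision function. The threshold values below are hypothetical placeholders; the paper tunes its thresholds on development data so as to favour precision over recall.

def flag_preposition_error(prob_dist, writers_prep, min_margin=0.20, max_writer_prob=0.10):
    # prob_dist:    dict preposition -> probability from the maximum-entropy classifier
    # writers_prep: the preposition the writer actually used
    # min_margin, max_writer_prob: illustrative thresholds, not the paper's tuned values
    # returns the suggested preposition if the usage is flagged as an error, else None
    top_prep, top_prob = max(prob_dist.items(), key=lambda kv: kv[1])
    writer_prob = prob_dist.get(writers_prep, 0.0)

    if top_prep == writers_prep:
        return None                            # classifier agrees with the writer
    if top_prob - writer_prob < min_margin:
        return None                            # both prepositions fit the context
    if writer_prob >= max_writer_prob:
        return None                            # the writer's choice is still plausible
    return top_prep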
C08-1109
The Ups and Downs of Preposition Error Detection in ESL Writing. In this paper we describe a methodology for detecting preposition errors in the writing of non-native English speakers. Our system performs at 84% precision and close to 19% recall on a large set of student essays. In addition, we address the problem of annotation and evaluation in this domain by showing how current approaches of using only one rater can skew system evaluation. We present a sampling approach to circumvent some of the issues that complicate evaluation of error detection systems. We use the TOEFL data. We show that agreement between two native speakers on a cloze test targeting prepositions is about 76%, which demonstrates that there are many contexts that license multiple prepositions. Our model is trained with lexical features.
a uniform approach to analogies synonyms antonyms and associations recognizing analogies synonyms anto nyms and associations appear to be fourdistinct tasks requiring distinct nlp al gorithms in the past the four tasks have been treated independently using a widevariety of algorithms these four seman tic classes however are a tiny sample of the full range of semantic phenomena andwe cannot afford to create ad hoc algo rithms for each semantic phenomenon weneed to seek a unified approach we propose to subsume a broad range of phenom ena under analogies to limit the scope of this paper we restrict our attention to the subsumption of synonyms antonyms and associations we introduce a supervised corpusbased machine learning algorithm for classifying analogous word pairs and we show that it can solve multiplechoice sat analogy questions toefl synonymquestions esl synonymantonym questions and similarassociatedboth ques tions from cognitive psychology a pair of words is analogous to another pair when the semantic re lations between the words in the first pair are highly similar to the relations in the second pairtwo words are synonymousin a context when they can be interchanged they are are antony mous when they have opposite meanings licensed to the coling 2008 organizing committee for pub lication in coling 2008 and for republishing in any form or mediumand white and they are associated when they tend to cooccur on the surface it appears that these are four distinct semantic classes requiring distinct nlp al gorithms but we propose a uniform approach to all fourwe subsume synonyms antonyms and associations under analogiesin essence we say that x and y are antonyms when the pair xy is analogous to the pair blackwhite x and y are synonyms when they are analogous to the pair leviedimposed and x and y are associated when they are analogous to the pair doctorhospitalthere is past work on recognizing analogies synonyms antonyms and asso ciations but each of these four tasks has been examined separately in isolation from the othersas far as we know the algorithm proposed here is the first attempt to deal with all four tasks using a uniform approachwe believe that it isimportant to seek nlp algorithms that can han dle a broad range of semantic phenomena becausedeveloping a specialized algorithm for each phe nomenon is a very inefficient research strategyit might seem that a lexicon such as word net contains all the information we need to handle these four taskshowever weprefer to take a corpusbased approach to seman ticsveale used wordnet to answer 374 multiplechoice sat analogy questions achievingan accuracy of 43 but the best corpusbased ap proach attains an accuracy of 56 another reason to prefer a corpusbased approachto a lexiconbased approach is that the former re quires less human labour and thus it is easier to extend to other languagesin section 2 we describe our algorithm for rec ognizing analogieswe use a standard supervised 905 machine learning approach with feature vectorsbased on the frequencies of patterns in a large cor puswe use a support vector machine to learn how to classify the feature vectors section 3 presents four sets of experimentswe apply our algorithm for recognizing analogies to multiplechoice analogy questions from the sat college entrance test multiplechoice synonym questions from the toefl esl practice questions for distinguishing syn onyms and antonyms and a set of word pairs thatare labeled similar associated and both devel oped for experiments in cognitive psychologywe 
discuss the results of the experiments in section 4the accuracy of the algorithm is competitive with other systems but the strength of the al gorithm is that it is able to handle all four tasks with no tuning of the learning parameters to the particular taskit performs well although it iscompeting against specialized algorithms devel oped for single tasksrelated work is examined in section 5 and limitations and future work are considered in sec tion 6we conclude in section 7an analogy abcd asserts that a is to b as c is to d for example trafficstreetwaterriverbed asserts that traffic is to street as water is to riverbed that is the semantic relations between traffic and street are highly similar to the semantic relations between water and riverbedwe may view the task of recognizing word analogies as a problem of classifying word pairs word pair class label carpenterwood artisanmaterial masonstone artisanmaterial potterclay artisanmaterial glassblowerglass artisanmaterial trafficstreet entitycarrier waterriverbed entitycarrier packetsnetwork entitycarrier gossipgrapevine entitycarriertable 1 examples of how the task of recogniz ing word analogies may be viewed as a problem of classifying word pairswe approach this as a standard classificationproblem for supervised machine learningthe al gorithm takes as input a training set of word pairs with class labels and a testing set of word pairs without labelseach word pair is represented as a vector in a feature space and a supervised learning algorithm is used to classify the feature vectorsthe elements in the feature vectors are based on the frequencies of automatically defined patterns in a large corpusthe output of the algorithm is anassignment of labels to the word pairs in the test ing setfor some of the experiments we selecta unique label for each word pair for other ex periments we assign probabilities to each possible label for each word pairfor a given word pair such as masonstone the first step is to generate morphological variationssuch as masonsstonesin the following experi ments we use morpha and morphg for mor phological processing 1 the second step is to search in a large corpus for all phrases of the following form 0 to 1 words x 0 to 3 words y 0 to 1 wordsin this template xy consists of morphologicalvariations of the given word pair in either order for example masonstone stonemason masonsstones and so ona typical phrase for ma sonstone would be the mason cut the stone withwe then normalize all of the phrases that are found by using morpha to remove suffixesthe template we use here is similar to turney but we have added extra context wordsbefore the x and after the y our morpholog ical processing also differs from turney in the following experiments we search in a cor pus of 5 1010 words consisting of web pages gathered by a web crawler2 to retrieve phrases from the corpus weuse wumpus an effi cient search engine for passage retrieval from large corpora3 the next step is to generate patterns from allof the phrases that were found for all of the in put word pairs to generate patterns from a phrase we re place the given word pairs with variables x and y and we replace the remaining words with a wild card symbol or leave them as they are1httpwwwinformaticssusxacukresearchgroupsnlp carrollmorphhtml2the corpus was collected by charles clarke university of waterloowe can provide copies on request3httpwwwwumpussearchorg906 for example the phrase the mason cut the stone withyields the patterns the x cut y with x the y and so onif a phrase 
contains n words then it yields 2 patternseach pattern corresponds to a feature in the feature vectors that we will generatesince a typical input set of word pairs yields millions of pat terns we need to use feature selection to reduce the number of patterns to a manageable quantityfor each pattern we count the number of input word pairs that generated the patternfor example x cut y is generated by both masonstone and carpenterwoodwe then sort the patterns in descending order of the number of word pairs that generated themif there are n input word pairs then we select the topkn patterns and drop the remainderin the fol lowing experiments k is set to 20the algorithm is not sensitive to the precise value of kthe reasoning behind the feature selection al gorithm is that shared patterns make more useful features than rare patternsthe number of features depends on the number of word pairs because if we have more feature vectors then we need more features to distinguish themturney also selects patterns based on the numberof pairs that generate them but the number of se lected patterns is a constant independent of the number of input word pairsthe next step is to generate feature vectors one vector for each input word paireach of the n feature vectors has kn elements one element for each selected patternthe value of an element in a vector is given by the logarithm of the frequency in the corpus of the corresponding pattern for the given word pairfor example suppose the given pair is masonstone and the pattern is x cut y we look at the normalized phrases that we collected for masonstone and we count how many match this patternif f phrases match thepattern then the value of this element in the fea ture vector is log is undefinedeach feature vector is then normal ized to unit lengththe normalization ensures that features in vectors for highfrequency word pairs are comparable to features in vectors for lowfrequency word pairs now that we have a feature vector for each in put word pair we can apply a standard supervised learning algorithmin the following experiments we use a sequential minimal optimization support vector machine with a radial basis function kernel as implemented in weka 4 the algorithm generates probability estimates for each class by fitting logistic regression models to the outputs of the svmwe disable the normalizationoption in weka since the vectors are already nor malized to unit lengthwe chose the smo rbf algorithm because it is fast robust and it easily handles large numbers of featuresfor convenience we will refer to the above algo rithm as pairclassin the following experiments pairclass is applied to each of the four problemswith no adjustments or tuning to the specific prob lemssome work is required to fit each probleminto the general framework of pairclass but the core al gorithm is the same in each casethis section presents four sets of experiments with analogies synonyms antonyms and associationswe explain how each task is treated as a problem of classifying analogous word pairs we give the experimental results and we discuss past work on each of the four tasks31 sat analogiesin this section we apply pairclass to the taskof recognizing analogiesto evaluate the performance we use a set of 374 multiplechoice ques tions from the sat college entrance examtable 2 shows a typical questionthe target pair is called the stemthe task is to select the choice pair that is most analogous to the stem pairstem masonstone choices teacherchalk carpenterwood soldiergun photographcamera 
bookword solution carpenterwood table 2 an example of a question from the 374 sat analogy questionsthe problem of recognizing word analogies wasfirst attempted with a system called argus using a small handbuilt semantic network with a spreading activation algorithmturney et al used a combination of 13 independent modulesveale used a spread ing activation algorithm with wordnet turney used a corpusbased algorithmwe may view table 2 as a binary classification problem in which masonstone and carpen terwood are positive examples and the remaining word pairs are negative examplesthe difficulty is that the labels of the choice pairs must be hidden from the learning algorithmthat is the training set consists of one positive example and the testing set consists of five unlabeled exam ples to make this task more tractable we randomly choose a stem pair from one of the 373 other sat analogy questions andwe assume that this new stem pair is a negative ex ample as shown in table 3word pair train or test class label masonstone train positive tutorpupil train negative teacherchalk test hidden carpenterwood test hidden soldiergun test hidden photographcamera test hidden bookword test hidden table 3 how to fit a sat analogy question into the framework of supervised pair classificationto answer the sat question we use pairclass to estimate the probability that each testing example is positive and we guess the testing example with the highest probabilitylearning from a trainingset with only one positive example and one nega tive example is difficult since the learned model can be highly unstableto increase the stability we repeat the learning process 10 times using adifferent randomly chosen negative training exam ple each timefor each testing word pair the 10 probability estimates are averaged togetherthis is a form of bagging pairclass attains an accuracy of 521forcomparison the acl wiki lists 12 previously published results with the 374 sat analogy ques tions5 only 2 of the 12 algorithms have higher accuracythe best previous result is an accuracy of 561 random guessing would 5for more information see sat analogy questions at httpaclweborgaclwikiyield an accuracy of 20the average senior high school student achieves 57 correct 32 toefl synonymsnow we apply pairclass to the task of recogniz ing synonyms using a set of 80 multiplechoicesynonym questions from the toefl a sample question is shown in table 4the task is to select the choice word that is most similar in meaning to the stem wordstem levied choices imposed believed requested correlated solution imposed table 4 an example of a question from the 80 toefl questionssynonymy can be viewed as a high degree of semantic similaritythe most common way tomeasure semantic similarity is to measure the dis tance between words in wordnet corpusbased measures of word similarityare also common we may view table 4 as a binary classifica tion problem in which the pair leviedimposed is a positive example of the class synonymous and the other possible pairings are negative examples as shown in table 5word pair class label leviedimposed positive leviedbelieved negative leviedrequested negative leviedcorrelated negative table 5 how to fit a toefl question into the framework of supervised pair classificationthe 80 toefl questions yield 320 word pairs 80 labeled positive and 240 labelednegativewe apply pairclass to the word pairs us ing tenfold crossvalidationin each random fold 90 of the pairs are used for training and 10 are used for testingfor each fold the model that 
is learned from the training set is used to assign probabilities to the pairs in the testing setwith 908ten separate folds the ten nonoverlapping test ing sets cover the whole datasetour guess foreach toefl question is the choice with the high est probability of being positive when paired with the corresponding stempairclass attains an accuracy of 762forcomparison the acl wiki lists 15 previously published results with the 80 toefl synonym questions6 of the 15 algorithms 8 have higher accu racy and 7 have lowerthe best previous resultis an accuracy of 975 ob tained using a hybrid of four different algorithmsrandom guessing would yield an accuracy of 25the average foreign applicant to a us university achieves 645 correct 33 synonyms and antonymsthe task of classifying word pairs as either syn onyms or antonyms readily fits into the framework of supervised classification of word pairstable 6shows some examples from a set of 136 esl practice questions that we collected from various esl websitesword pair class label gallingirksome synonyms yieldbend synonyms naivecallow synonyms advisesuggest synonyms dissimilarityresemblance antonyms commenddenounce antonyms exposecamouflage antonyms unveilveil antonyms table 6 examples of synonyms and antonyms from 136 esl practice questionslin et al distinguish synonyms from antonyms using two patterns from x to y and either x or y when x and y are antonyms they occasionally appear in a large corpus in oneof these two patterns but it is very rare for syn onyms to appear in these patternsour approach is similar to lin et al but we do not rely on handcoded patterns instead pairclass patterns are generated automaticallyusing tenfold crossvalidation pairclass at tains an accuracy of 750always guessing the majority class would result in an accuracy of 654the average human score is unknown and 6for more information see toefl synonym questions at httpaclweborgaclwikithere are no previous results for comparison34 similar associated and botha common criticism of corpusbased measures ofword similarity is that they are merely detecting associations rather than actual semantic similarity to address this criti cism lund et al evaluated their algorithm for measuring word similarity with word pairs that were labeled similar associated or boththeselabeled pairs were originally created for cogni tive psychology experiments with human subjectstable 7 shows some ex amples from this collection of 144 word pairs word pair class label tablebed similar musicart similar hairfur similar housecabin similar cradlebaby associated mugbeer associated camelhump associated cheesemouse associated alebeer both uncleaunt both peppersalt both frownsmile both table 7 examples of word pairs labeled similar associated or bothlund et al did not measure the accuracy of their algorithm on this threeclass classification probleminstead following standard practice incognitive psychology they showed that their al gorithms similarity scores for the 144 word pairs were correlated with the response times of human subjects in priming testsin a typical priming test a human subject reads a priming word and is then asked to complete a partial word the time required to perform thetask is taken to indicate the strength of the cogni tive link between the two words using tenfold crossvalidation pairclass at tains an accuracy of 771 on the 144 word pairssince the three classes are of equal size guessing the majority class and random guessing both yield an accuracy of 333the average human score is unknown and there 
are no previous results for comparison909the four experiments are summarized in tables 8 and 9for the first two experiments where there are previous results pairclass is not the best butit performs competitivelyfor the second two ex periments pairclass performs significantly abovethe baselineshowever the strength of this ap proach is not its performance on any one task but the range of tasks it can handleas far as we know this is the first time a standard supervised learning algorithm has been ap plied to any of these four problemsthe advantageof being able to cast these problems in the frame work of standard supervised learning problems isthat we can now exploit the huge literature on su pervised learningpast work on these problems has required implicitly coding our knowledge ofthe nature of the task into the structure of the algo rithmfor example the structure of the algorithmfor latent semantic analysis implicitly con tains a theory of synonymy the problem with this approach is that it can be very difficult to work out how to modify the algorithm if it does not behave the way we wanton the other hand with a supervised learning algo rithm we can put our knowledge into the labeling of the feature vectors instead of putting it directly into the algorithmthis makes it easier to guide the system to the desired behaviourwith our approach to the sat analogy ques tions we are blurring the line between supervised and unsupervised learning since the training set for a given sat question consists of a single realpositive example in effect a single example becomes a sui generis it con stitutes a class of its ownit may be possible to apply the machinery of supervised learning toother problems that apparently call for unsupervised learning by using this sui generis deviceone of the first papers using supervised ma chine learning to classify word pairs was rosarioand hearsts paper on classifying nounmodifier pairs in the medical domainfor ex ample the nounmodifier expression brain biopsy was classified as procedurerosario and hearst constructed feature vectors for each nounmodifier pair using mesh and umls as lexical resourcesthey then trained a neu ral network to distinguish 13 classes of semantic relations such as because location measure andinstrumentnastase and szpakowicz explored a similar approach to classifying general domain nounmodifier pairs using wordnet and rogets thesaurus as lexical resourcesturney and littman used corpusbased features for classifying nounmodifier pairstheir features were based on 128 handcoded patternsthey used a nearestneighbour learning algorithm to classify generaldomain nounmodifier pairsinto 30 different classes of semantic relationstur ney later addressed the same problem using 8000 automatically generated patternsone of the tasks in semeval 2007 was the clas sification of semantic relations between nominals the problem is to classifysemantic relations between nouns and noun com pounds in the context of a sentencethe task attracted 14 teams who created 15 systems all of which used supervised machine learning with features that were lexiconbased corpusbased or bothpairclass is most similar to the algorithm of tur ney but it differs in the following wayspairclass does not use a lexicon to find syn onyms for the input word pairsone of our goals in this paper is to show that a pure corpusbased algorithm can handle synonymswithout a lexiconthis considerably simpli fies the algorithmpairclass uses a support vector machine instead of a nearest neighbour learning 
algorithmpairclass does not use the singular value decomposition to smooth the feature vectorsit has been our experience that svd is not necessary with svmspairclass generates probability estimateswhereas turney uses a cosine mea sure of similarityprobability estimates canbe readily used in further downstream pro cessing but cosines are less usefulthe automatically generated patterns in pairclass are slightly more general than the pat terns of turney the morphological processing in pairclass is more sophisticated than in turney 910 experiment number of vectors number of features number of classes sat analogies 2244 44880 374 toefl synonyms 320 6400 2 synonyms and antonyms 136 2720 2 similar associated and both 144 2880 3 table 8 summary of the four taskssee section 3 for explanationsexperiment accuracy best previous human baseline rank sat analogies 521 561 570 200 2 higher out of 12 toefl synonyms 762 975 645 250 8 higher out of 15 synonyms and antonyms 750 none unknown 654 none similar associated and both 771 none unknown 333 none table 9 summary of experimental resultssee section 3 for explanationshowever we believe that the main contribution of this paper is not pairclass itself but the extension of supervised word pair classification beyond theclassification of nounmodifier pairs and semantic relations between nominals to analogies syn onyms antonyms and associationsas far as we know this has not been done beforethe main limitation of pairclass is the need for a large corpusphrases that contain a pair of wordstend to be more rare than phrases that contain either of the members of the pair thus a large cor pus is needed to ensure that sufficient numbers of phrases are found for each input word pairthe size of the corpus has a cost in terms of disk spaceand processing timein the future as hardware i am proves this will become less of an issue but there may be ways to improve the algorithm so that a smaller corpus is sufficientanother area for future work is to apply pair class to more taskswordnet includes more thana dozen semantic relations pairclass should be applicable to allof these relationsother potential applications in clude any task that involves semantic relations such as word sense disambiguation informationretrieval information extraction and metaphor in terpretationin this paper we have described a uniform approach to analogies synonyms antonyms and as sociations in which all of these phenomena are subsumed by analogieswe view the problem ofrecognizing analogies as the classification of se mantic relations between wordswe believe that most of our lexical knowledge is relational not attributionalthat is meaning is largely about relations among words rather thanproperties of individual words considered in isolationfor example consider the knowledge en coded in wordnet much of the knowledge in wordnet is embedded in the graph structure that connects wordsanalogies of the form abcd are calledproportional analogiesthese types of lowerlevel analogies may be contrasted with higher level analogies such as the analogy between the solar system and rutherfords model of the atom which are sometimes called conceptual analogieswe believe that the difference between these two types is largely a matter of complexitya higherlevel analogy is composed of many lowerlevel analogiesprogresswith algorithms for processing lowerlevel analo gies will eventually contribute to algorithms for higherlevel analogiesthe idea of subsuming a broad range of semantic phenomena under analogies has been sug 
gested by many researchers. Minsky wrote: "How do we ever understand anything? Almost always, I think, by using one or another kind of analogy." Hofstadter claimed all meaning comes from analogies. In NLP, analogical algorithms have been applied to machine translation, morphology, and semantic relations. Analogy provides a framework that has the potential to unify the field of semantics. This paper is a small step towards that goal.

Acknowledgements: Thanks to Joel Martin and the anonymous reviewers of COLING 2008 for their helpful comments.
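To make the pattern-based feature construction described above more concrete, here is a small Python sketch. It is our own illustration rather than the released system: the toy phrases stand in for the large web corpus, the pattern template and the log-frequency, unit-normalized feature values follow the description above, and we assume log(f + 1) so that unseen patterns map to zero; training the SMO SVM with an RBF kernel on the resulting vectors is omitted.

```python
# Sketch of PairClass-style pattern features for a word pair such as mason:stone.
import math
from itertools import product
from collections import Counter

def patterns_from_phrase(phrase, x, y):
    """Replace the word pair with X/Y and every remaining word with either a
    wildcard '*' or itself, yielding 2^n patterns for n context words."""
    slots = []
    for tok in phrase.lower().split():
        if tok == x:
            slots.append(["X"])
        elif tok == y:
            slots.append(["Y"])
        else:
            slots.append(["*", tok])          # wildcard or literal
    return {" ".join(choice) for choice in product(*slots)}

def feature_vector(pair, phrases, selected_patterns):
    """log(f + 1) of each selected pattern's frequency for this pair,
    normalized to unit length."""
    counts = Counter()
    for ph in phrases:
        for pat in patterns_from_phrase(ph, *pair):
            if pat in selected_patterns:
                counts[pat] += 1
    vec = [math.log(counts[p] + 1) for p in selected_patterns]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Toy illustration with two invented phrases for mason:stone.
phrases = ["the mason cut the stone with", "a mason carved the stone"]
pats = sorted(patterns_from_phrase(phrases[0], "mason", "stone"))[:10]
print(feature_vector(("mason", "stone"), phrases, pats))
```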
C08-1114
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations. Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar/associated/both questions from cognitive psychology. We propose a simpler SVM-based algorithm for analogical classification called PairClass. We argue that many NLP tasks can be formulated in terms of analogical reasoning, and we apply our PairClass algorithm to a number of problems, including SAT verbal analogy tests, synonym-antonym classification, and distinction between semantically similar and semantically associated words. We advocate the need for a uniform approach to corpus-based semantic tasks.
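As a usage illustration of the pair-classification framing summarized above, the following sketch (our own, with hypothetical helper names) shows how one multiple-choice TOEFL synonym question becomes labeled word pairs for training, and how an answer would be chosen from a model's probability estimates.

```python
# Framing a TOEFL synonym question as supervised pair classification: the stem
# paired with the correct choice is positive, the stem paired with each
# distractor is negative; at test time the choice whose pair receives the
# highest probability of being positive is selected.

def toefl_to_pairs(stem, choices, answer):
    """Return (word_pair, label) training examples for one question."""
    return [((stem, c), 1 if c == answer else 0) for c in choices]

def answer_question(stem, choices, prob_positive):
    """prob_positive is any function mapping a word pair to P(synonymous),
    e.g. the probability output of a trained SVM."""
    return max(choices, key=lambda c: prob_positive((stem, c)))

pairs = toefl_to_pairs("levied",
                       ["imposed", "believed", "requested", "correlated"],
                       "imposed")
print(pairs)
# With a trained model, answer_question("levied", [...], model_probability)
# would return the predicted synonym.
```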
top accuracy and fast dependency parsing is not a contradiction in addition to a high accuracy short parsing and training times are the most important properties of a parser however pars ing and training times are still relatively long to determine why we analyzed thetime usage of a dependency parser we il lustrate that the mapping of the features onto their weights in the support vectormachine is the major factor in time complexity to resolve this problem we implemented the passiveaggressive percep tron algorithm as a hash kernel the hash kernel substantially improves the parsing times and takes into account thefeatures of negative examples built dur ing the training this has lead to a higher accuracy we could further increase theparsing and training speed with a paral lel feature extraction and a parallel parsing algorithm we are convinced that the hashkernel and the parallelization can be ap plied successful to other nlp applicationsas well such as transition based depen dency parsers phrase structrue parsers and machine translation highly accurate dependency parsers have high de mands on resources and long parsing timesthe training of a parser frequently takes several days and the parsing of a sentence can take on averageup to a minutethe parsing time usage is impor tant for many applicationsfor instance dialog systems only have a few hundred milliseconds toanalyze a sentence and machine translation sys tems have to consider in that time some thousandtranslation alternatives for the translation of a sen tenceparsing and training times can be improved by methods that maintain the accuracy level or methods that trade accuracy against better parsing timessoftware developers and researchers areusually unwilling to reduce the quality of their ap plicationsconsequently we have to consider atfirst methods to improve a parser which do not in volve an accuracy loss such as faster algorithmsfaster implementation of algorithms parallel al gorithms that use several cpu cores and feature selection that eliminates the features that do not improve accuracywe employ as a basis for our parser the secondorder maximum spanning tree dependency pars ing algorithm of carreras this algorithmfrequently reaches very good or even the best la beled attachment scores and was one of the most used parsing algorithms in the shared task 2009 of the conference on natural language learning we combined thisparsing algorithm with the passiveaggressive perceptron algorithm a parser build out of these two algorithms provides a good baseline and starting point to improve upon the parsing and training timesthe rest of the paper is structured as followsin section 2 we describe related workin section 3 we analyze the time usage of the components of 89the parserin section 4 we introduce a new kernel that resolves some of the bottlenecks and i am proves the performancein section 5 we describethe parallel parsing algorithms which nearly allowed us to divide the parsing times by the number of coresin section 6 we determine the opti mal setting for the nonprojective approximationalgorithmin section 7 we conclude with a sum mary and an outline of further researchthe two main approaches to dependency parsing are transition based dependency parsing and maximum spanning tree based dependency parsing transition based parsers typically have a linear or quadratic complexity nivre introduced a transition based non projective parsing algorithm that has a worst casequadratic complexity and an expected linear pars ing timetitov and henderson 
combined a transition based parsing algorithm which used abeam search with a latent variable machine learn ing techniquemaximum spanning tree dependency based parsers decomposes a dependency structure into parts known as factorsthe factors of the first order maximum spanning tree parsing algorithm are edges consisting of the head the dependent and the edge labelthis algorithm has a quadratic complexitythe second order parsing algorithm of mcdonald and pereira uses aseparate algorithm for edge labelingthis algo rithm uses in addition to the first order factors theedges to those children which are closest to the de pendentthe second order algorithm of carreras uses in addition to mcdonald and pereira the child of the dependent occurring in the sentence between the head and the dependent and the an edge to a grandchildthe edge labeling is an integral part of the algorithm which requires an additional loop over the labelsthis algorithm therefore has a complexity of ojohansson and nugues reduced the needed number of loops over the edge labels by using only the edges that existed in the training corpus for a distinct head and child partofspeech tag combinationthe transition based parsers have a lower com plexitynevertheless the reported run times inthe last shared tasks were similar to the maxi mum spanning tree parsersfor a transition based parser gesmundo et al reported run times between 22 days for english and 47 days forczech for the joint training of syntactic and se mantic dependenciesthe parsing times were about one word per second which speeds upquickly with a smaller beamsize although the ac curacy of the parser degrades a bitjohansson and nugues reported training times of 24 days for english with the highorder parsing algorithm of carreras we built a baseline parser to measure the time usagethe baseline parser resembles the architec ture of mcdonald and pereira it consists of the second order parsing algorithm of carreras the nonprojective approximation algorithm the passive aggressive support vector machine and a feature extraction componentthe features are listed in table 4as in mcdonald et al the parser stores the features of each training example in a filein each epoch of the training the feature file is read and the weights are calculated and stored in an arraythis procedure is up to 5 times faster than computing the features each time anewbut the parser has to maintain large arrays for the weights of the sentence and the training filetherefore the parser needs 3gb of main memoryfor english and 100gb of disc space for the train ing filethe parsing time is approximately 20 faster since some of the values did not have to be recalculatedalgorithm 1 illustrates the training algorithm in pseudo codeis the set of training examples where an example is a pair of a sentence and the corresponding dependency structurewand v are weight vectorsthe first loop ex tracts features from the sentence xi and maps the features to numbersthe numbers are grouped into three vectors for the features of all possible edges hd possible edges in combination withsiblings hds and in combination with grandchil 90 tes tr tp ta rest total te parstrainsentfeatlas uas chinese 4582 748 95 3 846 3298 3262 84h 22277 876m 7688 8127 english 1509 168 125 20 15 202 1223 1258 385h 39279 847m 9014 9245 german 945 139 77 178 15 166 419 429 267h 36020 916m 8764 9003 spanish 3329 779 36 2 816 2518 2550 169h 14329 551m 8602 8954 table 1 tes is the elapsed time in milliseconds to extract and store the features tr to read the 
features and to calculate the weight arrays tp to predict the projective parse tree ta to apply the nonprojective approximation algorithm rest is the time to conduct the other parts such as the update function trainis the total training time per instance and te is the elapsed time to extract the featuresthe next columns illustrate the parsing time in milliseconds per sentence for the test set training time in hours the number of sentences in the training set the total number of features in million the labeled attachment score of the test set and the unlabeled attachment scorealgorithm 1 training baseline algorithm ii1 training dataw 0v 0 e i passiveaggresive update weight for i 1 to i tsse extractandstorefeatures tese for n 1 to e iteration over the training epochs for i 1 to i iteration over the training examples k i i e i k 2 passiveaggressive weight tsrk a readfeaturesandcalcarrays terk tspk yp predicteprojectiveparsetreetepk tsak ya nonprojectiveapprox teak update w v according to and w v average dren hdg where h d g and s are the indexes of the words included in xifinally the method stores the feature vectors on the hard discthe next two loops build the main part of the training algorithmthe outer loop iterates over the number of training epochs while the innerloop iterates over all training examplesthe on line training algorithm considers a single training example in each iterationthe first function in the loop reads the features and computes the weights a for the factors in the sentence xia is a set of weight arraysa w f hdw f hdsw f hdg the parsing algorithm uses the weight arrays to predict a projective dependency structure ypthe nonprojective approximation algorithm has as input the dependency structure and the weightarraysit rearranges the edges and tries to in crease the total score of the dependency structurethis algorithm builds a dependency structure yawhich might be nonprojectivethe training al gorithm updates w according to the difference between the predicted dependency structures ya and the reference structure yiit updates v as well whereby the algorithm additionally weights the updates by since the algorithm decreases in each round the algorithm adapts the weights more aggressively at the beginning after all iterations the algorithm com putes the average of v which reduces the effect of overfitting we have inserted into the training algorithm functions to measure the start times ts and the end times te for the procedures to compute andstore the features to read the features to predict the projective parse and to calculate the nonprojective approximationwe calculate the aver age elapsed time per instance as the average over all training examples and epochs tx ei k1 t e xktsxk ei we use the training set and the test set of theconll shared task 2009 for our experimentsta ble 1 shows the elapsed times in 11000 seconds of the selected languages for the procedure calls in the loops of algorithm 1we had to measure the times for the feature extractionin the parsing algorithm since in the training al gorithm the time can only be measured together with the time for storing the featuresthe table contains additional figures for the total training time and parsing scores1 the parsing algorithm itself only required to our surprise 125 ms for a english sentence 1we use a intel nehalem i7 cpu 333 ghzwith turbo mode on the clock speed was 346 ghz91 on average while the feature extraction needs 1223 ms to extract the features takes about100 times longer than to build a projective 
dependency treethe feature extraction is already implemented efficientlyit uses only numbers to rep resent features which it combines to a long integer number and then maps by a hash table2 to a 32bit integer numberthe parsing algorithm uses the integer number as an index to access the weights in the vectors w and v the complexity of the parsing algorithm is usu ally considered the reason for long parsing timeshowever it is not the most time consuming component as proven by the above analysisthere fore we investigated the question further askingwhat causes the high time consumption of the fea ture extractionin our next experiment we left out the mapping of the features to the index of the weight vectorsthe feature extraction takes 88 mssentence with out the mapping and 1223 mssentence with the mappingthe featureindex mapping needs 93 of the time to extract the features and 91 of thetotal parsing timewhat causes the high time con sumption of the featureindex mappingthe mapping has to provide a number as an in dex for the features in the training examples and to filter out the features of examples built while theparser predicts the dependency structuresthe al gorithm filters out negative features to reduce the memory requirement even if they could improve the parsing resultwe will call the features built due to the training examples positive features and the rest negative featureswe counted 58 timesmore access to negative features than positive fea tureswe now look more into the implementation details of the used hash table to answer the pre viously asked questionthe hash table for the featureindex mapping uses three arrays one for the keys one for the values and a status array to indicate the deleted elementsif a program storesa value then the hash function uses the key to cal culate the location of the valuesince the hashfunction is a heuristic function the predicted lo cation might be wrong which leads to socalled 2we use the hash tables of the trove library httpsourceforgenetprojectstrove4jhash missesin such cases the hash algorithm has to retry to find the valuewe counted 87 hash misses including misses where the hash had to retry several timesthe number of hash misseswas high because of the additional negative fea turesthe cpu cache can only store a small amount of the data from the hash tabletherefore the memory controller has frequently to transfer data from the main memory into the cputhis procedure is relatively slowwe traced down the high time consumption to the access of the key and the access of the valuesuccessive accessesto the arrays are fast but the relative random ac cesses via the hash function are very slowthelarge number of accesses to the three arrays be because of the negative features positive features and because of the hash misses multiplied by the time needed to transfer the data into the cpu are the reason for the high time consumptionwe tried to solve this problem with bloom filters larger hash tables and customized hash func tions to reduce the hash missesthese techniquesdid not help muchhowever a substantial i am provement did result when we eliminated the hash table completely and directly accessed the weight vectors w and v with a hash functionthis led us to the use of hash kernelsa hash kernel for structured data uses a hash function h j 1n to index cfshi etalmaps the observations x to a feature spacewe define as the numeric fea ture representation indexed by j let k j the hash based featureindex mappingwhere h k the process of parsing a sen tence xi is to 
find a parse tree yp that maximizes a scoring function argmaxyf the learning problem is to fit the function f so that the errors of the predicted parse tree y are as low as possiblethe scoring function of the hash kernel is f w where w is the weight vector and the size of w is n algorithm 2 shows the update function of the hash kernelwe derived the update function from the update function of mira update number of wrong labeled edges if 0 then you f you 2w w you v v you return w v al 2006the parameters of the function are the weight vectors w and v the sentence xi the gold dependency structure yi the predicted dependency structure yp and the update weight the function calculates the number ofwrong labeled edgesthe update function updates the weight vectors if at least one edge is la beled wrongit calculates the difference you of the feature vectors of the gold dependency structure and the predicted dependency structureeach time we use the feature represen tation the hash function h maps the features to integer numbers between 1 and w after that the update function calculates the margin and updates w and v respectivelyalgorithm 3 shows the training algorithm forthe hash kernel in pseudo codea main dif ference to the baseline algorithm is that it does not store the features because of the required time which is needed to store the additional negative featuresaccordingly the algorithm first extracts the features for each training instance then maps the features to indexes for the weight vector with the hash function and calculates the weight arraysalgorithm 3 training hash kernel for n1 to e iteration over the training epochs for i1 to i iteration over the training exmaples k i i e i k 2 passiveaggressive weight tsek aextrfeaturescalcarrays teek tspk yppredicteprojectiveparsetreetepk tsak yanonprojectiveapprox teak update w v according to and w v average for different j the hash function h might generate the same value k this means that the hash function maps more than one feature to thesame weightwe call such cases collisionscol lisions can reduce the accuracy since the weights are changed arbitrarilythis procedure is similar to randomization of weights which aims to save space by sharing values in the weight vector the hash kernel shares values when collisions occur that can be considered as an approximation of the kernel function because a weight might be adapted due to more than one featureif the approximation works well then we would need only a relatively small weight vector otherwise we need a larger weight vector to reduce the chance of collisionsin an experiments we compared two hash functions and different hash sizeswe selected for the comparison a standard hash function and a custom hash function the idea for the custom hash function h2 is not to overlap the values of the feature sequence number and the edge label with other valuesthese values are stored at the beginning of a long number which represents a featureh1 size3 h2 0xffffffffffffe000 xor 0xffffffffffff0000 xor 0xfffffffffffc0000 xor 0xfffffffffff00000 size vector size h1 h2 411527 8567 041 8574 041 3292489 8782 327 8797 328 10503061 8826 883 8835 877 21006137 8819 1258 8841 1253 42012281 8832 1245 8834 1527 1159115648832 1758 8839 1734 179669557 8834 1765 8828 1784table 2 the labeled attachment scores for differ ent weight vector sizes and the number of nonzero values in the feature vectors in millionsnot a prime numbertable 2 shows the labeled attachment scores for selected weight vector sizes and the number of nonzero 
weightsmost of the numbers in table2 are primes since they are frequently used to obtain a better distribution of the content in hash ta 3 n shifts n bits right and is the modulo operation93blesh2 has more nonzero weights than h1nevertheless we did not observe any clear improve ment of the accuracy scoresthe values do not change significantly for a weight vector size of 10 million and more elementswe choose a weightvector size of 115911564 values for further exper iments since we get more non zero weights and therefore fewer collisionste tp ta r total partraichinese 1308 200 3 1511 1184 93h english 379 213 182 15 420 354 46h german 209 12 153 17 238 126 24h spanish 1056 39 2 1097 1044 44h table 3 the time in milliseconds for the featureextraction projective parsing nonprojective ap proximation rest the total training time perinstance the average parsing time in mil liseconds for the test set and the training time in hours 0 1 2 3 0 5000 10000 15000 spanish figure 1 the difference of the labeled attachment score between the baseline parser and the parser with the hash kernel for increasing large training sets table 3 contains the measured times for the hash kernel as used in algorithm 2the parserneeds 0354 seconds in average to parse a sen tence of the english test setthis is 35 times faster than the baseline parserthe reason for that is the faster feature mapping of the hash kerneltherefore the measured time te for the feature ex traction and the calculation of the weight arrays are much lower than for the baseline parserthe training is about 19 slower since we could no longer use a file to store the feature indexes of the training examples because of the large number of negative featureswe counted about twice the number of nonzero weights in the weight vector of the hash kernel compared to the baseline parserfor instance we counted for english 1734 mil lions nonzero weights in the hash kernel and 847 millions in baseline parser and for chinese 1828 millions nonzero weights in the hash kernel and 876 millions in the baseline parsertable 6 showsthe scores for all languages of the shared task2009the attachment scores increased for all languagesit increased most for catalan and span ishthese two corpora have the smallest training setswe searched for the reason and found thatthe hash kernel provides an overproportional ac curacy gain with less training data compared to mirafigure 1 shows the difference between the labeled attachment score of the parser with mira and the hash kernel for spanishthe decreasing curve shows clearly that the hash kernel providesan overproportional accuracy gain with less train ing data compared to the baselinethis provides an advantage for small training corporahowever this is probably not the main rea son for the high improvement since for languageswith only slightly larger training sets such as chinese the improvement is much lower and the gra dient at the end of the curve is so that a huge amount of training data would be needed to make the curve reach zerocurrent cpus have up to 12 cores and we will see soon cpus with more coresalso graphiccards provide many simple coresparsing algo rithms can use several coresespecially the tasks to extract the features and to calculate the weightarrays can be well implemented as parallel algo rithmwe could also successful parallelize theprojective parsing and the nonprojective approximation algorithmalgorithm 4 shows the paral lel feature extraction in pseudo codethe mainmethod prepares a list of tasks which can be per formed 
in parallel and afterwards it creates thethreads that perform the taskseach thread re moves from the task list an element carries out the task and stores the resultthis procedure is repeated until the list is emptythe main method waits until all threads are completed and returns the resultfor the parallel algorithms table 5 shows the elapsed times depend on the number of 94 standard features linear features linear g features sibling features 1 lhf hpd 14 lhph1pdpd 44 lgpdpd1pd 99 lslhpdr 2 lhf d 15 lhpd1pdpd 45 lgpdpd1pd 100 lsldpdr 3 lhpd 16 lhpdpd1pd 46 lgpg1pd1pdpd 101 lhldpdr 4 ldf dpd 17 lhph1pd1pdpd 47 lg1pgpd1pdpd 102 ldlspdr 5 lhpd 18 lh1ph1pd1pdpd 48 lgpg1pdpd1pd 75 ldmsmd 6 ldpd 19 lhph1pdpd1pd 49 lg1pgpdpd1pd 76 lhmsmd 7 lhf hpdf dpd 20 lh1phpdpd1pd 50 lgpg1phpd linear s features 8 lhpdf dpd grandchild features 51 lgpg1phpd 58 lsps1phpd 9 lhf df dpd 21 lhpdpgpd 52 lgphph1pd 59 lsps1phpd 10 lhf hpdf d 22 lhpgpd 53 lgphph1pd 60 lsphph1pd 11 lhf df hpd 23 ldpgpd 54 lgpg1ph1phpd 61 lsphph1pd 12 lhf df d 24 lhf gf d 55 lg1pgph1phpd 62 lsps1ph1pd 13 lhpdpd 25 ldf gf d 56 lgpg1phph1pd 63 ls1psph1pd 77 lhlhpd 26 lgf hpd 57 lg1pgphph1pd 64 lsps1phpd 78 lhld 27 lgf dpd sibling features 65 ls1psphph1pd 79 lhpd 28 lhf gpd 30 lhpdpspd r 66 lsps1pdpd 80 ldldpd 29 ldf gpd 31 lhpspdr 67 lsps1pdpd 81 ldld 91 lhlgld 32 ldpspdr 68 spdpd1pd 82 ldpd 92 ldpgpd 33 lpf sf dr 69 spdpd1pd 83 ldlhpdphld 93 lglhpd 34 lppsf dr 70 sps1pd1pdpd 84 ldlhpdpd 94 lgldpd 35 lsf ppdr 71 s1pspd1pdpd 85 lhldldpd 95 lhlgpd 36 lsf dpdr 72 sps1pdpd1pd 86 lhlhpdpd 96 ldlgpd 37 lsf dpdr 73 s1pspdpd1pd 87 lhldlhpd 74 ldmgmd 38 ldf spdr special feature 88 lhldld linear g features 97 lhlsldr 39 lhpdpxpbetween hd 89 lhpdpd 42 lgpg1pdpd 98 ldlsldr 41 lhmdmd 43 lgpg1pdpd table 4 features groupsl represents the label h the head d the dependent s a sibling and g a grandchild d the order of words and r the distanceused coresthe parsing time is 19 times fasteron two cores and 34 times faster on 4 coreshy per threading can improve the parsing times again and we get with hyper threading 46 faster parsingtimeshyper threading possibly reduces the over head of threads which contains already our single core versionalgorithm 4 parallel feature extraction a weight arrays extractfeaturesandcalcarrays datalist threadsave data list for w1 1 to xi for w2 1 to xi datalistdatalist cnumber of cpu cores for t1 to c tt createarraythread start arraythread tt start thread t for t1 to c join tt wait until thread t is finished aa collectresult return a arraythread t dremovefirstelement if d is empty then endthread extract features and calculate part d of a cores te tp ta rest total parstrain1 379 213 182 15 420 354 458h 2 196 117 92 21 219 187 239h 3 138 89 65 16 155 126 166h 4 106 82 52 16 121 105 132h 44h 733 88 48 13 882 77 96htable 5 elapsed times in milliseconds for differ ent numbers of coresthe parsing time are expressed in milliseconds per sentence and the training time in hoursthe last row shows the times for 8 threads on a 4 core cpu with hyperthreadingfor these experiment we set the clock speed to 346 ghz in order to have the same clock speed for all experimentsthresholdfor nonprojective parsing we use the nonprojective approximation algorithm of mcdon ald and pereira the algorithm rearranges edges in a dependency tree when they improve the scorebohnet extended the algorithm by a threshold which biases the rearrangement of the edgeswith a threshold it is possible to gain a higher percentage of correct dependency linkswe determined a threshold 
in experiments for czech english and germanin the experimentwe use the hash kernel and increase the thresh 95 system average catalan chinese czech english german japanese spanish top conll 09 8577 8786 7919 8038 8988 8748 9257 8764 baseline parser 8510 8570 7688 7693 9014 8764 9226 8612 this work 8633 8745 7699 8096 9033 8806 9247 8813 table 6 top las of the conll 2009 of gesmundo et al bohnet che et al and ren et al las of the baseline parser and the parser with hash kernelthe numbers in bold face mark the top scoreswe used for catalan chinese japanese and spanish the projective parsing algorithmold at the beginning in small steps by 01 and later in larger steps by 05 and 10figure 2 showsthe labeled attachment scores for the czech en glish and german development set in relation to the rearrangement thresholdthe curves for all languages are a bit volatilethe english curve is rather flatit increases a bit until about 03and remains relative stable before it slightly decreasesthe labeled attachment score for ger man and czech increases until 03 as well and thenboth scores start to decreasefor english a thresh old between 03 and about 20 would work wellfor german and czech a threshold of about 03is the best choicewe selected for all three lan guages a threshold of 0374 76 78 80 82 84 86 88 0 1 2 3 4 5 czech english germanfigure 2 english german and czech labeled at tachment score for the development set in relation to the rearrangement threshold we have developed a very fast parser with ex cellent attachment scoresfor the languages of the 2009 conll shared task the parser could reach higher accuracy scores on average than the top performing systemsthe scores for catalan chinese and japanese are still lower than the top scoreshowever the parser would have ranked second for these languagesfor catalan and chinese the top results obtained transitionbasedparserstherefore the integration of both tech niques as in nivre and mcdonald seems to be very promisingfor instance to improvethe accuracy further more global constrains capturing the subcategorization correct could be inte grated as in riedel and clarke our fasteralgorithms may make it feasible to consider fur ther higher order factorsin this paper we have investigated possibilities for increasing parsing speed without any accuracylossthe parsing time is 35 times faster on a sin gle cpu core than the baseline parser which has an typical architecture for a maximum spanning tree parserthe improvement is due solely to thehash kernelthe hash kernel was also a prereq uisite for the parallelization of the parser because it requires much less memory bandwidth which is nowadays a bottleneck of parsers and many other applicationsby using parallel algorithms we could further increase the parsing time by a factor of 34 on a 4 core cpu and including hyper threading by a factor of 46the parsing speed is 16 times fasterfor the english test set than the conventional ap proachthe parser needs only 77 millisecond in average to parse a sentence and the speed willscale with the number of cores that become avail able in futureto gain even faster parsing times it may be possible to trade accuracy against speedin a pilot experiment we have shown that it is possible to reduce the parsing time in this way to as little as 9 millisecondswe are convinced thatthe hash kernel can be applied successful to tran sition based dependency parsers phrase structure parsers and many other nlp applications4 4we provide the parser and hash kernel as open source for download from 
http://code.google.com/p/mate-tools/
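The hash-kernel idea described in this paper can be sketched in a few lines. The sketch below is ours and deliberately simplified: it uses Python's built-in hash and a plain perceptron-style update in place of the parser's 64-bit feature encoding and passive-aggressive (MIRA-style) update with averaging, but it shows the essential point that feature weights are reached through a hash function alone, with no feature-index table and with collisions simply tolerated.

```python
# Minimal hash-kernel sketch: features map directly to weight-vector slots.
VECTOR_SIZE = 115_911_564  # weight vector size reported in the paper

def slot(feature, size=VECTOR_SIZE):
    """Hash a feature directly to an index of the weight vector."""
    return hash(feature) % size

def score(weights, features):
    """f(x, y) = w . phi(x, y) with hashed feature indices."""
    return sum(weights.get(slot(f), 0.0) for f in features)

def perceptron_update(weights, gold_features, predicted_features, eta=1.0):
    """Reward features of the gold structure, penalize the predicted one.
    (A stand-in for the paper's passive-aggressive update.)"""
    for f in gold_features:
        k = slot(f)
        weights[k] = weights.get(k, 0.0) + eta
    for f in predicted_features:
        k = slot(f)
        weights[k] = weights.get(k, 0.0) - eta

# Toy usage: a dict keeps the sketch small; the parser uses a dense float array
# of VECTOR_SIZE entries. The feature strings below are invented examples.
w = {}
perceptron_update(w, ["hp=NN_dp=DT_l=det"], ["hp=NN_dp=DT_l=amod"])
print(score(w, ["hp=NN_dp=DT_l=det"]))  # 1.0 (barring an unlikely collision)
```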
C10-1011
Top Accuracy and Fast Dependency Parsing is not a Contradiction. In addition to a high accuracy, short parsing and training times are the most important properties of a parser; however, parsing and training times are still relatively long. To determine why, we analyzed the time usage of a dependency parser. We illustrate that the mapping of the features onto their weights in the support vector machine is the major factor in time complexity. To resolve this problem, we implemented the passive-aggressive perceptron algorithm as a hash kernel. The hash kernel substantially improves the parsing times and takes into account the features of negative examples built during the training. This has led to a higher accuracy. We could further increase the parsing and training speed with a parallel feature extraction and a parallel parsing algorithm. We are convinced that the hash kernel and the parallelization can be applied successfully to other NLP applications as well, such as transition-based dependency parsers, phrase structure parsers, and machine translation. We show that the hash kernel improves parsing speed and accuracy since the parser additionally uses negative features. The mate parser is an efficient second-order dependency parser that models the interaction between siblings as well as grandchildren.
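The parallel feature extraction of Algorithm 4 follows a standard worker-pool pattern: the word-index pairs of a sentence form a task list, and each worker repeatedly takes a task, extracts the features for that slice, and stores its part of the result. The sketch below mirrors that structure in Python as our own stand-in; the parser itself is implemented in Java, the extract_part body here is a placeholder, and in CPython threads only pay off if the per-task work releases the GIL, so this illustrates the structure rather than the measured speed-up.

```python
# Sketch of worker-pool feature extraction over (head, dependent) index pairs.
from concurrent.futures import ThreadPoolExecutor

def extract_part(task, sentence):
    """Placeholder for feature extraction over one (head, dependent) slice."""
    w1, w2 = task
    return (w1, w2), f"features({sentence[w1]},{sentence[w2]})"

def parallel_feature_extraction(sentence, n_workers=4):
    # Prepare the task list, then let a pool of workers consume it.
    tasks = [(w1, w2) for w1 in range(len(sentence))
                      for w2 in range(len(sentence)) if w1 != w2]
    results = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for key, value in pool.map(lambda t: extract_part(t, sentence), tasks):
            results[key] = value
    return results

print(len(parallel_feature_extraction(["root", "A", "hearing", "is", "scheduled"])))
```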
a monolingual treebased translation model for sentence simplification in this paper we consider sentence sim plification as a special form of translation with the complex sentence as the source and the simple sentence as the target we propose a treebased simplification model which to our knowledge is the first statistical simplification model covering splitting dropping reorderingand substitution integrally we also de scribe an efficient method to train our model with a largescale parallel dataset obtained from the wikipedia and simple wikipedia the evaluation shows that our model achieves better readability scores than a set of baseline systems sentence simplification transforms long and dif ficult sentences into shorter and more readable onesthis helps humans read texts more easilyand fasterreading assistance is thus an important application of sentence simplification espe cially for people with reading disabilities lowliteracy read ers or nonnative speakers not only human readers but also nlp applications can benefit from sentence simplificationthe original motivation for sentence simplification is using it as a preprocessor to facili tate parsing or translation tasks complex sentences are considered as stumbling blocks for such systemsmore recentlysentence simplification has also been shown help ful for summarization this work has been supported by the emmy noether program of the german research foundation underthe grant nogu 79831 and by the volkswagen founda tion as part of the lichtenbergprofessorship program under the grant noi82806sentence fusion se mantic role labeling question generation paraphrase generation and biomedical information extraction at sentence level reading difficulty stems either from lexical or syntactic complexitysen tence simplification can therefore be classifiedinto two types lexical simplification and syntac tic simplification these two types of simplification can be further implemented by a set of simplification operationssplitting dropping reordering and substitution are widely accepted as important simplification operationsthe splitting operation splits a long sentence intoseveral shorter sentences to decrease the complex ity of the long sentencethe dropping operation further removes unimportant parts of a sentence to make it more concisethe reordering operationinterchanges the order of the split sentences or parts in a sentence finally the substitution operation re places difficult phrases or words with their simpler synonymsin most cases different simplification operations happen simultaneouslyit is therefore nec essary to consider the simplification process as a combination of different operations and treatthem as a wholehowever most of the existing models only consider one of these operationssiddharthan and petersen and ostendorf focus on sentence splitting while sen tence compression systems mainly use the dropping operationas faras lexical simplification is concerned word substitution is usually done by selecting simpler syn onyms from wordnet based on word frequency in this paper we propose a sentence simplifica tion model by tree transformation which is based 1353 on techniques from statistical machine translation our model in tegrally covers splitting dropping reordering and phraseword substitutionthe parameters of ourmodel can be efficiently learned from complex simple parallel datasetsthe transformation froma complex sentence to a simple sentence is con ducted by applying a sequence of simplification operationsan expectation maximization 
algorithm is used to iteratively train our modelwe also propose a method based on monolingualword mapping which speeds up the training pro cess significantlyfinally a decoder is designed to generate the simplified sentences using a greedy strategy and integrates language modelsin order to train our model we further com pile a largescale complexsimple parallel dataset from simple english wikipedia1 and en glish wikipedia2 as such datasets are rarewe organize the remainder of the paper as follows section 2 describes the pwkp datasetsec tion 3 presents our tsm modelsections 4 and 5 are devoted to training and decoding respectivelysection 6 details the evaluationthe conclusions follow in the final sectionwe collected a paired dataset from the english wikipedia and simple english wikipediathe targeted audience of simple wikipedia includeschildren and adults who are learning english lan guagethe authors are requested to use easy words and short sentencesto compose articleswe processed the dataset as follows article pairing 65133 articles from simplewikipedia3 and wikipedia4 were paired by fol lowing the language linkusing the dump filesin wikimedia5 administration articles were fur ther removedplain text extraction we use jwpl to extract plain texts from wikipedia ar ticles by removing specific wiki tagspreprocessing including sentence boundary detection and tokenization with the stanford 1httpsimplewikipediaorg 2httpenwikipediaorg 3as of aug 17th 2009 4as of aug 22nd 2009 5httpdownloadwikimediaorg parser package and lemmatization with the treetagger monolingual sentence alignment as we need a parallel dataset algned at the sentence levelwe further applied monolingual sentence align ment on the article pairsin order to achieve the best sentence alignment on our dataset we tested three similarity measures sentencelevel tfidf word overlap and wordbased maximum edit distance with costs of insertion deletionand substitution set to 1to evaluate their perfor mance we manually annotated 120 sentence pairs from the article pairstab1 reports the precision and recall of these three measureswe manually adjusted the similarity threshold to obtain a recallvalue as close as possible to 558 which was pre viously adopted by nelken and shieber similarity precision recall tfidf 913 554 word overlap 505 551 med 139 547 table 1 monolingual sentence alignment the results in tab1 show that sentenceleveltfidf clearly outperforms the other two mea sures which is consistent with the results reported by nelken and shieber we henceforth chose sentencelevel tfidf to align our datasetas shown in tab2 pwkp contains more than 108k sentence pairsthe sentences from wikipedia and simple wikipedia are considered as complexand simplerespectivelyboth the average sentence length and average token length in simple wikipedia are shorter than those inwikipedia which is in compliance with the pur pose of simple wikipediaaveragesen len averagetoklen senpairscomplex simple complex simple 2501 2087 506 489 108016 table 2 statistics for the pwkp datasetin order to account for sentence splitting we al low 1 to n sentence alignment to map one complexsentence to several simple sentenceswe first per form 1 to 1 mapping with sentencelevel tfidf and then combine the pairs with the same complex sentence and adjacent simple sentenceswe apply the following simplification operations to the parse tree of a complex sentence splitting 1354dropping reordering and substitutionin this sec tion we use a running example to illustrate thisprocessc is the 
complex sentence to be simpli fied in our examplefig1 shows the parse tree of c c august was the sixth month in the ancient ro man calendar which started in 735bcnp vp s august was npinsixththe sbar np np pp whnp s vp started pp in 735bc ancient calendar whichthe roman month figure 1 parse tree of c 31 splittingthe first operation is sentence splitting which wefurther decompose into two subtasks segmen tation which decides where and whether to split a sentence and completion which makes the new split sentences completefirst we decide where we can split a sentencein our model the splitting point is judged by the syntactic constituent of the split boundary word in the complex sentencethe decision whether a sentence should be split is based on the length of the complex sentencethe features used in the segmentation step are shown in tab3word constituent ilength issplit probwhichsbar 1 true 00016 whichsbar 1 false 09984 whichsbar 2 true 00835 whichsbar 2 false 09165 table 3 segmentation feature table actually we do not use the direct constituent of a word in the parse treein our example the directconstituent of the word whichis whnpin stead we use alg1 to calculate the constituentof a wordalg1 returns sbaras the adjusted constituent for whichmoreover di rectly using the length of the complex sentenceis affected by the data sparseness problemin stead we use ilength as the feature which is calculated as ilength ceiling where comlength is the length of the complex sentence and avgsimlength is the average length of simple sentences in the training datasetthe probcolumn shows the probabilities obtained after training on our datasetalgorithm 1 adjustconstituent constituentwordfather father constituentfather while father 6 null and constituent is the most left child of father do constituentfather father fatherfather end while return constituent in our model one complex sentence can be split into two or more sentencessince many splitting operations are possible we need to select the mostlikely onethe probability of a segmentation op eration is calculated as p wc sft where w is a word in the complex sentence c and sft is the probability of the word w in the segmentation feature table fig2 shows a possible segmentation result of our examplenp vp s august was npinsixththe sbar np np pp whnp s vp started pp in 735bc ancient calendar which the roman month figure 2 segmentation the second step is completionin this step we try to make the split sentences complete and grammaticalin our example to make the second sentence which started in 735bccomplete and grammatical we should first drop the border word whichand then copy the dependent np the ancient roman calendarto the left of startedto obtain the complete sentence the ancient ro man calendar started in 735bcin our model whether the border word should be dropped or retained depends on two features of the border word the direct constituent of the word and the word itself as shown in tab4constword isdropped probwhnp which true 10 whnp which false probmin table 4 border drop feature table in order to copy the necessary parts to complete the new sentences we must decide which parts should be copied and where to put these parts in the new sentencesin our model this is judged by two features the dependency relation and theconstituentwe use the stanford parser for parsing the dependenciesin our example the de 1355pendency relation between calendarin the com plex sentence and the verb startedin the secondsplit sentence is gov nsubj6 the direct constituent of 
startedis vpand the word calen darshould be put on the leftof started see tab5depconstiscopied posprobgov nsubj vp true left 09000 gov nsubj vp true right 00994 gov nsubj vp false 00006 table 5 copy feature table for dependent nps we copy the whole np phrase rather than only the head noun7 in ourexample we copy the whole np phrase the an cient roman calendarto the new position rather than only the word calendarthe probability of a completion operation can be calculated as p y bws bdft y ws y depw cft where s are the split sentences bw is a border word in s w is a word in s dep is a dependency of w which is out of the scope of s fig3 shows the most likely result of the completion operation for our examplenp vp pt1 august was npinsixththe np np pppt2 vp started pp in 735bc ancient calendarthe romannp ancient calendarthe roman month figure 3 completion 32 dropping and reorderingwe first apply dropping and then reordering to each nonterminal node in the parse tree from topto bottomwe use the same features for both drop ping and reordering the nodes direct constituent and its childrens constituents pattern see tab6 and tab7constituent children drop probnp dt jj nnp nn 1101 766e4 np dt jj nnp nn 0001 126e7 table 6 dropping feature table 6with stanford parser whichis a referent of calenderand the nsubj of startedcalenderthus can be considered to be the nsubj of startedwith startedas the governor7the copied np phrase can be further simplified in the following stepsconstituent children reorder probnp dt jj nn 012 08303 np dt jj nn 210 00039 table 7 reordering feature table the bits 1and 0in the dropcolumn indicate whether the corresponding constituent is re tained or droppedthe number in the reordercolumn represents the new order for the childrenthe probabilities of the dropping and reordering operations can be calculated as equ2 and equ3p dft p rft in our example one of the possible results is dropping the nnp roman as shown in fig4np vp pt1 august was npinsixththe np np pppt2 vp started pp in 735bc ancient calendarthenp ancient calendarthe month figure 4 dropping reordering 33 substitution331 word substitutionword substitution only happens on the termi nal nodes of the parse treein our model the conditioning features include the original word and the substitutionthe substitution for a word can be another word or a multiword expressionthe probability of a word substitu tion operation can be calculated as p subft origin substitution probancient ancient 0963 ancient old 00183 ancient than transport 183e102 old ancient 0005 table 8 substitution feature table 332 phrase substitutionphrase substitution happens on the non terminal nodes and uses the same conditioningfeatures as word substitutionthe origincon sists of the leaves of the subtree rooted at the nodewhen we apply phrase substitution on anonterminal node then any simplification operation cannot happen on its descendants any more 1356 because when a node has been replaced then its descendants are no longer existingtherefore for each nonterminal node we must decide whether a substitution should take place at this node or at itsdescendantswe perform substitution for a non terminal node if the following constraint is met max y chnode maxwhere ch is a child of the nodecanbe any substitution in the subftthe proba bility of the phrase substitution is calculated as p subft fig5 shows one of the possible substitution re sults for our example where ancientis replaced by oldnp vp pt1 august was npinsixththe np np pppt2 vp started pp in 735bc old 
calendarthenp old calendarthe month figure 5 substitution as a result of all the simplification operations we obtain the following two sentences s1 straugust was the sixth month in the old calendarand s2 strthe old calendar started in 735bc34 the probabilistic modelour model can be formalized as a direct translation model from complex to simple p multi plied by a language model p as shown in equ4s argmax s p p we combine the parts described in the previous sections to get the direct translation model p strs p node p p p w where is a sequence of simplification operationsand str corresponds to the leaves of a simplified treethere can be many sequences of op erations that result in the same simplified sentence and we sum up all of their probabilitiesin this section we describe how we train the prob abilities in the tablesfollowing the work of yamada and knight we train our model by maximizing p over the training corpuswith the them algorithm described in alg2 us ing a constructed graph structurewe develop the training tree to calculate p p is equal to the inside probability of the root in thetraining treealg3 and alg4 are used to calculate the inside and outside probabilitieswe re fer readers to yamada and knight for more detailsalgorithm 2 them training initialize all probability tables using the uniform distribu tion for several iterations do reset al cnt 0 for each sentence pair in dataset do tt buildtrainingtree calcinsideprob calcoutsideprob update cnt for each conditioning feature in each node of tt cnt cnt nodeinsideprob nodeoutsideprobrootinsideprob end for updateprobability end for root sp sp_res1 sp_res2 dp ro mp mp_res1 mp_res2 sub mp mp_res subsub dp ro mp_res root sp sp_res sp_res dp ro ro_res ro_res sub ro_res subsub dp ro ro_res sub_res sub_res sub_res figure 6 training tree and decoding tree we illustrate the construction of the training tree with our running examplethere are two kinds of nodes in the training tree data nodes in rectangles and operation nodes in circlesdata nodes contain data and operation nodes execute operationsthe training is a supervised learning 1357 process with the parse tree of c as input and the two strings s1 and s2 as the desired outputroot stores the parse tree of c and also s1 and s2sp ro mp and sub are splitting reordering mapping and substitution operationssp res and mp res store the results of sp and mpin our example sp splits the parse tree into two parse trees pt1 and pt2 sp res1 contains pt1 and s1sp res2 contains pt2 and s2then dp ro and mp are iteratively applied to each nonterminal node at each level of pt1 and pt2 from top to downthis process continues until the terminal nodesare reached or is stopped by a sub nodethe function of mp operation is similar to the word mapping operation in the stringbased machine trans lationit maps substrings in the complex sentence which are dominated by the children of the current node to proper substrings in the simple sentencesspeeding up the example above is only oneof the possible pathswe try all of the promis ing paths in trainingpromising paths are thepaths which are likely to succeed in transform ing the parse tree of c into s1 and s2we select the promising candidates using monolingual word mapping as shown in fig7in this exampleonly the word whichcan be a promising can didate for splittingwe can select the promisingcandidates for the dropping reordering and map ping operations similarlywith this improvement we can train on the pwkp dataset within 1 hour excluding the parsing time taken by the 
stanford parserwe initialize the probabilities with the uniform distributionthe binary features such as sft and bdft are assigned the initial value of 05for dft and rft the initial probability is 1n wheren is the number of the childrencft is initial ized as 025subft is initialized as 10 for anysubstitution at the first iterationafter each itera tion the updateprobability function recalculatesthese probabilities based on the cnt for each fea turealgorithm 3 calcinsideprob for each node from level n to root of tt do if node is a sub node then nodeinsideprob p else if node is a mp or sp node then nodeinsideprob qchild childinsideprobelse nodeinsideprob pchild childinsideprobend if end for algorithm 4 calcoutsideprob for each node from root to level n of tt do if node is the root then nodeoutsideprob 10 else if node is a sp res or mp res node then comment father are the fathers of the current node sibling are the children of father excluding the current node nodeoutsideprob p father fatheroutsideprob qsibling siblinginsideprobelse if node is a mp node then nodeoutsideprob fatheroutsideprob 10 else if node is a sp ro dp or sub node then nodeoutsideprob fatheroutsideprob p end if end for august was the sixth in the ancient roman calendar statedwhich in 735bc august was the sixth in the old roman calendar stated in 735bcthe old calendarcomplex sentence simple sentences month month figure 7 monolingual word mappingfor decoding we construct the decoding tree similarly to the construction of the training treethe decoding tree does not have mp op erations and there can be more than one sub nodes attached to a single ro resthe root contains the parse tree of the complex sentencedue to space limitations we cannot provide all the details of the decoderwe calculate the inside probability and out side probability for each node in the decoding treewhen we simplify a complex sentence we start from the root and greedily select the branchwith the highest outside probabilityfor the sub stitution operation we also integrate a trigram language model to make the generated sentences more fluentwe train the language model with srilm all the articles from the simple wikipedia are used as the training corpus amounting to about 54 mbour evaluation dataset consists of 100 complex sentences and 131 parallel simple sentences from pwkpthey have not been used for trainingfour baseline systems are compared in our eval uationthe first is moses which is a state of the art smt system widely used as a baseline in mt communityobviously the purpose of mosesis crosslingual translation rather than monolin 1358 gual simplificationthe goal of our comparison is therefore to assess how well a standard smt system may perform simplification when fed with a proper training datasetwe train moses with the same part of pwkp as our modelthe secondbaseline system is a sentence compression sys tem whose demo system is available online8 as the compressionsystem can only perform dropping we further ex tend it to our third and fourth baseline systems in order to make a reasonable comparisonin our third baseline system we substitute the words in the output of the compression system with their simpler synonymsthis is done by looking up the synonyms in wordnet and selecting the mostfrequent synonym for replacementthe word fre quency is counted using the articles from simple wikipediathe fourth system performs sentence splitting on the output of the third systemthis is simply done by splitting the sentences at andor but which whoand that and dis 
carding the border wordsin total there are 5systems in our evaluation moses the mt system c the compression system cs the compressionsubstitution system css the compres sionsubstitutionsplit system tsm our modelwe also provide evaluation measures for the sen tences in the evaluation dataset cw complexsentences from normal wikipedia and sw par allel simple sentences from simple wikipedia61 basic statistics and examplesthe first three columns in tab9 present the ba sic statistics for the evaluation sentences and theoutput of the five systemstokenlen is the aver age length of tokens which may roughly reflect the lexical difficultytsm achieves an average token length which is the same as the simple wikipedia senlen is the average number of tokens inone sentence which may roughly reflect the syn tactic complexityboth tsm and css produce shorter sentences than swmoses is very close to cwsen gives the number of sentencesmoses c and cs cannot split sentences and thus produce about the same number of sentences as available in cwhere are two example results obtained with our tsm systemexample 1cw genetic engineering has ex panded the genes available to breeders to utilize in creating desired germlines for new cropssw 8http212126215106compressionnew plants were created with genetic engineer ingtsm engineering has expanded the genes available to breeders to use in making germlines for new cropsexample 2cw an umbrella term is a word thatprovides a superset or grouping of related con cepts also called a hypernymsw an umbrellaterm is a word that provides a superset or group ing of related conceptstsm an umbrella term is a worda word provides a superset of related concepts called a hypernymin the first example both substitution and dropping happentsm replaces utilizeand cre atingwith useand makinggeneticisdroppedin the second example the complex sen tence is split and alsois dropped62 translation assessmentin this part of the evaluation we use traditional measures used for evaluating mt systemstab9 shows the bleu and nist scoreswe use mtevalv11bpl9 as the evaluation toolcwand sw are used respectively as source and ref erence sentencestsm obtains a very high bleu score but not as high as moses however the original complex sentences from normal wikipedia get a rather high bleu when compared to the simple sentenceswe also find that most of the sentences generated by moses are exactly the same as those in cwthis shows that moses only performs few modi fications to the original complex sentencesthis is confirmed by mt evaluation measures if we set cw as both source and reference the bleu score obtained by moses is 078tsm gets 055 in the same setting which is significantly smaller than moses and demonstrates that tsm is able to generate simplifications with a greater amount of variation from the original sentenceas shown inthe samecolumn of tab9 25 sentences generated by moses are exactly identical to the com plex sentences while the number for tsm is 2 which is closer to swit is however not clear how well bleu and nist discriminate simplification systemsas discussed in jurafsky and martin bleu does poorly at comparing systems with radically different architectures and is most appropriate when evaluating incremental changes with similar architecturesin our case tsm andcss can be considered as having similar architec tures as both of them can do splitting dropping 9httpwwwstatmtorgmoses 1359 toklen senlen sen bleu nist same flesch lix oov people cw 495 2781 100 050 689 100 491 530 529 384 sw 476 1786 131 100 
1098 3 604 441 507 179 moses 481 2608 100 055 747 25 548 481 520 363 c 498 1802 103 028 537 1 562 459 517 481 cs 490 1811 103 019 451 0 591 451 495 616 css 498 1020 182 018 442 0 655 383 534 581 tsm 476 1357 180 038 621 2 674 367 508 353 table 9 evaluation and substitutionbut moses mostly cannot split and dropwe may conclude that tsm and moses have different architectures and bleu or nist isnot suitable for comparing themhere is an exam ple to illustrate this almost as soon as heleaves annius and the guard publius arrive to es cort vitellia to titus who has now chosen her as his empress almost as soon as he leavesannius and the guard publius arrive to take vitellia to titus who has now chosen her as his empress the same as an nius and the guard publius arrive to take vitellia to titustitus has now chosen her as his empressin this example moses generates an exactly iden tical sentence to sw thus the blue and nist scores of moses is the highesttsm simplifies the complex sentence by dropping splitting and substitution which results in two sentences that are quite different from the sw sentence and thus gets lower blue and nist scoresnevertheless the sentences generated by tsm seem better than moses in terms of simplification63 readability assessmentintuitively readability scores should be suitable metrics for simplification systemswe use the linux stylecommand to calculate the fleschand lix readability scoresthe results are pre sented in tab9pein the flesch column standsfor plain englishand the gradein lix repre sents the school yeartsm achieves significantly better scores than moses which has the best bleuscorethis implies that good monolingual trans lation is not necessarily good simplificationoovis the percentage of words that are not in the ba sic english be850 list10 tsm is ranked as the second best system for this criterionthe perplexity is a score of text probability measured by a language model and normal ized by the number of words in the text 10httpsimplewikipediaorgwiki wikipediabasic_english_alphabetical_ wordlistppl can be used to measure how tight the language model fits the textlanguage models constitute an important feature for assessing readabil ity we train a trigram lm using the simple sentences in pwkp and calculate the people with srilmtsm gets the best people scorefrom this table we can conclude that tsm achieves better overall readability than the baseline systemspeople p 1 n there are still some important issues to be con sidered in futurebased on our observations the current model performs well for word substitution and segmentationbut the completion of the new sentences is still problematicfor example we copy the dependent np to the new sentencesthis may break the coherence between sentencesabetter solution would be to use a pronoun to replace the npsometimes excessive droppings oc cur eg olderand twinare dropped in she has an older brother and a twin brotherthis results in a problematic sentence she has anbrother and a brotherthere are also some er rors which stem from the dependency parserinexample 2 an umbrella termshould be a dependency of calledbut the parser returns su persetas the dependencyin the future we will investigate more sophisticated features and rules to enhance tsmin this paper we presented a novel largescale par allel dataset pwkp for sentence simplificationwe proposed tsm a treebased translation model for sentence simplification which covers splitting dropping reordering and wordphrase substitution integrally for the first timewe also described 
an efficient training method with speeding-up techniques for tsm. the evaluation shows that tsm can achieve better overall readability scores than a set of baseline systems.
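the pwkp construction described above aligns complex and simple sentences with sentence-level tf*idf similarity, which outperformed word overlap and maximum edit distance. a hedged sketch of that alignment step follows; the exact tf*idf weighting variant and the 0.5 threshold are assumptions (the paper only says the threshold was tuned by hand towards the recall level reported in its alignment table).

    import math
    from collections import Counter

    def align_tfidf(complex_sents, simple_sents, threshold=0.5):
        # complex_sents / simple_sents are lists of token lists; document
        # frequencies are pooled over both sides for this sketch
        all_sents = complex_sents + simple_sents
        n = len(all_sents)
        df = Counter()
        for toks in all_sents:
            df.update(set(toks))

        def vec(toks):
            tf = Counter(toks)
            return {t: tf[t] * math.log(n / df[t]) for t in tf}

        def cosine(u, v):
            num = sum(weight * v.get(t, 0.0) for t, weight in u.items())
            den = (math.sqrt(sum(x * x for x in u.values())) *
                   math.sqrt(sum(x * x for x in v.values())))
            return num / den if den else 0.0

        vc = [vec(s) for s in complex_sents]
        vs = [vec(s) for s in simple_sents]
        pairs = []
        for i, u in enumerate(vc):
            j, best = max(((j, cosine(u, v)) for j, v in enumerate(vs)),
                          key=lambda x: x[1])
            if best >= threshold:
                # 1-to-1 links; pairs that share a complex sentence and have
                # adjacent simple sentences are later merged into 1-to-n pairs
                pairs.append((i, j))
        return pairs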
C10-1152
a monolingual tree-based translation model for sentence simplification. in this paper we consider sentence simplification as a special form of translation, with the complex sentence as the source and the simple sentence as the target. we propose a tree-based simplification model which, to our knowledge, is the first statistical simplification model covering splitting, dropping, reordering and substitution integrally. we also describe an efficient method to train our model with a large-scale parallel dataset obtained from wikipedia and simple wikipedia. the evaluation shows that our model achieves better readability scores than a set of baseline systems. we use a tree-based simplification model which uses techniques from statistical machine translation with this dataset. we examine the use of paired documents in english wikipedia and simple wikipedia for a data-driven approach to the sentence simplification task. we propose that sentence simplification can be approached as a monolingual machine translation task where the source and target languages are the same and where the output should be simpler in form than the input but similar in meaning.
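the splitting step of the model summarised above scores a candidate segmentation word by word from the word itself, its adjusted constituent and the ilength feature, where ilength = ceiling(complex sentence length / average simple sentence length). the sketch below shows that computation; representing the learned segmentation feature table as a plain dict and the 1e-6 back-off for unseen entries are simplifications of mine.

    import math

    def ilength(complex_len, avg_simple_len):
        # iLength = ceiling(comLength / avgSimLength), used instead of the raw
        # sentence length to avoid data sparseness
        return math.ceil(complex_len / avg_simple_len)

    def segmentation_prob(words, constituents, split_points, sft, il):
        # p(sp|c) = product over all words w of SFT(split?(w) | w, constituent(w), iLength);
        # sft is assumed to be a dict keyed by (word, constituent, iLength, is_split)
        # holding the probabilities learned by EM training
        p = 1.0
        for w, con in zip(words, constituents):
            p *= sft.get((w, con, il, w in split_points), 1e-6)
        return p

    # example entries taken from the segmentation feature table shown above
    sft = {("which", "SBAR", 1, True): 0.0016,
           ("which", "SBAR", 1, False): 0.9984}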
robust sentiment detection on twitter from biased and noisy data in this paper we propose an approach toautomatically detect sentiments on twit ter messages that explores some characteristics of how tweets are written and metainformation of the words that compose these messages moreover we leverage sources of noisy labels as our training data these noisy labels were provided by a few sentiment detectionwebsites over twitter data in our experi ments we show that since our features areable to capture a more abstract representation of tweets our solution is more ef fective than previous ones and also more robust regarding biased and noisy data which is the kind of data provided by these sources twitter is one of the most popular social network websites and has been growing at a very fast pacethe number of twitter users reached an estimated75 million by the end of 2009 up from approx imately 5 million in the previous yearthrough the twitter platform users share either informationor opinions about personalities politicians prod ucts companies events etc this has been attracting the attention of different communities interested in analyzing its contentsentiment detection of tweets is one of the basicanalysis utility functions needed by various applications over twitter datamany systems and ap proaches have been implemented to automatically detect sentiment on texts most of theseapproaches use the raw word representation as features to build a model for sentiment detection and perform this task over large pieces of textshowever the main limitation of usingthese techniques for the twitter context is mes sages posted on twitter socalled tweets are veryshortthe maximum size of a tweet is 140 char actersin this paper we propose a 2step sentiment analysis classification method for twitter whichfirst classifies messages as subjective and ob jective and further distinguishes the subjectivetweets as positive or negativeto reduce the la beling effort in creating these classifiers instead of using manually annotated data to compose thetraining data as regular supervised learning ap proaches we leverage sources of noisy labels asour training datathese noisy labels were pro vided by a few sentiment detection websites over twitter datato better utilize these sources we verify the potential value of using and combining them providing an analysis of the provided labels examine different strategies of combining these sources in order to obtain the best outcome and propose a more robust feature set that captures a more abstract representation of tweets composedby metainformation associated to words and spe cific characteristics of how tweets are writtenby using it we aim to handle better the problem of lack of information on tweets helping on thegeneralization process of the classification algo rithms and the noisy and biased labels provided by those websitesthe remainder of this paper is organized as fol lowsin section 2 we provide some context about messages on twitter and about the websites used as label sourceswe introduce the features used in the sentiment detection and also provide a deep analysis of the labels generated by those sources in section 3we examine different strategies of 36 combining these sources and present an extensive experimental evaluation in section 4finally we discuss previous works related to ours in section 5and conclude in section 6 where we outline direc tions and future workin this section we give some context about twitter messages and the sources used for our datadriven 
approachtweetsthe twitter messages are called tweetsthere are some particular features that can be usedto compose a tweet rtis an acronym for retweet which means the tweet was forwarded from a previous post twuserrepresents that this message is areply to the user twuser obamais a tag provided by the user for this message socalled hash tag and httpbitly9k4n9pis a link to someexternal sourcetweets are limited to 140 charac tersdue to this lack of information in terms of words present in a tweet we explore some of the tweet features listed above to boost the sentiment detection as we will show in detail in section 3data sourceswe collected data from 3 different websites that provide almost realtime sentiment detection for tweets twendz twitter sen timent and tweetfeelto collect data we issued a query containing a common stopword of as we are interested in collecting generic data and retrieved tweets from these sites for three weeksarchiving the returned tweets along with their sen timent labelstable 1 shows more details aboutthese sourcestwo of the websites provide 3 class detection positive negative and neutral and one of them just 2class detectionone thing tonote is our crawling process obtained a very dif ferent number of tweets from each websitethismight be a result of differences among their sam pling processes of twitter stream or some kind of filtering process to outputfor instance a sitemay only present the tweets it has more confi dence about their sentimentin section 3 we present a deep analysis of the data provided by these sources showing if they are useful to build a sentiment classificationrt twuser obama is the first yous president not to have seen a new state added in his lifetimehttpbitly9k4n9p obama figure 1 example of a tweetour goal is to categorize a tweet into one of the three sentiment categories positive neutral ornegativesimilar to we implement a 2step sentimentdetection frameworkthe first step targets on dis tinguishing subjective tweets from nonsubjective tweets the second onefurther classifies the subjective tweets into posi tive and negative namely the polarity detectionboth classifiers perform prediction using an ab stract representation of the sentences as features as we show later in this section31 featuresa variety of features have been exploited on the problem of sentiment detection including unigrams bigrams partofspeech tags etc a natural choice would be to use the raw word represen tation as features since they obtained good results in previous works that deal with large textshowever as we want to perform sentiment detection on very short messages this strategy might not be effective as shown in our ex perimentsin this context we are motivated to develop an abstract representation of tweetswepropose the use of two sets of features metainformation about the words on tweets and char acteristics of how tweets are writtenmetafeaturesgiven a word in a tweet we mapit to its partofspeech using a partofspeech dic tionary1previous approaches have shown that the ef fectiveness of using pos tags for this tasktheintuition is certain pos tags are good indicators for sentiment taggingfor example opinion messages are more likely containing adjec 1the pos dictionary we used in this paper is available at httpwordlistsourceforgenetposreadme37 data sources url tweets sentiments twendz httptwendzwaggeneredstromcom 254081 posnegneutral twitter sentiment httptwittersentimentappspotcom 79696 posnegneutral tweetfeel httpwwwtweetfeelcom 13122 posneg table 1 information about the 3 
data sourcestives or interjectionsin addition to pos tags we map the word to its prior subjectivity also used by and polarity the prior polarity is switched from pos itive to negative or viceversa when a negative expression precedes the wordwe obtained the prior subjectivity and polarity information from subjectivity lexicon of about 8000 words used in 2although this is a very comprehensive list slang and specific web vocabulary are not present on it eg words as yummyor ftwfor this reason we collected popular words used on online discussions from many online sources and added them to this listtweet syntax featureswe exploited the syn tax of the tweets to compose our featurestheyare retweet hashtag reply link if the tweet contains a link punctuation emoticons and upper cases the frequency of each feature in a tweet is di vided by the number of the words in the tweet32 subjectivity classifieras we mentioned before the first step in our tweet sentiment detection is to predict the subjectivity ofa given tweetwe decided to create a single clas sifier by combining the objectivity sentences from twendz and twitter sentiment and the subjectivity sentences from all 3 sourcesas we do not know the quality of the labels pro vided by these sources we perform a cleaning process over this data to assure some reasonable qualitythese are the steps 1disagreement removal we remove the2the subjectivity lexicon is available at httpwwwcspittedumpqa tweets that are disagreed between the data sources in terms of subjectivity 2same users messages we observed that theusers with the highest number of messages in our dataset are usually those ones that postsome objective messages for example ad vertising some product or posting some job recruiting informationfor this reason we allowed in the training data only one message from the same useras we show later this boosts the classification performance mainlybecause it removes tweets labeled as subjective by the data sources but are in fact objec tive 3top opinion words to clean the objectivetraining set we remove from this set tweets that contain the topn opinion words in the subjectivity training set eg words as cool suck awesome etc as we show in section 4 this process is in fact able to remove certain noisy in the training dataleading to a better performing subjectivity classi fierto illustrate which of the proposed features are more effective for this task the top5 features in terms of information gain based on our trainingdata are positive polarity link strong subjec tive upper case and verbsthree of them aremetainformation and the other two are tweet syntax features here is a typical example of a objective tweet in which the user pointed an external link and used manyupper case words starbucks expands paybyiphone pilot to 1000 storesstarbucks cus tomers with apple iphones or ipod touches can httpoohjacomx9ubc38 33 polarity classifierthe second step of our sentiment detection approach is polarity classification ie predict ing positive or negative sentiment on subjectivetweetsin this section first we analyze the qual ity of the polarity labels provided by the three sources and whether their combination has the potential to bring improvementsecond wepresent some modifications in the proposed fea tures that are more suitable for this task331 analysis of the data sources the 3 data sources used in this work provide some kind of polarity labels two questions we investigate regarding these sources are how useful are these polarity labelsand does combining them bring 
improvement in accuracywe take the following aspects into considera tionlabeler quality if the labelers have low quality combine them might not bring much i am provement in our case each source is treated as a labeler number of labels provided by the labelersif the labels are informative ie the prob ability of them being correct is higher than 05 the more the number of labels the higher is the performance of a classifier built from them labeler bias the labeled data provided by the labelers might be only a subset of the real data distributionfor instance labelers might be interested in only providing labels that they are more confident aboutdifferent labeler bias if labelers make simi lar mistakes the combination of them might not bring much improvementwe provide an empirical analysis of these datasets to address these pointsfirst we measurethe polarity detection quality of a source by calcu lating the probability p of a label from this source being correctwe use the data manually labeled for assessing the classifiersperformance to obtain the correct labels of data sources quality entropy twendz 077 83 twittersentiment 082 79 tweetfeel 089 75 table 2 quality of the labels and entropy of thetweets provided by each data source for the polar ity detectiona data sampletable 2 shows their valueswe can conclude from these numbers that the 3 sources provide a reasonable quality datathis means that combining them might bring some improvement to the polarity detection instead of for instance using one of them in isolationan aspect that is overlooked by quality is the bias of the datafor instance by examining the data from twitterfeelwe found out that only 4 positive words cover 95 of their positive examples and only 6 negative words cover 96 of their negative setclearly the data provided by this source is biased towards thesewordsthis is probably the reason why this website outputs such fewer number of tweets com pared to the other websites as well as why its data has the smallest entropy among the sources the quality of the data and its individual bias have certainly impact in the combination of labelshowever there is other important aspect that oneneeds to consider different bias between the labelersfor instance if labelers a and b make similar decisions we expect that combining their labels would not bring much improvementthere fore the diversity of labelers is a key element incombining them one way to mea sure this is by calculating the agreement between the labels produced by the labelerswe use the kappa coefficient to measure thedegree of agreement between two sourcesta ble 3 presents the coefficients for each par of data sourceall the coefficients are between 04 and06 which represents a moderate agreement be tween the labelers this means that in fact the sources provide different bias regarding polarity detection39 data sources kappa twendztwittersentiment 058 twittersentimenttweetfeel 058 twendztweetfeel 044 table 3 kappa coefficient between pairs of sourcesfrom this analysis we can conclude that com bining the labels provided by the 3 sources canimprove the performance of the polarity detection instead of using one of them in isolation be because they provide diverse labels of reasonable quality although thereis some issues related to bias of the labels pro vided by themin our experimental evaluation in section 4 we present results obtained by different strategies of combining these sources that confirm these findings332 polarity features the features used in the polarity detection are the 
same ones used in the subjectivity detectionhowever as one would expect the set of the most discriminative features is different between the two tasksfor subjectivity detection the top5 features in terms of information gain based on the training data are negative polarity positive polarity verbs good emoticons and upper casefor this task the metainformation of the words is more important than specific features from twitter whereas for the subjectivity detection tweet syntax features have a higher relevancethis analysis show that prior polarity is very important for this taskhowever one limitation of using it from a generic list is its values might not hold for some specific scenariofor instancethe polarity of the word spotis positive accord ing to this listhowever looking at our training data almost half of the occurrences of this word appears in the positive set and the other half inthe negative setthus it is not correct to as sume that prior polarity of spotis 1 for thisparticular datathis example illustrates our strat egy to weight the prior polarities for each wordw with prior polarity defined by the list we cal culate the prior polarity of w pol based on the distribution of w in the positive and negative setsthus polpos countcount and polneg 1polposwe assume thepolarity of a word is associated with the polar ity of the sentence which seems to be reasonable since we are dealing with very short messagesalthough simple this strategy is able to improve the polarity detection as we show in section 4we have performed an extensive performance evaluation of our solution for twitter sentimentdetectionbesides analyzing its overall perfor mance our goals included examining different strategies to combine the labels provided by the sources comparing our approach to previous onesin this area and evaluating how robust our solu tion is to the noisy and biased data described in section 341 experimental setupdata setsfor the subjectivity detection afterthe cleansing processing the train ing data contains about 200000 tweets and for polarity detection 71046 positive and 79628negative tweetsfor test data we manually labeled 1000 tweets as positive negative and neu tralwe also built a development set to tune the parameters of the classification algorithmsapproachesfor both tasks subjectivity and po larity detection we compared our approach with previous ones reported in the literaturedetailed explanation about them are as follows reviewsa this is the approach proposed by pang and lee for sentiment analysis in regular online reviewsit performs the subjectivity detection on a sentencelevel relying on the proximity between sentences to detect subjectivitythe set of sentences predicted as subjec tive is then classified as negative or positive in terms of polarity using the unigrams that 40compose the sentenceswe used the imple mentation provided by lingpipe unigrams pang et al showed unigrams are effective for sentiment detection in regular reviewsbased on that we built unigrambased classifiers for the subjectivity and polarity detections over thetraining dataanother approach that uses un igrams is the one used by twittersentiment websitefor polarity detection they select the positive examples for the training data from the tweets containing good emoticonsand negative examples from tweets contain ing bad emoticonswe built a polarity classifier using this approach twittersa twittersa exploits the features described in section 3 in this paperforthe subjectivity detection we trained a classifier from the two 
available sources using the cleaning process described in sec tion 3 to remove noise in the training data twittersa and other classifiertrained from the original data twittersafor the polarity detection task we built a few classifiers to compare theirperformances twittersa and twit tersa are two classifiers we trained using combined data from the 3 sourcesthe only difference is twittersa uses the modification of weighting the priorpolarity of the words based on the training datatwittersa and twittersa combine classification out puts from 3 classifiers respectively trained from each sourcetwittersa usesmajority voting to combine them and twittersa picks the one with maxi mum confidence scorewe use weka to create the classifierswe tried different learning al gorithms available on weka and svm obtainedthe best results for unigrams and twittersaexperimental results reported in this section are ob tained using svm42 subjectivity detection evaluationtable 4 shows the error rates obtained by the different subjectivity detection approachestwittersa achieved lower error rate than both unigrams and reviewsaas a result these numbers confirm that features inferred from meta information of words and specific syntax featuresfrom tweets are better indicators of the subjectivity than unigramsanother advantage of our approach is since it uses only 20 features the training and test times are much faster than using thousands of features like unigramsone of the reasons why twittersa obtained such a good performance was the process of data cleansing the label quality provided by the sources for this task was very poor 066 for twendz and 068 for twittersentimentby cleaning the datathe error decreased from 199 twittersa to 181 twittersaregard ing reviewsa its lower performance is expected since tweets are composed by single sentences and reviewsa relies on the proximity between sentences to perform subjectivity detectionwe also investigated the influence of the size oftraining data on classification performancefig ure 2 plots the error rates obtained by twittersaand unigrams versus the number of training ex amplesthe curve corresponding to twittersa showed that it achieved good performances evenwith a small training data set and kept almost constant as more examples were added to the training data whereas for unigrams the error rate de creasedfor instance with only 2000 tweets as training data twittersa obtained 20 of error rate whereas unigrams 345these numbers show that our generic representation of tweets produces models that are able to generalize even with a few examples43 polarity detection evaluationwe provide the results for polarity detectionin table 5the best performance was ob tained by twittersa which combines results of the 3 classifiers respectively trained from each source by taking the output by themost confident classifier as the final predictiontwittersa was followed by twittersa and twittersa both cre 41 ated from a single training datathis result shows that computing the prior polarity of the words based on the training data twittersabrings some improvement for this tasktwit tersa obtained the highest error rate among the twittersa approachesthis implies that in our scenario the best way of combining the merits of the individual classifiers is by using a confidence score approachunigrams also achieved comparable perfor manceshowever when reducing the size of thetraining data the performance gap between twit tersa and unigrams is much widerfigure 3shows the error rate of both approaches3 in 
function of the training sizesimilar to subjectivity detection the training size does not have much influ ence in the error rate for twittersahowever forunigrams it decreased significantly as the train ing size increasedfor instance for a training size with 2000 tweets the error rate for unigrams was 46 versus 238 for our approachas forsubjectivity detection this occurs because our features are in fact able to capture a more general rep resentation of the tweetsanother advantage of twittersa over uni grams is that it produces more robust modelstoillustrate this we present the error rates of uni grams and twittersa where the training data is composed by data from each source in isolationfor the tweetfeel website where data is very bi ased unigrams obtained an error rate of 445 whereas over a sample of the same size of the combined training data itobtained an error rate of around 30our ap proach also performed worse over this data thanthe general one but still had a reasonable er ror rate 251regarding the twendz website which is the noisiest one unigrams also obtained a poor performance comparing itagainst its performance over a sample of the general data with a same size our approach on the other hand was not much influenced by the noise finally since the data qual ity provided by twittersentiment is better than the3for this experiment we used the twittersa con figurationapproach error rate twittersa 181 twittersa 199 unigrams 276 reviewsa 32 table 4 results for subjectivity detectionapproach error rate twittersa 187 twittersa 194 twittersa 20 twittersa 226 unigrams 209 reviewsa 217 unigramsts 243 table 5 results for polarity detectionsite training size twittersa unigrams tweetfeel 13120 251 445 twendz 78025 229 323 twittersentiment 59578 22 234 table 6 training data size for each source and error rates obtained by classifiers built from them0 5 10 15 20 25 30 35 40 0 20000 40000 60000 80000 100000 120000 140000 160000 180000 200000error rate training size unigramstwittersa figure 2 influence of the training data size in the error rate of subjectivity detection using unigrams and twittersaprevious sources there was not much impact over both classifiers created from itfrom this analysis over real data we can con clude that our approach produces an effective polarity classifier even when only a small number of training data is available a robust model tobias and noise in the training data and combining data sources with such distinct characteris tics as our data analysis in section 3 pointed out is effective42 0 10 20 30 40 50 0 20000 40000 60000 80000 100000 120000 140000 160000error rate training size unigramstwittersa figure 3 influence of the training data size in the error rate of polarity detection using unigrams and twittersathere is a rich literature in the area of sentiment detection most of these ap proaches try to perform this task on large texts aseg newspaper articles and movie reviewsan other common characteristic of some of them isthe use of ngrams as features to create their mod elsfor instance pang and lee explores the fact that sentences close in a text might share the same subjectivity to create a better subjectivity detector and similar to uses unigrams as features for the polar ity detectionhowever these approaches do not obtain a good performance on detecting sentimenton tweets as we showed in section 4 mainly be because tweets are very short messagesin addition to that since they use a raw word representation they are more sensible to bias and noise and needa much 
higher number of examples in the train ing data than our approach to obtain a reasonable performancethe web sources used in this paper and some other websites provide sentiment detection for tweetsa great limitation to evaluate them is they do not make available how their classification was builtone exception is twittersentiment for instance which considers tweets with good emoticons as positive examples and tweets with bad emoticons as negative examples for the training data and builds a classifier using unigrams and bigrams as featureswe showed in section 4 that our approach works better than theirs for this problem obtaining lower error rateswe have presented an effective and robust sen timent detection approach for twitter messages which uses biased and noisy labels as input to build its modelsthis performance is due to the fact that our approach creates a more abstract representation of these messages instead of usinga raw word representation of them as some previous approaches and although noisy and bi ased the data sources provide labels of reasonablequality and since they have different bias com bining them also brought some benefitsthe main limitation of our approach is the cases of sentences that contain antagonistic sentimentsas future work we want to perform a more fine grained analysis of sentences in order to identifyits main focus and then based the sentiment clas sification on it
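the classifiers described above work on an abstract tweet representation: meta-information per word (pos tag, prior subjectivity and prior polarity, with the polarity switched when a negation precedes the word) plus tweet-syntax counts (retweets, replies, hashtags, links, punctuation, emoticons, upper-case words), each divided by the number of words in the tweet. the sketch below is a hedged approximation; the emoticon pattern, the negation word list and the lexicon format are assumptions rather than the authors' exact resources.

    import re

    def tweet_syntax_features(tweet):
        # counts of twitter-specific markers, each normalised by tweet length
        toks = tweet.split()
        n = max(1, len(toks))
        return {
            "retweet":  tweet.count("RT") / n,
            "reply":    sum(t.startswith("@") for t in toks) / n,
            "hashtag":  sum(t.startswith("#") for t in toks) / n,
            "link":     sum(t.startswith("http") for t in toks) / n,
            "exclaim":  tweet.count("!") / n,
            "question": tweet.count("?") / n,
            "upper":    sum(t.isupper() and len(t) > 1 for t in toks) / n,
            "emoticon": len(re.findall(r"[:;=][-']?[)(DPp]", tweet)) / n,
        }

    def meta_features(tokens, pos_dict, subj_lexicon,
                      negations=("not", "no", "never", "n't")):
        # pos_dict maps word -> pos tag, subj_lexicon maps word -> (subjectivity, polarity);
        # both stand in for the pos dictionary and the ~8000-word subjectivity lexicon
        n = max(1, len(tokens))
        feats = {}

        def bump(name):
            feats[name] = feats.get(name, 0.0) + 1.0 / n

        prev_negation = False
        for w in tokens:
            lw = w.lower()
            bump("pos=" + pos_dict.get(lw, "UNK"))
            if lw in negations:
                prev_negation = True
                continue
            entry = subj_lexicon.get(lw)
            if entry:
                subj, pol = entry                     # e.g. ("strongsubj", "positive")
                if prev_negation and pol in ("positive", "negative"):
                    pol = "negative" if pol == "positive" else "positive"
                bump("polarity=" + pol)
                bump("subjectivity=" + subj)
            prev_negation = False
        return feats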
C10-2005
robust sentiment detection on twitter from biased and noisy data. in this paper we propose an approach to automatically detect sentiment in twitter messages that explores some characteristics of how tweets are written and meta-information of the words that compose these messages. moreover, we leverage sources of noisy labels as our training data; these noisy labels were provided by a few sentiment detection websites over twitter data. in our experiments we show that, since our features are able to capture a more abstract representation of tweets, our solution is more effective than previous ones and also more robust regarding biased and noisy data, which is the kind of data provided by these sources. we propose a two-step approach to classify the sentiment of tweets using svm classifiers with abstract features.
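the polarity step above also re-estimates each lexicon word's prior polarity from the training data instead of trusting the generic list: pol_pos(w) is the fraction of w's occurrences that fall in the positive training set and pol_neg(w) = 1 - pol_pos(w), as in the "spot" example. a minimal sketch, assuming tweets are given as token lists:

    from collections import Counter

    def learn_prior_polarity(positive_tweets, negative_tweets, lexicon_words):
        # pol_pos(w) = #occurrences of w in the positive set / total occurrences
        # in the positive and negative sets; unseen lexicon words keep the
        # value given by the original lexicon
        pos_counts = Counter(t for tweet in positive_tweets for t in tweet)
        neg_counts = Counter(t for tweet in negative_tweets for t in tweet)
        priors = {}
        for word in lexicon_words:
            p, q = pos_counts[word], neg_counts[word]
            if p + q == 0:
                continue
            priors[word] = {"pos": p / (p + q), "neg": q / (p + q)}
        return priors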
enhanced sentiment learning using twitter hashtags and smileys automated identification of diverse sen timent types can be beneficial for manynlp systems such as review summariza tion and public media analysis in some ofthese systems there is an option of assign ing a sentiment value to a single sentence or a very short text in this paper we propose a supervised sentiment classification framework whichis based on data from twitter a popu lar microblogging service by utilizing50 twitter tags and 15 smileys as sen timent labels this framework avoids theneed for labor intensive manual annotation allowing identification and classifi cation of diverse sentiment types of shorttexts we evaluate the contribution of different feature types for sentiment classification and show that our framework successfully identifies sentiment types of untagged sentences the quality of the senti ment identification was also confirmed byhuman judges we also explore dependencies and overlap between different sen timent types represented by smileys and twitter hashtags a huge amount of social media including newsforums product reviews and blogs contain nu merous sentimentbased sentencessentiment is defined as a personal belief or judgment that both authors equally contributed to this paperis not founded on proof or certainty1senti ment expressions may describe the mood of thewriter or the opin ion of the writer towards some specific entity automated identification of diverse sentimenttypes can be beneficial for many nlp systems such as review summarization systems dia logue systems and public media analysis systemssometimes it is directly requested by the user toobtain articles or sentences with a certain senti ment value in some other cases obtaining sentiment value can greatly enhance information extraction tasks like review summarizationwhilethe majority of existing sentiment extraction sys tems focus on polarity identification or extraction of a handful of prespecified mood labels there are many useful and relatively unexplored sentiment typessentiment extraction systems usually require an extensive set of manually supplied sentiment words or a handcrafted sentimentspecific datasetwith the recent popularity of article tagging some social media types like blogs allow users to add sentiment tags to articlesthis allows to use blogsas a large userlabeled dataset for sentiment learning and identificationhowever the set of sentiment tags in most blog platforms is somewhat re strictedmoreover the assigned tag applies to the whole blog post while a finer grained sentiment extraction is needed with the recent popularity of the twitter micro blogging service a huge amount of frequently 1wordnet 21 definitions241selfstanding short textual sentences became openly available for the research communitymany of these tweets contain a wide vari ety of userdefined hashtagssome of these tagsare sentiment tags which assign one or more senti ment values to a tweetin this paper we propose away to utilize such tagged twitter data for classi fication of a wide variety of sentiment types from textwe utilize 50 twitter tags and 15 smileys assentiment labels which allow us to build a classifier for dozens of sentiment types for short tex tual sentencesin our study we use four different feature types for sentiment classification and evaluate the contribution of each feature type for this taskwe show that our framework successfully identi fies sentiment types of the untagged tweetswe confirm the quality of our algorithm using human judgeswe 
also explore the dependencies and overlap between different sentiment types represented by smileys and twitter tagssection 2 describes related worksection 3 details classification features and the algorithm while section 4 describes the dataset and labelsautomated and manual evaluation protocols and results are presented in section 5 followed by a short discussionsentiment analysis tasks typically combine twodifferent tasks identifying sentiment expres sions and determining the polarity of the expressed sentimentthese tasks are closely related as the purpose of most works is to determine whether a sentence bears a positive or a negative opinion about the target of the sentimentseveral works use lexical re sources and decide whether a sentence expressesa sentiment by the presence of lexical items others combine additional feature types for this decision it was suggested that sentiment words may havedifferent senses thus word sense disambiguation can improve sentiment analysis systems all works mentioned above identifyevaluative sentiment expressions and their polar ityanother line of works aims at identifying abroader range of sentiment classes expressing various emotions such as happiness sadness boredom fear and gratitude regardless positive or negative evaluationsmihalcea and liu derive lists of words and phrases with happiness factor from a corpus of blog posts where each post is annotated by the blogger with a mood labelbalog et al use the mood annotation of blog posts coupled with news datain order to discover the events that drive the dom inant moods expressed in blogsmishne used an ontology of over 100 moods assigned to blog posts to classify blog texts according tomoodswhile classifies a blog entry assign a hap piness factor to specific words and expressionsmishne used a much broader range of moodsstrapparava and mihalcea classify blogposts and news headlines to six sentiment cate gorieswhile most of the works on sentiment analysis focus on full text some works address senti ment analysis in the phrasal and sentence level see among othersonly a few studies analyze the sentiment and polarity of tweets targeted at major brandsjansenet al used a commercial sentiment analyzer as well as a manually labeled corpusdavi dov et al analyze the use of the sarcasmhashtag and its contribution to automatic recognition of sarcastic tweetsto the best of our knowledge there are no works employing twitter hashtags to learn a wide range of emotions and the re lations between the different emotions242below we propose a set of classification featuresand present the algorithm for sentiment classifica tion31 classification featureswe utilize four basic feature types for sentimentclassification single word features ngram fea tures pattern features and punctuation featuresfor the classification all feature types are com bined into a single feature vector311 wordbased and ngrambased features each word appearing in a sentence serves as a binary feature with weight equal to the inverted count of this word in the twitter corpuswe also took each consecutive word sequence containing25 words as a binary ngram feature using a similar weighting strategythus ngram features al ways have a higher weight than features of their component words and rare words have a higher weight than common wordswords or ngramsappearing in less than 05 of the training set sen tences do not constitute a featureascii smileys and other punctuation sequences containing two or more consecutive punctuation symbols were used as singleword 
featuresword features alsoinclude the substituted metawords for urls ref erences and hashtags 312 patternbased featuresour main feature type is based on surface pat ternsfor automated extraction of patterns we followed the pattern definitions given in we classified words into highfrequency words and content words a word whose corpus frequency is more than fh is considered to be a hfwwe estimate word frequency from the train ing set rather than from an external corpusunlike we consider allsingle punctuation characters or consecutive se quences of punctuation characters as hfwswe also consider url ref and hashtag tags as hfws for pattern extractionwe define a pattern as an ordered sequence of high frequency words and slots for content wordsfollowing the fh and fc thresholds were set to 1000 words per million and 100 words per million 2the patterns allow 26 hfws and 15 slots forcwsto avoid collection of patterns which capture only a part of a meaningful multiword ex pression we require patterns to start and to end with a hfwthus a minimal pattern is of the form hfw cw slot hfwfor each sentenceit is possible to generate dozens of different pat terns that may overlapas with words and ngram features we do not treat as features any patterns which appear in less than 05 of the training set sentencessince each feature vector is based on a singlesentence we would like to allow approximate pattern matching for enhancement of learn ing flexibilitythe value of a pattern feature is estimated according the one of the following four scenarios3 1 count exact match all the pattern components appear in the sentence in correct order without any additional wordscount sparse match same as exact match but additional nonmatching words can be inserted between pattern componentsn ncount incomplete match only n 1 of n pattern components appear in the sentence while some nonmatching words can be inserted inbetweenat least one of the appearing components should be a hfw0 no match nothing or only a single pattern component appears in the sentence0 1 and 0 1 are parameters we use to assign reduced scores for imperfect matchessince the patterns we use are relatively long ex act matches are uncommon and taking advantageof partial matches allows us to significantly re duce the sparsity of the feature vectorswe used 01 in all experimentsthis pattern based framework was proven effi cient for sarcasm detection in for a short discussion3as with word and ngram features the maximal featureweight of a pattern p is defined as the inverse count of a pat tern in the complete twitter corpus243 davidov et al 2010313 efficiency of feature selection since we avoid selection of textual features which have a training set frequency below 05 we perform feature selection incrementally on each stage using the frequencies of the features obtained during the previous stagesthus first we estimate the frequencies of single words in the training set then we only consider creationof ngrams from single words with sufficient frequency finally we only consider patterns composed from sufficiently frequent words and n grams314 punctuationbased features in addition to patternbased features we used the following generic features sentence length in words number of characters in the sentence number of characters in the sentence number of quotes in the sentence and number of capitalizedall capitals wordsin the sentenceall these features were normal ized by dividing them by the thus the maximal weight of each of these features is equal to the averaged weight 
of a single patternwordngram feature32 classification algorithmin order to assign a sentiment label to new exam ples in the test set we use a knearest neighborslike strategywe construct a feature vec tor for each example in the training and the test setwe would like to assign a sentiment class toeach example in the test setfor each feature vec tor v in the test set we compute the euclidean distance to each of the matching vectors in the training set where matching vectors are defined as ones which share at least one patternngramword feature with vlet ti i 1 k be the k vectors with low est euclidean distance to v4 with assigned labels li i 1 k we calculate the mean distance d for this set of vectors and drop from the set up to five outliers for which the distance was more then twice the mean distancethe label assigned 4we used k 10 for all experimentsto v is the label of the majority of the remaining vectorsif a similar number of remaining vectors have different labels we assigned to the test vector the most frequent of these labels according to their frequency in the datasetif there are no matching vectors found for v we assigned the default no sentimentlabel since there is significantly more nonsentiment sentences than sentiment sentences in twitterin our experiments we used an extensive twit ter data collection as training and testing setsin our training sets we utilize sentiment hashtags andsmileys as classification labelsbelow we de scribe this dataset in detail41 twitter datasetwe have used a twitter dataset generously pro vided to us by brendan oconnorthis dataset includes over 475 million tweets comprising roughly 15 of all public nonlow qualitytweets created from may 2009 to jan 2010tweets are short sentences limited to 140 utf 8 charactersall nonenglish tweets and tweets which contain less than 5 proper english words5 were removed from the datasetapart of simple text tweets may contain urladdresses references to other twitter users or a content tags assigned by the tweeter which we use as labels for our supervised clas sification frameworktwo examples of typical tweets are ipad sucks and 6510 people agreesee more on ipad sucks page httpjmp4oiyyg and pay nomind to those who talk behind you are back it sim ply means that you are 2 steps aheadihatequotesnote that in the first example the hashtagged words are a grammatical part of the sentence while ihate qoutes of the second example is a mere sentiment label and not part of the sentencealso note that hashtags can be composed of multiple words 5identification of proper english words was based on an available wnbased english dictionary 244 category of tags agreement strong sentiment 52 87 likely sentiment 70 66 contextdependent 110 61 focused 45 75 no sentiment 3564 99 table 1 annotation results for the 3852 mostfrequent tweeter tagsthe second column displays the av erage number of tags and the last column shows of tags annotated similarly by two judgesduring preprocessing we have replaced url links hashtags and references by urlreftag metawordsthis substitution obviously had some effect on the pattern recognition phase however our algorithm is robust enough to overcome this distortion42 hashtagbased sentiment labelsthe twitter dataset contains above 25 million dif ferent userdefined hashtagsmany tweets include more than a single tag and 3852 frequenttags appear in more than 1000 different tweetstwo human judges manually annotated these frequenttags into five different categories 1 strong sen timent 2 most likely sentiment 3 
contextdependent sentiment 4 fo cused sentiment and 5 no sentiment table 1 shows annotation results and the percentage of similarly assigned values for each categorywe selected 50 hashtags annotated 1or 2by both judgesfor each of these tags we automatically sampled 1000 tweets resulting in 50000 la beled tweetswe avoided sampling tweets which include more than one of the sampled hashtagsas a nosentiment dataset we randomly sampled 10000 tweets with no hashtagssmileys from thewhole dataset assuming that such a random sam ple is unlikely to contain a significant amount of sentiment sentences43 smileybased sentiment labelswhile there exist many officiallists of possibleascii smileys most of these smileys are infrequent or not commonly accepted and used as sen timent indicators by online communitieswe used the amazon mechanical turk service in order to obtain a list of the most commonly used and unambiguous ascii smileyswe asked each of ten amt human subjects to provide at least 6 commonly used ascii moodindicating smileystogether with one or more singleword descrip tions of the smileyrelated mood statefrom the obtained list of smileys we selected a subset of 15 smileys which were provided by at least threehuman subjects described by at least two human subject using the same singleword descrip tion and appear at least 1000 times in our twitter datasetwe then sampled 1000 tweets foreach of these smileys using these smileys as sentiment tags in the sentiment classification frame work described in the previous sectionthe purpose of our evaluation was to learn how well our framework can identify and distinguishbetween sentiment types defined by tags or smileys and to test if our framework can be successfully used to identify sentiment types in new un tagged sentences51 evaluation using crossvalidationin the first experiment we evaluated the consistency and quality of sentiment classification us ing crossvalidation over the training setfullyautomated evaluation allowed us to test the performance of our algorithm under several dif ferent feature settings pnwmpt pnwmpt pnwmpt pnwmpt and full where stands for utilizationomission of the followingfeature types pnpunctuation wword mn grams ptpatternsfull stands for utilization of all feature typesin this experimental setting the training set was divided to 10 parts and a 10fold cross validation test is executedeach time we use 9 parts as thelabeled training data for feature selection and con struction of labeled vectors and the remaining part is used as a test setthe process was repeated tentimesto avoid utilization of labels as strong fea tures in the test set we removed all instances of involved label hashtagssmileys from the tweets used as the test set245 setup smileys hashtags random 006 002 pnwmpt 016 006 pnwmpt 025 015 pnwmpt 029 018 pnwmpt 05 026 full 064 031 table 2 multiclass classification results for smileys andhashtagsthe table shows averaged harmonic fscore for 10 fold cross validation51 sentiment classes were used for hashtags multiclass classificationunder multiclass classification we attempt to assign a single label to each of vectors in the test setnote that the random baseline for this task is 002 for hashtags table 2 shows the perfor mance of our framework for these tasksresults are significantly above the random baseline and definitely nontrivial considering theequal class sizes in the test setwhile still relatively low we observe much better performance forsmileys which is expected due to the lower num ber of sentiment typesthe 
relatively low performance of hashtags can be explained by ambiguity of the hashtags andsome overlap of sentimentsexamination of clas sified sentences reveals that many of them can be reasonably assigned to more than one of the available hashtags or smileysthus a tweet imreading stuff that i do not understand againha hahawth am i doingmay reasonably matchtags sarcasm damn haha lol humor an gry etc close examination of the incorrectly classified examples also reveals that substantialamount of tweets utilize hashtags to explicitly in dicate the specific hashtagged sentiment in these cases that no sentiment value could be perceived by readers unless indicated explicitly eg de blob game review posted on our blogfunobviously our framework fails to process such cases and captures noise since no sentiment datais present in the processed text labeled with a spe cific sentiment labelbinary classificationin the binary classification experiments we classified a sentence as either appropriate for a particular tag or as not bear hashtags average hate jealous cute outrageous pnwmpt 057 06 055 063 053 pnwmpt 064 064 067 066 06 pnwmpt 069 066 067 069 064 pnwmpt 073 075 07 069 069 full 08 083 076 071 078 smileys average x sentiment hashtags ing any sentiment6for each of the 50 labelsfor hashtags we have performed a bi nary classification when providing as trainingtestsets only positive examples of the specific senti ment label together with nonsentiment examplestable 3 shows averaged results for this case and specific results for selected tagswe can see thatour framework successfully identifies diverse sentiment typesobviously the results are much bet ter than those of multiclass classification and the observed 08 precision confirms the usefulnessof the proposed framework for sentiment classifi cation of a variety of different sentiment typeswe can see that even for binary classification settings classification of smileylabeled sentencesis a substantially easier task compared to classifi cation of hashtaglabeled tweetscomparing the contributed performance of different feature typeswe can see that punctuation word and pattern features each provide a substantial boost for classi fication quality while we observe only a marginalboost when adding ngrams as classification featureswe can also see that pattern features contribute the performance more than all other fea tures together52 evaluation with human judgesin the second set of experiments we evaluated our framework on a test set of unseen and untaggedtweets comparing its output to tags assigned by human judgeswe applied our framework with its full setting learning the sentiment tags fromthe training set for hashtags and smileys and executed the framework on the reduced tweeter dataset allowingit to identify at least five sentences for each senti ment classin order to make the evaluation harsher we re moved all tweets containing at least one of the relevant classification hashtags for each of the resulting 250 sentences for hashtagsand 75 sentences for smileys we generated an as signment taskeach task presents a human judgewith a sentence and a list of ten possible hash tagsone tag from this list was provided by ouralgorithm 8 other tags were sampled from the re maining 49 available sentiment tags and the tenth tag is from the list of frequent nonsentiment tags the human judge was requested to select the 02 most appropriate tags from the listallowing assignment of multiple tags conforms to the observation that even short sentences may express several 
different sentimenttypes and to the observation that some of the selected sentiment tags might express similar senti ment typeswe used the amazon mechanical turk service to present the tasks to englishspeaking subjectseach subject was given 50 tasks for twitter hash tags or 25 questions for smileysto ensure the quality of assignments we added to each test fivemanually selected clearly sentiment bearing as signment tasks from the tagged twitter sentences used in the training seteach set was presented to four subjectsif a human subject failed to provide the intended correctanswer to at least two of the control set questions we reject himher from the calculationin our evaluation the algorithmis considered to be correct if one of the tags se lected by a human judge was also selected by thealgorithmtable 4 shows results for human judge ment classificationthe agreement score for this task was 041 table 4 shows that the majority of tags selectedby humans matched those selected by the algo rithmprecision of smiley tags is substantially setup correct no sentiment control smileys 84 6 92 hashtags 77 10 90table 4 results of human evaluationthe second col umn indicates percentage of sentences where judges find noappropriate tags from the listthe third column shows per formance on the control sethashtags happy sad crazy boredsad 067 crazy 067 025 bored 005 042 035 fun 121 006 117 043 smileys 335 many sentences may express several types of sentiment we would like to estimate such intersentiment dependencies and overlap automatically from the labeled datawe use two different methods for overlap estimation tag cooccurrence and feature overlap531 tag cooccurrencemany tweets contain more than a single hashtag or a single smiley typeas mentioned we ex clude such tweets from the training set to reduce ambiguityhowever such tag coappearances canbe used for sentiment overlap estimationwe cal culated the relative cooccurrence frequencies of some hashtags and smileystable 5 shows some of the observed coappearance ratiosas expected some of the observed tags frequently coappear with other similar tags247 hashtags happy sad crazy boredsad 128 crazy 142 35 bored 24 111 21 fun 196 21 15 44 smileys 359 this is possibly due to frequently expressed contrast sentiment types in the same sentence a fascinating phenomenareflecting the great complexity of the human emo tional state 532 feature overlapin our framework we have created a set of fea ture vectors for each of the twitter sentiment tagscomparison of shared features in feature vector sets allows us to estimate dependencies betweendifferent sentiment types even when direct tag cooccurrence data is very sparsea feature is considered to be shared between two different senti ment labels if for both sentiment labels there is at least a single example in the training set whichhas a positive value of this featurein order to automatically analyze such dependencies we calcu late the percentage of sharedwordngrampattern features between different sentiment labelstable 6 shows the observed feature overlap values for selected sentiment tagswe observe the trend of results obtained by comparison of shared feature vectors is similar to those obtained by means of label cooccurrence although the numbers of the shared features arehigherthese results demonstrating the patternbased similarity of conflicting sometimes contradicting emotions are interesting from a psycho logical and cognitive perspectivewe presented a framework which allows an au tomatic identification and classification 
of various sentiment types in short text fragments which is based on twitter dataour framework is a supervised classification one which utilizes twitter hashtags and smileys as training labelsthe substantial coverage and size of the processed twitter data allowed us to identify dozens of sentiment types without any laborintensive manually labeled training sets or preprovided sentimentspecific features or sentiment wordswe evaluated diverse feature types for sentiment extraction including punctuation patterns words and ngrams confirming that each feature type contributes to the sentiment classification frameworkwe also proposed two different methods which allow an automatic identification of sentiment type overlap and interdependenciesin the future these methods can be used for automated clustering of sentiment types and sentiment dependency ruleswhile hashtag labels are specific to twitter data the obtained feature vectors are not heavily twitterspecific and in the future we would like to explore the applicability of twitter data for sentiment multiclass identification and classification in other domains
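The word and n-gram features described in the C10-2028 text above (binary features weighted by the inverse corpus count, consecutive n-grams of 2 to 5 tokens, and any feature occurring in fewer than 0.5% of training sentences discarded) can be sketched as follows. This is a minimal illustration rather than the authors' code: the function names are invented, the corpus frequency table and the tokenized training sentences are assumed to be supplied by the caller, and smiley and meta-token handling is left out.

```python
# Sketch of the word / n-gram binary features with inverse-count weights.
from collections import Counter

def candidate_features(tokens, max_n=5):
    feats = set(tokens)                                   # single-word features
    for n in range(2, max_n + 1):                         # consecutive n-grams of 2..5 tokens
        feats.update(" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return feats

def build_vocabulary(train_sentences, min_fraction=0.005):
    """Keep only features that occur in at least 0.5% of the training sentences."""
    doc_freq = Counter()
    for toks in train_sentences:
        doc_freq.update(candidate_features(toks))
    cutoff = min_fraction * len(train_sentences)
    return {f for f, c in doc_freq.items() if c >= cutoff}

def feature_vector(tokens, vocabulary, corpus_counts):
    """Sparse vector: a present feature gets weight 1 / (its corpus count), so
    rare words and longer n-grams weigh more than common single words."""
    vec = {}
    for f in candidate_features(tokens) & vocabulary:
        vec[f] = 1.0 / max(corpus_counts.get(f, 1), 1)
    return vec
```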
C10-2028
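A companion sketch for the same paper's pattern-based features: tokens are split into high-frequency words and content words, patterns must start and end with a HFW and use 2 to 6 HFWs with 1 to 5 CW slots, and matches are scored as exact, sparse, or incomplete with alpha and gamma set to 0.1 as in the text. The greedy in-order alignment and the skipping of words that are neither HFW nor CW are simplifying assumptions of this sketch, not claims about the authors' implementation.

```python
# Sketch of the HFW/CW surface patterns and the four match scenarios.
FH, FC = 1000, 100                  # per-million thresholds from the paper
META = {"URL", "REF", "HASHTAG"}    # substituted meta-tokens, treated as HFWs

def is_hfw(tok, fpm):
    # punctuation sequences and meta tokens also count as high-frequency words
    return fpm.get(tok, 0) > FH or tok in META or not any(c.isalnum() for c in tok)

def is_cw(tok, fpm):
    return tok not in META and any(c.isalnum() for c in tok) and fpm.get(tok, 0) < FC

def extract_patterns(tokens, fpm, max_len=8):
    """Patterns keep HFWs literally and replace CWs by a 'CW' slot; they must
    start and end with a HFW and contain 2-6 HFWs and 1-5 CW slots."""
    comps = []
    for t in tokens:
        if is_hfw(t, fpm):
            comps.append(t)
        elif is_cw(t, fpm):
            comps.append("CW")
        # mid-frequency words are simply skipped in this sketch
    patterns = set()
    for i in range(len(comps)):
        for j in range(i + 2, min(i + max_len, len(comps)) + 1):
            p = tuple(comps[i:j])
            hfws = sum(1 for c in p if c != "CW")
            cws = len(p) - hfws
            if p[0] != "CW" and p[-1] != "CW" and 2 <= hfws <= 6 and 1 <= cws <= 5:
                patterns.add(p)
    return patterns

def match_score(pattern, tokens, fpm, alpha=0.1, gamma=0.1):
    """1 = exact match, alpha = sparse match, gamma*n/N = incomplete match, 0 = none."""
    def fits(comp, tok):
        return is_cw(tok, fpm) if comp == "CW" else tok == comp
    positions, k = [], 0
    for i, tok in enumerate(tokens):            # greedy in-order alignment
        if k < len(pattern) and fits(pattern[k], tok):
            positions.append(i)
            k += 1
    n, N = len(positions), len(pattern)
    if n == N:
        return 1.0 if positions[-1] - positions[0] == N - 1 else alpha
    if n > 1 and any(pattern[i] != "CW" for i in range(n)):
        return gamma * n / N
    return 0.0
```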
enhanced sentiment learning using twitter hashtags and smileysautomated identification of diverse sentiment types can be beneficial for many nlp systems such as review summarization and public media analysisin some of these systems there is an option of assigning a sentiment value to a single sentence or a very short textin this paper we propose a supervised sentiment classification framework which is based on data from twitter a popular microblogging serviceby utilizing 50 twitter tags and 15 smileys as sentiment labels this framework avoids the need for labor intensive manual annotation allowing identification and classification of diverse sentiment types of short textswe evaluate the contribution of different feature types for sentiment classification and show that our framework successfully identifies sentiment types of untagged sentencesthe quality of the sentiment identification was also confirmed by human judgeswe also explore dependencies and overlap between different sentiment types represented by smileys and twitter hashtagswe used 50 hash tags and 15 emoticons as noisy labels to create a dataset for twitter sentiment classification
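The label-assignment procedure described in Section 3.2 of the paper above (a k-nearest-neighbours-like strategy over sparse vectors, with matching defined as sharing at least one feature, outlier dropping, and frequency-based tie-breaking) might look roughly like this. The sparse-dict vector representation, the default "no-sentiment" string, and all names are illustrative assumptions.

```python
# Sketch of the kNN-like sentiment label assignment (k = 10 in the paper).
import math
from collections import Counter

def euclidean(u, v):
    keys = set(u) | set(v)
    return math.sqrt(sum((u.get(k, 0.0) - v.get(k, 0.0)) ** 2 for k in keys))

def assign_label(test_vec, train_vecs, train_labels, label_freq, k=10):
    """train_vecs: sparse dicts; train_labels: parallel labels;
    label_freq: Counter of label frequencies in the full dataset for tie-breaks."""
    # Only training vectors sharing at least one feature with the test vector match.
    matches = [(euclidean(test_vec, tv), lab)
               for tv, lab in zip(train_vecs, train_labels)
               if set(tv) & set(test_vec)]
    if not matches:
        return "no-sentiment"                 # default when nothing matches
    matches.sort(key=lambda x: x[0])
    nearest = matches[:k]
    mean_d = sum(d for d, _ in nearest) / len(nearest)
    kept, dropped = [], 0
    for d, lab in nearest:                    # drop up to 5 outliers beyond 2 * mean distance
        if d > 2 * mean_d and dropped < 5:
            dropped += 1
        else:
            kept.append(lab)
    counts = Counter(kept)
    top = max(counts.values())
    tied = [lab for lab, c in counts.items() if c == top]
    # Ties go to the label that is most frequent in the whole dataset.
    return max(tied, key=lambda lab: label_freq.get(lab, 0))
```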
categorial unification grammars categorial unification grammars embody the essential properties of both unification and categorial grammar formalisms their efficient and uniform way of encoding linguistic knowledge in wellunderstood and widely used representations makes them attractive for computational applications and for linguistic research in this paper the basic concepts of cugs and simple examples of their application will be presented it will be argued that the strategies and potentials of cugs justify their further exploration i the wider context of research on unification grammars approaches to selected linguistic phenomena such as longdistance dependencies adjuncts word order and extraposition are discussed the work on merging strategies from unification grammars and categorial grammars has its origins in several research efforst that have been pursued in parallelone of them is the grammar development on the patr system at srifor quite a while now i have been using the excellent facilities of patr for the design and testing of experimental cugssuch grammars currently run on two patr implementations stuart shieber zetalisp version on the symbolics 3600 and lauri karttunen interlispd wrsion on the xerox 1109the work on cugs has influenced our efforts to develop a larger patr grammar and will do so even more in the futureon the theoretical side this work is part of ongoing research on such topics as word order variation modification and german syntax within projects at sri and csli the structure of the paper eflects the diverse nature of the enterprisein the first section i will introduce the basic notions of cugs and demonstrate them through examples in patr notationthe second section discusses the motivation for this work and some of its theoretical implicationsthe third section sketches a linguistically motivated cug framework with a strong lexical syntax that accomodates word order variationthe paper concludes with a brief discussion of possible cug approaches tolongdistance d pendencies1basic notions of categorial unificationgrammars 12unif ication grammars and categorialgrammars both terms unification grammar and categorial grammar stand for whole families of related grammar formalisms whose basic notions are widely knownl yet for the characterization f the class of formalisms i want to discuss it will be useful to review the most central concepts of both ug and cgunification grammar formalisms employ complex feature structures as their syntactic representationsthese structures encode partial information about constituentseither term or graph unification is utilized as the main operation for checking propagating and merging of the information in these complex representationsmost unification grammars also use the complex feature structures for the linking of syntactic and semantic informationin traditional categorial grammars all information about possible syntactic ombinations of constituents is encoded in their categoriesthose grammars allow only binary combinationsone of the two combined constituents the functor encodes the combination funtion the other constituent serves as the argument to this functioninstead ot7 phrase structure rules the grammar contains one or in some formalisms two combination rules that combine a functor and an argument by applying the function encoded in the functor to the argument constituentmost categorial grammars only combine constituents whose terminal strings concatenate in the input string but this need not be soin most categorial grammar 
formalisms it is assumed that the syntactic functorargument structure in the corresponding compositional semantics187 there are usually two types of grammatical categories in a categorial grammar basic and derived onesbasic categories are just category symbols derived categories are functions from one category to anothera derived category that encodes a function from category a to category b might be written ba if the functor combines with an argument to its right or b if it expects the argument to its leftthus if we assume just two basic categories n and s then ns sn ns sn n etc are also categoriesnot all of these categories will ever occur in the derivation of sentencesthe set of actually occurring categories depends on the lexical categories of the languageassume the following simple sample grammar basic categories n s lexical categories n fn the grammar is used for the sample derivation in peter likes paul n fin n skn s it should be clear from my brief description that the defining characteristics of unification grammar have nothing to do with the ones of categorial grammarwe will see that the properties of both grammar types actually complement each other quite wetl12a sample cug in patr notat ion since the first categorial unification grammars were written in the patr formalism and tested on the patr systems implemented at sri and since patr is especially well suited for the emulation of other grammar formalisms i will use its notationthe representations in patr are directed acyclic graphs 2 rules have two parts a head and a bodythe head is a contextfree rewrite rule and the body is a daghere is an example a simple rule that forms a sentence by combining anoun phrase with a verb phrase188 head xo x1 x2 body in unification otation s np vp body in graph notation xo r s np the rule states that two constituents x1 and x2 can combine to form a constituent x0 if the terminal string covered by x1 immediately precedes the terminal string of x2 and if the dags of x0 x1 and x2 unify with the x0 x1 and x2 subgraphs of the rule body respectivelyi will now show the most straightforward encoding of a categorial grammar in this notationthere are two types of constituent graphsconstituent graphs for basic categories are of the following form n s of course there might be more features associated with the constituent oe 7 n s finite 3 sg derived constituents have graphs of the following form arg backward functional application value functor argument leftthis is the graph associated with the vp likes paul in graph notation left agr capers form catpersnu m s finite n 3 sg it corresponds to the derivedcategory s mboh s n form finite pers 3 num sg and are the rules that combine constituentsas in tradit ional categorial grammars two such rules sufice forward functional application value functor argument rightin graph notation val you ej functr l rgu right ment val you e j j rg you ment left if backward functional application is used to combine the constituents peter and likes paul the result is a finite sentencehowever if the same rule is applied to the identical constituents likes paul and likes paul again a finite sentence is obtainedhis is so because the graph for likes paul actually unifies with the value of arg in the same graphthis can be easily remedied by modifying the graph for the vp slightlyby st ipulat ing that the argument must not have an unfilled argument position one can rule out derivcd categories as subject arguments tbr the vp 0oi etum s finite n 3 sg 13extens ions to the basic formal i 
smin this subsection want to discuss very briefly a few extensions of the basic model that make it more suitable for the encoding of natural language rammarsthe first one is the sorting of fimctors according to their own syntactic categorythis move might be described alternat ively as defining the type of a constituent as being defined by both a set of syntactic 189 attributes and a function from categories to categoriesthis function is also expressed as the value of an attributefor a basic category the value of the function attribute is nilthe following graph is a simplified example of a functor category unction ca s m gr the combination rules need accordinglythis is the modified functional applicationto be changed rule of forward value functor argument rightin a traditional categorial grammar a derived category is exhaustively described by the argument and value categoriesbut often syntacticians want to make more fine grained distinctionsan example is vp modificationin a traditional categorial grammar two different vp modifiers let us say an adverb and an adverbial clause would receive the same translation peter called him angrily n fn n peter called him at work n n n 190 but what should be the category for veryif it receives the category to allow the derivation of the ungrammatical sentence is also permitted peter called him very angrily n n n peter called him very n n n at work if functor categories are permitted to carry features of their own that are not necessarily bound to to any features of their argument and value categories this problem disappearsadverbs and adverbial clauses could receive different features even if their categories encode the same combination functionanother solution to the problem involves the encoding of the difference in the value part of the functoryet this solution is not only unintuitive but also contradicts a linguistic generalizationit is unintuitive because there is no difference in the distribution of the resulting vpsthe only difference holds between the modifiers themselvesthe genealization that is violated by the encoding of the difference in the value subgraphs is the endocentricity of the vpthe modified vp shares all syntactic features with its head the lower vpyet the feature that indicates the difference between adverbs and adverbial phrases could not be in both the argument and the value parts of the functor otherwise iterations of the two types of modifiers as they occur in the following pair of sentences would be ruled out peter called him very angrily at work peter called him at work very angrilyanother augmentation is based on the patr strategy for linking syntax and semanticsmost grammars written in patr use the constituent graphs also for encoding semantic informationevery constituent has an attribute called trans or semanticsthe value of this attribute contains minimally the internal semantic fnnctionargument structure of the constituent but may also encode additional semantic informationthe separate encoding of the semantics allows for a compositional semantics even in construction in which syntactic and semantic structure divert as in certain raising constructionsthe following graph for a ficticious prenominal adjective that was introduced earlier contains translation attributes for the functor the argument and the valuethe meaning of the adjective is indicated by the atom redcat functi rans adj acc ing _g red at first glance the lexical graphseven the ones that are used in the highly simplified examplesseem to exhibit an excessive 
degree of complexity and redundancyhowever the lexical approach to syntax is built on the assumption that the lexicon is structuredto create a lexicon that is structured according to linguistic generalizations weintroduced lexical templates early on in the development of patrtemplates are graphs that contain structure shared by a class of lexical entrieslexical graphs can be partially or fully defined in terms of templates which themselves can be defined in terms of templatesif a template name appeam in the definition of some graph the graph is simply unified with the graph denoted by the templatethe next augmentation is already built into the formalismcategorial grammarians have recognized the limitations of fimctional application as the sole mode of combining constituents for a long timeone of the obvious extensions to classical categorial grammar was the utilization of functional composition as a further combination modea good example of a categorial grammar that employs both functional application and functional composition is steedman forward functional composition permits the following combination ofcategories ab because ac the resulting category inherits the argument place for c from the argument because neither steedman nor any other cg i am aware of permits functional composition i its full generalityin order to prevent overgeneration functional composition as well as other combination modes that are discussed by steedman are restricted to apply to certain categories onlythis somehow violates the spirit of a categorial grammarsteedman combination rules for instance are net universalin cug functional composition is subsumed under functional applicationit is the functor category that determines whether simple functional application or functional composition or either one may take placeconjunction is a good case for demonstrating the versatilityconsider the following sentences 3 peter andpaul like bananas peter likes bananas and paul likes oranges peter likes and buys bananasthe conjunction and may combine two simple argument categories two functors with one unfilled argument position or two functors with more than one unfilled argument position if the conjuncts have unfilled argument positions the conjoined phrase needs to inherit them through functional compositionthe simplified lexical graph for and is given under in order to avoid a thicket of crossing edges i have expressed some of the relevant bindings by indices191 c are thus patr could be useful for writing grammars designing rammar formalisms and for exploring classes of such formalismsthe work on exploring categorial unification formalisms has not only benefitted from the features of patr but it has in a way also influenced the development of the patr formalismit was for instance essential for the writing of categorial grammars to allow category variables in the contextfree phrase structure part of the ruleshow else could one formulate the rules of functional applicationthe implementation f this facility through stuart shieber however raised interesting problems in connection with the prediction aspect of the earleyparseroriginal earley prediction works on category symbolsan answer to these problems was presented by shieber who proposed to do earley prediction on the basis of some finite quotient of all constituent dags which can be specified by the grammar writeranother example for the influence of the cug efforts on the development of patr is a new template notation introduced by lauri karttunen in his interlispd version 
of patrsince categorial grammars exhibit an extensive embedding of categories within other categories it is useful to unify templates not only with the whole lexical dag but also with its categorial subgraphsthe notation permits this use of templates 3order variat ion worder order variation has always been one of the hardest problems for categorial grammarsfunctional composition together with typeraising can be used to obtain all permutations of the sentences that are generated by a traditional categorial grammartotally free word order does therefore not pose an unsurmountable problem to the categorial approachas with other types of grammar formalisms it is semifree word order that is difficult o accommedategpsg lfg and fug all have mechanisms for encoding ordering regularitiessuch a device does not exist in the categorial grammars that i am aware ofhowever uszkoreit argues for an application of lnear precedence rules to the valency list of syntactic functorsthis approach presupposes that the valency list contains adjuncts as well as complements as the flmetor syntactic arguments the model can be summarized as followsthe lexicon lists uninstantiated ntriesfor functors these entries contain a set of thematic rolesthe uninstantiated lexical entry may also state whether thematic roles have to be filled whether they may be filled more than once and whether idiosyncratic properties of the fnnetor predetermine the syntactic features of certain syntactic argumentsthere are three types of rules that instantiate l xical entries feature instantiation rules valency instantiation rules and order instantiation rulesan instantiated functor has an ordered valency list containing syntactic specifications of complements and adjuncts together with the appropriate semantic bindingsthe model can account for the interspersing of complements and adjuncts as they occur in many languages including englishthe model can also account for rightextraposition phenomenat therefore the valency list may constain adjuncts that do not fill a thematic role of the functor but combine semantically with some constituent inside a linearily preceding member of the same valency listfi in the proposed model the dependency between the extraposed phrase and its antecendent is neither established by functional applicationcomposition n r by feature passingit is assumed that there is a different matching process that combines the noncontiguous phrasesa process of this kind is independently needed for the matching of adjuncts with thematic roles that are embedded in the meaning of the functor tellme about french history start in 1700the year 1700 is obviously not the start time for the telling his call was very urgent lie tried desperately from every phone booth on campusit is not try that supplies here the source role but the implicit heme of tryif the theme role is filled everybody would analyze the from pp as semantically belonging to the theme of try he tried to call her desperately from every phone booth on campusi want to conclude this discussion with a remark on the parsing problem connected with the proposed modelin older patr phrasestructure grammars as well as in the categorial patr grammars all graphs that may be connected with a word in the input string are either retrieved from the lexicon or from a cache of aheady built lexical graphs or they are constructed on the spot fiom the exical entries through the morphology and through lexical rulesfor obvious reasons this approach cannot be used in conjunction with the categorial 
model just proposedif all adjuncts are included in the valency list and if moreover all acceptable linearizations are performed in the extended lexicon there is no upper bound on the number of acceptable lexical graphs for functorsthis means that lexical entries cannot be fully instantiated when the word is recognizedthey need to be instantiated incrementally as potential arguments are encounteredin uszkoreit it is argued that the ordered valency lists of a functor admitted by the lexical instantiation rules form a regular languageif further research confirms this hypothesis the incremental instantiation of valency lists could be performed through sets of finite state machinesin steedman categorial grammars longdistance dependencies are encoded in the functionargument structure of categories the categories that form the path between filler and gap in a derivation tree all carry a valency slot for the fillerthis uniform encoding of both subcategorization and longdistance dependencies in the argument structure of categories seems at first glance superior to the hpsg or patr approaches to longdistance dependencies in which the two types of information are marked in different feature setshowever it turns out that the steedman grammars have to mark the longdistance valency slots in order to distinguish them from other valency slotsthere could still be a justification for encoding the two types of dependencies in the same argument stackone might lose important nesting information by separating the two types of slotshowever i have not yet seen a convincing example of nesting constraints among subcategorization and longdistance dependenciestherefore i consider the question of the appropriate place for encoding longdistance dependencies still opena last remark on longdistance dependenciesin a unificationbased system like patr it is not trivial to ensure that gap information is passed up from one daughter constituent only when a rule is appliedthere are two ways to enforce this constraintthe first one involves a multiplication of rulesfor a binary rule a b c for instance one could introduce three new rules one of which does not do any gap passing another one the passing of a gap from b to a and the third the passing of a gap from c to a patr uses a little more elegant method which has been first suggested by fernando pereiratwo features are threaded through every tree one of which carries a gap up a tree passing through all the constituents to the left of the gap and a second one that is set to nil if a gap has been found and that is then sent through all the constituents to the right of the gap unifying it on the way with potential gaps it requires that information about the two special features be added to every rulein patr a preprocessor of rules adds this information for all rules in which the grammar writer did not include any gap threading information herself eg for encoding island constraintsin a cug that only contains two rules the first method of duplicating rules appears preferable over the gap threading approachrules that propagate gap information might also include rules that permit parasitic gaps along the lines of steedman rules of functional substitution
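For the PATR-style rules in the C86-1045 text above, a toy rendering of functional application over feature structures can make the mechanics concrete. This sketch uses plain nested dicts and a simplistic non-destructive unifier; it deliberately omits reentrancy (shared subgraphs and variable bindings), on which real PATR graphs rely, and the "dir", "arg" and "val" attribute names are assumptions standing in for the paper's functor encoding rather than its actual notation.

```python
# Minimal sketch of forward/backward functional application by unification.
def unify(a, b):
    """Return the unification of two feature structures, or None on a clash."""
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, bval in b.items():
            if key in out:
                merged = unify(out[key], bval)
                if merged is None:
                    return None
                out[key] = merged
            else:
                out[key] = bval
        return out
    return a if a == b else None              # atoms must be identical

def apply_functor(functor, argument, direction):
    """Succeeds if the functor expects an argument in this direction and the
    argument unifies with its 'arg' subgraph; the result is its 'val' subgraph
    (without reentrancy, no bindings flow from arg to val in this toy)."""
    if functor.get("dir") != direction:
        return None
    if unify(functor["arg"], argument) is None:
        return None
    return functor["val"]

# Toy lexicon: a VP looking to its left for a 3sg NP subject.
np_peter = {"cat": "n", "agr": {"pers": "3", "num": "sg"}}
vp = {"dir": "left",
      "arg": {"cat": "n", "agr": {"pers": "3", "num": "sg"}},
      "val": {"cat": "s", "form": "finite"}}

print(apply_functor(vp, np_peter, "left"))    # -> {'cat': 's', 'form': 'finite'}
```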
C86-1045
categorial unification grammarscategorial unification grammars embody the essential properties of both unification and categorial grammar formalismstheir efficient and uniform way of encoding linguistic knowledge in wellunderstood and widely used representations makes them attractive for computational applications and for linguistic researchin this paper the basic concepts of cugs and simple examples of their application will be presentedit will be argued that the strategies and potentials of cugs justify their further exploration in the wider context of research on unification grammarsapproaches to selected linguistic phenomena such as longdistance dependencies adjuncts word order and extraposition are discussed
word sense disambiguation with very large neural networks extracted from machine readable dictionaries in this paper we describe a means for automatically building very large neural networks from definition texts in machinereadable dictionaries and demonslrate he use of these networks for word sense disambiguation our method brings together two earlier independent approaches to word sense disambiguation the use of machinereadable dictionaries and spreading and activation models the automatic construction of vlnns enables realsize xperiments with neural networks for natural language processing which in turn provides insight into their behavior and design and can lead to possible improvements automated language understanding requires the determination f the concept which a given use of a word represents a process referred to as word sense disambiguation wsd is typically effected in natural llanguage processing systems by utilizing semantic teature lists for each word in the system lexicon together with restriction mechanisms such as case role selectionhowever it is often impractical to manually encode such information especially for generalized text where the variety and meaning of words is potentially unrestrictedfurthermore restriction mechanisms usually operate within a single sentence and thus the broader context cannot assist in the disambiguation processin this paper we describe a means tor automatically building very large neural networks from definition texts in machinereadable dictionaries and denmnstrate he use of these networks for wsdour method brings together two earlier independent approaches to wsd the use of machinereadable dictionaries and spreading and activation modelsthe automatic onstruction of vlnns enables realsize experiments with neural networks which in turn the authors would like to acknowledge the contributions of stphanc tlari6 and gavin huntlcy to the work presented in this paperprovides insight into their behavior and design and can lead to possible improvements21machinereadable dictionaries jbr wsdthere have been several attempts to exploit the information in maclfinereadable versions of everyday dictionaries in which an enormous amount of lexical and semantic knowledge is already encodedsuch information is not systematic or even complete and its extraction from machine readable dictionaries is not always straightforwardhowever it has been shown that even in its base form information from machinereadable dictionaries can be used for example to assist in the disambiguation f prepositional phrase attachment or to find subject domains in texts the most general and wellknown attempt to utilize information i machinereadable dictionaries for wsd is that of lesk which computes the degree of overlapthat is number of shared wordsin definition texts of words that appear in a tenword window of 1 389 contextthe sense of a word with the greatest number of overlaps with senses of other words in the window is chosen as the correct onefor example consider the definitions of pen and sheep from the collins english dictionary the dictionary used in our experiments in figure 1figure 1 definitions of pen sheep goat and page in the collins english dictionary pen 1 1an implement for writing or drawing using ink formerly consisting of a sharpened and split quill and now of a metal nib attached to a holder2the writing end of such an implement nib3style of writing4the pena writing as an occupation b the written word5 the long horny internal she will of a squid6to write or composepen 2 
1an enclosure in which domestic animals are kept2any place of confinement3a dock for servicing submarines4to enclose or keep in a penpen 3 short for penitentiarypen 4 a female swansheep l any of various bovid mammals of the genus ois and related genera having transversely ribbed horns and a narrow face there are many breeds of domestic sheep raised for their wool and for meat2barbary sheep3a meek or timid person4separate the sheep from the goats to pick out the members of any group who are superior in some respectsgoat 1any surefooted agile bovid mammal of the genus capra naturally inhabiting rough stony ground in europe asia and n africa typically having a browngrey colouring and a bearddomesticated varieties are reared for milk meat and wool3a lecherous man 4a bad or inferior member of any group 6act the goatto fool around7get goatto because annoyance to page i 1one side of one of the leaves of a book newspaper letter etc or the written or printed matter it bears2such a leaf considered as a unit 3an episode phase or period 4printingthe type as set up for printing a page6to look through leaf throughpage 2 1a boy employed to run errands carry messages etc for the guests in a hotel club etc 2a youth in attendance at official functions or ceremonies3a a boy in training for knighthood in personal attendance on a knight b a youth in the personal service of a person of rank4an attendant at congress or other legislative body5a boy or girl employed in the debating chamber of the house of commons the senate or a legislative assembly to carry messages for members6to call out the name of 7to call by an electronic device such as bleep g to act as a page to or attend as a pageif these two words appear together in context the appropriate senses of pen and sheep will be chosen because the definitions of these two senses have the word domestic in commonhowever with one word as a basis the relation is tenuous and wholly dependent upon a particular dictionary wordingthe method also fails to take into account less immediate r lationships between wordsas a result it will not determine the correct sense of pen in the context of goatthe correct sense of pen and the correct sense of goat do not share any words in common in their definitions in the collins english dictionary however a strategy 390 which takes into account a longer path through definitions will find that animal is in the definition of pen 21 each of mammal and animal appear in the definition of the other and mammal is in the definition of goat 1similarly lesk method would also be unable to determine the correct sense of pen in the context of page because seven of the thirteen senses of pen have the same number of overlaps with senses of pagesix of the senses of pen share only the word write with the correct sense of page however pen 11 also contains words such as draw and ink and page 11 contains book newspaper letter and printthese other words are heavily interconnected in a complex network which cannot be discovered by simply counting overlapswilks et al build on lesk method by computing the degree of overlap for related wordsets constructed using cooccurrence data from definition texts but their method suffers from the same problems in addition to combinatorial problems thai prevent disambiguating more than one word at a time22neural networks for wsdneural network approaches to wsd have been suggested these models consist of networks in which the nodes represent words or concepts connected by activatory links the words activate the concepts to 
which they are semantically related and vice versain addition lateral inhibitory links usually interconnect competing senses of a given wordinitially the nodes corresponding tothe words in the sentence to be analyzed are activatedthese words activate their neighbors in the next cycle in turn these neighbors activate their immediate neighbors and so onafter a number of cycles the network stabilizes in a state in which one sense for each input word is more activated than the others using a parallel analog relaxation processneural network approaches to wsd seem able to capture most of what cannot be handled by overlap strategies such as leskhowever the networks used in experiments o far are handcoded and thus necessarily very small due to a lack of realsize data it is not clear that he same neural net models will scale up for realistic applicationfurther some approaches rely on context setting nodes to prime particular word senses in order to force 1the correct interpretationbut as waltz and pollack point out it is possible that such words are not explicitly present in the text under analysis but may be inferred by the reader from the presence of other related words to solve this problem words in such networks have been represented by sets of semantic microfeatures which correspond to fundamental semantic distinctions characteristic duration of events locations and other similar distinctions that humans typically make about situations in the worldto be comprehensive the authors uggest that these features must number in the thousandseach concept iin the network is linked via bidirectional activatory or inhibitory links to only a subset of the complete microfeature s t a given concept theoretically shares everal microfeatures with concepts to which it is closely related and will therefore activate the nodes corresponding to closely related concepts when it is activated itselfttowever such schemes are problematic due to the difficulties of designing an appropriate set of microfeatures which in essence consists of designing semantic primitivesthis becomes clear when one exmnines the sample microfeatures given by waltz md pollack they specify microf carfares uch as casino and canyon but it is obviously questionable whether such concepts constitute fundamental semantic distinctionsmore practically it is simply difficult to imagine how vectors of several thousands of microfeamrcs for each one of the lens of thousands of words and hundreds of thousands of senses can be realistically encoded by handour approach to wsd takes advantage of both strategies outlined above but enables us to address solutions to their shortcomingsthis work has been carried out in tile context of a joint project of vassar college and the groupe reprdsentation et traitement des connaissances of the centre national de la recherche scientifique which is concerned with the construction and exploitation of a large lexical data base of english and frenchat present the vassarcnrs data base includes through the courtesy of several editors and research institutions several english and french dictionaries dictionary of contemporary english thewebster 9th dictionary and the zyzomys cdrom dictionary from hachette publishers as well as several other lexical and textual materials we build vlnns utilizing definitions in the collins english dictionarylike lesk and wilks we assume that there are significant semantic relations between a word and the words used to define itthe connections in the network reflect these relationsall of the 
knowledge represented in the network is automatically generated from a machinereadable dictionary and therefore no hand coding is requiredfurther the lexicon md the knowledge it contains potentially cover all of english and as a result this information cml potentially be used to help dismnbiguate unrestricted text31topology of the networkin our model words are complex unitseach word in the input is represented by a word node connected by excitatory links to sense nodes representing the different possible senses tbr that word in the collins english dictionaryeach sense node is in turn connected by excitatory links to word nodes rcpreseming the words in tile definition of that sensethis process is repeated a number of times creating an increasingly complex and interconnected networkideally the network would include the entire dictionary but for practical reasons we limit the number of repetitions and thus restrict tile size of the network to a few thousand nodes and 10 to 20 thousand transitionsall words in the network are reduced to their lemmas and grammatical words are excludedthe different sense nodes tor a given word are interconnected by lateral inhibitory links3 391 figure 2topology of the network i word node sense node excitatory link inhibitory link when the network is run the input word nodes are activated firstthen each input word node sends activation to its sense nodes which in turn send activation to the word nodes to which they are connected and so on throughout he network for a number of cyclesat each cycle word and sense nodes receive feedback from connected nodescompeting sense nodes send inhibition to one anotherfeedback and inhibition cooperate in a winnertakeall strategy to activate increasingly related word and sense nodes and deactivate the unrelated or weakly related nodeseventually after a few dozen cycles the network stabilizes in a configuration where only the sense nodes with the strongest relations to other nodes in the network are activatedbecause of the winnertakeall strategy at most one sense node per word will ultimately be activatedour model does not use microfeatures because as we will show below the context is taken into account by the number of nodes in the network and the extent to which they are heavily interconnectedso far we do not consider the syntax of the input sentence in order to locus on the semantic properties of the modelhowever it is clear that syntactic information can assist in the disambiguation process in certain cases and a network including a syntactic layer such as that proposed by waltz and pol lack would undoubtedly enhance the model behavior32resultsthe network finds the correct sense in cases where lesk strategy succeedsfor example if the input consists of pen and sheep pen 21 and sheep 1 are correct ly act ivatedmore interestingly the network selects the appropriate senses in cases where lesk strategy failsfigures 3 and 4 show the state of the network after being run with pen and goat and pen and page respectivelythe figures represent only the most activated part of each network after 100 cyclesover the course of the run the network reinforces only a small cluster of the most semantically relevant words and senses and filters out tile rest of the thousands of nodesthe correct sense for each word in each context is the only one activated at the end of the runthis model solves the contextsetting problem mentioned above without any use of microfeaturessense 11 of pen would also be activated if it appeared in the context of a large 
number of other wordseg book ink inkwell pencil paper write draw sketch etcwhich ave a similar semantic relationship to penfor example figure 5 shows the state of the network after being run with pen and bookit is apparent that the subset of nodes activated is similar to those which were activated by page392 4 figure 3state of the network after being run with pen and goat are the most activated figure 4state of the network after being run with pen and page the darker nodes figure 5state of the network after being run with pen and book r the darker nodes ook 393 the examples given here utilize only two words as input in order to show clearly the behavior of the networkin fact the performance of the network improves with additional input since additional context can only contribute more to the disambiguation processfor example given the sentence the young page put the sheep in the pen the network correctly chooses the correct senses of page sheep and pen this example is particularly difficult because page and sheep compete against each other to activate different senses of pen as demonstrated in the examples abovehowever the word young reinforces sense 23 of page which enables sheep to win the struggleintersentential context could be used as well by retaining the most activated nodes within the network during subsequent runsby running various experiments on vlnns we have discovered that when the simple models proposed so far are scaled up several improvements are necessarywe have for instance discovered that gang effects appear due to extreme imbalance among words having few senses and hence few connections and words containing up to 80 senses and several hundred connections and that therefore dampening is requiredtn addition we have found that is is necessary to treat a word node and its sense nodes as a complex ecological unit rather than as separate ntitiesin our model word nodes corttrol the behavior of sense nodes by means of a differential neuron that prevents for example a sense node from becoming more activated than its master word nodeour experimentation with vlnns has also she would light on the role of and need for various other parameters uch as thresholds decay etcthe use of word relations implicitly encoded in machinereadable dictionaries coupled with the neural network strategy seems to offer a promising approach to wsdthis approach succeeds where the lesk strategy fails and it does not require determining and encoding microfeatures or other semantic informationthe model is also more robust than the lesk strategy since it does not rely on the presence or absence of a particular word or words and can filter out some degree of noise how ever there are clearly several improvements which can be made for instance the partofspeech for input words and words in definitions can be used to extract only the correct lemmas from the dictionary the frequency of use for particular senses of each word can be used to help choose among competing senses and additional knowledge can be extracted from other dictionaries and thesauriit is also conceivable that the network could learn by giving more weight to links which have been heavily activated over numerous runs on large samples of textthe model we describe here is only a first step toward a fuller understanding and refinement of the use of vlnns for language processing and it opens several interesting avenues for further application and research
C90-2067
word sense disambiguation with very large neural networks extracted from machine readable dictionaries. In this paper we describe a means for automatically building very large neural networks (VLNNs) from definition texts in machine-readable dictionaries, and demonstrate the use of these networks for word sense disambiguation. Our method brings together two earlier, independent approaches to word sense disambiguation: the use of machine-readable dictionaries and spreading activation models. The automatic construction of VLNNs enables real-size experiments with neural networks for natural language processing, which in turn provides insight into their behavior and design and can lead to possible improvements. We apply conventional spreading activation approaches to word sense disambiguation.
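As a toy-scale illustration of the spreading-activation scheme described above, the sketch below builds a tiny word/sense network and runs cycles of excitation and lateral inhibition with a winner-take-all readout. The dictionary entries, link weights, decay and rate parameters, and the exact update rule are all invented for the example; they are not the authors' implementation, whose networks are built automatically from the Collins English Dictionary and reach thousands of nodes.

```python
from collections import defaultdict

# Hypothetical toy dictionary: word -> {sense id: lemmatized definition words}
DICT = {
    "pen":   {"pen.1": ["enclosure", "animal", "farm"],
              "pen.2": ["instrument", "write", "ink"]},
    "sheep": {"sheep.1": ["animal", "wool", "farm"]},
}

def build_network(words, depth=1):
    """Word nodes and sense nodes are joined by bidirectional excitatory links
    (word <-> its senses, sense <-> the words of its definition); competing
    senses of the same word are joined by lateral inhibitory links."""
    excite, inhibit = defaultdict(set), defaultdict(set)
    frontier = set(words)
    for _ in range(depth + 1):
        next_frontier = set()
        for w in frontier:
            senses = DICT.get(w, {})
            for s, definition in senses.items():
                excite[w].add(s); excite[s].add(w)
                for d in definition:
                    excite[s].add(d); excite[d].add(s)
                    next_frontier.add(d)
            for s in senses:
                inhibit[s] |= set(senses) - {s}
        frontier = next_frontier
    return excite, inhibit

def disambiguate(input_words, cycles=100, decay=0.9, rate=0.1):
    excite, inhibit = build_network(input_words)
    act = defaultdict(float)
    for _ in range(cycles):
        for w in input_words:
            act[w] = 1.0                        # keep input word nodes clamped
        incoming = defaultdict(float)
        for src in list(act):
            for t in excite[src]:
                incoming[t] += rate * act[src]  # excitatory feedback
            for r in inhibit[src]:
                incoming[r] -= rate * act[src]  # lateral inhibition
        for node in set(act) | set(incoming):
            act[node] = max(0.0, decay * act[node] + incoming[node])
    # winner-take-all readout: the most activated sense node per input word
    return {w: max(DICT[w], key=lambda s: act[s]) for w in input_words if w in DICT}

print(disambiguate(["pen", "sheep"]))   # should favour pen.1, the enclosure sense
```

With pen and sheep as input, the shared definition words reinforce the enclosure sense of pen while inhibition suppresses the writing-instrument sense, mirroring the pen/sheep example in the text.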
synchronous treeadjoining grammars the unique properties of lreeadjoining grammars present a challenge for the application of fags beyond the limited confines of syntax for instance to the task of semantic interpretation or automatic translation of nat ural hmguage we present a variant of fags called synchronous tags which chmacterize correspondences between languages lqe formalism intended usage is to relate expressions of natural anguages to their associ ated semantics represented in a logical tbrm language or to their translates in another natural anguage in sum mary we intend it to allow tags to be used beyond their role in syntax proper we discuss the application of synchronous tags to concrete examples mention ing primarily in passing some computational issues that tuise in its interpretation treeadjoining rammars constitute a grammat ical formalism with attractive properties for the strong characterization f the syntax of natural angtmges that is characterization of the analysis trees of the expres sions in the language among these properties are that o the domain of locality in tags is larger than lot formalisms lhat augment contextfree grammars and the statements of dependencies and recursion pos sibilities in a tree are factored the former following from primitive dependencies in elementary trees the latter a consequence of an operatkm of adjunc tion of treesthese unique properties of tags present a challenge tot the application of tags beyond the limited confines of syntax for instance to the task of semantic interpre tation or automatic trmslation of natural anguagethe slandmd methods of moving beyond syntax to interpre tation make use in one way or another of the compo sitional structure of the analysis tree that is manifested in the tree derivationany version of compositional 1we assume familiarity throughout the paper with previous work on tagssee for instance the introduction by joshi semantics or syntaxdirected translation relies on such a methodology to some extenthowever in the case of tags the compositional structure of the tree is not miro rored by its derivational structure so that a method for constructing semantics based on the compositional syn tactic structure will be inherently nonderivational that is construction of the semantics will be independent of the derivation of the tree and therefore subsequenton the other hand a method mirroring the deriva tional structure will not necessarily be compositional with respect to tile derived structures of expressionsai tlough such a method would be quite different from ttle primarily compositional methods previously postulated it may have advantages given that certain aspects of language seem to be noncompositionalin this paper we present a varimt of tags called synchronous tags which characterize correstxmdences between languagesthe formalism intended usage is to relate expressions of natural anguages to their asso ciated semantics represented in a logical form language or to their translations in another natural language in summary we intend the formalism to allow tags to be used beyond their role in syntax properwe also discuss its application to concrete xamples and mention some computational issues that arise in its interpretationmal description language interpretation tasks can be thought of as asso ciating a syntactic analysis of a sentence with some other stmcturea logical form representation r an analysis of a target language sentence perhapssynchronous tags are defined so as to make such associations explicitthe 
original language and its associated structures are both defined by grammars tated in a tag formalism the two tags are synchronous in the sense that adjunction and substitution operations are applied simultaneously to related nodes in pairs of trees one for each languagefor convenience we will call the two languages ource and target languages although the formalism is not in herently directionalas an example consider the task of relating a frag ment of english with a simple representation of its predicateargument structurea synchronous tag for this purpose is given in figure 1each element of the 1 253 np v are t t v np hates i i george george n jb rocco l i brcoli vp f p a vp f violently violently i cooked cooked i figure 1 a sample synchronous tagsynchronous tag is a pair consisting of two elemen tar2 trees one from tlie source language and one from the target nodes one from each tree may be linked such links are depicted graph ically as thick linesif we project the pairs onto their first or second components the projections are tags for an english fragment and an lf fragment respectively qhese grammars are themselves written in a particular variant of tags the choice of this base formalism as we will call it is freein the case at hand we have chosen singlecomponent lexicalized tags with adjunction and substitution later examples are built on other basesthe elementary operation in a synchronous tag is su pervenient on the elementary operations in the base for malisma derivation step from a pair of trees proceeds as follows 1 nondeterministically choose a link in the pair con necting two nodes nondeterministically choose a pair of trees in the grammarform the resultant pair 3t 32where 3 is the result of performing a primi tive operation in the base formalism on a at node n using 3 3 2we will generalize the links later to allow sets of nodes from one tree to be linked to sets from the other3the definition allows for the operations performed on the first synchronous tag derivation then proceods by choos ing a pair of initial trees that is an element of the grammar and repeatedly applying derivation steps as aboveas an example suppose we start with the tree pair c in figure 14 we choose the link from the subject np to t and the tree pair fl to apply to its nodesthe resultant by synchronous substitution is the tree pair i ny t t i i i george v p hates georgej note that the links from a are preserved in the resul tant pair cq except for the chosen link which has no counterpart in the resultusing tree pair 7 on the remaining link from np to t in oq yields o 2 np vp r t t george y p haregeorgebroccoli hates broccoli this pairing manifests the correspondence b tween the sentence george hates broccoli and its logical form hates here we see that the links in the operator trees are preserved in the resultant pair accounting for the sole remaining linktile trees in 7 are linked in this way so that other tree pairs can modify the n we can continue the derivation using 5 and to gen erate the pair given in figure 2 thereby associating the meaning violently with the sentence george hates cooked broccoli vio lentlya subtle issue mises with respect o link updating in the resultant pair if two links impinge on the same nodewhen one of the links is chosen and an adjunction per formed at the node the other link must appear in the resultantthe question as to whether that link should now end at the root or foot of the adjoined tree can be re solved in several waysalthough the choice of method does not affect any 
of the examples in this paper we mention our current resolution of this problem hereif the remaining link is connected initially to the top of and second trees to differ one being a substitution and the other an adjunetion for exampleawe uge standard tag notation marking foot nodes in auxiliary trees with and nodes where substitution is m occur with 1the nonterminal names in the logical form grammar e mnemonic for formula relation symbol term and quantifier254 2 f george vp advp violently t hates n_ cooked broccoli i cooked broccoli figure 2 derived tree pair for george hates cooked broccoli violently the node serving as the adjunction site it will connect to the top of the root node of the adjoined auxiliary nee after the adjunction has been performed conversely if it is connected initially to the bottom of the node it will connect o the bottom of the foot node of the auxiliary treein all of the examples in this paper the links may be thought of as connecting to the tops of nodesthe issue has important ramificationsfor instance the link updating process allows for different derivations of a single derivation in the source language to correspond to derivations of different derivations in the target lan guage that is derivation order in synchronous tags is in this respect crucial unlike in the base tag for malismswe rely on this property in the analysis of quantifier scope in section 42we turn to the question of why in augmenting tags for the purposes of encoding semantic information it is preferable to use the synchronous tag method over more conventional methods such as semantic rules in volving logical operations or complex featurestructure encodings first the arguments for factoring recursion and depen dencies as tags do for the syntax of natural anguage have their counterparts in the semanticsthe structure of tags allows syntactic dependenciesagreement sub categorization and so forthto be localized in the prim itives of a grammar the elementary treesthis is most dramatically evident in the case of longdistance depen dencies such as that between a whphrase and its as sociated gapsimilarly using tags to construct logi cal forms allows the localization of semantic dependen cies in the logical forms of natural language xpressions dependencies such as the signature requirements of function and relation symbols and even the longdistance dependencies between a wh quantifier and its associated bound variablewith other methods of semantics these dependencies cannot be lo calized the semantic aspects of fillergap dependencies must be passed among the features of various nodes in a parse tree or otherwise distributed over the entire deriva tionsecond the use of the synchronous tag augmenta tion allows an even more radical reduction in the role of features in a tag grammarbecause of the extended domain of locality that tags possess the role of features and unification is reduced from its role in contextfree based systemsonly finitevalued features are needed with the possible exception of a feature whose value encodes an expression logical formin removing the conslzuction of logical forms from the duties delegatea to features we can maintain a strictly finiteovalued and therefore formally dispensablefeature system ibr tagsas a side note we mention a ramification of the syn chronous tag analysis concerning the claim of ka plan and zaenen that the paths over which longdistance dependencies operate form a regu lar languagevijayshanker and joshi provide an argument that this claim follows from several 
as sumptions concerning how a feature system for tags might be constrainedvijayshanker has noted that by placing a simple assumption on the elementary trees in the logical form component of a synchronous tag the proof of this claim becomes immediateany tag in which all foot nodes are i am mediate children of their associated root generates a tree path language that is regularthus a synchronous tag whose semantic component forms a tag with this property necessarily obeys the regular language constraint on longdistance semantic dependenciesto exemplify the formalism utility we briefly and in formally describe its application to the semantics of id ioms and quantifiersa companion paper uses a mapping between two tags for automatic translation between natural anguages and constitutes a further application of the synchronous tag concept5this is a folk theorem whose straighlforward proof is left as an exercise for the reader 3 255 more expansive descriptions of these analyses will be forthcoming in joint work with anne abeilld and anthony kroch 41 id ioms abeill6 and schabes note that lexicalized tags are an appropriate r presentation language for idiomatic constructions as their expanded omain of locality can account for many syntactic properties of idiomsit seems natural to generalize beyond syntax as they do to the claim that lexicalized fags allow one to deal with semantic noncompositionalitytheir argument to this claim is based on an intuition that semantics de pends on the tag derivation structure an intuition that synchronous tags makes precisefor example the id iomatic construction kick the bucket cashes out as the following tree pair under its idiomatic interpretation a3 de whereas the literal usage of kick is associated with a tree pair similar to that of hates in figure 1two derivations of the sentence george kicked the bucket are possible each using a different one of these two elementary tree pairs but both yielding identical de rived constituency trees for the englishthey will be associated of course with two different readings cor responding to the idiomatic and literal interpretations respectivelyall of the arguments for the tag analysis of idioms and light verb constructions can then be maintained in a formalism that allows for semantics for them as wellin particular discontinuous syntactic onstituents can be seman tically localizednonstandard longdistance dependencies are stat able without resort to reanalysisboth frozen and flexible idioms can be easily char acterized42 quant i f ie rsin order to characterize quantifier scoping possibilities we use a synchronous tag whose base formalism is multicomponent tags in which the prim itive operation is incorporation of a set of elementary trees at oncein synchronous multicomponent tags the links between trees connect in general a set of nodes in one tree with a set in anotherin particular an np will be linked both to a formula in the semantics and a term we will begin a derivation with just such a pair of elementat3 trees depicted as at in figure 3to distinguish two separate links from a single link among several nodes we use a coindexingrather than graphicalnotation f r linksthus the subject np node on the left is linked with both the f and first t node on the right as indicated by the boxed index 1the inteqgretation f such hyperlinks is that when a pair is chosen to operate at the link it must have sets of the correct sizes as its left and right component and the sets are simultaneously used at the various nodes as in a 
multicomponent lagfor instance a quantifiable noun will be paired with a set of two trees 6 politician r t x politician applying the latter multicomponent tree pair fll to the initial tree pair a l we derive the next stage in the deriva tion o2we have highlighted the link being operated on at this and later steps by using thick lines for the index boxes of the selected linkthe determiner can be introduced with the simple pair leading to the derivation step a3completing the deriva tion using analogous elementary tree pairs we might generate the final tree pair a4 of figure 3this final pairing associates the meaning by vegetablc vx politician hates with the sentence every politician hates some veg etableit should be clear that in a structure such as this with multiple nps the order of substitution of nps de termines the relative scope of the quantifiers although it has no effect whatsoever on the syntactic structurede veloping this line of reasoning has led to several detailed predictions of this analysis of quantifier scope which is beyond this paper purviewin summary however the analysis is slightly more restrictive than that of hobbs and shieber making predictions regarding the scope of topicalized or whmoved constituents relative scope of embedded quantifiers and possibly even syn tactic structure of complex npsthe synchronous tag formalism is inherently nondirec tionalderivation is not defined in terms of constructing 6the subscript x on certain nodes is the value of a feature on the nodes corresponding to the variable bound by the quantifierthe technique of using metavariables to encode object variables is familiar from the logic and unificationbased grammar literatures variable renaming with respect o these variables proceeds as usual256 4 i s v np 1 hates np v np i politician hates mm f j t ninf iiiq x f r t x r t x nt i i 1 politician hates and vijayshanker and joshi provide parsing algorithms for tags that could serw to parse the base formalism of a synchronous taggiven such an algorithm semantic interpretation can be performed by parsing the sentence according to the source grammar the pairings then determine a deriva tion in the target language for tile logical formgen eration from a logical form proceeds by the converse process of parsing the logical form expression thereby determining the derivation for the natural anguage sen fencemachine translation proceeds akmg similar lines by mapping two fags directly in previous work one of us noted that generation ac cording to an augmented contextfree grammar can be made more efficient by requiring the grammar to be se mantically monotonic the derived se mantics for an expression must include in an appropri ate sense the semantic material of all its subconstituentsit is interesting to note that synchronous fags are in herently semantically monotonicfurthermore it is rea sonable to require that the semantic omponent of a syn chronous tag t lexicalized allowing for more efficient parsing accord ing to the semantic grammar and consequenlly more efficient generationin the case of augmented context free grammars the semantic monotonicity requirement precludes lexicalization of the semanticsit is not possible to require nontrivial semantics to be associated with each lexical itemin summary just as lexicaliza lion of the syntactic grammar aids parsing so lexicalization of the semantic granmzr aids generationtile description of parsing and germration above rnay seem to imply that these processes cannot be pcrlormcd incrementally that is an 
entire source derivation must be recovered before the corresponding target derivation can be computedthe issue deserves clarificationin the case wltere the synchronous tag is order independent there is a onetoone mapping between the source and target derivationwhen par tial source derivations are recognized by the parser the corresponding partial target derivation retation can be incrementally compuled as the input is read from left to right interpretations of the partial target derivations corresponding to partial source derivations can be combined in one step to buikl a larger partial target derivation5 257 when the synchronous tag is ordersensitive how ever there may be a manytomany correspondence b tween source derivations and target derivationsthis is the case for instance in a grammar in which alterna tive quantifier scopings may be generated for a single sentencein this case it is unclear what should even be meant by incremental computationfor instance mid way in parsing a sentence at a point at which a single quantified np has been analyzed the incremental inter pretation could not possibly represent all possible scop ings that that quantifier might end up taking as it is not known what the quantifier might be required to scope with respect o at the point in the parse where the scoping decision can be made it is not clear whether an inerementality requirement would mean that the variant scopings must all be explicitly generated at that point or only implicitly generablewith respect o synchronous tags these considera tions are reflected in choice of parsing algorithmef ficiency of parsing necessitates that only one canonical derivation need to be com puted all other derivations yield the same objectstan dard parsing algorithms for both tags and cfgs rely on this optimizationif incrementality requires that we generate xplicit representations of all possible interpre tations of the string seen so far then this optimization cannot be used and parsing will be highly inefficientif the representation can be left i am plicit the optimization can be maintained but retrieval of explicit representations will be combinatorially more complex
C90-3045
synchronous tree-adjoining grammars. The unique properties of tree-adjoining grammars (TAGs) present a challenge for the application of TAGs beyond the limited confines of syntax, for instance to the task of semantic interpretation or automatic translation of natural language. We present a variant of TAGs, called synchronous TAGs, which characterize correspondences between languages. The formalism's intended usage is to relate expressions of natural languages to their associated semantics represented in a logical form language, or to their translations in another natural language; in summary, we intend it to allow TAGs to be used beyond their role in syntax proper. We discuss the application of synchronous TAGs to concrete examples, mentioning primarily in passing some computational issues that arise in its interpretation. Synchronous tree-adjoining grammar is introduced primarily for semantics but will later also be proposed for translation. A synchronous derivation process for the syntactic structures of both languages suggests the level of cross-lingual isomorphism between the two trees, as in synchronous tree-adjoining grammars.
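As a very rough illustration of the paired derivation step described above, the sketch below performs synchronous substitution on toy source/target tree pairs with links between node addresses, reproducing the george hates broccoli example. Adjunction, link re-addressing for adjunction, and feature values are omitted, and the tree encoding and helper names are invented for this example, not part of the formalism's definition.

```python
# Trees are (label, children) tuples; leaves labeled "NP!"/"T!" mark
# substitution sites. Node addresses are Gorn-style tuples of child indices.

def substitute(tree, addr, new):
    """Replace the subtree at `addr` with `new`."""
    if not addr:
        return new
    label, children = tree
    i, rest = addr[0], addr[1:]
    children = list(children)
    children[i] = substitute(children[i], rest, new)
    return (label, tuple(children))

# alpha: <S: NP! hates NP!,  F: hates(T!, T!)>, NP/T nodes linked pairwise
alpha = {
    "pair": (("S", (("NP!", ()), ("VP", (("V", (("hates", ()),)), ("NP!", ()))))),
             ("F", (("hates", ()), ("T!", ()), ("T!", ())))),
    "links": [((0,), (1,)), ((1, 1), (2,))],   # subject<->first arg, object<->second arg
}
george   = {"pair": (("NP", (("george", ()),)),   ("T", (("george", ()),))),   "links": []}
broccoli = {"pair": (("NP", (("broccoli", ()),)), ("T", (("broccoli", ()),))), "links": []}

def step(derived, link_index, operand):
    """One synchronous derivation step: substitute the operand pair at both
    ends of the chosen link, dropping that link and preserving the others."""
    (src, tgt), links = derived["pair"], list(derived["links"])
    a_src, a_tgt = links.pop(link_index)
    new_src = substitute(src, a_src, operand["pair"][0])
    new_tgt = substitute(tgt, a_tgt, operand["pair"][1])
    # re-address the operand's own links relative to the substitution sites
    links += [((a_src + ls), (a_tgt + lt)) for ls, lt in operand["links"]]
    return {"pair": (new_src, new_tgt), "links": links}

def yield_of(tree):
    label, children = tree
    return label if not children else " ".join(yield_of(c) for c in children)

d = step(step(alpha, 0, george), 0, broccoli)
print(yield_of(d["pair"][0]))   # george hates broccoli
print(d["pair"][1])             # a logical-form tree for hates(george, broccoli)
```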
word identification for mandarin chinese sentences keh j iann chen sh ing l luan liu institute of lnflrmation science academia sinica chinese sentences are composed with string of characters without blanks to mark words however the basic unit for sentence parsing and understanding is word therefore the first step of processing chinese sentences i to identify the words the difficulties of identifying words include the identification of com plex words such as determinativemeasure redupli cations derived words etc the identification of proper names resolving the ambiguous segmenta tions in this paper we propose the possible solutions for the above difficulties we adopt a matching algo rithm with 6 different heuristic rules to resolve the am biguities and achieve an 9977 of the success rate the statistical data supports that the maximal match ing algorithm is the most effective heuristics chinese sentences arc cxmposed with string of characters without blanks to mark wordshowever the basic unit for sentence parsing and understanding is wordtherefore the first step of processing chinese sentences is to identify the wordsmost of the current chinese natural language processing systems include a processor for word iden tificationalso there are many word segmentation techniques been developedusually they use a lexicon with a large set of entries to match input sentences 21012131421it is very often that there are many lssible different successful matchingstherefore the major focus for word identification were on thc resolu tion of ambiguitieshowever many other important as pects such as what should be done in what depth and what are considered to be the correct identifications were totally ignoredhigh identification rates are claimed to be achieved but none of them were mea sured under equal basesthere is no agreement in what extend words are considered to be correctly iden tifiedfor instance compounds occur very often in chi nese text but none of the existing systems except ours pay much attention to identify themproper name is another type of words which cannot be listed exhaus tively in the lexicontherefore simple matching algo rithms can not successfully identify either compounds or proper namesin this paper we like to raise the ptxblems and the difficulties in identifying words and suggest the possible solutions
C92-1019
word identification for mandarin chinese sentences. Chinese sentences are composed of strings of characters without blanks to mark words; however, the basic unit for sentence parsing and understanding is the word. Therefore, the first step in processing Chinese sentences is to identify the words. The difficulties of identifying words include the identification of complex words (such as determinative-measure, reduplications, derived words, etc.), the identification of proper names, and the resolution of ambiguous segmentations. In this paper we propose possible solutions for the above difficulties. We adopt a matching algorithm with 6 different heuristic rules to resolve the ambiguities and achieve a 99.77% success rate. The statistical data support that maximal matching is the most effective heuristic. We propose the forward maximum matching algorithm.
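As a rough illustration of that core matching step, here is a minimal sketch of forward maximum matching over a toy lexicon; the lexicon entries are invented, and the six disambiguation heuristics described in the paper are not reproduced.

```python
# Illustrative forward maximum-matching segmenter (toy lexicon, no heuristics).
LEXICON = {"研究", "研究生", "生命", "命", "的", "起源"}
MAX_WORD_LEN = max(len(w) for w in LEXICON)

def forward_maximum_match(sentence, lexicon=LEXICON, max_len=MAX_WORD_LEN):
    """Scan left to right, always taking the longest lexicon entry that
    matches at the current position; back off to a single character."""
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(max_len, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in lexicon:
                words.append(candidate)
                i += length
                break
    return words

print(forward_maximum_match("研究生命的起源"))
# ['研究生', '命', '的', '起源'] -- the greedy maximal-matching choice;
# resolving such ambiguous segmentations is what the paper's extra heuristics address
```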
stochastic lexicalized treeadjoining grammars aho a v 1968 lndexed grammars an extension to context free grammars j acm 15647671 baker jk 1979 trainable grammars tbr speech recognition in jared j wolf and dennis h klatt editors speech communication papers presentacd at the 97 h meeting of the acoustical society of amer ica mit cambridge ma june llooth taylor r and richard a thomlson 1973 applying probability measures to abstract languages ieee 7aasactions on computers c22442450 may booth t 1969 probabilistic representation f formal languages in tenth annual ieee symposium on switching and automata theory october chomsky n 1964 syntactic structures chapter 23 pages 1318 mouton gazdar g 1985 applicability of indexed grunmars to natural anguages technical report csli8534 center for study of language and information tlempttill charles t john j godfrey and george il doddington 1990 the atis spoken language sys tems pilot corpus in darpa speech and natural laaguage workshop hidden valley pennsylvania june jelinek f j d lafferty and r l mercer 1990 ba sic methods of probabilistic ontext free grammars technical report rc 16374 ibm yorktown heights new york 10598 joshi aravind k and yves schabes 1991 tree adjoiuing grammars and lexiealized grammars in maurice nivat and andreas podelski editors defin ability and recognizability ofsets of trees elsevier forthcoming joshi aravind k k vijaysimnker and david weir 1991 the convergence of mildly contextsensitive gramnmtical formalisms in peter sells stuart shieber and tom wasow editors foundational is sues in natural language processing mit press cambridge ma joshi aravind k 1987 an introduction to tree ad joining grammars in a manasterramer editor mathematics of language john beujamins amster dana lari k and s j young 1990 the estimation of stochastic ontextfree grmnmars using the inside outside algorithm computer speech and language 43556 acres de col1ng92 nantes 2328 aor 1992 4 3 1 pror oi coling92 nantes august 2328 1992 pereira fernando and yves schabes 1992 inside outside reest imation from partial ly bracketed cor pora in 20 th meeting of the association for compu tational linguistics newark delaware prat t fletcher 1942 secret and urgent the story of codes and ciphers blue ribbon books resnik philip 1991 lexicalized treeadjoining ram mar for distr ibutional analysis in penn review of linguistics spring schabes yves anne abeill and aravind k joshi 1988 pars ing strategies with lexicalized grarnmars application to tree adjoining gramnars in proceed ings of the 1 lh international conference on compu tational linguistics from the department of computer science schabes yves 1991 an insideoutside algor i thm for est imat ing the parameters of a hidden stochastic contextfree grammar based on earley algorithm manuscript shannon c e 1948 a mathemat ica l theory of communicat ion the bell system technical journal 27379423 shannon c e 1951 predict ion and entropy of printed english the bell system technical journal 3050 64 vi jayshanker k and david j weir 1991 pars ing constrained grammar formalisms in preparation vi jayshanker k 1987 a study of lbee adjoining grammars phd thesis department of computer and information science university of pennsylvmfia a comput ing the ins ide p rob ab i l i t i es in the following the inside and outside probabilities are reative to the input string w 3 t stands for the the set of foot nodes s for the set of nodes on which substitution can occur for the set of root nodes of initial trees and 4 for the set of nonterminal 
nodes of auxiliary trees. The inside probabilities can be computed bottom-up with the following recurrence equations, defined for every node v found in an elementary tree. It can be shown that the base cases have the form: (1) the inside probability of a node labeled by a terminal a is 1 if a matches the corresponding input word and 0 otherwise; (2) the inside probability of a foot node is 1 if i = j and k = l, and 0 otherwise.
C92-2066
stochastic lexicalized tree-adjoining grammars. The notion of stochastic lexicalized tree-adjoining grammar (SLTAG) is formally defined. The parameters of an SLTAG correspond to the probability of combining two structures, each one associated with a word. The characteristics of SLTAG are unique and novel, since it is lexically sensitive and yet hierarchical. Then two basic algorithms for SLTAG are introduced: an algorithm for computing the probability of a sentence generated by an SLTAG, and an inside-outside-like iterative algorithm for estimating the parameters of an SLTAG given a training corpus. Finally, we show how SLTAG makes it possible to define a lexicalized version of stochastic context-free grammars, and we report preliminary experiments showing some of the advantages of SLTAG over stochastic context-free grammars. In stochastic tree-adjoining grammar this lack of context-sensitivity is overcome by assigning probabilities to larger structural units.
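The sentence-probability computation is easiest to see in the simpler stochastic context-free case that the paper relates SLTAG to. The sketch below computes inside probabilities for a toy SCFG in Chomsky normal form; the grammar and its probabilities are invented for illustration, and the TAG version of the recurrence would carry four string positions per node rather than two.

```python
from collections import defaultdict

# Hypothetical toy SCFG in Chomsky normal form
binary = {("S", ("NP", "VP")): 1.0,
          ("VP", ("V", "NP")): 1.0}
lexical = {("NP", "george"): 0.5, ("NP", "broccoli"): 0.5,
           ("V", "hates"): 1.0}

def inside_probability(words, start="S"):
    n = len(words)
    inside = defaultdict(float)          # inside[(A, i, j)] = P(A =>* w_i..w_j)
    for i, w in enumerate(words):        # base case: spans of length 1
        for (a, word), p in lexical.items():
            if word == w:
                inside[(a, i, i)] += p
    for span in range(2, n + 1):         # build longer spans bottom-up
        for i in range(n - span + 1):
            j = i + span - 1
            for (a, (b, c)), p in binary.items():
                for k in range(i, j):    # split point between B and C
                    inside[(a, i, j)] += p * inside[(b, i, k)] * inside[(c, k + 1, j)]
    return inside[(start, 0, n - 1)]

print(inside_probability("george hates broccoli".split()))   # 0.25
```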
principar an efficient broadcoverage principlebased parser we present an eficient broadcoverage principlebased parser for english the parser has been implemented in c and runs on sun sparcstations with xwindows it con rains a lexicon with over 90000 entries con structed automatically b applying a set of ex traction and conversion rules to entries from machine readable dictionaries principlebased grammars such as govern mentbinding theory offer many advantages over rulebased and unificationbased grammars such as the universality ofprinciples and mod ularity of components in the grammarprin ciples are constraints over xbar structuresmost previous principlebased parsers eg es sentially generate all possible xbar structures of a sentence and then use the principles to fil ter out the illicit onesthe drawback of this approach is the inefficiency due 1o the large number of candidate structures to befiltered outthe problem persists even when wrions techniques such as optimal ordering of princi ples and corontining are usedthis problem may also account for the fact that these parsers are experimental nd have limited coveragethis paper describes an efficient broad coverage principlebased parser called prin ciparthe main innovation in principar is that it applies principles to descriptions o17 x bar structures rather than the structures them selvesxbar structures of a sentence are only built when their descriptions have satisfied all the pri ncilleso dynamic data static dala lrocegsing module data flow figure 1 pile architecture of principar figure i shows the architecture of prin ciparsentence analysis is divided into three stepsthe lexical analyser first converts the in put sentence into a set of texical itemsthen a message passing algorithm for obparsing is used to construct a shared parse forestfi nally a parse tree retriever is used to enumer ate the parse treesthe key idea of the parsing algorithm was presented in this paper presents some implementation details and experimental resultsthe parser in pihncipar is based on a messagepassing framework proposed by in and lin and ooebel which uses a network to encode the grammarthe nodes in tile grammar network represent grammati cal categories or subcate gories such as vnp the links in the net work represent relationships belween the cat egoriesgbprinciples are implemented as lo cal constraints attached to the nodes and 482 perco la t ion cormtra in ts attached to links in the networkf igure2 depicts portion c tile gr unmar network for dnglish 2 i t ip cpspe i aai ip ni i vi 1 t a ul p vvvvvvv vn vsttlntiolx link rotn le since v subsumes vnp and vcp lhere is a sulsmnption l ink from v to caoh 11o of themthereis a donxhiance l ink frolil node e immediately doininaled by o lold cxidl lp lc sillcc aix nlar l i iy i lt l mediacly dominate a pp adjimct there is a dominance link from nbar to ppa dominance link fiom a to fl is associted with an integer id that determii les tile linear order between fl and other categories dolni mty a and a binary attrilute to specify whether fl is optional or obligloryi t ln order to simplify the diagrain we did nollabel tile links with their ids in ligure 2nstead the prece dence between dominance links is ilmieted ly their input sentences arc pyoued by passing mes sages iu the grammyou networklhe nodes ill the nel wory sending messages in tile rcv xba r strltc ill illt stilleleel art is the altrilmlc vmues of thereel node o the xbar strtlctallillco messges prom which this item is combinedthe source ilessages represent 
inlinedite constituctlls o the reel nodeliach node in l he grannil lu net work has a conll letion iredicate tllal deter tllillcs whether an ilieln at llie nodeis coin plete ilx w i lm i case the itelxl is sent as a ines sltge 1o eltier l ldos i l l 1110 xvoic direct ion of the linksvilen a node receives mi itcnl il adilel31pts o ine the itenl w i th ilems rein other nodes 1o forln hew ilenislwo itores and can ie combilxed if a r liks is a iilllciolx hal given i set o nlessages returl is the sel of l inks via which the iiicssages a rrivedlhe result o iixe colnbinmion is a lew ilolll the new ilthem represelxts a larger nba r slruc tyou re resulti ng from t hc combinat ion of the two snlaller cues111 1lie new itera s where ij is an interval denoting the position of the word in the sentence avlf is the at tribute values of the word sense and avo is the attr ibute values of the complements of the word sensestep 2 message passing for each lexi eel item create an initim message and send this message to the grammar network node that represents the category or subcategory of the word sensewhen the node receives the initial message it may forward the message to other nodes or it may combine the message with other messages and send the resulting combination to other nodesthis initiates a message passing pro cess which stops when there are no more mes sages to be passed aroundat that point the initial message for the next lexical item is fed into the networkstep 3 build a shared parse forest when all lexieal items have been processed a shared parse forest for the input sentence can be built by tracing the origins of the messages at the highest node whose str com ponent is the whole sentencethe parse forest consists of the links of the grammar network that are traversed uring the tracing processthe structure of the parse forest is similar to and but extended to include attribute valuesthe parse trees of the input sentence can be retrieved hom the parse forest one by onethe next section explains how tile constraints attached to the nodes and links in the network ensure that the parse trees satisfy all the prin ciplesgb principles are implemented as local and percolation constraints on the itemslo cal constraints are attached to nodes in the networkall items at a node must satisfy the node local constraint lercolation con straints are attached to the links in the net worka message can be sent across a link only if the item satisfies the percolation constraint of the linkwe will only use two examples to give the reader a general idea about how gb principles are interpreted as loc al and percolation con straintsinterested reader is referred to lin for more details31bound ing rpheorythe bounding theory states that a movement can cross at most one bar rier without leaving an intermediate tracean attribute named hbarr0r is used to imple ment this lrinciplea message containing the attribute value whbarrier is used to rep resent an xbar structure contmnlng a posi tion out ol7 which a whconstituent has moved but without yet crossing a barrierthe wdue whbarrier means that the movement has m ready crossed one barriercertain dominance links in the network are designated as bar rier linksbounding condition is implemented by tile percolation constraints attached to the barrier links which block any message with whbarrier and change whbarrior to whbarrier before the message is allowed to pass through32case theorycasetheory reqlfires that every lexicm np be assigned an alstlact casehe implementation of case theory in 
piinciiaii is based on the following attribute vajues ca govern cmca the head is cse assigner ca the head is not a case assigner govern the head is a governor govern the head is not a governor cr an np mcommanded by the head needs case marking the case filter is implemented as follows 1locm constraints attached to the nodesassign ca to items that represent xbar structures whose heads are case assigners 484 nolocal ce done in batch and relatively infrequentlythe twarserthat is as far as theparser is concerned the lexic jec that given a word or a phrase returns its lexical entry or ni l if the entry for exanqle bare inf qle fltlctioii subcat teturts a stllcegoriz lion frame of the wordthe first argt l tne i l progperfpassive the first argument of ref is the attribute val ues of beganthe second argument contains the base form of the word and a set of at tribute namesthe lexical items for the word began is obtained by unifying its attribute values with the attribute wdues in the lexiea entry for beginthe advantage of making references to the base form is that when the base form is modified one does not have to make changes to the entries for its variations4a phrasal entr ies he lexicon also allows for phrases that consist of multiple wordsone of the words in a phrase is designated as the head wordthe head word should be a word in the phrase that can un dergo morphological changes and is the most in frequentfor example in the phrase down payment the head word is paymentin de lexicon a phrase wl wj w is stored as a s t r ing wh ton 101 youh_lthat is the first word in the string is always head word and the words mter should ap pear before the head word in textsthe rune don phrases converts il s arguments into a list of phrases where tile entry word is the headlor example the lexical entry for paymenc is as follows after retrieving the entry for a word each phrase in the phrase list is compared with the surrounding words in the sentenceif the phrase is found in the sentence the entry for the phrase is retrieved froin the lexicon5reducing ambiguitiesone of the problems with many parsers is that they typically generate far more parses than humans normally doior example the average number of parses pet word is 135 in that means that their parser produces on average 8 parses for a 7word sentence 3d parses for a lword sentence and ld4 larses for a 17word seirence rphe large number of parse trees make tim lroeruttepiinciiai defines a weight for every parse treea weight is associated with every word sense and every link in the parse treepile weight of the parse tree is the total weight of the links and the word senses tt the leaf nodes of the treethe packed shared parse forest in ptun cipaiis organized in such a way that the parse tree with minimum weight is retrieved firstiiuncipaii then uses the minimum weight and a predetermined number called bigweight which is currently arbitraryly de fined to be 20 to prune the parse forestonly 486 the parse trees whose weights are less than are spared and outputthe weights of the links and word senses are determined as follows e ihe links fiom xbar to an adimlct yp have weightnlgweigliw and all the other links have weightl0 the words in the lexicon may have an attribute rar which takes wdues from very veryveryif a word sense has the attribute value its weight is bigweigiiti fa word sense has the attribute value its weight is 2bigweigiitotherwise the weight is 0 note that the attribute rare is used to indicate the relative frequency among different stmses of the 
same wordii i l bigweght l john john v npn p n about kim read a br read npa n bar n ip s tory x n story about kim figure 3 adjunct links hare higher weights example 51comparing the two parses of the sentence john read the story about kim in figure 3 in lee about kim is the con plement of story in it is the adjunct of readsince the adjunct dominance link from vbar to pp has much higher weight than the complement dominance link from nbar to pp the total weight of is much smaller them the weight of therefore only is output as the parse tree of the sentenceexample 52the lexical entry for tlm word do is as follows 7 7 p vwho z_ bar who v kim bigweight v did np np love a a kim love figure 4 lexical items have diffcent weights passive per prog hat is do corrcsionds to the correct understanding of the sentencehi did is analyzed as a bitransitive wb as in who did kim a fawn llow eww since the latter sense of the word has an attribute value tree has much higher weight than ttee and only is otdlmt by the iaihica l user interface can also be run on most unix machines with gnu gft compilerliu md coebel showed that the comllexlty of the message passing algorithm is o rocol taken on a sparcstation lc i words tmte ptrses 10 11 076 i3 060 t4 13 055 4 3 051 6 19 o80 2 26 413 32 of the grammar when attribute values are used in messages the complexity of the mgorithm is not yet knownour experiments have shown that the parser is very fasttable 1 lists the parsing time and the number of parses for several ex ample sentencesthe correct parses for all the sentences in tmle 1 are returned by the parsereven though the lexicon is derived from ma chine readable dictionaries and contains a arge number of senses for many words the ratio be tween the number of parse trees and the sen tence length here is well bellow the ratio re ported in acknowledgements the author wishes to thanlbonnie dorr for comments about sections 1 2 and 3this re search was supported by naturm sciences and engineering research council of canada grant ogp121338
C94-1079
principar: an efficient broad-coverage principle-based parser. We present an efficient, broad-coverage, principle-based parser for English. The parser has been implemented in C++ and runs on Sun SparcStations with X-windows. It contains a lexicon with over 90,000 entries, constructed automatically by applying a set of extraction and conversion rules to entries from machine-readable dictionaries. We release MINIPAR, a fast and robust parser for grammatical dependency relations.
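The parse-ranking scheme described in the paper (adjunct dominance links and rare word senses carry a large penalty, and only near-minimal trees survive pruning) can be sketched as follows. The tree encoding, the weight of 1.0 for non-adjunct links, and the pruning threshold of minimum-plus-BIGWEIGHT are readings of the garbled passage and assumptions of this illustration, not a reproduction of the implementation.

```python
BIG_WEIGHT = 20.0   # the paper's arbitrarily chosen constant

def link_weight(link_kind):
    # dominance links to adjuncts are expensive; other links are assumed cheap (1.0)
    return BIG_WEIGHT if link_kind == "adjunct" else 1.0

def sense_weight(rarity):
    # rarity is None, "rare", or "very rare" on the lexical entry
    return {"rare": BIG_WEIGHT, "very rare": 2 * BIG_WEIGHT}.get(rarity, 0.0)

def tree_weight(tree):
    """tree = (word_sense_rarity, [(link_kind, subtree), ...]) at each node."""
    rarity, children = tree
    return sense_weight(rarity) + sum(link_weight(kind) + tree_weight(child)
                                      for kind, child in children)

def prune(parse_trees):
    weights = [tree_weight(t) for t in parse_trees]
    best = min(weights)
    # assumption: spare only trees within BIG_WEIGHT of the minimum weight
    return [t for t, w in zip(parse_trees, weights) if w < best + BIG_WEIGHT]

# Analogous to the "who did Kim love" example: one reading uses only common
# senses, the other hinges on a hypothetical "very rare" lexical sense.
common_reading = (None, [("spec", (None, [])), ("comp", (None, []))])
rare_reading   = ("very rare", [("spec", (None, [])), ("comp", (None, []))])
print([tree_weight(t) for t in (common_reading, rare_reading)])   # [2.0, 42.0]
print(prune([common_reading, rare_reading]) == [common_reading])  # True
```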
kvec a new approach for aligning parallel texts various methods have been proposed for aligning texts in two or more languages such as the canadian parliamentary debates some of these methods generate a bilingual lexicon as a byproduct we present an alternative alignment strategy which we call kvec that starts by estimating the lexicon for example it discovers that the english word fisheries is similar to the french pches by noting that the distribution of fisheries in the english text is similar to the distribution of pches in the french kvec does not depend on sentence boundaries there have been quite a number of recent papers on parallel text brown et al chen church church et al dagan et al gale and church isabelle kay and rgsenschein klavans and tzoukermann kupiec matsumoto ogden and gonzales shemtov simard et al warwick armstrong and russell wu most of this work has been focused on european language pairs especially englishfrenchit remains an open question how well these methods might generalize to other language pairs especially pairs such as englishjapanese and english chinesein previous work we have reported some preliminary success in aligning the english and japanese versions of the awk manual using charalign a method that looks for character sequences that are the same in both the source and targetthe charalign method was designed for european language pairs where cognates often share character sequences eg government and gouvernementin general this approach does not work between languages uch as english and japanese which are written in different alphabetsthe awk manual happens to contain a large number of examples and technical words that are the same in the english source and target japaneseit remains an open question how we might be able to align a broader class of texts especially those that are written in different character sets and share relatively few character sequencesthe kvec method attempts to address this questionkvec starts by estimating the lexiconconsider the example fisheries pchesthe kvec algorithm will discover this fact by noting that the distribution of fisheries in the english text is similar to the distribution of pches in the frenchthe concordances for fisheries and pches are shown in tables 1 and 2 1 1these tables were computed from a small fragment ofthecanadian hansards that has been used in a number of other studies church and simard et althe english text has 165160 words and the french text has 185615 words1096 there are 19 instances of fisheries and 21 instances of pchesthe numbers along the left hand edge show where the concordances were found in the textswe want to know whether the distribution of numbers in table 1 is similar to those in table 2 and if so we will suspect hat fisheries and pches are translations of one anothera quick look at the two tables suggests that the two distributions are probably very similar though not quite identical2 we use a simple representation f the distribution of fisheries and pchesthe english text and the french text were each split into k piecesthen we determine whether or not the word in question appears in each of the k piecesthus we denote the distribution of fisheries in the english text with a kdimensional binary vector vu and similarly we denote the distribution of pches in the french text with a kdimensional binary vector vpthe i th bit of vf indicates whether or not fisheries occurs in the i th piece of the english text and similarly the ith bit of vp indicates whether or not pches occurs in the i th 
piece of the french textif we take k be 10 the first three instances of fisheries in table 1 fall into piece 2 and the remaining 16 fall into piece 8similarly the first 4 instances of pgches in table 2 fall into piece 2 and the remaining 17 fall into piece 8thus vt vp now we want to know if vt is similar to vp and if we find that it is then we will suspect that fisheries pchesin this example of course the vectors are identical so practically any reasonable similarity statistic ought to produce the desired result3fisheries is not file translation of lections before describing how we estimate the similarity of vf and vp let us see what would happen if we tried to compare fisheries with a completely unrelated word eg lections2at most fisheries can account for only 19 instances ofpches leaving at least 2 instances ofpches unexplainedas can be seen in the concordances in table 3 for k10 the vector is by almost any measure of similarity one could imagine this vector will be found to be quite different from the one for fisheries and therefore we will correctly discover that fisheries is not the translation of lectionsto make this argument a little more precise it might help to compare the contingency matrices in tables 5 and 6the contingency matrices show the number of pieces where both the english and french word were found the number of pieces where just the english word was found the number of pieces where just the french word was found and the number of peices where neither word was foundtable 4 a contingency matrix french english a b c d table 5 fisheries vs pgches pches fisheries 2 0 0 8 table 6 fisheries vs lections lections fisheries 0 2 4 4 in general if the english and french words are good translations of one another as in table 5 then a should be large and b and c should be smallin contrast if the two words are not good translations of one another as in table 6 then a should be small and b and c should be large4mutual informationintuitively these statements seem to be true but we need to make them more preciseone could have chosen quite a number of similarity metrics for this purposewe use mutual information 1097 prob log2 prob prob that is we want to compare the probability of seeing fisheries and pches in the same piece to chancethe probability of seeing the two words in the same piece is simply a prob abcd the marginal probabilities are ab prob abcd ac prob abcd for fisheries pches prob prob prob 02thus the mutual information is log25 or 232 bits meaning that the joint probability is 5 times more likely than chancein contrast for fisheries lections prob o prob 05 and prob 04thus the mutual information is log 2 0 meaning that the joint is infinitely less likely than chancewe conclude that it is quite likely that fisheries and pches are translations of one another much more so than fisheries and lections5significanceunfortunately mutual information is often unreliable when the counts are smallfor example there are lots of infrequent wordsif we pick a pair of these words at random there is a very large chance that they would receive a large mutual information value by chancefor example let e be an english word that appeared just once and le t fbe a french word that appeared just oncethen there a nontrivial chance that e andf will appear is in the same piece as shown in table 7if this should happen the mutual information estimate would be very large ie logk and probably misleadingtable 7 f e 1 0 0 9 in order to avoid this problem we use a tscore to filter out insignificant mutual 
information valuesprob prob prob t 1 prob using the numbers in table 7 tl which is not significantsimilarly if e and f appeared in just two pieces 1 each then there is approximately a chance that they would both appear in the same two pieces and then the mutual information score would be quite log but we probably would not believe it high z because the tscore would be only by this definition of significance we need to see the two words in at least 3 different pieces before the result would be considered significantthis means unfortunately that we would reject fisheries pches because we found them in only two piecesthe problem of course is that we do not have enough pieceswhen k10 there simply is not enough resolution to see what is going onat k100 we obtain the contingency matrix shown in table 8 and the tscore is significant table 8k100 pches fisheries 5 0 1 94 how do we choose kas we have seen if we choose too small a k then the mutual information values will be unreliablehowever we can only increase k up to a pointif we set k to a ridiculously large value say the size of the english text then an english word and its translations are likely to fall in slightly different pieces due to random fluctuations and we would miss the signalfor this work we set k to the square root of the size of the corpusk should be thought of as a scale parameterif we use too low a resolution then everything turns into a blur and it is hard to see anythingbut if we use too high a resolution then we can miss the signal if 7098 it is not just exactly where we are lookingideally we would like to apply the kvec algorithm to all pairs of english and french words but unfortunately there are too many such pairs to considerwe therefore limited the search to pairs of words in the frequency range 310this heuristic makes the search practical and catches many interesting pairs 6resultsthis algorithm was applied to a fragment of the canadian hansards that has been used in a number of other studies church and simard et al the 30 significant pairs with the largest mutual information values are shown in table 9as can be seen the results provide a quickand dirty estimate of a bilingual exiconwhen the pair is not a direct translation it is often the translation of a collocate as illustrated by acheteur limited and santd welfareusing a scatter plot technique developed by church and helfman called dotplot we can visulize the alignment as illustrated in figure 1the source text is concatenated to the target text to form a single input sequence of nxny bytesa dot is placed in position ij whenever the input token at position i is the same as the input token at position j the equality constraint is relaxed in figure 2a dot is placed in position ij whenever the input token at position i is highly associated with the input token at position j as determined by the mutual information score of their respective k vecsin addition it shows a detailed magnified and rotated view of the diagonal linethe alignment program tracks this line with as much precision as possiblehave been rejected anyways as insignificanttable 9 kvec results french english 32 beauce beauce32 1981 1981 30 richmond richmond30 pches fisheries 28 deans deans28 prud prud28 prud homme27 acheteur limited 27 communications communications27 macdonald macdonald26 mazankowski mazankowski25 croisire nuclear 25 sant6 welfare25 39 39 25 johnston johnston25 essais nuclear 25 universit6 university25 bois lumber 25 angus angus24 angus via24 saskatoon university24 agriculteurs farmers 24 
inflation inflation 24 james james24 vanier vanier24 sant6 health23 royale languages 23 grief grievance 7conclusionsthe kvec algorithm generates a quickanddirty estimate of a bilingual exiconthis estimate could be used as a starting point for a more detailed alignment algorithm such as word_align in this way we might be able to apply word_align to a broader class of language combinations including possibly englishjapanese and englishchinesecurrently word_align depends on charalign to generate a starting point which limits its applicability to european languages since char_align was designed for language pairs that share a common alphabet
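The significance test described above is easy to reproduce from the contingency counts. The following sketch (not code from the paper; the function name and argument layout are ours) computes the mutual information and t-score for a word pair from its K-piece co-occurrence counts, and reproduces the Table 8 case for pêches/fisheries at K = 100.

```python
import math

def kvec_association(k, both, f_only, e_only):
    """Mutual information and t-score for a word pair from K-piece
    co-occurrence counts (a 2x2 contingency table as in Table 8).

    k      -- number of pieces the corpus was split into
    both   -- pieces containing both the French and the English word
    f_only -- pieces containing only the French word
    e_only -- pieces containing only the English word
    """
    p_joint = both / k
    p_f = (both + f_only) / k
    p_e = (both + e_only) / k
    mi = math.log2(p_joint / (p_f * p_e))
    # t-score used to decide whether the MI value should be believed
    t = (p_joint - p_f * p_e) / math.sqrt(p_joint / k)
    return mi, t

# Table 8 (K = 100): "peches" and "fisheries" co-occur in 5 pieces
mi, t = kvec_association(k=100, both=5, f_only=0, e_only=1)
print(round(mi, 2), round(t, 2))   # MI ~ 4.06, t ~ 2.1 (above the usual 1.65 threshold)
```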
C94-2178
kvec: a new approach for aligning parallel texts. various methods have been proposed for aligning texts in two or more languages such as the canadian parliamentary debates. some of these methods generate a bilingual lexicon as a byproduct. we present an alternative alignment strategy, which we call kvec, that starts by estimating the lexicon. for example, it discovers that the english word fisheries is similar to the french peches by noting that the distribution of fisheries in the english text is similar to the distribution of peches in the french. kvec does not depend on sentence boundaries.
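As a companion to the scoring function above, here is a minimal sketch of how the K binary occurrence profiles behind this strategy might be built for one language side. The defaults (K = square root of the corpus size, words restricted to the 3..10 frequency band) follow the description in the text; the function name and data layout are assumptions of ours.

```python
import math
from collections import defaultdict

def kvec_pieces(tokens, k=None, min_freq=3, max_freq=10):
    """Binary K-piece occurrence profile for each word of one language side.

    tokens -- one side of the parallel corpus as a flat token list
    k      -- number of pieces; defaults to sqrt(corpus size) as in the text
    Only words in the min_freq..max_freq band are kept (the search heuristic).
    Returns a dict word -> set of piece indices, plus the k that was used.
    """
    n = len(tokens)
    if k is None:
        k = int(math.sqrt(n))
    piece_len = n / k
    pieces = defaultdict(set)
    freq = defaultdict(int)
    for pos, w in enumerate(tokens):
        pieces[w].add(min(int(pos / piece_len), k - 1))
        freq[w] += 1
    kept = {w: p for w, p in pieces.items() if min_freq <= freq[w] <= max_freq}
    return kept, k
```

A pair of words, one from each side (built with the same k), can then be scored with the kvec_association sketch above, using the size of the intersection of their piece sets as the joint count.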
a rulebased approach to prepositional phrase attachment disambiguation in this paper we describe a new corpusbased ap proach to prepositional phrase attachment disam biguation and present results colnparing peffo mange of this algorithm with other corpusbased approaches to this problem prelositioual phrase attachment disambiguation is a difficult problemtake for example the sen rouge buy a ear po with a steering wheelwe would guess that the correct interpretation is that one should buy cars that come with steer ing wheels and not that one should use a steering wheel as barter for purchasing a carn this case we are helped by our world knowledge about auto mobiles and automobile parts and about typical methods of barter which we can draw upon to cor rectly disambignate he sentencebeyond possibly needing such rich semantic or conceptual intornla tion a l tmann and steedman show that there are certain cases where a discourse model is needed to correctly disambiguate prepositional phrase attachmenthowever while there are certainly cases of an biguity that seem to need some deep knowledge either linguistic or conceptual one might ask whag sort of performance could 1oe achieved by a sys tem thai uses somewhat superficial knowledge au parts of this work done at the computer and hp lbrmation science department university of penn sylvania were supported by by darpa and afosr jointly under grant noafosr900066 and by aro grant nodaal 0389c0031 pr and by an ibm gradmtte fellowship this work was also supported at mit by arpa under con tract n000t489jla32 monitored through the office of naval resear nltion is found whose application results in the greatest imlrovenmnt hat transfkrmation is then added to the ordered trmlsforlmltiou list and the corpus is updded by applying thelearned trans formation for a detailed discussiou of this algorithm in the context of machiue le aru iug issuesottce 311 ordered list of transformtions i learned new text can be mmotated hy first ai plying the initial state mnotator to it and then applying each o the traamormations iu orderunannotatid iixi 1nhai l state annotatlid text tii jl l e n elreposi tioiml phrase attmunent eariter learns transfor illttiolls onl a clltls o 4tuples of the orm where v is 1 wrl nl is the head of its objecl llolni phrase i is the lepositioll and 112 is the head of the noun phrase governed by the prelc sition 1or all sentences that conlbrm to this pattern in the penn treebmk wdl st leet 3ourlml corpns such a 4tuplc was formed attd each ltuple was paired with the ataehnteut de cision used in the treebauk parse here were 12766 4qulles in all which were randomly split into 12206 trnining smples and 500 test samplesn this eperiment tim at tachment choice for lrepositional ihrases was ie iween the ohiecl mmn and lhe matrix verbn the initial slte mmotator all prepositional phrases i atterns were extraclxxl usjng tgrep a treebased grep program written by rich pitote 4tuples were cxtracted automtkally aud mistakes were notmvn ttally pruned out1199 are attached to the object noun2 this is tile at tachment predicted by right association the allowable transforlnations are described by the following templates change the attachment location from x to y if n l i sw n2 is w v isw p is w n l is w1 and n2 is w2 n l i swl andv isw2 here from x to y can be either from nl to v or from v to nl w can be any word and the ellipsis indicates that the complete set of transformations permits matching on any combination of values for v n l p and n2 with the exception of 
patterns that specify vahms for all fourfor example one allowable transformation would be change the attachment location from nl to v if p is untillearning proceeds as followsfirst the train ing set is processed according to the start state annotator in this case attaching all prepositional phrases low then in essence each possible transtbrmation is scored by apply ing it to the corpus and cornputing the reduction in error ratein reality the search is data driven and so the vast majority of al lowable transformations are not examinedthe bestscoring transformation then becomes the first transformation i the learned listit is applied to the training corpus and learning continues on the modified corpusthis process is iterated until no rule can he found that reduces the error ratein the experiment a tol al of 471 transfor mations were learned figure 3 shows the first twenty3 initial accuracy on the test set is 640 when prepositional phrases are always attached to the object nounafter applying the transforma tions accuracy increases to 808figure 2 shows a plot of testset accuracy as a function of the nulnber of training instancesit is interesting to note that the accuracy curve has not yet reached a 2if it is the case that attaching to the verb would be a better start state in some corpora this decision could be parameterizedzin transformation 8 word token amount appears because it was used as the head noun for noun phrases representing percentage amounts eg 5the rule captures the very regular appearance in the penn tree bank wall street journal corpus of parses like sales for the yea vp rose np5yopp in fiscal 1988accuracy 8100 rl 8000 7900 t 7700 are f oo1 i 74001 _ _ t _ _ 7300 j 7200 l l i _ __ __ 2 7000 6900 6800 6700 6400 000 500 i q i t taining size x 103 1000 figure 2 accuracy as a function of lraining corpus size plateau suggesting that more training data wonld lead to further improvementsadding word class in format ion in the above experiment all transbrmations aretriggered hy words or groups of words and it is surprising that good performance is achieved even in spite of the inevitable sparse data problemsthere are a number of ways to address the sparse data problemone of the obvious ways mapping words to part of speech seerns unlikely to helph stead semanl ic class information is an attraclive alternativewe incorporated the idea of using semantic ino tbrmation in the lbllowing wayusing the word net noun hierarchy each noun in the ffaning and test corpus was associated with a set containing the noun itself phts the name of every semantic lass that noun appears in 4 the transformation template is modified so that in ad dition to asking if a nmm matches ome word w 4class names corresponded to unique synonynl set identifiers within the wordnet noun databasea noun appears in a class if it falls within the hy ponym tree below that classin the experiments reported here we used wordnet version l21200 1 2 4 5 _ change attehment location lrm to is iulo ni p is lom ni v p is with n v n2 is year n 1 v p is by iis i and n i v ni ix amounl n is lhrough ni v is durbg ni v v ix pul n1 v n2 is moulk n v 1 is ulldcr nj v 1 is after v is have and n1 v i is b n v p is wilkoul v ni p is of v is buy and n1 p is for ni v p is bejblproaeh io datasparseness i similar to tllat of where method ix proposed for using wordnet in conjunction with a corpus to ohtain classbased statisl ieqlit method here is ltlllcl simpler however in ihat we are only us ing boolean values to indieale whelher word can be a member of 
a class rather than esl imat ing filll se of joint probabil it ies involving lroach with classes ccm generdize ill a way that the approach without classes is mllle to we woldd expect fcwer lransformalions to be necessary lxperimeah ally this is indeed the casein a second experiment lraining aml testing were eratied out on the same samples as i the previous experiment bul ihis t ime using the ext ende d tra ns lbrmat ion tlates for word classesa total of 266 transformations were learnedapplying lhese transtormaiions to the test set leslllted in an accuracy of 818n figure 4 we show tile lirst 20 transforml ions lemned using ilollll classesclass descriptions arc surrounded by square braclets thell the preltositiomd phrase should be altache the wrb since tim thajl a llounthis exlw r iment also demonstrates how rely lulebased lexicon or word classiflcat ion scheme cau triviajly be incorljorated into the learner by exlencling lransfotnlaliolls to allow thent to make lefelealc trainittg is done on a superset el sentence types ttsed ill train ing the transforljatiolfbased learnerthe transformationbased learner is i rained on sen tences containing v n and p whereas the algo r i thm described by l l indle and iooth can zdso use sentences ositions frotn the associated press is wilh 7 n1 v p is of p is in and ni is 8 n 1 v measure quanlily amoul p is by allel 9 n1 v n2 is abslraclion i 0 ni v p is lhrough 1 is in and n i is 11 ni v groupgrouping12 v n 1 v is be 13 ni v v is pul14 ni v p is underp is i and n is 15 n1 v written commluication 16 n1 v l is wilhoul 17 n1 v p is during 18 n 1 v 19 ni v 20 n1v l is on and nt is youing p is after v is buy and p is for figure 4 the first 20 transformations learned for prepositional phrase attachment using noun classesof method accuracy transforms tscores 704 758 anstbrmaions 808 471 transormati ons 792 418 transformations 818 26 and tested it using the same training and test set used for the above experimentsdoing so re sulled in an attachment accuracy of 704next the training set was expanded to include not only the cases o ambiguous attachment fonnd in the parsed wall street journal corpus as before but also all the unambiguous prepositional phrase at tachments tbnnd in the corpus as well accuracy improved to 758 r using the larger training set still significantly lower than accuracy obtained us lag tam tlansformalionbased approachthe tech nique described in which com bined hindle and rooth lexical association tech nique with a wordnetbased conceptual associa tion measure resulted in an accuracy of 760 also lower than the results obtained using trans formationssince llindle and rooth approach does not make reference to n2 we reran the transformationlearner disalkwing all transforma tions that make reference o n2doing so resulted in an accuracy of 792see figure 5 hr a sun mary of resultsit is possihle lo compare the results described here with a somewhat similar approach developed independently by ratnaparkhi and ioukos since they also used training and test datt drawn from the penn treebank wall street jour nal corpusinstead of using mammlly coustructed lexical classes they nse word classes arrived at via mutmd information clustering in a training corpus resulting in a representation i which each word is represented by a sequence of bitsas in the experiments here their statistical model also makes use of a 4tuple context and can use the identities of the words class inlor marion rthe difference between these results nd tile result they quoted is likely due to 
a much blrger training set used in their origimd experiments1202 or both mnds of i ldormation as eotllextual fea tllles riley lescrile a search process useset of the available illorht l ion will i am used in the modelr and in esling choose site v op site n l accordilg io which has he higher eomlitional probaliitytatnalpkhi and roukos relort an aecuraey oi 816 using bot h word and class iuiorma tion on wall sireet 3ourna text using a training col pus twice as la rgc as that used in oup experimentsthey also report that a erformanee of 7771 el o other reseapehers have exllored eorlmsiased approaches io lrepositional phrase attaehmet t disamliguation tmt ndc use of word classes lor example weisehedq cl al and basili el al boll deserile the use of lnanual ly coustrueted donmhv sleitic word classes together with coriustmsed sittisties in o f d2r to resolve irelositional 1hrase at taehlllellt illiiguityiee results old tained on different corpora however it is er of advaihages over lroatehes11 a youison with lexical association higher bleer of simple reaosed to a large lllllllef of lexical cooeetlrreltee lloa iilitiesyou addit ion we have shown how the lrausorlnlionbased l arner can casity be e tended to incorporate wordclass ifformatiouthis resulted in a slight increase in 1erformanee but more notaly it resulted in a reduction hy roughly half in the lota mnnler of transfor mat ion rules neededand in abil istic models or classes derived vi statist ical cluslering methods this ech l l ique proa risolls ell io ina reimllopted above the rulebased approach described here achieves better perlorlttaucc using ml algolithm tlmt is eonceltually quite mmlle am iu llllroateh is eas ily ale 1ui handconstruetci llior knowledge is also availableexist ing knowl elge such as structural strategies or even a priori hxieal lreferences e incorlorated into ihe start state annotator so theft the learning dgoiithm begins with nore refiued inputand knowu exceptious au 1e handhlroach is that it re quires supervised t ra in ing that is a representa tive set of true cses tfolll which co learnilow ever this leeomes less of a probhm as atmotated eorlora beeolne increasingly available and sug gests the comhinat ion o1 supexvised and uusuper vised methods as ayou ilfleresth g aveme or urther reseire
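To make the learning procedure concrete, the sketch below implements a greedy transformation-based learner over (v, n1, p, n2) quadruples in the spirit described above: start with every prepositional phrase attached to the object noun, repeatedly score candidate rules by the net reduction in training error, and add the best rule until no rule helps. Only the single-field templates are shown, and the data layout and function names are our own assumptions, not the authors' code.

```python
def candidate_rules(ex):
    """Single-field transformation templates triggered by this example:
    'change the attachment from X to Y if <field> is <word>'.  The paper
    also allows templates over pairs of fields, omitted here for brevity."""
    other = 'V' if ex['guess'] == 'N' else 'N'
    for field in ('v', 'n1', 'p', 'n2'):
        yield (ex['guess'], other, field, ex[field])

def learn_transformations(data):
    """Greedy transformation-based learning for PP attachment -- a sketch
    under our own data layout.  data: dicts with keys v, n1, p, n2 and a
    gold attachment label 'N' or 'V' taken from the treebank."""
    for ex in data:
        ex['guess'] = 'N'              # start-state annotator: attach to the object noun
    learned = []
    while True:
        scores = {}                    # rule -> net reduction in training error
        for ex in data:
            for rule in candidate_rules(ex):
                scores[rule] = scores.get(rule, 0) + (1 if rule[1] == ex['gold'] else -1)
        best = max(scores, key=scores.get, default=None)
        if best is None or scores[best] <= 0:
            break                      # no rule reduces the error rate any further
        learned.append(best)
        from_lab, to_lab, field, word = best
        for ex in data:                # apply the winning rule, then iterate
            if ex['guess'] == from_lab and ex[field] == word:
                ex['guess'] = to_lab
    return learned
```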
C94-2195
a rulebased approach to prepositional phrase attachment disambiguation. in this paper we describe a new corpusbased approach to prepositional phrase attachment disambiguation and present results comparing performance of this algorithm with other corpusbased approaches to this problem. we train a transformationbased learning algorithm on 12766 quadruples from wsj. we use the supervised transformationbased learning method and lexical and conceptual classes derived from wordnet, achieving 82% precision on 500 randomly selected examples.
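Once an ordered rule list has been learned, annotating new text is just a matter of running the start-state annotator and then the rules in order. The sketch below illustrates this, with an optional hook for the WordNet-class extension discussed above (a rule may also fire on a class name the noun falls under). The rule encoding matches the learner sketch above and the usage example is hypothetical.

```python
def attach(example, learned, noun_classes=None):
    """Annotate a new (v, n1, p, n2) quadruple: start low (N), then apply
    the ordered transformation list.  If noun_classes maps a noun to the
    set of WordNet class names above it in the hyponym tree, rules may
    also match on a class name.  Illustrative only."""
    guess = 'N'

    def matches(field, word):
        value = example[field]
        if value == word:
            return True
        if noun_classes and field in ('n1', 'n2'):
            return word in noun_classes.get(value, set())
        return False

    for from_lab, to_lab, field, word in learned:
        if guess == from_lab and matches(field, word):
            guess = to_lab
    return guess

# hypothetical usage with two learned rules
rules = [('N', 'V', 'p', 'until'), ('N', 'V', 'p', 'into')]
print(attach({'v': 'wait', 'n1': 'table', 'p': 'until', 'n2': 'noon'}, rules))  # 'V'
```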
anaphora for everyone pronominal anaphora resolution without a parser we present an algorithm for anaphora res olutkm which is a modified and extended version of that developed by in contrast to that work our al gorithm does not require indepth full syn tactic parsing of text instead with minimal compromise in output quality the modifica tions enable the resolution process to work from tile output of a part of speech tag ge enriched only with annotations of gram matica functkm of lexical items in the in put text stream evaluation of the results of our intplementation demonstrates that ac curate anaphora resolution can be realized within natural anguage processing flame works which do notr cannot employ ro bust and rcqiable parsing components describe an algorithm for pronominal anaphora resolution with high rate of cor rect analyseswhile one of the strong points of this algorithm is that it operates primarily on syntactic in formation ahme this also turns out to be a limiting factor for its wide use current stateoftheart of prac tically applicable parsing technology still falls short of robust and reliable delivery of syntactic analysis of real texts to the level of detail and precision that the filters a nd constraints described by i appin and l eass assumewe are particularly interested in a class of text pro cessing applications capable of delivery of content analysis to a depth inwlving nontrivial amount of discourse processing including anaphora resolutionthe operational context prohibits us from making any assumptions concerning domain style and genre of input as a result we have developed a text processing framework which builds its capabilities entirely on the basis of a considerably shallower linguistic analysis of the input stream thus trading off depth of base level analysis for breadth of cownagein this paper we present work on modifying the lmp pinleass algorithm in a way which enables it to work off a flat morphosyntactic analysis of the sentences of a text while retaining a degree of quality and accuracy in pronorainal anaphora resolution comparable to that reported in the modifica tions discussed below make the algorithm available to a wide range of text processing frameworks which due to the lack of full syntactic parsing capability nor really would have been unable to use this high preci sion anap hora resolution toolthe work is additionally important we feel as it shows that informatkm about the content and logical structure of a text in principie a core requirement for higher level semantic and discourse processes can be effectively approximated by the right mix of constituent analysis and inferences about functional relationsthe base level linguistic analysis for actaphora resolu tion is the output of a part of speech tagger augmented with syntactic function annotatkms for each input token this kind of analysis is generated by the mor pbosyntactic tagging system described in in addition to extremely high levels of accuracy in recall and precision of tag assignment report 9977 overall recall and 9554 overall preciskm over a variety of text genres and in comparison with other stateoftheart tagging sys tems the primary motivation for adopting this system is the requirement todevelop a robust ext processor with anaphora resolution being just one of its discourse analysis functkms capable of reliably handling arbi trary kinds of inputthe tagger provides a very simple analysis of the structure of the text for each lexical item in each sen tence it provides a set of values 
which indicate the morphological lexical grammatical nd syntactic fea tures of the item in tile context in which it appearsin addition the modified algorithm we present requhes annota tion of the input text stream by a simple position identification function which associates an integer with each token in a text sequentially as an example given the text for 1995 the company set up its headquar ters in hall l the newest and most prestigious of cebit 23 hal is tile anaphora resolutkm algorithm would be presented with the hllowing analysis treamnote in particular the grammatical function information qfmainv and the integer values asso cia ted with each tokenfor o f f139 for prep advl 1995o f f140 1995 num card companyo f f142 company n nom sgpl subj setoff143 set v past vf in fmainv upof f144 up adv advl advl i t s o f f145 it pron gen sg3 gn headquar ters o f f146 headquar ters n nom sgpl obj in o f f147 in prep l l o f f149 ii num card newest o f f152 new a sup pcomplo andof f153 and cc cc most o f f154 much adv sup ada pres t ig ious o f f155 p res t ig ious a abs 230f f158 23 num card qn ha l l s o f f159 hal l n nom pl p o f f160 punct 21 data collectionalthough lingsoft does not provide specific infor mation about constituent structure partial constituen cyspecifically identification of sequences of tokens as phrasal unitscan be inferred from the analysis by running the tagged text through a set of filters which are stated as regular expressions over metatokens such as the ones illustrated abovefor the purposes of anaphora resolution the pri mary data set consists of a complete listing of all noun phrases reduced to modifierhead sequencesthis data set is obtained by means of a phrasal grammar whose patterns characterize the composition of a noun phrase in terms of possible token sequencesthe output of np identification is a set of tokenfeature matrixoffset sequences where offset value is deter mined by the offset of the first token in the sequencethe offset indicates the position of the np in the text and so provides crucial information about precedence relationsa secondary data set consists of observations about the syntactic ontexts in which the nps identified by the phrasal grammar appearthese observations are derived using a set of patterns designed to detect nom inal sequences in two subordinate syntactic environ ments containment in an adverbial adjunct and con tainment in an np this is accomplished by running a set of patterns which identify nps that occur locally to ad verbs relative pronouns and nounpreposition r noun complementizer sequences over the tagged text in con junction with the basic np patterns described abovebecause the syntacticpatterns are stated as regular ex pressions misanalyses are inevitablein practice how ever the extent o which incorrect analyses of syntactic context affect the overall accuracy of the algorithm is not large we will return to a discussion of this point in section 4a third set of patterns identifies and tags occurrences of expletive itthese patterns target occurrences of the pronoun it in certain contexts eg as the subject of members of a specific set of verbs or as the subject of adjectives with clausal complementsonce the extraction procedures are complete and the results unified a set of discourse referentsabstract ob jects which represent the participants inthe discourse is generated from the set of np observationsa particu larly convenient implementation f discourse referents is to represent them as objects in the common lisp object 
system with slots which encode the following information parameters text text form type referential type agr person number gender gfun grammatical function adjunct t o r nil embed t o r nil pos text position note that each discourse referent contains information about itself and the context in which it appears but the only information about its relation to other dis course referents is in the form of precedence r lations the absence of explicit information about configurational relations marks the crucial difference between our algorithm and the lap pinleass algorithm use configurational information in two ways as a factor in the determination of the salience of a discourse refer ent and as input to a set of disjoint reference filtersour implementation seeks to perform exactly the same tasks by inferring hierarchical rela tions from a less rich basethe modifications and assumptions required to accomplish this goal will be highlighted in the following discussion22 anaphora resolutiononce the representation f the text has been recast as a set of discourse referents it is sent to the anaphora resolution algorithm properthe basic logic of the algorithm parallels that of the lap pinleass algorithmthe interpretation procedure in volves moving through the text sentence by sentence and interpreting the discourse referents in each sen tence from left to rightthere are two possible in terpretations of a discourse referent either it is taken to introduce a new participant in the discourse or it is taken to refer to a previously interpreted iscourse referentcoreference is determined by first eliminating from consideration those discourse referents to which an anaphoric expression cannot possibly refer then se lecting the optimal antecedent from the candidates that remain where optimality is determined by a salience measurein order to present the details of anaphora resolution we define below our notionsand implementations of coreference and salience221 coreference as in the lappin and leass algorithm the anaphor antecedent relation is established between two dis course referents hile the more general notion of coreference is represented in terms of equivalence classes of anaphorically re lated discourse referents which we will refer to as coref classesthus the problem of interpreting an anaphoric expression boils down to the problem of es tablishing an anaphoric link between the anaphor and some previously interpreted iscourse referent a consequence of establishing 114 this link is that the anaphor becomes a member of the coref class already associated with its antecedentin our implementation coref classes are repre sented as objects in the common lisp object system which contain information about the coref class as a whole including canonical form membership and most importantly salience 1 the connection between a dis course referent and its coref class is mediated through the coref object as follows every discourse referent includes an information parameter which is a pointer to a coref object discourse referents which have been determined to be coreferential share the same coref value imple menting coreference in this way provides a means of getting from any discourse referent in a coref class to information about the class as a whole222 salience the information parameter of a coref object most cru cial to anaphora resolution is its salience which is de termined by the status of the members of the coref class it represents with respect to 10 contextual gram matical and syntactic onstraintsfollowing 
we will refer to these constraints as salience factorsindividual salience factors are asso ciated with numerical values the overall salience or salience weight of a coref is the sum of the values of the salience factors that are satisfied by some member of the coref class the salience factors used by our algorithm are defined below with their valuesour salience factors mirror those used by with the exception of posss discussed below and cntxs which is sensitive to the context in which a discourse referent appears where a context is a topically coherent segment of text as deter mined by a textsegmentation algorithm which follows sents 100 iff in the current sentence cntxs 50 iff in the current context subjs 80 iff gfun subject exsts 70 iff in an existential construction posss 65 iff gfun possessive accs 50 iff gfun direct object dats 40 iff gfun indirect object oblqs 30 iff the complement of a preposition heads 80 iff embed nil args 50 iff adjunct nil note that the values of salience factors are arbitrary what is crucial as pointed out by is the relational structure imposed on the factors by these valuesthe relative ranking of the factors is justified both linguistically as a reflection of the role of the functional hierarchy in determining anaphoric relations as well as by experimental resultsboth lappin and leass and our ownfor all factors except cntxs and posss we adopt the values derived from a series of experiments described in which used dif ferent settings to determine the relative importance of 1the implementation of acoref object needs to be aware of po tenlial circularities thus a coref does not actually contain its member discourse r ferents but rather alisting of their offsets each factor as a function of the overall success of the algorithmour values for cntxs and posss were de termined using similar testsan important feature of our implementation of salience following that of lappin and leass is that it is variable the salience of a coref class decreases and increases according to the frequency of reference to the classwhen an anaphoric link is established between a pronoun and a previously introduced iscourse refer ent the pronoun is added to the coref class associated with the discourse referent its coref value is set to the coref value of the antecedent and the salience of the coref object is recalculated according to how the new member satisfies the set of salience factorsthis final step raises the overall salience of the coref since the new member will minimally satisfy sents and cntxssalience is not stable however in order to realisti cally represent the local prominence of discourse ref erents in a text a decay function is built into the algo rithm so that salience weight decreases over timeif new members are not added the salience weight of a coref eventually reduces to zerothe consequence of this variability in salience is that a very general heuris tic for anaphora resolution is established resolve a pronoun to the most salient candidate antecedent223 interpretation as noted above in terms of overall strategy the resolu tion procedure follows that of lappin and leassthe first step in interpreting the discourse referents in a new sentence isto decrease the salience weights of the coref classes that have already been established by a factor of twonext the algorithm locates all nonanaphoric dis course referents in the sentence under consideration generates a new coref class for each one and calcu lates its salience weight according to how the discourse referent 
satisfies the set of salience factorsthe second step involves the interpretation f lexical anaphors a list of candi date antecedentanaphor pairs is generated for every lexical anaphor based on the hypothesis that a lexical anaphor must refer to a coargumentin the absence of configurational information coarguments are iden tified using grammatical function information and precedence relationsa reflexive can have one of three possible grammatical function values direct object indirect object or obliquein the first case the closest preceding discourse referent with grammatical function value subject is identified as a possible antecedentin the latter cases both the clos est preceding subject and the closest preceding direct object hat is not separated from the anaphor by a sub ject are identified as possible antecedentsif more than one possible antecedent is located for a lexical anaphor the one with the highest salience weight is determined to be the actual antecedentonce an antecedent has been located the anaphor is added to the coref class associated with the antecedent and the salience of the coref class is recalculatec accordinglythe final step is the interpretation f pronounsthe basic resolution heuristic as noted above is quite sim ple generate a set of candidate antecedents then es tablish coreference with the candidate which has the greatest salience weight in order to generate the candi date set however those discourse referents with which 115 a pronoun cannot refer must be eliminated from consid erationthis is accomplished by running the overall candidate pool through two sets of filters a set of morpho logical agreement filters which eliminate from consid eration any discourse referent which disagrees in per son numbeb or gender with the pronoun and a set of disjoint reference filtersthe determination f disjoint reference represents a significant point of divergence between our algorithm and the lappinleass algorithm because as is well known configurational relations play a prominent role in determining which constituents in a sentence a pro noun may refer tothree conditions are of particular relevance to the anaphora resolution algorithm condition a pronoun cannot corefer with a coargumentcondition 2 a pronoun cannot corefer with a nonpronominal constituent which it both commands and precedescondition 3 a pronoun cannot corefer with a constituent which contains itin the absence of configurafional information our al gorithm relies on inferences from grammatical func tion and precedence todetermine disjoint referencein practice even without accurate information about con stituent structure the syntactic filters described below are extremely accurate condition i is implemented bylocating all discourse referents with gfun value direct object indirect object or oblique which follow a pronoun with gfun value subject or direct object as long as no subject intervenes discourse referents which satisfy these conditions are identified as disjointcondition 2 is implemented by locating for ev ery nonadjunct and nonembedded pronoun the set of nonpronominal discourse referents in its sentence which follow it and eliminating these as potential an tecedentsin effect the command relation is inferred from precedence and the information provided by the syntactic patterns an argument which is neither con tained in an adjunct nor embedded in another nominal commands those expressions which it precedescondition 3 makes use of the observation that a dis course referent contains every object o its right 
with a nonnil embed valuethe algorithm identifies as dis joint a discourse referent and every pronoun which fol lows it and has a nonnil embed value until a discourse referent with embed value nil is located condiditon 3 also rules out coreference between a genitive pronoun and the np it modifiesafter the morphological nd syntactic filters have been applied the set of discourse referents that remain constitute the set of candidate antecedents for the pro nounthe candidate set is subjected to a final evalu ation procedure which performs two functions it de creases the salience of candidates which the pronoun precedes and it increases the sa li ence of candida tes which satisfy either a locality or a parallelism condition both of which apply to intrasentential c ndidatesthe hcality heuristic isdesigned to negate the effects of subordinationwhen both candidate and anaphor ap pear in the same subordinate context the assumption being that the prominence of a candidate should be de termined with respect o the position of the anaphorthis is a point of difference between our algorithm and the one described in the salience of a candidate which is determined tobe in the same subordinate context as a pronoun is temporarily increased to the level it would have were the candidate not in the subordi nate context the level is returned to normal after the anaphor is resolvedthe parallelism heuristic rewards candidates which are such that the pair consisting of the gfun values of candidate and anaphor are identical to gfun values of a previously identified anaphorantecedent pairthis parallelism heuristic differs from a similar one used by the lappinleass algorithm which rewards candi dates whose grammatical function is identical to that of an anaphoronce the generation and evaluation of the candidate set is complete the candidates are ranked according to salience weight and the candidate with the high est salience weight is determined tobe the antecedent of the pronoun under considerationin the event of a tie the candidate which most immediately precedes the anaphor is selected as the antededent the coref value of the pronoun is set to that of the an tecedent adding it to the the antecedent coref class and the salience of the class is recalculated accordinglythe larger context from which the sample analysis in the beginning of section 2 was taken is as follows while apple and its powerpc partners claimed some prime real estate on the show floor apple most interesting offerings de buted behind the scenesgone was the nar row corner booth that apple shoehorned its products into last yearfor 1995 the com pany set up its headquarters in hall 11 the newest and most prestigious of cent 23 hallsthe anaphora resolution algorithm generates the fol lowing analysis for the first italicized pronounfor each candidate the annotation i square brackets in dicates its offset value and the number to the right indicates its salience weight at the point of interpreta tkm of the pronounana its off33 cnd apple of 1131 432 apple aol f10 352 its off03 352 appe offf i 5 1352 prilne real estat as well as the operation of higherqevel discourse processing designed to identify all references to a particular coref class not just the anaphoric ones the higher salience of the optimal candidatewhich ix also a member of this coref classshows the effect of the locality heuristic described in section 223both the pronoun and the candidate appear in the same sub ordinate context as a result the salience of the candidate is temporarily boosted 
to negate the effect of subordinatkman abbreviated candidate set for the second itali cized pronoun is given below ana i t s 61of f 145 cnd company ot i 142 h0 app l e for discussionlhe following text segment illust rates the resolution of in tersen ten tia l a napho rasun prototype lntemet access device uses a 110mhz microsparcprocesso and is disklessits dimensions are 55 inches x 9 inches x 2inchesana its aol f 347 cni ialteilel access devic roc es sot gical nct disjoint reference filters only those from the previous entence are con sidered these are ranked ac cording to salience weight where the crucial factor is grammatical function valuethe result of the ranking is that internet access devicethe candidate which satis fies the highestweighted salience facto1 sublsis the optimal candidate and so correctly identified as the an tecedentquantitative evaluation shows the anaphora resolution algorithm described here to run at a rate of 7570 accu racythe data set on which the evaluatkm was based consisted of 27 texts taken from a random selection of genres including press releases product annotmce meats news stories magazine articles and other doc uments existing as world wide web pageswithin these texts we counted 3 report as 85several observations about he results and the com parison with are in orderfirst and most obviously some deterioratkm in qual ity is to be expected given the relatively impoverished linguistic base we start withsecond it is important to note that this is not just a matter of simple comparisonthe results in describe the output of the procedttre applied to a singh text genre computer manualsar guably this is an example of a particularly well be haved text in any case it is not clear how the figure would be normalized over a wide range of text types some of them not completely clean as is the case with our datathird close analysis of the most common types of error our algorithm currently makes reveals two spe cific configurations in the input which confuse the pro cedure and contribute to the error rate gender mis match and certain long range contextttal phenomena best exemplified by text contain ing quoted passages inline implementing a gender agreement fil er is not technically complex as noted above the current algo rithrn contains onethe persistence of gender mis matches in the output simply reflects the lack of a con sistent gender slot in the ingsoft tagger outputaug menting the algorithm with a lexical database which includes more detailed gender information will result in improved accuracyensuring proper interpretatkm of anaphors both within and outside of quoted text requires in effect a method of evaluating quoted speech separately from its surrotmdingcnntextal hough acomplex problem we feel that this is possible given that our input data stream embodies a richer notkm of position and con text as a resut of an independent text segmentation procedure adapted from what is worth noting is the small number of errors which can be directly attributed to the absence of con figurational inhrmationof the 75 misinterpreted pro nouns only 2 inwlved a failure to establish configu ratkmally determined disjoint reference and only an additional several errors could be tmambiguously traced to a failure to correctly identify the syntactic ontext in which a dis course referent appeared overall these considerations lead to two conchlsionsfirst with the incorporation of more explicit morphological nd contextual information it should 3the set of 306 anaphoric pronouns excluded 30 
occurrences of expletive it not identified by the expletive patterns as well as 6 occurrences of it which referred to a vp or propositional constituentwe are currently mfinin g the existing expletive patterns for improved accuracy117 be possible to increase the overall quality of our out put bringing it much closer in line with lappin and leass resultsagain straight comparison would not be trivial as eg quoted text passages are not a natural part of computer manuals and are on the other hand an extremely common occurrence in the types of text we are dealing withsecond and most importantly the absence of ex plicit configurational information does not result in a substantial degradation i the accuracy of an anaphora resolution algorithm that is otherwise similar to that described in lappin and leass algorithm for pronominal anaphora resolution is capable of high accuracy but requires in depth full syntactic parsing of textthe modifications of that algorithm that we have developed make it avail able to a larger set of text processing frameworks as we assume a considerably poorer analysis ubstratewhile adaptations to the input format and interpreta tion procedures have necessarily addressed the issues of coping with a less rich level of linguistic analysis there is only a small compromise in the quality of the resultsour evaluation indicates that the problems with the current implementation donot stem from the absence of a parse but rather from factors which can be addressed within the constraints imposed by the shallow base analysisthe overall success of the algo rithm is important then not only for the immediate utility of the particular modifications but also because the strategy we have developed for circumventing the need for full syntactic analysis is applicable to other in terpretation tasks which like the problem of anaphora resolution lie in the space of higher level semantic and discourse analysis
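The salience computation at the heart of this algorithm can be summarized in a few lines. The factor weights below are the ones listed in the paper; the representation of a discourse referent as a dictionary of tagger-derived features is our own simplification.

```python
SALIENCE_FACTORS = {           # weights as listed in the paper
    'sent': 100, 'cntx': 50, 'subj': 80, 'exst': 70, 'poss': 65,
    'acc': 50, 'dat': 40, 'oblq': 30, 'head': 80, 'arg': 50,
}

def referent_salience(ref):
    """Sum of the salience factors satisfied by a newly introduced discourse
    referent, assumed to be in the current sentence and context.
    ref is a dict built from the tagger output: gfun, adjunct, embed, ..."""
    score = SALIENCE_FACTORS['sent'] + SALIENCE_FACTORS['cntx']
    gfun_factor = {'subject': 'subj', 'possessive': 'poss',
                   'direct_object': 'acc', 'indirect_object': 'dat',
                   'oblique': 'oblq'}.get(ref['gfun'])
    if gfun_factor:
        score += SALIENCE_FACTORS[gfun_factor]
    if ref.get('existential'):
        score += SALIENCE_FACTORS['exst']
    if not ref.get('embed'):       # head noun, not embedded in another NP
        score += SALIENCE_FACTORS['head']
    if not ref.get('adjunct'):     # argument rather than adjunct
        score += SALIENCE_FACTORS['arg']
    return score

print(referent_salience({'gfun': 'subject', 'embed': False, 'adjunct': False}))  # 360
```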
C96-1021
anaphora for everyone: pronominal anaphora resolution without a parser. we present an algorithm for anaphora resolution which is a modified and extended version of that developed by lappin and leass. in contrast to that work, our algorithm does not require in-depth, full syntactic parsing of text. instead, with minimal compromise in output quality, the modifications enable the resolution process to work from the output of a part of speech tagger, enriched only with annotations of grammatical function of lexical items in the input text stream. evaluation of the results of our implementation demonstrates that accurate anaphora resolution can be realized within natural language processing frameworks which do not or cannot employ robust and reliable parsing components. we also suggest that anaphora resolution is part of discourse referent resolution.
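A rough sketch of the resolution step itself, under the same simplified representation used above: coreference classes lose half their salience at each new sentence, candidates that fail the agreement and disjoint-reference filters are dropped, and the surviving class with the highest salience wins, with ties broken by proximity to the pronoun. The filter predicates here are placeholders for the morphological and syntactic tests described in the paper.

```python
def decay(corefs):
    """Halve every coreference class's salience when a new sentence starts."""
    for c in corefs:
        c['salience'] /= 2.0

def resolve_pronoun(pronoun, corefs, filters):
    """Pick an antecedent for pronoun from the current coreference classes.
    Each entry in corefs is a dict with at least 'salience' and 'offset'
    (text position of its most recent mention); filters is a list of
    predicates standing in for the agreement and disjoint-reference tests.
    Highest salience wins; ties go to the most recently mentioned candidate
    preceding the pronoun."""
    candidates = [c for c in corefs if all(ok(pronoun, c) for ok in filters)]
    if not candidates:
        return None
    return max(candidates, key=lambda c: (c['salience'], c['offset']))
```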
hmmbased word alignment in statistical translation in this paper we describe a new model for word alignment in statistical trans lation and present experimental results the idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions to achieve this goal the approach us es a firstorder hidden markov model for the word alignment problem as they are used successfully in speech recognition for the time alignment prob lem the difference to the time align ment hmm is that there is no monotony constraint for the possible word order ings we describe the details of the mod el and test the model on several bilingual corpora in this paper we address the problem of word alignments for a bilingual corpusin the recent years there have been a number of papers con sidering this or similar problems in our approach we use a firstorder hidden markov model which is similar but not identical to those used in speech recognitionthe key component of this approach is to make the alignment probabilities dependent not on the absolute position of the word align ment but on its relative position ie we consider the differences in the index of the word positions rather than the index itselfthe organization of the paper is as followsafter reviewing the statistical approach to ma chine translation we first describe the convention al model we then present our firstorder hmm approach in lull detailfinally we present some experimental results and compare our model with the conventional modelthe goal is the translation of a text given in some language f into a target language e for conve nience we choose for the following exposition as language pair french and english ie we are giv en a french string f fx fjfj which is to be translated into an english string e eleicl among all possible english strings we will choose the one with the highest probability which is given by bayes decision rule a argmaxp q argmax pr l are el pr is the language model of the target lan guage whereas pr is the string translation modelthe argmax operation denotes the search problemin this paper we address the problem of introducing structures into the probabilistic de pendencies in order to model the string translation probability pra key issne in modeling the string translation probability pr is the question of how we define the correspondence b tween the words of the english sentence and the words of the french sentencein typical cases we can assume a sort of pairwise dependence by considering all word pairs for a given sentence pair i1j elqlj we fur ther constrain this model by assigning each french word to exactly one english wordmodels describ ing these types of dependencies are referred to as alignment modelsin this section we describe two models for word alignrnent in detail a mixturebased alignment model which was introduced in an hmmbased alignment modelin this paper we address the question of how to define specific models for the alignment probabil itiesthe notational convention will be as fol lowswe use the symbol prto denote general 836 probability distributions with no sleeitic assumltionsin contrast for modcltased prol ability distributions we use the generic symbol v31 al ignment w i th m ix ture d is t r i mt ionhere we describe the mixturebased alignment model in a fornmlation which is different fronl the original formulation ill we will isresented laterthe model is based on a decomposition of the joint probability br l into a product over the 
probabilities for each word j a jl whefe fo norlllaizi t on 17 has been includedthe next step now is to assutne a sort olairwise inter act ion between tim french word f j anle _p i1 i _pp i1 putting everything together we have the following mixturebased ntodel j l are pled j l it with the following ingredients sentence length probdility p mixture alignment probability p translation probmility passuming a tmifornl flignment prolability 1 p 7 we arrive at the lht model proposed ty this model will be referred to as ib m 1 modelto train the translation probabilities p we use a bilingual tain the following iterative l a equation will a 2 5 a for unilbrm alignment probabilities it can be shown that there is only one optinnnn and therefore the i am algorithm 72 always tinds the global optimumfor mixture alignment model with nonunilbrm alignment probabilities there tre to many alignrnent parameters pill j i to be estimated for smml co l poratherefore a specific model tbr tile mign ment inobabilities i used are p l i e i l this model assumes that the position distance rel ative to the diagonal ine of the plane is the dominating factor lb train this mod el we use the naximutn likelihood criterion in the socalled ulaximmn alproximation ie the likeli hood criterion covers only tile most lik i i iu ho v jl in training this criterion amounts to a sequence of iterations each of which consists of two steps posi l ion al ignmcnl ue to the natnre of tile nfixture tnod the words are not distrilmted arbitrarily over the senteuce ositions but tend to form clus tersfig1 illustrates this effect for the language pair german 15nglisheach word of the german sentence is assigned to a word of the english sentencethe alignments have a strong tendency to preserve the local neigh borhood when going from the one langnage to the other languagein mmy cases although not al ways there is an even stronger restriction the differeuce in the position index is smmler than 3837 days both on eight at it make can we if think i well j j x d _ jg g z aa figure 1 word alignment for a german english sentence pairto describe these wordbyword aligmnents we introduce the mapping j aj which assigns a word f j in position j to a word el in position ajthe concept of these alignments i similar to the ones introduced by but we wilt use another type of dependence in the probability distributionslooking at such align ments produced by a hmnan expert it is evident that the mathematical model should try to cap ture the strong dependence of aj on the previous aligmnenttherefore the probability of alignment aj for position j should have a dependence on the previous alignment aj _ 1 p where we have inchided the conditioning on the total length of the english sentence for normal ization reasonsa sinfilar approach as been cho sen by thus the problem formulation is similar to that of the time align ment problem in speech recognition where the socalled iiidden markov models have been suc cessfully used for a long time us ing the same basic principles we can rewrite the probability by introducing the hidden alignments af al aj aa for a sentence pair ifa e pr _vr a i jl so far there has been no basic restriction of the approachwe now assume a firstorder depen dence on the alignments aj only vr where in addition we have assmned that tile translation probability delends only oil aj and not oil ajl putting everything together we have the ibllowing llmmbased model a pr ii pplea af j with the following ingredients ilmm alignment probability p or p translation 
probabflity pin addition we assume that the tmm align ment probabilities p depend only on the jump width using a set of nonnegative parameters s we can write the iimm alignment probabilities in the form 4 i i p e s 11 this form ensures that for each word position i i 1 i the itmm alignment probabilities satisfy the normmization constraintnote the similarity between equations and the mixtme model can be interpreted as a zerothorder model in contrast to the firstorder tlmm modelas with the ibm2 model we use again the max imum approximation j pr max pt ary size aml numler of running words is not very fawrabletallle i etweeu 83 and 2 the avalanche bulletins are awdlabte for ioth get ntan and irenical sen tair fsonl the ennorar des tempatures d abord dlevdes puis plus basses 60 h 8le fiom the vetmobil corpus is given in figure 1lach of the three corljorawere ttsed to train 1oth al ignnmnt models the mixtureiased al ignment model in eq and the llmmbasehxity roba1ilities whereas the i imm mod el produces a smaller perplexity for the al ignment lrohalilitiesin the calculatiot of the perplexi ties thles for the two nndels but itt general the tl m m model seems to giwslightly better re suits in the cases of 0171 des 0035 le 0039 sud 0427 sur 0040 versant 0284 ibm2 alpes 0276 sud 0371 versant 0356 hmm alpes 0284 des 0028 sud 0354 versant 0333 this is a result of the smoother position align ments produced by the hmm modela pro nounced example is given in figure 2he prob lem of the absolute position alignment can he demonstrated at the positions and both schneebretlgefahr und schneeverfrachtungen have a high probability on neigethe ibm2 models chooses the position near the diagonal as this is the one with the higher probabilityagain schneebrettgefahr generates de which explains the wrong alignment near the diagonal in however this strength of the hmm model can also be a weakness as in the case of est developpe ist entstanden in figure 2the required two large jumps are correctly found by the mixture model but not by the hmm mod elthese cases suggest an extention to the hmm modelin general there are only a small number of big jumps in the position alignments in a given sentence pairtherefore a model could be useful that distinguishes between local and big jumpsthe models have also been tested on the verb mobil translation corpus as well as on a small corpus used in the eutrans projectthe sen tences in the eutrans corpus are in general short phrases with simple grammatical structureshowever the training corpus is very small and the produced alignments are generally of poor qualitythere is no marked difference for the two align ment modelstable 5 perplexity results verbmobil corpusfor eutrans and model itertranslaligntotal ibm1 10 2610 6233 16267 ibm2 5 2443 4003 9781 hmm 5 2461 3934 9686 ibm1 10 4373 10674 46672 ibm2 5 4696 6538 30706 itmm 5 4859 5452 26495 the verbmobil corpus consists of spontaneous ly spoken dialogs in the domain of appointment schedulingthe assumption that every word in the source language is aligned to a word in the target language breaks down for many sentence pairs resulting in poor alignmentthis in turn affects the quality of the translation probabilitiesseveral extensions to the current iimm based model could be used to tackle these problems the results presented here did not use the concept of the empty wordfor the hmm based model this however requires a second order rather than a firstorder modelwe could allow for multiword phrases inboth languagesin addition to the absolute or 
relative align ment positions the alignment probabilities can be assumed to depend on part of speech tags or on the words themselves5 conclusionin this paper we have presented an itmmbased approach for rnodelling word aligmnents in par allel textsthe characteristic feature of this ap proach is to make the alignment probabilities ex plicitly dependent on the alignment position of the previous wordwe have tested the model suc cessfully on real datathe hmmbased approach produces translation probabilities comparable to the mixture alignment modelwhen looking at the position alignments those generated by the itmm model are in general much smootherthis could be especially helpful for languages uch as german where compound words are matched to several words in the source languageon the oth er hand large jumps due to different word order ings in the two languages are successfully modeledwe are presently studying and testing a nmltilevel hmm model that allows only a small number of large jumpsthe ultimate test of the different alignment and translation models can only be car ried out in the framework of a fully operational translation system6 acknowledgementthis research was partly supported by the and under the esprit research project 20268 eutrans
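The position alignments discussed above come from a Viterbi search under the first-order model, where the alignment probability depends only on the jump width and the translation probability only on the aligned word pair. The decoder below is an illustrative sketch, not the authors' implementation: t_prob and jump are assumed to be already-trained dictionaries (word-pair translation probabilities and jump-width counts s(d)), and a uniform initial alignment distribution is used for simplicity.

```python
import math

def viterbi_align(src, tgt, t_prob, jump, small=1e-10):
    """Best word alignment of src (f_1..f_J) to tgt (e_1..e_I) under the
    first-order HMM alignment model.  Returns a list a with a[j] = index
    of the tgt word aligned to src[j]."""
    I, J = len(tgt), len(src)

    def a_prob(i, i_prev):                     # p(i | i', I) from jump-width counts
        denom = sum(jump.get(m - i_prev, small) for m in range(I))
        return jump.get(i - i_prev, small) / denom

    # position j = 0: uniform initial alignment probability (a simplification)
    delta = [math.log(t_prob.get((src[0], tgt[i]), small) / I) for i in range(I)]
    back = []
    for j in range(1, J):
        new_delta, ptrs = [], []
        for i in range(I):
            prev = max(range(I), key=lambda ip: delta[ip] + math.log(a_prob(i, ip)))
            new_delta.append(delta[prev] + math.log(a_prob(i, prev))
                             + math.log(t_prob.get((src[j], tgt[i]), small)))
            ptrs.append(prev)
        delta = new_delta
        back.append(ptrs)
    # backtrace from the best final state
    a = [max(range(I), key=lambda i: delta[i])]
    for ptrs in reversed(back):
        a.append(ptrs[a[-1]])
    return list(reversed(a))
```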
C96-2141
hmmbased word alignment in statistical translation. in this paper we describe a new model for word alignment in statistical translation and present experimental results. the idea of the model is to make the alignment probabilities dependent on the differences in the alignment positions rather than on the absolute positions. to achieve this goal the approach uses a firstorder hidden markov model for the word alignment problem, as they are used successfully in speech recognition for the time alignment problem. the difference to the time alignment hmm is that there is no monotony constraint for the possible word orderings. we describe the details of the model and test the model on several bilingual corpora. we use a useful feature that assesses the goodness of the alignment path through the source sentence.
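Training in the maximum approximation, as described in the paper, alternates between computing the best position alignment under the current model and re-estimating the parameters from it. Below is a schematic sketch of the count-collection half of that loop, reusing a decoder such as the viterbi_align sketch above; relative frequencies of these counts give the re-estimated translation and jump-width parameters.

```python
from collections import defaultdict

def collect_counts(corpus, align):
    """One pass of training in the maximum approximation: align every
    sentence pair with the current model (align can be a closure over
    viterbi_align with the current parameters), then accumulate counts
    from that single best alignment.  Schematic only."""
    t_counts = defaultdict(float)
    jump_counts = defaultdict(float)
    for src, tgt in corpus:
        a = align(src, tgt)
        for j, i in enumerate(a):
            t_counts[(src[j], tgt[i])] += 1.0
            if j > 0:
                jump_counts[i - a[j - 1]] += 1.0   # only the jump width i - i' matters
    return t_counts, jump_counts
```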
using semantic roles to improve question answering shallow semantic parsing the automaticidentification and labeling of sentential constituents has recently received much attention our work examines whether seman tic role information is beneficial to questionanswering we introduce a general frame work for answer extraction which exploits semantic role annotations in the framenetparadigm we view semantic role assignment as an optimization problem in a bipartite graph and answer extraction as an instance of graph matching experimental results on the trec datasets demonstrate i am provements over stateoftheart models recent years have witnessed significant progress in developing methods for the automatic identificationand labeling of semantic roles conveyed by sentential constituents1 the success of these methods often referred to collectively as shallow semantic pars ing is largely due to the availability of resources like framenet and propbank which document the surface realization of semantic roles in real world corporamore concretely in the framenet paradigm themeaning of predicates is conveyed by frames schematic repre sentations of situationssemantic roles for an overviewelements are defined for each frame and correspond to salient entities present in the evoked situ ationpredicates with similar semantics instantiate the same frame and are attested with the same rolesthe framenet database lists the surface syntacticrealizations of semantic roles and provides anno tated example sentences from the british national corpusfor example the frame commerce sell has three core semantic roles namely buyer goods andseller each expressed by an indirect object a di rect object and a subject it can also be attested with noncore roles and thatare more generic and can be instantiated in sev eral frames besides commerce sellthe verbs sell vend and retail can evoke this frame but also the nouns sale and vendor a leeseller sold a textbook to abbybuyerb kimseller sold the sweatergoodsc my companyseller has sold more than three million copiesgoodsd abbyseller sold the cargoods for cashmeanse heseller reluctanctlymanner sold his rockgoodsby abstracting over surface syntactic configurations semantic roles offer an important first step to wards deeper text understanding and hold promisefor a range of applications requiring broad cover age semantic processingquestion answering is often cited as an obvious beneficiary of semantic 12 role labeling faced with the question q what year did the yous buyalaskaand the retrieved sentence s before russia sold alaska to the united states in 1867 a hypo thetical qa system must identify that united states is the buyer despite the fact that it is attested in one instance as a subject and in another as an objectonce this information is known isolating the correct answer can be relatively straightforwardalthough conventional wisdom has it that seman tic role labeling ought to improve answer extraction surprising little work has been done to this effect and initial results have been mostly inconclusive or negative there are at least two good reasons for these findingsfirst shallow semanticparsers trained on declarative sentences will typically have poor performance on questions and generally on outofdomain datasecond existing re sources do not have exhaustive coverage and recallwill be compromised especially if the question an swering system is expected to retrieve answers fromunrestricted textsince framenet is still under development its coverage tends to be more of a prob lem 
in comparison to other semantic role resources such as propbankin this paper we propose an answer extractionmodel which effectively incorporates framenetstyle semantic role informationwe present an auto matic method for semantic role assignment which is conceptually simple and does not require extensive feature engineeringa key feature of our approachis the comparison of dependency relation paths at tested in the framenet annotations and raw textwe formalize the search for an optimal role assignment as an optimization problem in a bipartite graphthisformalization allows us to find an exact globally op timal solutionthe graphtheoretic framework goessome way towards addressing coverage problems related with framenet and allows us to formulate an swer extraction as a graph matching problemas abyproduct of our main investigation we also exam ine the issue of framenet coverage and show howmuch it impacts performance in a trecstyle ques tion answering settingin the following section we provide an overview of existing work on question answering systems that exploit semantic rolebased lexical resourcesthenwe define our learning task and introduce our approach to semantic role assignment and answer ex traction in the context of qanext we present our experimental framework and datawe conclude the paper by presenting and discussing our resultsquestion answering systems have traditionally de pended on a variety of lexical resources to bridge surface differences between questions and potential answerswordnet is perhaps the most popular resource and has been employed in a variety of qarelated tasks ranging from query expansion to axiombased reasoning passage scoring and answer filtering besides wordnet recent qa systems increasingly rely on syntactic information as a means of abstracting over word order differences and structural alternations most syntaxbased qa systems incorporate some means of comparison between the tree representing the question with the subtree surrounding the answer candidatethe assumption here is that appropriate answers are more likely to have syntactic relationsin common with their corresponding questionsyntactic structure matching has been applied to passage retrieval and answer extrac tion narayanan and harabagiu were the firstto stress the importance of semantic roles in an swering complex questionstheir system identifies predicate argument structures by merging semanticrole information from propbank and framenetexpected answers are extracted by performing probabilistic inference over the predicate argument struc tures in conjunction with a domain specific topicmodelsun et al incorporate semantic analy sis in their trec05 qa systemthey use assert a publicly available shallow semantic parser trained on propbank to generate predicateargument structures which subsequently form the basis of comparison between question and answer sentencesthey find that semantic analysis does not boost performance due to the low recall of the semantic parserkaisser proposes a 13 semstruc ac1semstruc ac2 semstruc aci semstruc q sentmodel i q model i model ii answer figure 1 architecture of answer extraction question paraphrasing method based on framenetquestions are assigned semantic roles by matching their dependency relations with those attested in the framenet annotationsthe assignments are used to create question reformulations which are submitted to google for answer extractionthe semantic role assignment module is not probabilistic it relies onstrict matching and runs into severe coverage 
prob lemsin line with previous work our method exploitssyntactic information in the form of dependency re lation paths together with framenetlike semanticroles to smooth lexical and syntactic divergences be tween question and answer sentencesour approach is less domain dependent and resource intensive than narayanan and harabagiu it solely employs a dependency parser and the framenet databasein contrast to kaisser we model the semanticrole assignment and answer extraction tasks numerically thereby alleviating the coverage problems en countered previouslywe briefly summarize the architecture of the qa system we are working with before formalizing the mechanics of our framenetbased answer extractionmodulein common with previous work our over all approach consists of three stages determiningthe expected answer type of the question retrieving passages likely to contain answers to the question and performing a match between the ques tion words and retrieved passages in order to extract the answerin this paper we focus on the last stage question and answer sentences are normalized to aframenetstyle representation and answers are re trieved by selecting the candidate whose semantic structure is most similar to the questionthe architecture of our answer extraction mod ule is shown in figure 1semantic structures for questions and sentences are automatically derived using the model described in section 4 asemantic structure semstruc psetconsists of a predicate p and a set of semantic role assignments setp is a word or phrase evoking a frame f of frameneta semantic role assignment sra is a ternary structure wsrs consist ing of frame element w its semantic role sr and score s indicating to what degree sr qualifies as a label for wfor a question q we generate a semantic struc ture semstrucqquestion words such as what who when etc are considered expected answer phrases we require that eaps are frame elementsof semstrucqlikely answer candidates are extracted from answer sentences following some pre processing steps detailed in section 6for each candidate ac we derive its semantic structuresemstrucac and assume that ac is a frame element of semstrucacquestion and answer seman tic structures are compared using a model based on graph matching detailed in section 5 we calculate the similarity of all derived pairs semstrucqsemstrucacand select the candidate with the highest value as an answer for the questionour method crucially exploits the annotated sen tences in the framenet database together with theoutput of a dependency parserour guiding assumption is that sentences that share dependency rela tions will also share semantic roles as long as they evoke the same or related framesthis is motivated by much research in lexical semantics hypothesizing that the behavior of wordsparticularly with respect to the expression and interpretation of their arguments is to a large ex tent determined by their meaningwe first describe how predicates are identified and then introduce our model for semantic role labelingpredicate identification predicate candidates are identified using a simple lookup procedure whichcompares postagged tokens against framenet entriesfor efficiency reasons we make the simplifying assumption that questions have only one predicate which we select heuristically verbs are pre 14 ferred to other parts of speech if there is more than one verb in the question preference is given to the verb with the highest level of embedding in the dependency tree if no verbs are present a nounis chosenfor example 
in q who beat floyd pat terson to take the title away beat take away and title are identified as predicate candidates and beat is selected the main predicate of the questionforanswer sentences we require that the predicate is ei ther identical or semantically related to the question predicate in the example given above the predicate beat evoques a single frame howeverpredicates often have multiple meanings thus evo quing more than one frameknowing which is the appropriate frame for a given predicate impacts the semantic role assignment task selecting the wrong frame will unavoidably result in erroneous semanticrolesrather than disambiguiting polysemous pred icates prior to semantic role assignment we performthe assignment for each frame evoqued by the pred icatesemantic role assignment before describing our approach to semantic role labeling we define dependency relation pathsa relation path r is a relation sequence r1r2 rl in which rl is one of predefined dependency relations with suffix of traverse directionan example of arelation path is are sub ju ob jd where the sub scripts you and d indicate upward and downwardmovement in trees respectivelygiven an unannotated sentence whose roles we wish to label we as sume that words or phrases w with a dependency path connecting them to p are frame elementseachframe element is represented by an unlabeled depen dency path rw which we extract by traversing thedependency tree from w to p analogously we ex tract from the framenet annotations all dependencypaths rsr that are labeled with semantic role infor mation and correspond to p we next measure thecompatibility of labeled and unlabeled paths as fol lows s maxrsrm sim p where m is the set of dependency relation pathsfor sr in framenet sim the similarity be tween paths rw and rsr weighted by the relative w srw sr figure 2 sample original bipartite graph and its subgraph with edge covers in each graph the left partition represents frame elements and the right partition semantic rolesfrequency of rsr in framenet we consider both core and noncore semantic roles instan tiated by frames with at least one annotation in framenetcore roles tend to have more annotations in framenet and consequently are considered more probablewe measure sim by adapting a string kernel to our taskour hypothesis is that the more common substrings two dependency paths have the more similar they arethe string kernel we used is similar to leslie and defined asthe sum of weighted common dependency relation subsequences between rw and rsrfor efficiency we consider only unigram and bigram sub sequencessubsequences are weighted by a metricakin to t f id f which measures the degree of asso ciation between a candidate sr and the dependency relation r present in the subsequence weightsr fr log where fr is the frequency of r occurring in sr n is the total number of srs evoked by a given frame and nr is the number of srs containing r for each frame element we thus generate a set of semantic role assignments setthis initialassignment can be usefully represented as a com plete bipartite graph in which each frame element is connected to the semantic roleslicensed by the predicate and vice versaedges are weighted and represent how com patible the frame elements and semantic roles are now for each frame element w 15 q who discovered prionss 1997 stanley b prusiner united states discovery of prions semstruc q p discover original sr assignments optimized sr assignments 006 cognizerphenomenon ground state evidence eap prions 0000 001 01 005 005 
002 006 cognizerphenomenon ground state evidence eap prions 01 005 005 002 semstruc ac p discovery original sr assignments optimized sr assignments 025 cognizerphenomenon topic evidence ac prions 015 02 016 025 cognizerphenomenon topic evidence ac prions 015 02 016 012007 0 0 figure 3 semantic structures induced by our model for a question and answer sentence we could simply select the semantic role with the highest scorehowever this decision procedure is local ie it yields a semantic role assignment foreach frame element independently of all other ele mentswe therefore may end up with the same role being assigned to two frame elements or with frameelements having no role at allwe remedy this short coming by treating the semantic role assignment as a global optimization problemspecifically we model the interaction between all pairwise labeling decisions as a minimum weight bipartite edge cover problem an edge cover is a sub graph of a bipartite graph so that each node is linked to at least one node of the other partitionthis yields a semantic role assignment for all frame elements edge covers have been success fully applied in several natural language processing tasks including machine translation and annotation projection formally optimal edge cover assignments are so lutions of following optimization problem max e is edge cover e swhere s is the compatibility score be tween the frame element node ndw and semantic role node ndsredge covers can be computed efficiently in cubic time using algorithms for the equivalent linear assignment problemour experiments used jonker and volgenants solver2 figure 3 shows the semantic role assignments generated by our model for the question q whodiscovered prionsand the candidate answer sen tence s 1997 stanley b prusiner united statesdiscovery of prionshere we identify two predi cates namely discover and discoverythe expectedanswer phrase who and the answer candi date stanley b prusiner are assigned the cognizerrolenote that frame elements can bear multiple se mantic rolesby inducing a soft labeling we hope to render the matching of questions and answers morerobust thereby addressing to some extent the cover age problems associated with framenetwe measure the similarity between a question and its candidate answer by matching their predicatesand semantic role assignmentssince srs are framespecific we prioritize frame matching to sr match ingtwo predicates match if they evoke the same frame or one of its hypernyms thelatter are expressed by the inherits from and is in herited by relations in the frame definitionsif the predicates match we examine whether the assignedsemantic roles matchsince we represent sr assignments as graphs with edge covers we can also for malize sr matching as a graph matching problemthe similarity between two graphs is measured as the sum of similarities between their subgraphswe first decompose a graph into subgraphs consist ing of one frame element node w and a set of sr nodes connected to itthe similarity between two subgraphs subg1 and subg2 is then formalized as sim ndsr1 subg1 ndsr2 subg2 ndsr1 ndsr2 1 ss1where ndsr1 and ndsr2 are semantic role nodes con nected to a frame element node ndw in subg1 and 2the software is available from httpwwwmagiclogiccomassignmenthtml 16 175711 20 211721 50 43951 100 40101 inf 33800 11751 512876 10 figure 4 distribution of numbers of predicates and annotated sentences each subpie lists the number of predicates with their corresponding range of annotated sentences subg2 respectivelys and sare 
edge weights between two nodes in correspond ing subgraphs our intuition here is that the more semantic roles two subgraphs share for a given frame element the more similar they are and the closer their corresponding edge weights should beedge weights are normalized by dividing by the sum of all edges in a subgraphdata all our experiments were performed on the trec0205 factoid questionswe excluded nil questions since trec does not supply an answer for themwe used the framenet v13 lexical databaseit contains 10195 predicates grouped into 795 se mantic frames and 141238 annotated sentencesfigure 4 shows the number of annotated sentences available for different predicatesas can be seenthere are 3380 predicates with no annotated sentences and 1175 predicates with less than 5 anno tated sentencesall framenet sentences questions and answer sentences were parsed using minipar a robust dependency parseras mentioned in section 4 we extract depen dency relation paths by traversing the dependency tree from the frame element node to the predicate nodewe used all dependency relations providedby minipar in order to increase cov erage we combine all relation paths for predicates that evoke the same frame and are labeled with the same pos tagfor example found and establish are both instances of the frame intentionally createbut the database does not have any annotated sen tences for foundv in default of not assigning anyrole labels for foundv our model employs the rela tion paths for the semantically related establishv preprocessing here we summarize the steps ofour qa system preceding the assignment of semantic structure and answer extractionfor each ques tion we recognize its expected answer type answer types are determined using classi fication rules similar to li and roth we alsoreformulate questions into declarative sentences fol lowing the strategy proposed in brill et al the reformulated sentences are submitted as queries to an ir engine for retrieving sentences with relevant answersspecifically we use the lemur toolkit3 a stateoftheart language modeldriven search enginewe work only with the 50 toprankedsentences as this setting performed best in previ ous experiments of our qa systemwe also add tolemurs output gold standard sentences which contain and support an answer for each questionspecifically documents relevant for each question are re trieved from the aquaint corpus4 according to trec supplied judgmentsnext sentences which match both the trec provided answer pattern and at least one question key word are extracted and their suitability is manually judged by humansthe set of relevant sentences thus includes at least one sentence with an appropriate answer as well as sentences that do not contain any answer specific informationthis setup is somewhat idealized however it allows us toevaluate in more detail our answer extraction mod ule relevant sentences are annotated with their named entities using lingpipe5 a mucbased named entity recognizerwhen we successfully classify a question with an expected answer type 3see httpwwwlemurprojectorg for details4this corpus consists of english newswire texts and is used as the main document collection in official trec evaluations5the software is available from wwwaliasicom lingpipe 17 we assume that all nps attested in the set of relevant sentences with the same answer type are candidate answers in cases where no answer type is found all nps in the relevant answers set are considered candidate answersbaseline we compared our answer extractionmethod to a 
qa system that exploits solely syntac tic information without making use of framenet or any other type of role semantic annotationsfor each question the baseline identifies key phrases deemed important for answer identificationthese are verbs noun phrases and expected answer phrases all dependency relation paths con necting a key phrase and an eap are compared tothose connecting the same key phrases and an answer candidatethe similarity of question and an swer paths is computed using a simplified version of the similarity measure6 proposed in shen and klakow our second baseline employs shalmaneser a publicly available shallow se mantic parser7 for the role labeling task instead of the graphbased model presented in section 4thesoftware is trained on the framenet annotated sen tences using a standard feature set for detailswe use shalmaneser to parse questions and answer sentencesthe parser makes hard decisions about the presence or absence of a semantic roleunfortunately this prevents usfrom using our method for semantic structure match ing which assumes a soft labelingwe therefore came up with a simple matching strat egy suitable for the parsers outputfor questionand answer sentences matching in their frame as signment phrases bearing the same semantic role asthe eap are considered answer candidatesthe latter are ranked according to word overlap use a dynamic time warping al gorithm to calculate the degree to which dependency relationpaths are correlatedcorrelations for individual relations are es timated from training data whereas we assume a binary value the modification wasnecessary to render the baseline system comparable to our an swer extraction model which is unsupervised7the software is available from httpwwwcoliunisaarlanddeprojectssalsashal overlap at allour evaluation was motivated by the following ques tions how does the incompleteness of framenet impact qa performance on the trec data setsin particular we wanted to examine whether there are questions for which in principle no answer can befound due to missing frame entries or missing an notated sentences are all questions and theircorresponding answers amenable to a framenet style analysisin other words we wanted to assess whether questions and answers often evoke the sameor related frames this is a prerequisite for semantic structure matching and ulti mately answer extraction do the graphbasedmodels introduced in this paper bring any perfor mance gains over stateoftheart shallow semanticparsers or more conventional syntaxbased qa systemsrecall that our graphbased models were de signed especially for the qa answer extraction taskour results are summarized in tables 13table 1 records the number of questions to be answered forthe trec0205 datasets we also give infor mation regarding the number of questions which arein principle unanswerable with a framenetstyle se mantic role analysiscolumn noframe shows the number of questions which do not have an appropriate frame or predicate in the databasefor example there is currently no predicate entry for sponsor or sink or for hit shal low semantic parsers trained on framenet would also have trouble assigning roles to predicates for which no data is availablefinally column nomatch reports the number of questions which cannot be answered due to frame 18 data total noframe noannot nomatch rest trec02 444 87 29 176 152 trec03 380 55 30 183 112 trec04 203 47 14 67 75 trec05 352 70 23 145 114 table 1 number of questions which cannot be answered using a framenet style semantic analysis 
numbers in parentheses are percentages of total qualify fordiscountsthe answer and the question evoke dif ferent frames in fact here a semantic role analysis is not relevant for locating the right answeras can be seen nomatch cases are by far the most frequentthe number of questions remaining after excluding noframe noannot and nomatch are shown under the rest heading in table 1these results indicate that framenetbased se mantic role analysis applies to approximately 35 of the trec datathis means that an extraction module relying solely on framenet will have poor performance since it will be unable to find answers for more than half of the questions beeing askedwe nevertheless examine whether our model brings any performance improvements on this limited dataset which is admittedly favorable towards a framenetstyle analysistable 2 shows the results of our an swer extraction module together with two baseline systemsthe first baseline uses only dependency relation path information whereas the second baseline uses shal maneser a stateoftheart shallow semantic parser for the role labeling taskwe consider an answer correct if it is returned with rank 1as can be seensemmatch is significantly better than both synmatch and semparse whereas the latter is signifi cantly worse than synmatchalthough promising the results in table 2 are not very informative since they show performance gainson partial datainstead of using our answer extrac tion model on its own we next combined it with the syntaxbased system mentioned above if framenet is indeedhelpful for qa we would expect an ensemble sys model trec02 trec03 trec04 trec05 semparse 1316 892 1733 1316 synmatch 3553330440003684semmatch 5329491154675965table 2 system performance on subset of trecdatasets signifi cantly better than semparse significantly better than synmatch model trec02 trec03 trec04 trec05 synmatch 3288307035953438semparse 2523 2368 2857 2670 semmatch 3896355342364176table 3 system performance on trec datasets significantly better than semparse significantly better than synmatch tem to yield better performance over a purely syn tactic answer extraction modulethe two systems were combined as followsgiven a question we first pass it to our framenet model if an answer is foundour job is done if no answer is returned the ques tion is passed on to synmatchour results are givenin table 3semmatch and semparse are ensem ble systems using synmatch together with the qaspecific role labeling method proposed in this pa per and shalmaneser respectivelywe also compare these systems against synmatch on its ownwe can now attempt to answer our third ques tion concerning our models performance on thetrec dataour experiments show that a framenet enhanced answer extraction module significantlyoutperforms a similar module that uses only syntactic information another interesting finding is that 19 the shallow semantic parser performs considerably worse in comparison to our graphbased models and the syntaxbased systeminspection of the parsers output highlights two explanations for thisfirst theshallow semantic parser has difficulty assigning ac curate semantic roles to questions and secondly it tends to favor precision over recall thus re ducing the number of questions for which answers can be founda similar finding is reported in sun et al for a propbank trained parserin this paper we assess the contribution of semanticrole labeling to opendomain factoid question answeringwe present a graphbased answer extrac tion model which effectively incorporates framenet 
style role semantic information, and show that it achieves promising results. our experiments show that the proposed model can be effectively combined with a syntaxbased system to obtain performance superior to the latter when used on its own. furthermore, we demonstrate performance gains over a shallow semantic parser trained on the framenet annotated corpus. we argue that performance gains are due to the adopted graphtheoretic framework, which is robust to coverage and recall problems. we also provide a detailed analysis of the appropriateness of framenet for qa. we show that performance can be compromised due to incomplete coverage, but also because of mismatching questionanswer representations: the question and the answer may evoke different frames, or the answer simply falls outside the scope of a given frame. our study shows that mismatches are relatively frequent, and motivates the use of semantically informed methods in conjunction with syntaxbased methods. important future directions lie in evaluating the contribution of alternative semantic role frameworks to the answer extraction task and developing models that learn semantic roles directly from unannotated text, without the support of framenet annotations. beyond question answering, we also plan to investigate the potential of our model for shallow semantic parsing, since our experience so far has shown that it achieves good recall. acknowledgements: we are grateful to sebastian pado for running shalmaneser on our data. thanks to frank keller and amit dubey for insightful comments and suggestions. the authors acknowledge the support of dfg and epsrc.
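The path-compatibility scoring described in section 4 above (a subsequence kernel over dependency relation paths, restricted to unigram and bigram subsequences for efficiency and weighted by a tf-idf-like association between a semantic role and each relation) can be sketched compactly. The sketch below is an illustration only, not the authors' code: the function names, the representation of a path as a list of relation strings, the layout of the inputs (sr_paths_by_role mapping each role to its annotated paths, role_freq giving relative role frequencies), and the exact way subsequence counts are combined with relation weights are assumptions of this illustration, since the paper leaves that combination underspecified.

from collections import Counter
from math import log

def ngrams(path, n):
    """Contiguous n-gram subsequences of a dependency relation path (a list of strings)."""
    return [tuple(path[i:i + n]) for i in range(len(path) - n + 1)]

def relation_weight(rel, sr_paths_by_role, role):
    """tf-idf-like weight: frequency of `rel` in paths labeled with `role`,
    scaled by log(N / N_r), where N is the number of roles evoked by the frame
    and N_r the number of roles whose annotated paths contain `rel`."""
    f_r = sum(p.count(rel) for p in sr_paths_by_role[role])
    n_total = len(sr_paths_by_role)
    n_r = sum(1 for paths in sr_paths_by_role.values()
              if any(rel in p for p in paths))
    return f_r * log(n_total / n_r) if n_r else 0.0

def path_similarity(unlabeled_path, labeled_path, sr_paths_by_role, role):
    """Sum of weighted unigram and bigram subsequences shared by the two paths."""
    score = 0.0
    for n in (1, 2):
        shared = Counter(ngrams(unlabeled_path, n)) & Counter(ngrams(labeled_path, n))
        for subseq, count in shared.items():
            # weight each shared subsequence by the weights of the relations it contains
            score += count * sum(relation_weight(rel, sr_paths_by_role, role)
                                 for rel in subseq)
    return score

def role_compatibility(unlabeled_path, sr_paths_by_role, role, role_freq):
    """s(w, sr): best similarity against any annotated path for the role,
    weighted by the role's relative frequency in the FrameNet annotations."""
    best = max((path_similarity(unlabeled_path, p, sr_paths_by_role, role)
                for p in sr_paths_by_role[role]), default=0.0)
    return role_freq[role] * best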
D07-1002
using semantic roles to improve question answering. shallow semantic parsing, the automatic identification and labeling of sentential constituents, has recently received much attention. our work examines whether semantic role information is beneficial to question answering. we introduce a general framework for answer extraction which exploits semantic role annotations in the framenet paradigm. we view semantic role assignment as an optimization problem in a bipartite graph and answer extraction as an instance of graph matching. experimental results on the trec datasets demonstrate improvements over stateoftheart models. we show that shallow semantic information in the form of predicate argument structures improves the automatic detection of correct answers to a target question. we also point out that the low coverage of the current version of framenet significantly limits the expected boost in performance.
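The summary above casts semantic role assignment as an optimization problem in a bipartite graph; in the paper this is a minimum-weight edge cover over frame-element and role nodes, solved via an equivalent linear assignment formulation. The sketch below is a hedged illustration only: it brute-forces the cover for the tiny per-predicate graphs that arise in this task, and it converts each compatibility score s into a cost as 1 - s, a transform the paper does not spell out. The function name and the toy scores are hypothetical.

from itertools import chain, combinations

def min_weight_edge_cover(scores):
    """Exact minimum-weight edge cover of a small bipartite graph by exhaustive search.

    `scores[(element, role)]` holds the compatibility score s(element, role).
    Edge cost is taken as 1 - s (an assumption). Returns the chosen edge set.
    Practical only for tiny graphs; the paper uses an equivalent
    linear-assignment solver that runs in cubic time.
    """
    edges = list(scores)
    elements = {e for e, _ in edges}
    roles = {r for _, r in edges}

    def nonempty_subsets(items):
        return chain.from_iterable(combinations(items, k)
                                   for k in range(1, len(items) + 1))

    best_cover, best_cost = None, float("inf")
    for subset in nonempty_subsets(edges):
        covered_elements = {e for e, _ in subset}
        covered_roles = {r for _, r in subset}
        if covered_elements != elements or covered_roles != roles:
            continue  # not an edge cover: some node is left unattached
        cost = sum(1.0 - scores[edge] for edge in subset)
        if cost < best_cost:
            best_cover, best_cost = set(subset), cost
    return best_cover

# toy usage with made-up scores for one frame
scores = {
    ("eap", "cognizer"): 0.6, ("eap", "phenomenon"): 0.1,
    ("prions", "cognizer"): 0.05, ("prions", "phenomenon"): 0.5,
}
print(min_weight_edge_cover(scores))

Because an edge cover (unlike a one-to-one assignment) only requires every node to be linked to at least one node on the other side, a frame element can receive several roles, which is exactly the soft labeling the paper relies on.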
what is the jeopardy model a quasisynchronous grammar for qa this paper presents a syntaxdriven ap proach to question answering specifically the answersentence selection problem forshortanswer questions rather than using syntactic features to augment exist ing statistical classifiers we build on the idea that ques tions and their answers relate toeach other via loose but predictable syntactic transformations we propose a probabilistic quasisynchronous grammar inspired by one proposed for machine translation and parameterized by mixtures of a robust non lexical syntaxalignment model with a lexicalsemanticsdriven loglinear model our model learns soft alignments as a hidden variable in discriminative training experimental results using the trec dataset are shown to significantly outperform strong stateoftheart baselines opendomain question answering is a widelystudied and fastgrowing research problemstate oftheart qa systems are extremely complexthey usually take the form of a pipeline architecture chaining together modules that perform tasks such as answer type analysis document retrieval answer candidate extrac tion and answer rerankingthis architecture is so predominant that each task listed above has evolvedinto its own subfield and is often studied and evalu ated independently at a high level the qa task boils down to only two essential steps the first step retrieval narrows down the search spacefrom a corpus of millions of documents to a focused set of maybe a few hundred using an ir engine where efficiency and recall are the main focusthe second step selection assesses each can didate answer string proposed by the first step and finds the one that is most likely to be an answerto the given questionthe granularity of the tar get answer string varies depending on the type ofthe questionfor example answers to factoid ques tions are usually single words or short phrases while definitional questions and other more complex question types look for sentences or short passagesin this work we fix the granularity of an answer to a single sentenceearlier work on answer selection relies only on the surfacelevel text informationtwo approaches are most common surface pattern matching andsimilarity measures on the question and answer represented as bags of wordsin the former pat terns for a certain answer type are either craftedmanually or acquired from training examples automatically in the lattermeasures like cosinesimilarity are applied to bagofwords representations of the question and answeralthough many of these systems haveachieved very good results in trecstyle evaluations shallow methods using the bagofword repre sentation clearly have their limitationsexamples of 22 cases where the bagofwords approach fails abound in qa literature here we borrow an example used by echihabi and marcu the question is who is the leader of france and the sentence henri hadjenberg who is the leader of france s jewish community endorsed which is not the correct answer matches all keywords inthe question in exactly the same orderthis example illustrates two types of variation that need to be recognized in order to connect this questionanswer pairthe first variation is thechange of the word leaderto its semantically re lated term presidentthe second variation is the syntactic shift from leader of franceto french presidentit is also important to recognize thatfrancein the first sentence is modifying com munity and therefore henri hadjenbergis the leader of communityrather than the leader offrancethese syntactic and 
semantic variations occur in almost every questionanswer pair and typi cally they cannot be easily captured using shallow representationsit is also worth noting that such syntactic and semantic variations are not unique to qa they can be found in many other closely related nlp tasks motivating extensive community efforts in syntactic and semantic processingindeed in this work we imagine a generative story for qa in which the question is generatedfrom the answer sentence through a series of syn tactic and semantic transformationsthe same story has been told for machine translation in which a target languagesentence has undergone seman tic transformation and syntactic transformation to generate the source language sen tence similar stories can also be found in paraphrasing and textual entailment our story makes use of a weighted formalism known as quasisynchronous grammar originally developed by d smith and eisner for machine translationunlike most synchronous formalisms qg does not posit a strict iso morphism between the two trees and it providesan elegant description for the set of local configura tionsin section 2 we situate our contribution in thecontext of earlier work and we give a brief discus sion of quasisynchronous grammars in section 3our version of qg called the jeopardy model and our parameter estimation method are described insection 4experimental results comparing our ap proach to two stateoftheart baselines are presented in section 5we discuss portability to crosslingual qa and other applied semantic processing tasks in section 6to model the syntactic transformation process re searchers in these fieldsespecially in machine translationhave developed powerful grammatical formalisms and statistical models for representing and learning these treetotree relations we can also observe a trend in recent work in textual entailment that more emphasis is put onexplicit learning of the syntactic graph mapping between the entailed and entailedby sentences however relatively fewer attempts have been made in the qa communityas pointed out by katz and lin most early experiments in qa that tried to bring in syntactic or semantic features showed little or no improvement and itwas often the case that performance actually de graded morerecent attempts have tried to augment the bagof words representationwhich after all is simply a realvalued feature vectorwith syntactic featuresthe usual similarity measures can then be used onthe new feature representationfor example punyakanok et al used approximate tree match ing and treeeditdistance to compute a similarity score between the question and answer parse treessimilarly shen et al experimented with de pendency tree kernels to compute similarity between parse treescui et al measured sentencesimilarity based on similarity measures between de pendency paths among aligned wordsthey used heuristic functions similar to mutual information to 23 assign scores to matched pairs of dependency linksshen and klakow extend the idea furtherthrough the use of loglinear models to learn a scor ing function for relation pairsechihabi and marcu presented a noisy channel approach in which they adapted the ibm model 4 from statistical machine translation and applied it to qasimilarly murdock and croft adopted a sim ple translation model from ibm model 1 and applied it to qaporting the translation model to qa is not straight forward it involves parsetree pruning heuristics and also replacing the lexical trans lation table with a monolingual dictionarywhich simply encodes 
the identity relationthis brings usto the question that drives this work is there a statistical translationlike model that is natural and accu rate for question answeringwe propose smith andeisners quasisynchronous grammar as a general solution and the jeopardy model as a specific instancefor a formal description of qg we recommendsmith and eisner we briefly review the central idea hereqg arose out of the empirical observation that translated sentences often have some isomorphic syntactic structure but not usually in en tirety and the strictness of the isomorphism may vary across words or syntactic rulesthe idea is that rather than a synchronous structure over the source and target sentences a tree over the target sentence is modeled by a sourcesentencespecific grammar that is inspired by the source sentences tree1 this is implemented by a sensereally just a subsetof nodes in the source treeattached to each gram mar node in the target treethe senses define an alignment between the treesbecause it only loosely links the two sentencessyntactic structure qg is particularly wellsuited for qa insofar as qa is like freetranslationa concrete example that is easy to understand is a binary quasisynchronous contextfree grammar 1smith and eisner also show how qg formalisms generalize synchronous grammar formalismslet vs be the set of constituent to kens in the source treeqcfg rules would take the augmented form x s1y s2z s3x s1w where xy and z are ordinary cfg nonterminals each si 2vs and w is a targetlanguage wordqg can be made more or less lib eralby constraining the cardinality of the si and by constraining the relation ships among the si mentioned in a single rulethese are called permissible configurationsan example of a strict configuration is that a target parentchildpair must align to a source parent child pairconfigurations are shown in table 1here following smith and eisner we usea weighted quasisynchronous dependency grammarapart from the obvious difference in appli cation task there are a few important differences with their modelfirst we are not interested in thealignments per se we will sum them out as a hid den variable when scoring a questionanswer pairsecond our probability model includes an optional mixture component that permits arbitrary featureswe experiment with a small set of wordnet lexicalsemantics features third we apply a more discriminative training method our model informally speaking aims to follow theprocess a player of the television game show jeop ardymight followthe player knows the answer and must quickly turn it into a question2 the questionanswer pairs used on jeopardyare not precisely what we have in mind for the real task but the syntactic transformation in spires our modelin this section we formally define 2a round of jeopardyinvolves a somewhat involved and specific answerpresented to the competitors and the first competitor to hit a buzzer proposes the questionthat leads tothe answerfor example an answer might be this eastern eu ropean capital is famous for defenestrationsin jeopardythe players must respond with a queston what is prague24this probability model and present the necessary al gorithms for parameter estimation41 probabilistic modelthe jeopardy model is a qg designed for qalet q q1 qnbe a question sentence and let a a1 ambe a candidate an swer sentenceinpractice these sequences may include other infor mation such as pos but for clarity we assume justwords in the expositionlet a be the set of can didate answers under considerationour aim is to choose 
a argmax aa p at a high level we make three adjustmentsthe first is to apply bayesrule p p pbecause a is known and is assumed to be generated by an external extraction system we could use that extraction system to assign scores to the candidate an swersother scores could also be used such as reputability of the document the answer came from grammaticality etc here aiming for simplicity we do not aim to use such informationhence we treat p as uniform over a3 the second adjustment adds a labeled directed dependency tree to the question and the answerthe tree is produced by a stateoftheart depen dency parser trained on the wall street journal penn treebank a dependency tree on a sequence w w1 wkis a mapping of indices of words to in dices of their syntactic parents and a label for the syntactic relation 1 k 0 k l each word wi has a single parent denoted wpar cycles are not permittedw0 is taken to be the invis ible wallsymbol at the left edge of the sentence it has a single child 0 1the label for wi is denoted labthe third adjustment involves a hidden variable x the alignment between question and answer3the main motivation for modeling p is that it is easier to model deletion of information than insertionour qg doesnot model the realworld knowledge required to fill in an an swer its job is to know what answers are likely to look like syntacticallywordsin our model each questionword maps to exactly one answerwordlet x 1 n 1 m be a mapping from indices of words in qto indices of words in a 1 in general x couldrange over subsets of 1 mbecause we de fine the correspondence in this direction note that it is possible for multple question words to map to the same answer wordwhy do we treat the alignmentx as a hidden vari ablein prior work the alignment is assumed to be known given the sentences but we aim to discoverit from dataour guide in this learning is the struc ture inherent in the qg the configurations betweenparentchild pairs in the question and their corre sponding aligned words in the answerthe hidden variable treatment let us us avoid commitment to any one x mapping making the method more robust tonoisy parses and any wrong assumptions imposed by the model 1 for example or that syntactic transformations can explain the connection between q and a at all4 our model then defines p x p where q and a are the question tree and answer tree respectivelythe stochastic process defined by our model factors cleanly into recursive steps thatderive the question from the top downthe qg defines a grammar for this derivation the grammar de pends on the specific answerlet iw refer to the subtree of w rooted at withe model is defined by p x a pkids i j i qi right jqi mx0 pkid lab qi q x x a p x a 4if parsing performance is a concern we might also treat the question andor answer parse trees as hidden variables thoughthat makes training and testing more computationally expen sive25 note the recursion in the last linewhile the above may be daunting in practice it boils down only to defining the conditional distribution pkid since the number of left and right children of each node need not be modeled pkids is included above for completeness but in the model applied here we do not condition it on qi and therefore do not need to estimate it pkid defines a distribution over syntactic children of qi and their labels given the word qi the parent of qi the dependency relation between qi and its parent the answerword qi is aligned to the answerword the child being predicted is aligned to and the remainder of the answer 
tree42 dynamic programminggiven q the score for an answer is simply pcomputing the score requires summing over alignments and can be done efficiently by bottomup dynamic programminglet s refer to the score of jq assuming that the parent of qj qpar is aligned to athe base case for leaves of q is s pkids pkids m k0 pkid lab qq k a note that k ranges over indices of answerwords to be aligned to qj the recursive case is s pkids i j i qj right m k0 pkid lab qq k a jqi s solving these equations bottomup can be done in o time and o space in practice this is very efficientin our experiments computing the value of a questionanswer pair took two seconds on average5 we turn next to the details of pkid the core of the model43 base modelour base model factors pkid into three conditional multinomial distributionspbasekid lab qq k a p p plab config where qipos is questionword is pos label and qine is its namedentity labelconfig maps questionword i its parent and their alignees to a qg configuration as described in table 1 notethat some configurations are extended with addi tional tree informationthe base model does not directly predict the specific words in the questiononly their partsofspeech namedentity labels anddependency relation labelsthis model is very sim ilar to smith and eisner because we are interested in augmenting the qg with additional lexicalsemantic knowledge we also estimate pkid by mixing the base model with a model that exploits wordnet lexicalsemantic relationsthe mixture is given by pkid p base kid p ls kid 44 lexicalsemantics loglinear modelthe lexicalsemantics model plskid is defined by pre dicting a subset of the thirteen classes for the questionside word given the identity ofits aligned answerside wordthese classes in clude wordnet relations identicalword synonymantonym hy pernym hyponym derived form morphological variation verb group entailmententailedby seealso and causal relationin addition to capture the special importance of whwords in questions we add a special semantic re lation called qwordbetween any word and any whwordthis is done through a loglinear model with one feature per relationmultiple relations may fire motivating the loglinear model which permits overlappingfeatures and therefore prediction of 5experiments were run on a 64bit machine with 222ghz dualcore cpus and 4gb of memory26 any of the possible 213 1 nonempty subsetsit is important to note that this model assigns zero probability to alignment of an answerword with any questionword that is not directly related to it through any relationsuch words may be linked in the mixture model however via pbasekid 6 parentchild question parentchild pair align respectively to answer parentchild pairaugmented with the qside dependency la belchildparent question parentchild pair align respectively to answer childparent pairaugmented with the qside dependency la belgrandparentchild question parentchild pair align respec tively to answer grandparentchild pairaugmented with the qside dependency labelsame node question parentchild pair align to the same answerwordsiblings question parentchild pair align to sib lings in the answeraugmented withthe treedistance between the aside sib lingsccommand the parent of one answerside word is an ancestor of the other answerside wordother a catchall for all other types of config urations which are permittedtable 1 syntactic alignment configurations are partitioned into these sets for prediction under the jeop ardy model45 parameter estimationthe parameters to be estimated for the jeopardy 
model boil down to the conditional multinomialdistributions in pbasekid the loglinear weights in side of plskid and the mixture coefficient 7 stan6it is to preserve that robustness property that the models are mixed and not combined some other way7in our experiments all loglinear weights are initialized tobe 1 all multinomial distributions are initialized as uniform disdard applications of loglinear models apply con ditional maximum likelihood estimation which for our case involves using an empirical distribution pover questionanswer pairs to opti mize as follows max qqaa p log p p x p note the hidden variable x being summed out that makes the optimization problem nonconvexthissort of problem can be solved in principle by conditional variants of the expectationmaximization al gorithm we use a quasinewton method known as lbfgs that makes use of the gradient of the above function to evaluate our model we conducted experiments using text retrieval conference 813 qa dataset8 51 experimental setupthe trec dataset contains questions and answer patterns as well as a pool of documents returned byparticipating teamsour task is the same as pun yakanok et al and cui et al where we search for singlesentence answers to factoid questionswe follow a similar setup to shen and klakow by automatically selecting answer candidate sentences and then comparing against a humanjudged gold standardwe used the questions in trec 812 for training and set aside trec 13 questions for development and testing to gen erate the candidate answer set for development and testing we automatically selected sentences from each questions document pool that contains one ormore nonstopwords from the questionfor gen erating the training candidate set in addtion to thesentences that contain nonstopwords from the ques tion we also added sentences that contain correct tributions is initialized to be 018we thank the organizers and nist for making the dataset publicly available27 answer patternmanual judgement was produced for the entire trec 13 set and also for the first 100questions from the training set trec 8129 on av erage each question in the development set has 31 positive and 171 negative answersthere are 36 positive and 200 negative answers per question in the test setwe tokenized sentences using the standard tree bank tokenization script and then we performedpartofspeech tagging using mxpost tagger the resulting postagged sentences were then parsed using mstparser trained on the entire penn treebank to produce labeled dependency parse trees we used bbn identifinder for namedentity taggingas answers in our task are considered to be sin gle sentences our evaluation differs slightly from trec where an answer string has to be accompaniedby a supporting document id as discussed by punyakanok et al the singlesentence assump tion does not simplify the task since the hardest part of answer finding is to locate the correct sentencefrom an endusers point of view presenting thesentence that contains the answer is often more in formative and evidentialfurthermore although the judgement data in our case are more laborintensiveto obtain we believe our evaluation method is a better indicator than the trec evaluation for the qual ity of an answer selection algorithmto illustrate the point consider the example question when did james dean diethe correct an9more humanjudged data are desirable though we will address training from noisy automatically judged data in section 54it is important to note that human judgement of answer sentence 
correctness was carried out prior to any experi ments and therefore is unbiasedthe total number of questions in trec 13 is 230we exclude from the trec 13 set questions that either have no correct answer candidates or no incorrect answer candidates any algorithm will get the same performance on these questions and thereforeobscures the evaluation results6 such questions were also excluded from the 100 manuallyjudged training questions result ing in 94 questions for trainingfor computational reasons we also eliminated answer candidate sentences that are longer than 40 words from the training and evaluation setafter these data preparation steps we have 348 positive qa pairs for training 1415 qa pairs in the development set and 1703 qa pairs in the test setswer as appeared in the sentence in 1955 actor james dean was killed in a twocar collision nearcholame califis 1955but from the same docu ment there is another sentence which also contains 1955 in 1955 the studio asked him to become a technical adviser on elia kazans east of edenstarring james deanif a system missed the first sentence but happened to have extracted 1955 fromthe second one the trec evaluation grants it a cor rect and wellsupportedpoint since the document id matches the correct document ideven though the latter answer does not entail the true answerour evaluation does not suffer from this problemwe report two standard evaluation measures commonly used in ir and qa research mean av erage precision and mean reciprocal rank all results are produced using the standard trec eval program52 baseline systemswe implemented two stateoftheart answerfinding algorithms as strong baselines for comparisoncui et al is the answerfinding algorithm behindone of the best performing systems in trec eval uationsit uses a mutual informationinspired scorecomputed over dependency trees and a single alignment between themwe found the method to be brit tle often not finding a score for a testing instance because alignment was not possiblewe extendedthe original algorithm allowing fuzzy word align ments through wordnet expansion both results are reportedthe second baseline is the approximate tree matching work by punyakanok et al their algorithm measures the similarity between q and a by computing tree edit distanceour replication is close to the algorithm they describe with one subtle differencepunyakanok et al used answertyping in computing edit distance this is not available in our dataset their heuristics for reformulating questions into statements were not replicatedwedid however apply wordnet typechecking and ap proximate penalized lexical matchingboth results are reported28 development set test set training dataset model map mrr map mrr 100 manuallyjudged treematch 04074 04458 03814 04462 wn 04328 04961 04189 04939 cui et al 04715 06059 04350 05569 wn 05311 06162 04271 05259 jeopardy 05189 05788 04828 05571 jeopardy 06812 07636 06029 06852 2293 noisy cui et al 02165 03690 02833 04248 wn 04333 05363 03811 04964 jeopardy 05174 05570 04922 05732 jeopardy 06683 07443 05655 06687 table 2 results on development and test setstreematch is our implementation of punyakanok et al wn modifies their edit distance function using wordnetwe also report our implementation of cui et al along with our wordnet expansion the jeopardy base model and mixture with the lexicalsemantics loglinear model perform best both are trained using conditional maximum likelihood estimationthe top part of the table shows performance using 100 manuallyannotated question examples and 
the bottom part adds noisily automatically annotated questions 1012393boldface marks the best score in a column and any scores in that column not significantly worse under a a twotailed paired ttest 53 resultsevaluation results on the development and test setsof our model in comparison with the baseline algo rithms are shown in table 2both our model and the model in cui et al are trained on the manuallyjudged training set the approximate tree matching algorithm in punyakanok et al uses fixed edit distance functions and therefore does not require trainingfrom the table we can see that our model signif icantly outperforms the two baseline algorithmseven when they are given the benefit of wordneton both development and test set and on both mrr and map54 experiments with noisy training dataalthough manual annotation of the remaining 2293 training sentencesanswers in trec 812 was too laborintensive we did experiment with a simple noisy automatic labeling techniqueany answer that had at least three nonstop word types seen in the question and contains the answer pattern defined in the dataset was labeled as correctand used intrainingthe bottom part of table 2 shows the re sultsadding the noisy data hurts all methods butthe jeopardy model maintains its lead and consis tently suffers less damage than cui et al does not use training examples55 summing vs maximizingunlike most previous work our model does not try to find a single correspondence between words in the question and words in the answer during training or during testingan alternative method might choose the best alignment rather than the sum of all alignment scoresthis involves a slight change to equation 3 replacing the summation with a maximizationthe change could be made during training during testing or bothtable 3 shows that summing is preferable especially during trainingthe key experimental result of this work is that loose syntactic transformations are an effective way to carry out statistical question answeringone unique advantage of our model is the mix ture of a factored multinomialbased base model and a potentially very rich loglinear modelthebase model gives our model robustness and the log 29 test set training decoding map mrr 06029 06852 max 05822 06489 max 05559 06250 max max 05571 06365table 3 experimental results on comparing sum ming over alignments with maximizing over alignments on the test setboldface marks the best score in a column and any scores in that columnnot significantly worse under a a twotailed paired t test linear model allows us to throw in task or domainspecific featuresusing a mixture gives the advantage of smoothing without hav ing to normalize the loglinear model by summing over large setsthis powerful combination leads us to believe that our model can be easily ported to other semantic processing tasks where modeling syntactic and semantic transformations is the keysuch as textual entailment paraphrasing and cross lingual qathe traditional approach to crosslingual qa isthat translation is either a preprocessing or post processing step done independently from the main qa tasknotice that the qg formalism that we have employed in this work was originally proposed formachine translationwe might envision transformations that are performed together to form ques tions from answers and to translatea jeopardygame in which bilingual players must ask a question in a different language than that in which the answer is posedwe described a statistical syntaxbased model that softly aligns a question sentence with a 
candidateanswer sentence and returns a score. discriminative training and a relatively straightforward, barelyengineered feature set were used in the implementation. our scoring model was found to greatly outperform two stateoftheart baselines on an answer selection task using the trec dataset. acknowledgments: the authors acknowledge helpful input from three anonymous reviewers, kevin gimpel, and david smith. this work is supported in part by ardadto advanced question answering for intelligence program award number nbchc040164.
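The answer-scoring recursion of section 4.2 above sums over word alignments as a hidden variable rather than committing to a single one. The following is a minimal, hypothetical sketch of that bottom-up dynamic program, not the authors' implementation: p_kid is a stand-in callable that collapses the paper's conditioning on POS, named-entity, and dependency labels into one function (and must accept None for the wall symbol), and the child-count terms p_kids are omitted, as the paper itself does in the applied model.

from functools import lru_cache

def alignment_sum_score(question_tree, answer_positions, p_kid):
    """Sum over hidden word alignments in a quasi-synchronous dependency model.

    question_tree:    dict mapping each question node index to its list of children,
                      with 0 acting as the invisible wall/root symbol.
    answer_positions: indices of answer-side words a question word may align to.
    p_kid(child, parent_alignment, child_alignment):
                      probability of generating the question child given its
                      parent's aligned answer word and its own aligned answer word.

    Returns p(question | answer) up to the omitted child-count terms, computed
    bottom-up with memoization (roughly quadratic in the answer length per
    question node, as in the paper's dynamic program).
    """
    @lru_cache(maxsize=None)
    def subtree_score(node, parent_alignment):
        # Sum over the alignment of `node`; each choice also conditions the
        # recursive scores of `node`'s own children.
        total = 0.0
        for node_alignment in answer_positions:
            score = p_kid(node, parent_alignment, node_alignment)
            for child in question_tree.get(node, []):
                score *= subtree_score(child, node_alignment)
            total += score
        return total

    result = 1.0
    for child in question_tree.get(0, []):
        # the wall symbol is treated as aligned to a null answer position
        result *= subtree_score(child, None)
    return result

Per the comparison in section 5.5 of the paper, summing over alignments in this way outperformed replacing the sums with maximizations, especially during training.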
D07-1003
what is the jeopardy model? a quasisynchronous grammar for qa. this paper presents a syntaxdriven approach to question answering, specifically the answersentence selection problem for shortanswer questions. rather than using syntactic features to augment existing statistical classifiers, we build on the idea that questions and their answers relate to each other via loose but predictable syntactic transformations. we propose a probabilistic quasisynchronous grammar, inspired by one proposed for machine translation, and parameterized by mixtures of a robust nonlexical syntaxalignment model with a lexicalsemanticsdriven loglinear model. our model learns soft alignments as a hidden variable in discriminative training. experimental results using the trec dataset are shown to significantly outperform strong stateoftheart baselines. we explore the use of a formalism called quasisynchronous grammar in order to find a more explicit model for matching the set of dependencies while still allowing for looseness in the matching. we use quasisynchronous translation to map all parentchild paths in a question to any path in an answer.
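The mixture of the factored base model with the lexical-semantics log-linear component mentioned in the summary above (section 4.4 of the paper) can be illustrated as follows. This is a hedged sketch rather than the authors' implementation: the relation inventory follows the thirteen classes enumerated in the paper, but the snake_case labels, the function names, and the decision to leave normalization of the log-linear term to the caller are simplifying assumptions; in the paper the relation weights and the mixture coefficient are learned by conditional maximum likelihood with L-BFGS.

from math import exp

# WordNet-derived relations plus the special q-word relation; in training,
# one weight per relation would be estimated by L-BFGS.
RELATIONS = [
    "identical", "synonym", "antonym", "hypernym", "hyponym", "derived_form",
    "morphological_variant", "verb_group", "entailment", "entailed_by",
    "see_also", "causal", "q_word",
]

def p_ls(question_word, answer_word, firing_relations, weights):
    """Log-linear lexical-semantics score: exponentiated sum of the weights of
    the relations holding between the two words. Returns 0.0 when no relation
    fires, mirroring the paper's note that this component assigns zero
    probability to unrelated word pairs. Normalization over competing
    answer-side words is left to the caller (an assumption of this sketch)."""
    active = firing_relations(question_word, answer_word)
    if not active:
        return 0.0
    return exp(sum(weights[r] for r in active))

def p_kid_mixture(p_base, p_ls_score, lam):
    """Interpolate the factored multinomial base model with the
    lexical-semantics model: lam * p_base + (1 - lam) * p_ls."""
    return lam * p_base + (1.0 - lam) * p_ls_score

Mixing rather than multiplying the two components preserves the robustness of the base model: a word pair with no WordNet relation can still be aligned through the base term.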
improving statistical machine translation using word sense disambiguation we show for the first time that incorporatingthe predictions of a word sense disambigua tion system within a typical phrasebased statistical machine translation model consistently improves translation qualityacross all three different iwslt chineseenglish test sets as well as producing sta tistically significant improvements on the larger nist chineseenglish mt task and moreover never hurts performance on any test set according not only to bleubut to all eight most commonly used au tomatic evaluation metrics recent work has challenged the assumption that word sense disambiguation systems areuseful for smt yet smt translation qual ity still obviously suffers from inaccurate lexical choice in this paper we addressthis problem by investigating a new strategy for integrating wsd into an smt sys tem that performs fully phrasal multiworddisambiguation instead of directly incor porating a sensevalstyle wsd system weredefine the wsd task to match the ex act same phrasal translation disambiguation task faced by phrasebased smt systemsour results provide the first known empirical evidence that lexical semantics are in deed useful for smt despite claims to the contrary this material is based upon work supported in part by the defense advanced research projects agency under gale contract no hr001106c0023 and by the hong kong research grants council research grants common assumptions about the role and useful ness of word sense disambiguation models in fullscale statistical machine translation systems have recently been challengedon the one hand in previous work we obtained disappointing results when using the predictions of a senseval wsd system inconjunction with a standard wordbased smt system we reported slightly lower bleu scores de spite trying to incorporate wsd using a number of apparently sensible methodsthese results cast doubt on the assumption that sophisticated dedicated wsd systems that were developed independently from any particular nlp application can easily beintegrated into a smt system so as to improve trans lation quality through stronger models of context and rich linguistic informationrather it has beenargued smt systems have managed to achieve sig nificant improvements in translation quality without directly addressing translation disambiguation as awsd taskinstead translation disambiguation deci sions are made indirectly typically using only word surface forms and very local contextual information forgoing the much richer linguistic information that wsd systems typically take advantage ofon the other hand error analysis reveals that theperformance of smt systems still suffers from inaccurate lexical choicein subsequent empirical stud ies we have shown that smt systems perform much worse than dedicated wsd models both supervised rgc608399e rgc625600e and dag0304eg09anyopinions findings and conclusions or recommendations ex pressed in this material are those of the author and do not necessarily reflect the views of the defense advanced research projects agency61 and unsupervised on a senseval wsd task and therefore suggest that wsd should have a role to play in stateoftheart smt systemsin addition to the senseval shared tasks which have provided standard sense inventories and data sets wsd research has also turned increasinglyto designing specific models for a particular applica tionfor instance vickrey et al and specia proposed wsd systems designed for frenchto english and portuguese to english translation 
re spectively and present a more optimistic outlook forthe use of wsd in mt although these wsd sys tems have not yet been integrated nor evaluated in fullscale machine translation systemstaken together these seemingly contradictory results suggest that improving smt lexical choice ac curacy remains a key challenge to improve current smt quality and that it is still unclear what is the most appropriate integration framework for the wsd models in smtin this paper we present first results with a new architecture that integrates a stateoftheartwsd model into phrasebased smt so as to per form multiword phrasal lexical disambiguation and show that this new wsd approach not only produces gains across all available chineseenglishiwslt06 test sets for all eight commonly used au tomated mt evaluation metrics but also produces statistically significant gains on the much larger nist chineseenglish taskthe main differencebetween this approach and several of our earlier ap proaches as described in carpuat and wu and subsequently carpuat et al lies in the fact that we focus on repurposing the wsd systemfor multiword phrasebased smtrather than us ing a generic senseval wsd model as we did incarpuat and wu here both the wsd train ing and the wsd predictions are integrated into the phrasebased smt frameworkfurthermore rather than using a single word based wsd approach to augment a phrasebased smt model as we did in carpuat et al to improve bleu and nistscores here the wsd training and predictions operate on full multiword phrasal units resulting in significantly more reliable and consistent gains as eva luted by many other translation accuracy metrics as wellspecifically instead of using a senseval system we redefinethe wsd task to be exactly the same as lexi cal choice task faced by the multiword phrasal translation disambiguation task faced by the phrasebased smt systeminstead of using predefined senses drawn from manually constructed sense inventories such ashownet our wsd for smt sys tem directly disambiguates between all phrasaltranslation candidates seen during smt train inginstead of learning from manually annotated training data our wsd system is trained on the same corpora as the smt systemhowever despite these adaptations to the smt task the core sense disambiguation task remains pure wsd the rich context features are typical of wsd and almost never used in smtthe dynamic integration of contextsensitive translation probabilities is not typical of smtalthough it is embedded in a real smt sys tem the wsd task is exactly the same as inrecent and coming senseval multilingual lexi cal sample tasks where sense inventories represent the semantic distinctions made by another languagewe begin by presenting the wsd module and the smt integration techniquewe then show that incorporating it into a standard phrasebased smt baseline system consistently improves translation quality across all three different test sets from the chineseenglish iwslt text translation evaluationas well as on the larger nist chineseenglish trans lation taskdepending on the metric the individualgains are sometimes modest but remarkably incorporating wsd never hurts and helps enough to al ways make it a worthwile additional component in an smt systemfinally we analyze the reasons for the improvement62choice for smtto the best of our knowledge there has been no pre vious attempt at integrating a stateoftheart wsd system for fully phrasal multiword lexical choiceinto phrasebased smt with evaluation of the re sulting system on a 
translation taskwhile there are many evaluations of wsd quality in particular the senseval series of shared tasks kilgarriff mihalcea et al very little work has been done to addressthe actual integration of wsd in realistic smt ap plicationsto fully integrate wsd into phrasebased smt it is necessary to perform lexical disambiguation on multiword phrasal lexical units in contrast the model reported in cabezas and resnik can only perform lexical disambiguation on sin gle wordslike the model proposed in this papercabezas and resnik attempted to integrate phrasebased wsd models into decodinghowever although they reported that incorporating these predic tions via the pharaoh xml markup scheme yielded a small improvement in bleu score over a pharaoh baseline on a single spanishenglish translation data set we have determined empirically that applyingtheir singleword based model to several chineseenglish datasets does not yield systematic improve ments on most mt evaluation metrics the singleword model has the disad vantage of forcing the decoder to choose between the baseline phrasal translation probabilities versusthe wsd model predictions for single wordsin ad dition the singleword model does not generalize to wsd for phrasal lexical choice as overlapping spans cannot be specified with the xml markup schemeproviding wsd predictions for phraseswould require committing to a phrase segmenta tion of the input sentence before decoding which is likely to hurt translation qualityit is also necessary to focus directly on translationaccuracy rather than other measures such as alignment error rate which may not actually lead to i am proved translation quality in contrast for example garciavarea et al and garciavarea et al show improved alignment error rate with a maximum entropy based contextdependent lexicalchoice model but not improved translation accuracyin contrast our evaluation in this paper is conducted on the actual decoding task rather than in termediate tasks such as word alignmentmoreoverin the present work all commonly available auto mated mt evaluation metrics are used rather than only bleu score so as to maintain a more balanced perspectiveanother problem in the contextsensitive lexical choice in smt models of garcia varea et al is that their feature set is insufficiently rich to make much better predictions than the smt model itselfin contrast our wsdbased lexical choice models are designed to directly model the lexical choice in the actual translation direction and take full advantageof not residing strictly within the bayesian source channel model in order to benefit from the much richer sensevalstyle feature set this facilitatesgarcia varea et al found that the best results are obtained when the training of the contextdependent translation model is fully incorporated with the them training of the smt systemas described belowthe training of our new wsd model though not in corporated within the them training is also far more closely tied to the smt model than is the case with traditional standalone wsd modelsin contrast with brown et al our approach incorporates the predictions of stateofthe art wsd models that use rich contextual features for any phrase in the input vocabularyin brown et als early study of wsd impact on smt performance the authors reported improved translation quality on a french to english task by choosing an english translation for a french word based on the single contextual feature which is reliably discriminativehowever this was a pilot study which is limited to 
words with exactly two translation candidates and it is not clear that the conclusions would generalize to more recent smt architecturesthe close relationship between wsd and smt has been emphasized since the emergence of wsd asan independent taskhowever most of previous re search has focused on using multilingual resourcestypically used in smt systems to improve wsd ac curacy eg dagan and itai li and li 63 diab in contrast this paper focuses on theconverse goal of using wsd models to improve ac tual translation qualityrecently several researchers have focused on de signing wsd systems for the specific purpose oftranslationvickrey et al train a logistic regression wsd model on data extracted from auto matically word aligned parallel corpora but evaluateon a blank filling task which is essentially an eval uation of wsd accuracyspecia describesan inductive logic programmingbased wsd sys tem which was specifically designed for the purpose of portuguese to english translation but this system was also only evaluated on wsd accuracy and not integrated in a fullscale machine translation systemng et al show that it is possible to use automatically word aligned parallel corpora to train accurate supervised wsd modelsthe purpose ofthe study was to lower the annotation cost for su pervised wsd as suggested earlier by resnik andyarowsky however this result is also en couraging for the integration of wsd in smt since it suggests that accurate wsd can be achieved using training data of the kind needed for smtsmt 41 wsd models for every phrase in the inputvocabulary just like for the baseline phrase translation modelwsd models are defined for every phrase in the in put vocabularylexical choice in smt is naturallyframed as a wsd problem so the first step of inte gration consists of defining a wsd model for every phrase in the smt input vocabularythis differs from traditional wsd tasks wherethe wsd target is a single content wordsense val for instance has either lexical sample or all wordtasksthe target words for both categories of sen seval wsd tasks are typically only content wordsprimarily nouns verbs and adjectiveswhile in thecontext of smt we need to translate entire sen tences and therefore have a wsd model not only for every word in the input sentences regardless of their pos tag but for every phrase including tokens such as articles prepositions and even punctuationfurther empirical studies have suggested that includ ing wsd predictions for those longer phrases is akey factor to help the decoder produce better trans lations 42 wsd uses the same sense definitions as thesmt system instead of using predefined sense inventories thewsd models disambiguate between the smt trans lation candidatesin order to closely integrate wsdpredictions into the smt system we need to formu late wsd models so that they produce features that can directly be used in translation decisions taken by the smt systemit is therefore necessary for the wsd and smt systems to consider exactly the same translation candidates for a given word in the input languageassuming a standard phrasebased smt system wsd senses are thus ei ther words or phrases as learned in the smt phrasal translation lexiconthose sensecandidates arevery different from those typically used even in ded icated wsd tasks even in the multilingual sensevaltaskseach candidate is a phrase that is not neces sarily a syntactic noun or verb phrase as in manuallycompiled dictionariesit is quite possible that dis tinct sensesin our wsd for smt system could beconsidered 
synonyms in a traditional wsd frame work especially in monolingual wsdin addition to the consistency requirements for integration this requirement is also motivated by empirical studies which show that predefined trans lations derived from sense distinctions defined in monolingual ontologies do not match translation distinction made by human translators 43 wsd uses the same training data as thesmt system wsd training does not require any other resourcesthan smt training nor any manual sense annota tionwe employ supervised wsd systems sincesenseval results have amply demonstrated that supervised models significantly outperform unsupervised approaches training examples are annotated using the phrasealignments learned during smt trainingevery in 64 put language phrase is sensetagged with its aligned output language phrase in the parallel corpusthe phrase alignment method used to extract the wsd training data therefore depends on the one used by the smt systemthis presents the advantage of training wsd and smt models on exactly the same data thus eliminating domain mismatches betweensenseval data and parallel corporabut most importantly this allows wsd training data to be gener ated entirely automatically since the parallel corpus is automatically phrasealigned in order to learn the smt phrase bilexicon44 the wsd systemthe word sense disambiguation subsystem is mod eled after the best performing wsd system in the chinese lexical sample task at senseval3 the features employed are typical of wsd and are therefore far richer than those used in mostsmt systemsthe feature set consists of positionsensitive syntactic and local collocational fea tures since these features yielded the best results when combined in a nave bayes model on severalsenseval2 lexical sample tasks these features scale easily to the bigger vocabulary and sense candidates to be considered in a smt taskthe senseval system consists of an ensemble of four combined wsd models the first model is a nave bayes model since yarowsky and florian found this model to be the most accurate classifier in a comparative study on a subset of senseval2 english lexical sample datathe second model is a maximum entropy model since klein and manning found that this model yielded higher accuracy than nave bayes in a subsequent comparison of wsd performancethe third model is a boosting model since boosting has consistently turned in very competitive scores on related tasks such as named entity classificationwe also use the adaboostmh algorithmthe fourth model is a kernel pcabased model kernel principal component analysis or kpca is a nonlinear kernel method forextracting nonlinear principal components from vector sets where conceptually the ndimensional input vectors are nonlinearly mapped from their origi nal space right now to a highdimensional feature space f where linear pca is performed yielding a transformby which the input vectors can be mapped nonlin early to a new set of vectors wsd can be performed by a nearest neighbor clas sifier in the highdimensional kpca feature spaceall these classifiers have the ability to handle large numbers of sparse features many of which may be irrelevantmoreover the maximum entropy and boosting models are known to be well suited to handling features that are highly interdependent45 integrating wsd predictions inphrasebased smt architectures it is nontrivial to incorporate wsd into an existing phrasebased architecture such as pharaoh since the decoder is not set up to easily accept multiple translation 
probabilities that are dy namically computed in contextsensitive fashionfor every phrase in a given smt input sentencethe wsd probabilities can be used as additional feature in a loglinear translation model in combination with typical contextindependent smt bilexi con probabilitieswe overcome this obstacle by devising a callingarchitecture that reinitializes the decoder with dynamically generated lexicons on a persentence ba sisunlike a nbest reranking approach which is limited by the lexical choices made by the decoder using only the baseline contextindependent transla tion probabilities our method allows the system to make full use of wsd information for all competing phrases at all decoding stagesthe evaluation is conducted on two standard chinese to english translation taskswe follow standard machine translation evaluation procedure us ing automatic evaluation metricssince our goal is to evaluate translation quality we use standard mtevaluation methodology and do not evaluate the ac curacy of the wsd model independently65 table 1 evaluation results on the iwslt06 dataset integrating the wsd translation predictions improves bleu nist meteor wer per cder and ter across all 3 different available test setstest set experbleu nist meteor meteor ter wer per cder test 1 smt 4221 7888 6540 6324 4045 4558 3780 4009 smtwsd 4238 7902 6573 6364 3998 4530 3760 3991 test 2 smt 4149 8167 6625 6385 4095 4642 3752 4035 smtwsd 4197 8244 6635 6386 4063 4614 3725 4010 test 3 smt 4991 9016 7336 7070 3560 4060 3230 3546 smtwsd 5105 9142 7413 7144 3468 3975 3171 3458 table 2 evaluation results on the nist test set integrating the wsd translation predictions improves bleu nist meteor wer per cder and ter experbleu nist meteor meteor ter wer per cder smt 2041 7155 6021 5615 7676 8826 6171 7032 smtwsd 2092 7468 6030 5679 7134 8387 5729 6738 51 data setpreliminary experiments are conducted using training and evaluation data drawn from the multilin gual btec corpus which contains sentences used inconversations in the travel domain and their transla tions in several languagesa subset of this data wasmade available for the iwslt06 evaluation cam paign the training set consists of 40000 sentence pairs and each test set contains around 500 sentenceswe used only the pure text data and notthe speech transcriptions so that speechspecific issues would not interfere with our primary goal of understanding the effect of integrating wsd in a full scale phrasebased modela larger scale evaluation is conducted on the stan dard nist chineseenglish test set whichcontains 1788 sentences drawn from newswire cor pora and therefore of a much wider domain than the iwslt data setthe training set consists of about 1 million sentence pairs in the news domainbasic preprocessing was applied to the corpusthe english side was simply tokenized and case normalizedthe chinese side was word segmented using the ldc segmenter52 baseline smt systemsince our focus is not on a specific smt architec ture we use the offtheshelf phrasebased decoderpharaoh trained on the iwslt train ing setpharaoh implements a beam search decoder for phrasebased statistical models and presents the advantages of being freely available and widely usedthe phrase bilexicon is derived from the inter section of bidirectional ibm model 4 alignmentsobtained with giza aug mented to improve recall using the growdiagfinalheuristicthe language model is trained on the english side of the corpus using the sri language mod eling toolkit the loglinear model weights are learned using 
Chiang's implementation of the maximum-BLEU training algorithm, both for the baseline and the WSD-augmented system. Due to time constraints, this optimization was only conducted on the IWSLT task; the weights used in the WSD-augmented NIST model are based on the best IWSLT model. Given that the two tasks are quite different, we expect further improvements on the WSD-augmented system after running maximum-BLEU optimization for the NIST task. Using WSD predictions in SMT yields better translation quality on all test sets, as measured by all eight commonly used automatic evaluation metrics.
Table 3: translation examples with and without WSD for SMT, drawn from the IWSLT data sets [Chinese inputs not shown]:
- Ref: "please transfer to the chuo train line"; SMT: "please turn to the central line"; SMT+WSD: "please transfer to central line"
- Ref: "do i pay on the bus"; SMT: "please get on the bus"; SMT+WSD: "i buy a ticket on the bus"
- Input: [...]
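Sections 4.5 and 5.2 above describe feeding context-sensitive WSD translation probabilities into the phrase-based decoder as an extra feature of the log-linear model, alongside the context-independent phrase-table score. The sketch below shows only that scoring combination for a single phrase pair; the feature names, weights, and the `wsd_prob` lookup are illustrative assumptions, not Pharaoh's actual interface (which the paper drives by re-initializing the decoder with per-sentence lexicons).

```python
import math

def loglinear_phrase_score(features: dict[str, float],
                           weights: dict[str, float]) -> float:
    """Weighted sum of log feature values, as in a phrase-based
    log-linear translation model."""
    return sum(weights[name] * math.log(value)
               for name, value in features.items())

def score_phrase_pair(src_phrase: str,
                      tgt_phrase: str,
                      sentence_context: list[str],
                      phrase_table: dict[tuple[str, str], float],
                      wsd_prob) -> float:
    """Combine the static phrase-table probability with a
    context-sensitive WSD probability for the same phrase pair."""
    features = {
        # context-independent bilexicon probability p(tgt | src)
        "phrase_table": phrase_table[(src_phrase, tgt_phrase)],
        # context-sensitive probability from the WSD classifier,
        # computed for this particular input sentence
        "wsd": wsd_prob(src_phrase, tgt_phrase, sentence_context),
    }
    weights = {"phrase_table": 1.0, "wsd": 0.5}  # tuned (e.g. by MERT) in practice
    return loglinear_phrase_score(features, weights)

# Toy usage with a one-entry phrase table and a dummy WSD model.
table = {("mai piao", "buy a ticket"): 0.4}
dummy_wsd = lambda s, t, ctx: 0.7
print(score_phrase_pair("mai piao", "buy a ticket",
                        ["wo", "mai", "piao"], table, dummy_wsd))
```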
D07-1007
Improving Statistical Machine Translation Using Word Sense Disambiguation. We show for the first time that incorporating the predictions of a word sense disambiguation system within a typical phrase-based statistical machine translation model consistently improves translation quality across all three different IWSLT Chinese-English test sets, as well as producing statistically significant improvements on the larger NIST Chinese-English MT task; moreover, it never hurts performance on any test set, according not only to BLEU but to all eight most commonly used automatic evaluation metrics. Recent work has challenged the assumption that word sense disambiguation (WSD) systems are useful for SMT. Yet SMT translation quality still obviously suffers from inaccurate lexical choice. In this paper we address this problem by investigating a new strategy for integrating WSD into an SMT system that performs fully phrasal multi-word disambiguation. Instead of directly incorporating a Senseval-style WSD system, we redefine the WSD task to match the exact same phrasal translation disambiguation task faced by phrase-based SMT systems. Our results provide the first known empirical evidence that lexical semantics are indeed useful for SMT, despite claims to the contrary. We provide a machine translation system with the WSD probabilities for a phrase translation as extra features in a log-linear model. We use rich context features based on position, syntax, and local collocations to dynamically adapt the lexicons for each sentence and facilitate the choice of longer phrases. We use a state-of-the-art WSD engine to dynamically determine the score of a phrase pair under consideration and thus let the phrase selection adapt to the context of the sentence.
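Section 4.3 above notes that the WSD training examples are generated automatically by sense-tagging each source phrase with its aligned target phrase in the parallel corpus. Below is a minimal sketch of that extraction step, assuming phrase alignments are already available as (source span, target span) pairs per sentence; the data structures are placeholders, not the paper's actual pipeline.

```python
from collections import defaultdict

def extract_wsd_examples(parallel_corpus, phrase_alignments):
    """Build WSD training data from a phrase-aligned parallel corpus.

    parallel_corpus: list of (src_tokens, tgt_tokens) sentence pairs.
    phrase_alignments: for each sentence pair, a list of
        ((src_start, src_end), (tgt_start, tgt_end)) phrase spans.

    Returns a dict: source phrase -> list of (context_tokens, sense),
    where the "sense" is simply the aligned target phrase.
    """
    examples = defaultdict(list)
    for (src, tgt), spans in zip(parallel_corpus, phrase_alignments):
        for (ss, se), (ts, te) in spans:
            src_phrase = " ".join(src[ss:se])
            sense = " ".join(tgt[ts:te])           # sense = aligned translation
            examples[src_phrase].append((src, sense))  # whole sentence as context
    return examples

# Toy usage with one sentence pair and one aligned phrase.
corpus = [(["wo", "mai", "piao"], ["i", "buy", "a", "ticket"])]
aligns = [[((1, 2), (1, 2))]]   # "mai" aligned to "buy"
print(dict(extract_wsd_examples(corpus, aligns)))
```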
why does not them find good hmm postaggers this paper investigates why the hmms es timated by expectationmaximization produce such poor results as partofspeech taggers we find that the hmms es timated by them generally assign a roughlyequal number of word tokens to each hid den state while the empirical distribution of tokens to pos tags is highly skewed this motivates a bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution we investigate gibbs sampling and variational bayes estimators and show that vb con verges faster than gs for this task and thatvb significantly improves 1to1 tagging ac curacy over them we also show that them does nearly as well as vb when the number of hidden hmm states is dramatically reduced we also point out the high variance in all of these estimators and that they requiremany more iterations to approach conver gence than usually thought it is well known that expectationmaximization performs poorly in unsupervised induction of linguistic structure in ret rospect one can certainly find reasons to explain this failure after all likelihood does not appear in thewide variety of linguistic tests proposed for identi fying linguistic structure this paper focuses on unsupervised partofspeech tagging because it is perhaps the sim plest linguistic induction taskwe suggest that onereason for the apparent failure of them for pos tagging is that it tends to assign relatively equal numbers of tokens to each hidden state while the them pirical distribution of pos tags is highly skewed like many linguistic phenomenawe focus on firstorder hid den markov models in which the hidden state is interpreted as a pos tag also known as bitag modelsin this setting we show that them performs poorlywhen evaluated using a 1to1 accuracyevalua tion where each pos tag corresponds to at most one hidden state but is more competitive when evaluatedusing a manyto1 accuracyevaluation where sev eral hidden states may correspond to the same postagwe explain this by observing that the distribution of hidden states to words proposed by the them estimated hmms is relatively uniform while the empirical distribution of pos tags is heavily skewed towards a few highfrequency tagsbased on thiswe propose a bayesian prior that biases the sys tem toward more skewed distributions and show that this raises the 1to1 accuracy significantlyfinally we show that a similar increase in accuracy can be achieved by reducing the number of hidden states in the models estimated by themthere is certainly much useful information that bitag hmms models cannot capturetoutanova etal describe a wide variety of morphological and distributional features useful for pos tagging and clark proposes ways of incorporat ing some of these in an unsupervised tagging modelhowever bitag models are rich enough to capture at least some distributional information moreover more complex models add addi tional complicating factors that interact in ways stillpoorly understood for example smoothing is gen erally regarded as essential for higherorder hmmsyet it is not clear how to integrate smoothing into un supervised estimation procedures most previous work exploiting unsupervised training data for inferring pos tagging models has focused on semisupervised methods in the in which the learner is provided with a lexicon specifying the possible tags for each word or a small number of prototypesfor each posin the context of semi supervised learning using a tag lexicon wang and schuurmans observe discrepencies between 
the empirical and estimated tag frequencies similar to those observed here and show that constraining the estimation procedure to preserve the empiricalfrequencies improves tagging accuracyhowever as banko and moore point out the accuracy achieved by these unsupervised methodsdepends strongly on the precise nature of the su pervised training data which makes it more difficult to understand the behaviour of such systemsall of the experiments described below have the same basic structure an estimator is used to infera bitag hmm from the unsupervised training cor pus wall street journal corpus and then the resulting model is used to label each word of that corpus with one of the hmms hidden statesthis section describes how we evaluate how well thesesequences of hidden states correspond to the gold standard pos tags for the training corpus the chief difficulty is determining the correspondence between the hidden states and the goldstandard pos tagsperhaps the most straightforward method of es tablishing this correspondence is to deterministically map each hidden state to the pos tag it cooccurs most frequently with and return the proportion of the resulting pos tags that are the same as the pos tags of the goldstandard corpuswe call this themanyto1 accuracy of the hidden state sequence be because several hidden states may map to the same pos tag as clark points out manyto1 accuracy has several defectsif a system is permitted to posit an unbounded number of hidden states then it can achieve a perfect manyto 1 accuracy by placing every word token into its own unique statecrossvalidation ie identifying themanyto1 mapping and evaluating on different subsets of the data would answer many of these objectionshaghighi and klein propose constrain ing the mapping from hidden states to pos tags so that at most one hidden state maps to any pos tagthis mapping is found by greedily assigning hidden states to pos tags until either the hidden states or pos tags are exhausted we call the accuracy of the pos sequence obtained using this map its 1to1 accuracyfinally several authors have proposed using informationtheoretic measures of the divergence between the hidden state and pos tag sequencesgoldwater and griffiths propose using the variation of information metric described bymeilawe regard the assignments of hidden states and pos tags to the words of the cor pus as two different ways of clustering those wordsand evaluate the conditional entropy of each clus tering conditioned on the otherthe vi is the sum of these conditional entropiesspecifically given a corpus labeled with hidden states and pos tags if p p and p are the empirical probabilities of a hidden state y a pos tag t and the cooccurance of y and t respectively then the mutual information i entropies h and variation of information vi are defined as follows h y p log p h t p log p i yt p log p pp h hi 297 h hi vi h h as meila shows vi is a metric on the space of probability distributions whose value reflects the divergence between the two distributions and only takes the value zero when the two distributions are identicalexpectationmaximization there are several excellent textbook presentations of hidden markov models and the forwardbackward algorithm for expectationmaximization so we do not cover them in detail hereconceptu ally a hidden markov model generates a sequence of observations x by first using a markov model to gen erate a sequence of hidden states y and then generating each word xi conditioned on its corresponding state yiwe insert 
endmarkers at the beginning and ending of the corpus and between sentence boundaries and constrain the estimator to associate endmarkers with a state that never appears with any other observationtype in more detail the hmm is specified by multi nomials y and y for each hidden state y where y specifies the distribution over states following y and y specifies the distribution over observations x given state y yi yi1 y multi xi yi y multi we used the forwardbackward algorithm to perform expectationmaximization which is a procedure that iteratively reestimates the model param eters converging on a local maximum of the likelihoodspecifically if the parameter estimate attime is then the reestimated parame ters at time 1 are yy enyyeny xy enxyeny 695e06 700e06 705e06 710e06 715e06 0 250 500 750 1000 lo g lik el ih oo d iteration figure 1 variation in negative log likelihood with increasing iterations for 10 them runs from different random starting pointswhere nxy is the number of times observation x oc curs with state y nyy is the number of times state yfollows y and ny is the number of occurences of state y all expectations are taken with respect to the model we took care to implement this and the other al gorithms used in this paper efficiently since optimal performance was often only achieved after several hundred iterationsit is wellknown that them oftentakes a large number of iterations to converge in likelihood and we found this here too as shown in fig ure 1as that figure makes clear likelihood is still increasing after several hundred iterationsperhaps more surprisingly we often found dramatic changes in accuracy in the order of 5 occur ing after several hundred iterations so we ran 1000 iterations of them in all of the experiments describedhere each run took approximately 25 days compu tation on a 36ghz pentium 4its wellknown thataccuracy often decreases after the first few them it erations however in our experiments we found that performance improves again after 100 iterations and continues improving roughly monotonicallyfigure 2 shows how 1to1 accuracy varies with iteration during 10 runs from different random starting pointsnote that 1to1 accuracy at termination ranges from 038 to 045 a spread of 007we obtained a dramatic speedup by working directly with probabilities and rescaling after each ob servation to avoid underflow rather than workingwith log probabilities since we evaluatedthe accuracy of the estimated tags after each iteration it was important that decoding be done effi ciently as wellwhile most researchers use viterbidecoding to find the most likely state sequence maximum marginal decoding is faster because it reuses the forward and backward tables already constructed by the forwardbackward algorithmmoreover in separate experiments we found that the maximum marginal state sequence almost always scored higherthan the viterbi state sequence in all of our evalua tions and at modest numbers of iterations often scored more than 5 betterwe also noticed a wide variance in the perfor mance of models due to random initialization thiswide variance was observed with all of the estima tors investigated in this paperthis means we cannot compare estimators on the basis of single runs so we ran each estimator 10 times from different random starting points and report both mean and standard deviation for all scoresfinally we also experimented with annealing in which the parameters and are raised to the power 1t where t is a temperatureparameter that isslowly lowered toward 1 at each 
iteration accord ing to some annealing schedulewe experimented with a variety of starting temperatures and annealing schedules but wereunable to find any that produced models whose like 0e0 1e5 2e5 fre quen cy tag hidden state pt b v b them them 25 figure 3 the average number of words labeled with each hidden state or tag for the them vb and them25 estimators lihoods were significantly higher than those found without annealingthe evaluation of the models produced by the them and other estimators is presented in table 1it is difficult to compare these with previous work but haghighi and klein report that in a completely unsupervised setting their mrf model which uses a large set of additional features and amore complex estimation procedure achieves an average 1to1 accuracy of 413because they provide no information about the variance in this accuracy it is difficult to tell whether there is a signifi cannot difference between their estimator and the them estimator but it is clear that when them is run longenough the performance of even very simple models like the bitag hmm is better than generally rec ognizedas table 1 makes clear the them estimator pro duces models that are extremely competitive in manyto1 accuracy and variation of information but are significantly worse in 1to1 accuracywecan understand these results by comparing the dis tribution of words to hidden states to the distribution of words to pos tags in the goldstandard evaluation corpusas figure 3 shows the distribution of words to pos tags is highly skewed with just 6 pos tags nn in nnp dt jj and nns accounting for over 55 of the tokens in the corpusby contrast the them distribution is much flatterthis also explains why the manyto1 accuracy is so much better than the onetoone accuracy presumably several hidden 299 estimator 1to1 manyto1 vi h h them 040 062 446 175 271 vb 047 050 428 239 189 vb 046 050 428 239 190 vb 042 060 463 186 277 vb 042 060 462 185 276 gs 037 051 545 235 320 gs 038 051 547 226 322 gs 036 049 573 241 331 gs 037 049 574 242 332 them 042 060 437 184 255 them 046 056 423 205 219 them 041 043 432 274 158 table 1 evaluation of models produced by the various estimatorsthe values of the dirichlet prior param eters for x and y appear in the estimator name for the vb and gs estimators and the number of hidden states is given in parenthesesreported values are means over all runs followed by standard deviations10 runs were performed for each of the them and vb estimators while 5 runs were performed for the gsestimatorseach them and vb run consisted of 1000 iterations while each gs run consisted of 50000 it erationsfor the estimators with 10 runs a 3standard error 95 confidence interval is approximately the same as the standard deviationstates are being mapped onto a single pos tagthis is also consistent with the fact that the crossentropy h of tags given hidden states is relatively low while the crossentropy h is rela tively highand variational bayes a bayesian estimator combines a likelihood termp and a prior p to estimate the poste rior probability of a model or hidden state sequencewe can use a bayesian prior to bias our estimatortowards models that generate more skewed distributionsbecause hmms are prod ucts of multinomials dirichlet distributions are a particularly natural choice for the priors since they are conjugate to multinomials which simplifies both the mathematical and computational aspects of theproblemthe precise form of the model we investi gated is y y dir y x dir yi yi1 y multi xi yi y 
multiinformally y controls the sparsity of the stateto state transition probabilities while x controls thesparsity of the statetoobservation emission proba bilitiesas x approaches zero the prior strongly prefers models in which each hidden state emitsas few words as possiblethis captures the intu ition that most word types only belong to one possince the minimum number of nonzero statetoobservation transitions occurs when each observa tion type is emitted from only one statesimilarly as y approaches zero the statetostate transitions become sparserthere are two main techniques for bayesian esti mation of such models markov chain monte carlo and variational bayes mcmc en compasses a broad range of sampling techniques including componentwise gibbs sampling which is the mcmc technique we used here in general mcmctechniques do not produce a single model that char acterizes the posterior but instead produce a stream of samples from the posteriorthe application of mcmc techniques including gibbs sampling to hmm inference problems is relatively wellknown see besag for a tutorial introduction and goldwater and griffiths for an applicationof gibbs sampling to hmm inference for semi 300 supervised and unsupervised pos taggingthe gibbs sampler produces state sequences y sampled from the posterior distribution p ppp ddbecause dirichlet priors are conjugate to multinomials it is possible to integrate out the model parameters and to yield the conditional distribu tion for yi shown in figure 4for each observation xi in turn we resample its state yi conditioned on the states yi of the other observations eventually the distribution of state sequences converges to the desired posterioreach iteration of the gibbs sampler is much faster than the forwardbackward algorithm time while each iteration of the forwardbackward algorithm takes o time so we ran 50000 iterations of all samplers as can be seen from table 1 the posterior state sequences we obtained are not particularly goodfurther when we examined how the posterior like lihoods varied with increasing iterations of gibbs sampling it became apparent that the likelihood was still increasing after 50000 iterationsmoreoverwhen comparing posterior likelihoods from different runs with the same prior parameters but differ ent random number seeds none of the likelihoods crossed which one would expect if the samplers had converged and were mixing well just as with them we experimented with a variety of annealing regimes but were unable to find any which significantly improved accuracy or posterior likelihoodwe also experimented with evaluating state se quences found using maximum posterior decoding rather than the samples from the pos terior produced by the gibbs samplerwe found that the maximum posterior decoding sequences usually scored higher than the posterior samples but the scores converged after the first thousand iterationssince the posterior samples are produced as a byproduct of gibbs sampling while maximum poste rior decoding requires an additional time consuming step that does not have much impact on scores we used the posterior samples to produce the results in table 1in contrast to mcmc variational bayesian in ference attempts to find the function q thatminimizes an upper bound of the negative log likeli hood log p log q p q dy dd q log p q dy dd the upper bound in is called the variational free energywe make a meanfieldassumption thatthe posterior can be well approximated by a factor ized modelq in which the state sequence y does not covary with the 
model parameters p q q1q2 the calculus of variations is used to minimize thekl divergence between the desired posterior distri bution and the factorized approximationit turnsout that if the likelihood and conjugate prior be long to exponential families then the optimalq1 andq2 do too and there is an themlike iterative pro cedure that finds locallyoptimal model parameters this procedure is especially attractive for hmminference since it involves only a minor modification to the mstep of the forwardbackward algo rithmmackay and beal describe variational bayesian inference for hmms in detail and kurihara and sato describe vbfor pcfgs specifically the estep for vb inference for hmms is the same as in them while themstep is as follows yy ff xy ff f exp g 1v g log 004167x2 000729x4 000384x6 000413x8 301 p y nyi i figure 4 the conditional distribution for state yi used in the gibbs sampler which conditions on the states yi for all observations except xihere m is the number of possible observations s is the number of hidden states and iis the indicator function nxy is the number of times observation x occurs with state y nyy is the number of times state yfollows y and ny is the number of times state y occurs these counts are from ie excluding xi and yi0 1 2 0 1 2 figure 5 the scaling function y f exp which is bounded above by the line y x and below by the line y x05where is the digamma function gives an asymptotic ap proximation and the remaining quantities are just as in the them updates ie nxy is the number of times observation x occurs with state y nyy is the number of times state yfollows y ny is the number of occurences of state y s is the number of hiddenstates and m is the number of observations all ex pectations are taken with respect to the variational parameters a comparison between and reveals two dif ferences between the them and vb updatesfirst the dirichlet prior parameters are added to the expected countssecond these posterior counts are passed through the function f exp which is plotted in figure 5when v 0 f v 05 so roughly speaking vb for multinomials involves adding 05 to the expected counts when they are much larger than zero where is the dirichlet prior parameterthus vb canbe viewed as a more principled version of the well known ad hoc technique for approximating bayesian estimation with them that involves adding 1 to the expected countshowever in the ad hoc approach the expected count plus 1 may be less than zeroresulting in a value of zero for the corresponding parameter vb avoids this problem because f is always positive when v 0 even when v is smallnote that because the counts are passed through f the updated values for and in are in general not normalized this is because the variational free energy is only an upper bound on the negative log likelihood we found that in general vb performed much bet ter than gscomputationally it is very similar to them and each iteration takes essentially the same time as an them iterationagain we experimented with annealing in the hope of speeding convergencebut could not find an annealing schedule that significantly lowered the variational free energy while we had hoped that thebayesian prior would bias vb toward a common solution we found the same sensitivity to initial condi tions as we found with them so just as for them we ran the estimator for 1000 iterations with 10 different random initializations for each combination of prior parameterstable 1 presents the results of vb runs with several different values for the dirichlet 
priorparametersinterestingly we obtained our best per formance on 1to1 accuracy when the dirchlet priorx 01 a relatively large number but best per formance on manyto1 accuracy was achieved with a much lower value for the dirichlet prior namely x 104the dirichlet prior y that controls 302sparsity of the statetostate transitions had little ef fect on the resultswe did not have computational resources to fully explore other values for the prior as figure 3 shows vb can produce distributions of hidden states that are peaked in the same way that pos tags arein fact with the priors used here vb produces state sequences in which only a subset ofthe possible hmm states are in fact assigned to ob servationsthis shows that rather than fixing the number of hidden states in advance the bayesian prior can determine the number of states this idea is more fully developed in the infinite hmm of beal et al and teh et al them already performs well in terms of the manyto1 accuracy but we wondered if there might be some way to improve its 1to1 accuracy and vi scorein section 3 we suggested that one reason for its poorperformance in these evaluations is that the distri butions of hidden states it finds tend to be fairly flat compared to the empirical distribution of pos tagsas section 4 showed a suitable bayesian priorcan bias the estimator towards more peaked distribu tions but we wondered if there might be a simpler way of achieving the same resultwe experimented with dramatic reductions in the number of hidden states in the hmms estimated by themthis should force the hidden states to bemore densely populated and improve 1to1 accu racy even though this means that there will be nohidden states that can possibly map onto the less fre quent pos tags in effect we abandon the lowfrequency pos tags in the hope of improving the 1to1 accuracy of the highfrequency tagsas table 1 shows this markedly improves both the 1to1 accuracy and the vi scorea 25state hmm estimated by them performs effectively as wellas the best vb model in terms of both 1to1 accu racy and vi score and runs 4 times faster because it has only half the number of hidden states6 conclusion and future workthis paper studied why them seems to do so badly in hmm estimation for unsupervised pos taggingin fact we found that it does not do so badly at all thebitag hmm estimated by them achieves a mean 1to 1 tagging accuracy of 40 which is approximately the same as the 413 reported by for their sophisticated mrf modelthen we noted the distribution of words to hidden states found by them is relatively uniform comparedto the distribution of words to pos tags in the eval uation corpusthis provides an explanation of why the manyto1 accuracy of them is so high while the 1to1 accuracy and vi of them is comparatively lowwe showed that either by using a suitable bayesian prior or by simply reducing the number of hidden states it is possible to significantly improve both the 1to1 accuracy and the vi score achieving a 1to1 tagging accuracy of 46we also showed that them and other estimators take much longer to converge than usually thought and often require several hundred iterations to achieve optimal performancewe also found that there is considerable variance in the performance of all of these estimators so in general multiple runs fromdifferent random starting points are necessary in or der to evaluate an estimators performancefinally there may be more sophisticated ways of improving the 1to1 accuracy and vi score thanthe relatively crude methods used here that 
primarily reduce the number of available states. For example, we might obtain better performance by using EM to infer an HMM with a large number of states and then using some kind of distributional clustering to group similar HMM states; these clusters, rather than the underlying states, would be interpreted as the POS tag labels. Also, the Bayesian framework permits a wide variety of different priors besides the Dirichlet priors explored here. For example, it should be possible to encode linguistic knowledge such as markedness preferences in a prior, and there are other linguistically uninformative priors, such as the entropic priors of Brand, that may be worth exploring. Acknowledgements: I would like to thank Microsoft Research for providing an excellent environment in which to conduct this work, and my friends and colleagues at Microsoft Research, especially Bob Moore, Chris Quirk and Kristina Toutanova, for their helpful comments on this paper.
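The estimation sections above contrast the EM M-step with the Variational Bayes update, in which the Dirichlet prior parameter is added to the expected counts and the result is passed through f(v) = exp(ψ(v)) before (approximate) normalization. The sketch below shows just that update for a single multinomial; it is a schematic reading of those equations rather than the paper's implementation, and the example counts are invented.

```python
import numpy as np
from scipy.special import digamma

def em_mstep(expected_counts: np.ndarray) -> np.ndarray:
    """Ordinary EM update: normalize the expected counts."""
    return expected_counts / expected_counts.sum()

def vb_mstep(expected_counts: np.ndarray, alpha: float) -> np.ndarray:
    """Variational Bayes update for one multinomial with a symmetric
    Dirichlet(alpha) prior: add alpha to the expected counts and pass
    numerator and denominator through exp(digamma(.))."""
    numer = np.exp(digamma(expected_counts + alpha))
    denom = np.exp(digamma(expected_counts.sum() + alpha * len(expected_counts)))
    return numer / denom   # not exactly normalized, as noted in the paper

# Toy usage: expected emission counts for one hidden state over 4 word types.
counts = np.array([20.0, 5.0, 0.3, 0.01])
print(em_mstep(counts))
print(vb_mstep(counts, alpha=0.1))
```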
D07-1031
Why doesn't EM find good HMM POS-taggers? This paper investigates why the HMMs estimated by expectation-maximization (EM) produce such poor results as part-of-speech (POS) taggers. We find that the HMMs estimated by EM generally assign a roughly equal number of word tokens to each hidden state, while the empirical distribution of tokens to POS tags is highly skewed. This motivates a Bayesian approach using a sparse prior to bias the estimator toward such a skewed distribution. We investigate Gibbs sampling (GS) and variational Bayes (VB) estimators, and show that VB converges faster than GS for this task and that VB significantly improves 1-to-1 tagging accuracy over EM. We also show that EM does nearly as well as VB when the number of hidden HMM states is dramatically reduced. We also point out the high variance in all of these estimators, and that they require many more iterations to approach convergence than usually thought. We demonstrate good performance on unsupervised part-of-speech tagging with a Bayesian model.
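The evaluation section above distinguishes many-to-1 accuracy (each hidden state maps to its most frequent gold tag, and several states may share a tag) from 1-to-1 accuracy (a greedy mapping constrained so that at most one state maps to any tag). A minimal sketch of both mappings from a state/tag contingency table follows; it assumes the contingency counts are already computed and uses a simple greedy assignment, not any particular published implementation.

```python
import numpy as np

def many_to_one_accuracy(counts: np.ndarray) -> float:
    """counts[s, t] = number of tokens with hidden state s and gold tag t.
    Each state maps to its most frequent gold tag."""
    return counts.max(axis=1).sum() / counts.sum()

def one_to_one_accuracy(counts: np.ndarray) -> float:
    """Greedily assign states to tags, largest cell first, using each
    state and each tag at most once."""
    cells = sorted(((counts[s, t], s, t)
                    for s in range(counts.shape[0])
                    for t in range(counts.shape[1])), reverse=True)
    used_states, used_tags, correct = set(), set(), 0
    for c, s, t in cells:
        if s not in used_states and t not in used_tags:
            used_states.add(s)
            used_tags.add(t)
            correct += c
    return correct / counts.sum()

# Toy usage: 3 hidden states x 2 gold tags.
table = np.array([[50, 5], [40, 10], [2, 30]])
print(many_to_one_accuracy(table), one_to_one_accuracy(table))
```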
vmeasure a conditional entropybased external cluster evaluation measure we present vmeasure an external entropybased cluster evaluation measure v measure provides an elegant solution tomany problems that affect previously defined cluster evaluation measures includ ing 1 dependence on clustering algorithm or data set 2 the problem of matching where the clustering of only a portion of datapoints are evaluated and 3 accurate evaluation and combination of two desirable aspects of clustering homogeneity and completeness we compare vmeasure to a num ber of popular cluster evaluation measuresand demonstrate that it satisfies several desirable properties of clustering solutions us ing simulated clustering results finally we use vmeasure to evaluate two clustering tasks document clustering and pitch accent type clustering clustering techniques have been used successfully for many natural language processing tasks such as document clustering word sense disambiguation semantic role labeling pitch accent type disambiguation they are particularly appealing for tasks in which there is an abundance of language data available but manual annotation of this data is very resourceintensiveunsupervised clustering can eliminate the need for manual annotation of the data into desired classes but often at the cost of making evaluation of success more difficultexternal evaluation measures for clustering can be applied when class labels for each data point in some evaluation set can be determined a priorithe clustering task is then to assign these data points toany number of clusters such that each cluster con tains all and only those data points that are membersof the same class given the ground truth class la bels it is trivial to determine whether this perfect clustering has been achievedhowever evaluating how far from perfect an incorrect clustering solution is a more difficult task and proposed approaches often lack rigor in this paper we describe a new entropybased external cluster evaluation measure vmeasure1 designed to address the problem of quantifying such imperfectionlike all external measures vmeasurecompares a target clustering eg a manually an notated representative subset of the available data against an automatically generated clustering to de termine now similar the two arewe introduce twocomplementary concepts completeness and homo geneity to capture desirable properties in clustering tasksin section 2 we describe vmeasure and how itis calculated in terms of homogeneity and complete nesswe describe several popular external cluster evaluation measures and draw some comparisons to vmeasure in section 3in section 4 we discusshow some desirable properties for clustering are satisfied by vmeasure vs other measuresin sec tion 5 we present two applications of vmeasure ondocument clustering and on pitch accent type clus teringvmeasure is an entropybased measure which explicitly measures how successfully the criteria of homogeneity and completeness have been satisfiedvmeasure is computed as the harmonic mean of dis tinct homogeneity and completeness scores just as1the v stands for validity a common term used to de scribe the goodness of a clustering solution410 precision and recall are commonly combined into fmeasure as fmeasure scores can be weighted vmeasure can be weightedto favor the contributions of homogeneity or com pletenessfor the purposes of the following discussion as sume a data set comprising n data points and two partitions of these a set of classes c cii 1 n and a set of clusters k ki1 mlet 
a be the contingency table produced by the clus tering algorithm representing the clustering solution such that a aij where aij is the number of data points that are members of class ci and elements of cluster kj to discuss cluster evaluation measures we introduce two criteria for a clustering solution homogeneity and completenessa clustering result sat isfies homogeneity if all of its clusters contain only data points which are members of a single classa clustering result satisfies completeness if all the data points that are members of a given class are elementsof the same clusterthe homogenity and completeness of a clustering solution run roughly in opposition increasing the homogeneity of a clustering so lution often results in decreasing its completenessconsider two degenerate clustering solutionsin one assigning every datapoint into a single cluster guarantees perfect completeness all of the datapoints that are members of the same class are triv ially elements of the same clusterhowever this cluster is as unhomogeneous as possible since allclasses are included in this single clusterin another solution assigning each data point to a dis tinct cluster guarantees perfect homogeneity each cluster trivially contains only members of a singleclasshowever in terms of completeness this so lution scores very poorly unless indeed each classcontains only a single memberwe define the dis tance from a perfect clustering is measured as theweighted harmonic mean of measures of homogene ity and completenesshomogeneity in order to satisfy our homogeneity criteria a clustering must assign only those datapoints that are members of a single class to a single clusterthat is the class distribution within each cluster should beskewed to a single class that is zero entropywe de termine how close a given clustering is to this ideal by examining the conditional entropy of the class distribution given the proposed clusteringin the perfectly homogeneous case this value h is 0however in an imperfect situation the size of this value in bits is dependent on the size of thedataset and the distribution of class sizesthere fore instead of taking the raw conditional entropy we normalize this value by the maximum reduction in entropy the clustering information could provide specifically hnote that h is maximal when the clustering provides no new information the class distribution within each cluster is equal to the overall class distribiutionh is 0 when each cluster contains only members of a single classa perfectly homogenous clusteringin the degen erate case where h 0 when there is only a single class we define homogeneity to be 1for a perfectly homogenous solution this normalization h h equals 0thus to adhere to the conventionof 1 being desirable and 0 undesirable we define ho mogeneity as h 1 if h 0 1hh else where h k k1 c c1 ack n log ack c c1 ack h c c1 k k1 ack n log k k1 ack n completeness completeness is symmetrical to homogeneityinorder to satisfy the completeness criteria a clustering must assign all of those datapoints that are members of a single class to a single clusterto eval uate completeness we examine the distribution of cluster assignments within each classin a perfectlycomplete clustering solution each of these distribu tions will be completely skewed to a single clusterwe can evaluate this degree of skew by calculat ing the conditional entropy of the proposed clusterdistribution given the class of the component dat apoints hin the perfectly complete case h 0however in the worst case scenario 
411each class is represented by every cluster with a dis tribution equal to the distribution of cluster sizes h is maximal and equals hfinally in the degenerate case where h 0 when there is a single cluster we define completeness to be 1therefore symmetric to the calculation above we define completeness as c 1 if h 0 1 hh else where h c c1 k k1 ack n log ack k k1 ack h k k1 c c1 ack n log c c1 ack n based upon these calculations of homogeneity and completeness we then calculate a clustering solutions vmeasure by computing the weighted harmonic mean of homogeneity and completenessv hcc similarly to the familiar f measure if is greater than 1 completeness is weighted more strongly in the calculation if is less than 1 homogeneity is weighted more stronglynotice that the computations of homogeneitycompleteness and vmeasure are completely inde pendent of the number of classes the number ofclusters the size of the data set and the clustering al gorithm usedthus these measures can be applied toand compared across any clustering solution regard less of the number of data points thenumber of classes or the number of clustersmore over by calculating homogeneity and completenessseparately a more precise evaluation of the perfor mance of the clustering can be obtainedclustering algorithms divide an input data set into a number of partitions or clustersfor tasks wheresome target partition can be defined for testing purposes we define a clustering solutionas a map ping from each data point to its cluster assignments in both the target and hypothesized clusteringin the context of this discussion we will refer to the target partitions or clusters as classes referring only to hypothesized clusters as clusterstwo commonly used external measures for as sessing clustering success are purity and entropy defined as purity kr1 1n maxi entropy kr1 nrn where q is the number of classes k the number of clusters nr is the size of cluster r and nir is the number of data points in class i clustered in cluster r both these approaches represent plausable ways to evaluate the homogeneity of a clustering solutionhowever our completeness criterion is not measured at allthat is they do not address the question of whether all members of a given class are in cluded in a single clustertherefore the purity and entropy measures are likely to improve monotonically withthe number of clusters in the result up to a degen erate maximum where there are as many clusters as data pointshowever clustering solutions rated high by either measure may still be far from idealanother frequently used external clustering evaluation measure is commonly refered to as cluster ing accuracythe calculation of this accuracy isinspired by the information retrieval metric of f measure the formula for this clustering fmeasure as described in is shown in figure 3let n be the number of data points c the set of classes k the set of clusters and nij be the number of members of class ci c that are elements of cluster kj k f x cic ci n maxkjk f f 2 are p are p are nij ci p nij kj figure 1 calculation of clustering fmeasure this measure has a significant advantage over purity and entropy in that it does measure boththe homogeneity and the completeness of a cluster ing solutionrecall is calculated as the portion of items from class i that are present in cluster j thus measuring how complete cluster j is with respect toclass i similarly precision is calculated as the por 412 solution a solution b fmeasure05 fmeasure05 vmeasure014 vmeasure039 solution c solution d 
fmeasure06 fmeasure06 vmeasure030 vmeasure041 figure 2 examples of the problem of matchingtion of cluster j that is a member of class i thus mea suring how homogenous cluster j is with respect to class ilike some other external cluster evaluation tech niques h l d micro averaged precision and recall fmeasure relies on a postprocessing step in which each cluster is assigned to a classthese techniques share certain problemsfirst they calculate the goodness not only of the given clustering solution but also of the clusterclass matchingtherefore in order for the goodness of two clustering solutions to be compared using one these measures an identicalpostprocessing algorithm must be usedthis problem can be trivially addressed by fixing the classcluster matching function and including it in the def inition of the measure as in h however a secondand more critical problem is the problem of matchingin calculating the similarity between a hypothesized clustering and a truecluster ing these measures only consider the contributions from those clusters that are matched to a target classthis is a major problem as two significantly differ ent clusterings can result in identical scoresin figure 2 we present some illustrative examples of the problem of matchingfor the purposes of thisdiscussion we will be using fmeasure as the mea sure to describe the problem of matching however these problems affect any measure which requires a mapping from clusters to classes for evaluationin the figures the shaded regions represent clus ters the shapes represent classesin a perfect clustering each shaded region would contain all and only the same shapesthe problem of matchingcan manifest itself either by not evaluating the en tire membership of a cluster or by not evaluating every clusterthe former situation is presented in the figures a and b in figure 2the fmeasure ofboth of these clustering solutions in 06 that is for each class the best or matchedcluster contains 3 of 5 elements of the class and 3 of 5 elements of the cluster are members of the class the make up of the clusters beyond the majority class is not evaluated by fmeasuresolution b is a better clustering solution than solution a in terms of both homogeneity and completeness indeed the vmeasure of so lution b is greater than that of solution a solutions c and d represent a case in which not every cluster is considered in the evaluation of fmeasurein this example the fmeasure of both solutions is 05 the small unmatchedclusters are not measured at allin the calculation of fmeasuresolution d is a bet ter clustering than solution c there are no incorrect clusterings of different classes in the small clustersvmeasure reflects this solution c has a vmeasure of 030 while the vmeasure of solution d is 041a second class of clustering evaluation techniquesis based on a combinatorial approach which examines the number of pairs of data points that are clustered similarly in the target and hypothesized clus teringthat is each pair of points can either be 1clustered together in both clusterings 2 clustered separately in both clusterings 3 clustered together in the hypothesized but not the tar get clustering or 4 clustered together in the target but not in the hypothesized clustering based on these 4 values a number of measures have been proposed including rand index 2homogeneity is not measured by vmeasure as a count of the number of classes contained by a cluster but feweris an acceptable way to conceptualize this criterion for the purposes of these examples413 adjusted 
rand index statistic jaccard fowlkesmallows and mirkin we il lustrate this class of measures with the calculation of rand indexrand n11n00n2 rand index can be interpreted as the probability that a pair of points is clustered similarly in c and k meila describes a number of poten tial problems of this class of measures posed by and the most basic is that these measures tend not to vary over the interval of 0 1transformations likethose applied by the adjusted rand index and a mi nor adjustment to the mirkin measure can address this problemhowever pair matching measures also suffer from distributional problemsthe baseline for fowlkesmallows varies sig nificantly between 06 and 0 when the ratio of datapoints to clusters is greater than 3 thus including nearly all realworld clustering problemssimilarly the adjusted rand index as demonstrated using monte carlo simulations in varies from 05 to 095this variance in the measures baseline prompts meila to ask if the assumption of linearity following normalization can be maintainedif the behavior of the measure is so unstable before normalization can users reasonably expect stable behavior following normalizationa final class of cluster evaluation measures arebased on information theorythese measures analyze the distribution of class and cluster member ship in order to determine how successful a givenclustering solution is or how different two parti tions of a data set arewe have already examined one member of this class of measures entropyfrom a coding theory perspective entropy is theweighted average of the code lengths of each clusterour vmeasure is a member of this class of clustering measuresone significant advantage that in formation theoretic evaluation measures have is that they provide an elegant solution to the problem of matchingby examining the relative sizes of the classes and clusters being evaluated these measures all evaluate the entire membership of each cluster not just a matchedportiondoms q0 measure uses conditional entropy h to calculate the goodness of a clustering solutionthat is given the hypothesized partition what is the number of bits necessary to represent the true clusteringhowever this term like the purity andentropy measures only evaluates the homogene ity of a solutionto measure the completeness of the hypothesized clustering dom includes a model cost term calculated using a coding theory argumentthe overall clustering quality measure presented is the sum of the costs of representing the data and the modelthe motivation for this approachis an appeal to parsimony given identical condi tional entropies h the clustering solution with the fewest clusters should be preferreddom also presents a normalized version of this term q2 which has a range of h 1 n k k1 log c 1 c 1 where c is the target partition k is the hypothe sized partition and h is the size of cluster k q2 1 n c c1 log c1 c1 q0 we believe that vmeasure provides two significantadvantages over q0 that make it a more useful diag nostic toolfirst q0 does not explicitly calculate the degree of completeness of the clustering solutionthe cost term captures some of this information since a partition with fewer clusters is likely to be more complete than a clustering solution with more clustershowever q0 does not explicitly address the interaction between the conditional entropy and the cost of representing the modelwhile this is an application of the minimum description length principle it does not provide an intuitive manner for assessingour two competing criteria of 
homogeneity and com pletenessthat is at what point does an increase inconditional entropy justify a reduc tion in the number of clusters another informationbased clustering measure is variation of information v i hhv i is presented 414 as a distance measure for comparing partitions of the same datait therefore does notdistinguish between hypothesized and target cluster ingsv i has a number of useful propertiesfirst it satisfies the metric axiomsthis quality allowsusers to intuitively understand how v i values combine and relate to one anothersecondly it is con vexly additivethat is to say if a cluster is split the distance from the new cluster to the original is the distance induced by the split times the size of the clusterthis property guarantees that all changes to the metric are local the impact of splitting ormerging clusters is limited to only those clusters in volved and its size is relative to the size of these clustersthird vi is ninvariant the number of data points in the cluster do not affect the value of the measurev i depends on the relative sizes of the partitions of c and k not on the number of points in these partitionshowever v i is bounded by themaximum number of clusters in c or k kwith out manual modification however k n where each cluster contains only a single data pointthus while technically ninvariant the possible values of v i are heavily dependent on the number of datapoints being clusteredthus it is difficult to compare v i values across data sets and clustering algorithms without fixing k as v i will vary over differ ent rangesit is a trivial modification to modify v i such that it varies over 01normalizing v i by log n or 12 log kguarantee this rangehowever meila raises two potential problems with thismodificationthe normalization should not be applied if data sets of different sizes are to be com pared it negates the ninvariance of the measureadditionally if two authors apply the latter normal ization and do not use the same value for k their results will not be comparablewhile v i has a number of very useful distance properties when analyzing a single data set across a number of settings it has limited utility as a general purpose clustering evaluation metric for use across disparate clusterings of disparate data setsour homogeneity and completeness terms both range over 01 and are completely ninvariant andkinvariantfurthermore measuring each as a ra tio of bit lengths has greater intuitive appeal than a more opportunistic normalizationvmeasure has another advantage as a clusteringevaluation measure over v i and q0by evaluating homogeneity and completeness in a symmetrical complementary manner the calculation of v measure makes their relationship clearly observableseparate analyses of homogeneity and completeness are not possible with any other cluster evalu ation measuremoreover by using the harmonic mean to combine homogeneity and completeness vmeasure is unique in that it can also prioritize one criterion over another depending on the clustering task and goalsdom describes a parametric technique for generating example clustering solutionshe then proceeds to define five desirable propertiesthat clustering accuracy measures should display basedon the parameters used to generate the clustering solutionto compare vmeasure more directly to alter native clustering measures we evaluate vmeasure and other measures against these and two additional desirable propertiesthe parameters used in generating a clustering so lution are as followsc the number of classes k 
the number of clusters knoise number of noiseclusters knoise 0 p2 for ku c mku 0 p4 m1 0 with equality only if ku 1 p5 m2 0 with equality only if knoise 0 p6 mcnoise 0 p7 m3 0 with equality only if cnoise 0 table 1 desirable properties of a cluster evaluation measure mto evaluate how different clustering measures satisfy each of these properties we systematically var ied each parameter keeping c 5 fixedku 10 values 2 3 11 knoise 7 values 0 1 6 cnoise 7 values 0 1 6 1 4 values 0 0033 0066 01 2 4 values 0 0066 0133 02 3 4 values 0 0066 0133 02 we evaluated the behavior of vmeasure rand mirkin fowlkesmallows gamma jaccard viq0 fmeasure against the desirable properties p1 p74based on the described systematic modificationof each parameter only vmeasure vi and q0 empirically satisfy all of p1p7 in all experimental con ditionsfull results reporting how frequently each evaluated measure satisfied the properties based on these experiments can be found in table 2all evaluated measures satisfy p4 and p7however rand mirkin fowlkesmallows gamma jac card and fmeasure all fail to satisfy p3 and p6 inat least one experimental configurationthis indi cates that the number of noiseclasses or clusterscan be increased without reducing any of these mea suresthis implies a computational obliviousness topotentially significant aspects of an evaluated clus tering solutionin this section we present two clustering experimentswe describe a document clustering experiment and evaluate its results using vmeasure high lighting the interaction between homogeneity and completenesssecond we present a pitch accent type clustering experimentwe present results fromboth of these experiments in order to show how v measure can be used to drawn comparisons across data sets51 document clusteringclustering techniques have been used widely to sort documents into topic clusterswe reproduce such an experiment here to demonstrate the usefulnessof vmeasureusing a subset of the tdt4 cor pus we ran clustering experiments using kmeans clustering and evaluated the results using vmeasurevi and q0 those measures that satisfied the desirable properties defined in section 4the top ics and relative distributions are as follows acts 4the inequalities in the desirable properties are inverted inthe evaluation of vi q0 and mirkin as they are defined as dis tance as opposed to similarity measures416 property rand mirkin fowlkes jaccard fmeasure q0 vi vmeasure p1 018 022 10 10 10 10 10 10 10 p2 10 10 076 10 089 098 10 10 10 p3 00 00 030 019 021 00 10 10 10 p4 10 10 10 10 10 10 10 10 10 p5 050 057 10 10 10 10 10 10 10 p6 020 020 041 026 052 087 10 10 10 p7 10 10 10 10 10 10 10 10 10 table 2 rates of satisfaction of desirable propertiesof violencewar elections diplomatic meetings accidents natural disasters human interest scan dals legal cases miscellaneous sports new laws science and discovery we employed stemmed tfidf weighted term vectors extracted for each document as the clustering space for these experiments which yielded a very high dimension spaceto reduce this dimensionality we performed a simple feature selection procedure including in the feature vector only those terms that represented the highest tfidf value for at least one data pointthis resulted in a feature vector containing 484 tfidf values for each documentresults from kmeans clustering are are shown in figure 40 01 02 03 04 05 1 10 100 1000 3 35 4 45 5 55v m ea su re a nd q 2 va lue s vi v al ue s number of clusters vmeasure vi q2 figure 4 results of document clustering measured by 
vmeasure vi and q2 the first observation that can be drawn from these results is the degree to which vi is dependent on the number of clusters this dependency severelylimits the usefulness of vi it is inappropriate in selecting an appropriate parameter for k or for evaluating the distance between clustering solutions gen erated using different values of k vmeasure and q2 demonstrate similar behavior in evaluating these experimental resultsthey both reach a maximal value with 35 clusters however q2shows a greater descent as the number of clusters in creaseswe will discuss this quality in greater detail in section 5252 pitch accent clusteringpitch accent is how speakers of many languages make a word intonational prominentin mostpitch accent languages words can also be accented in different ways to convey different meanings in the tobi labeling con ventions for standard american english for example there are five different ac cent types we extracted a number of acoustic features from accented words within the read portion of the boston directions corpus andexamined how well clustering in these acoustic dimensions correlates to manually annotated pitch ac cent typeswe obtained a very skewed distributionwith a majority of h pitch accents5 we there fore included only a randomly selected 10 sample of h accents providing a more even distribution of pitch accent types for clustering h l lh lh hh we extracted ten acoustic features from each ac cented word to serve as the clustering space for this experimentusing praats get pitch function we calculated the mean f0and f0 as well as zscore speaker normalized ver sions of the samewe included in the feature vector the relative location of the maximum pitch value inthe word as well as the distance between this max5pitch accents containing a high tone may also be downstepped or spoken in a compressed pitch rangehere we col lapsed all downstepped instances of each pitch accent with the corresponding nondownstepped instances417 imum and the point of maximum intensityfinally we calculated the raw and speaker normalized slope from the start of the word to the maximum pitch and from the maximum pitch to the end of the wordusing this feature vector we performed kmeansclustering and evaluate how successfully these di mensions represent differences between pitch accenttypesthe resulting vmeasure vi and q0 calcula tions are shown in figure 50 005 01 015 02 1 10 100 1000 2 3 4 5 6 7 8v m ea su re a nd q 2 va lue s vi v al ue s number of clusters vi vmeasure q2figure 5 results of pitch accent clustering mea sured by vmeasure vi and q0 in evaluating the results from these experimentsq2 and vmeasure reveal considerably different behaviorsq2 shows a maximum at k 10 and de scends at k increasesthis is an artifact of the mdlprincipleq2 makes the claim that a clustering so lution based on fewer clusters is preferable to one using more clusters and that the balance between the number of clusters and the conditional entropy h should be measured in terms of codinglengthwith vmeasure we present a different argu mentwe contend that the a high value of k does notinherently reduce the goodness of a clustering solu tionusing these results as an example we find that at approximately 30 clusters an increase of clusters translates to an increase in vmeasurethis is due to an increased homogeneity h and a relatively stable completeness h that is inclusion of more clusters leads to clusters with a more skewedwithincluster distribution and a equivalent distribu tion of cluster 
memberships within classesthis is intuitively preferable one criterion is improved the other is not reduced despite requiring additionalclustersthis is an instance in which the mdl principle limits the usefulness of q2we again observe the close dependency of vi and kmoreover in considering figures 5 and 4 simulta neously we see considerably higher values achieved by the document clustering experimentsgiven the nave approaches taken in these experiments this is expected and even desired given the previous work on these tasks document clustering has been notably more successfully applied than pitch accent clusteringthese examples allow us to observe how transparently vmeasure can be used to compare the behavior across distinct data setswe have presented a new external cluster evaluation measure vmeasure and compared it with existing clustering evaluation measuresvmeasure is basedupon two criteria for clustering usefulness homogeneity and completeness which capture a clustering solutions success in including all and only data points from a given class in a given clusterwe havealso demonstrated vmeasures usefulness in com paring clustering success across different domainsby evaluating document and pitch accent cluster ing solutionswe believe that vmeasure addressessome of the problems that affect other cluster measures1 it evaluates a clustering solution indepen dent of the clustering algorithm size of the data set number of classes and number of clusters2 it does not require its user to map each cluster to a classtherefore it only evaluates the quality of the clustering not a posthoc classcluster mapping3 it eval uates the clustering of every data point avoiding the problem of matching4 by evaluating the criteria of both homogeneity and completeness vmeasure is more comprehensive than those that evaluate onlyone5 moreover by evaluating these criteria separately and explicitly vmeasure can serve as an el egant diagnositic tool providing greater insight into clustering behavioracknowledgments the authors thank kapil thadani martin jansche and sasha blairgoldensohn and for their feedbackthis work was funded in part by the darpa gale program under a subcontract to sri international418
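The homogeneity, completeness and V-measure definitions above reduce to a few lines of arithmetic over the class-by-cluster contingency table A = {a_ij}. The following Python function is a minimal sketch written directly from those definitions; it is not the authors' implementation, the function and variable names are invented, and the degenerate single-class / single-cluster cases are scored 1 as stated in the text.

import numpy as np

def v_measure(contingency, beta=1.0):
    """Homogeneity, completeness and V-measure from a class-by-cluster
    contingency table, where contingency[i][j] is the number of data
    points of class c_i assigned to cluster k_j. Sketch of the
    definitions in the text, not the authors' code."""
    A = np.asarray(contingency, dtype=float)
    n = A.sum()
    class_sizes = A.sum(axis=1)     # |c_i|
    cluster_sizes = A.sum(axis=0)   # |k_j|

    # Conditional entropies H(C|K) and H(K|C) from the joint counts.
    h_c_given_k = 0.0
    h_k_given_c = 0.0
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] > 0:
                h_c_given_k -= A[i, j] / n * np.log(A[i, j] / cluster_sizes[j])
                h_k_given_c -= A[i, j] / n * np.log(A[i, j] / class_sizes[i])

    # Marginal entropies H(C) and H(K) used for normalisation.
    p_c = class_sizes[class_sizes > 0] / n
    p_k = cluster_sizes[cluster_sizes > 0] / n
    h_c = -np.sum(p_c * np.log(p_c))
    h_k = -np.sum(p_k * np.log(p_k))

    homogeneity = 1.0 if h_c == 0 else 1.0 - h_c_given_k / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - h_k_given_c / h_k

    if homogeneity + completeness == 0:
        return homogeneity, completeness, 0.0
    v = (1 + beta) * homogeneity * completeness / (beta * homogeneity + completeness)
    return homogeneity, completeness, v

# Perfect two-class clustering vs. everything in one cluster.
print(v_measure([[3, 0], [0, 3]]))   # (1.0, 1.0, 1.0)
print(v_measure([[3], [3]]))         # (0.0, 1.0, 0.0): complete, not homogeneous

scikit-learn's homogeneity_completeness_v_measure computes the same three quantities and can serve as a cross-check for this sketch.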
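To make the degenerate-solution discussion concrete, the sketch below contrasts purity with V-measure on the two extremes described earlier: one cluster containing everything, and one cluster per data point. It assumes scikit-learn is installed and uses its homogeneity_completeness_v_measure helper; the toy labels are invented for illustration.

import numpy as np
from sklearn.metrics import homogeneity_completeness_v_measure

def purity(classes, clusters):
    """Purity: each cluster is credited with its majority class; as noted
    in the text, this rewards homogeneity only."""
    classes = np.asarray(classes)
    clusters = np.asarray(clusters)
    correct = 0
    for k in np.unique(clusters):
        members = classes[clusters == k]
        correct += np.bincount(members).max()
    return correct / len(classes)

# Three classes of four points each (invented labels).
classes = [0] * 4 + [1] * 4 + [2] * 4
solutions = {
    "single cluster": [0] * 12,              # perfectly complete, not homogeneous
    "singleton clusters": list(range(12)),   # perfectly homogeneous, not complete
}
for name, clustering in solutions.items():
    h, c, v = homogeneity_completeness_v_measure(classes, clustering)
    print(f"{name:20s} purity={purity(classes, clustering):.2f} "
          f"h={h:.2f} c={c:.2f} V={v:.2f}")
# Purity reaches 1.0 for the singleton solution even though it is useless,
# while V-measure penalises its collapsed completeness.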
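The pair-counting family of measures (Rand, adjusted Rand, Fowlkes-Mallows, Jaccard, Mirkin) can be computed from the same contingency table by counting how many pairs of points the two partitions treat consistently; variation of information is simply H(C|K) + H(K|C) from the entropies above. The plain Rand index below is our own small illustration of this family, with an invented example table.

from math import comb
import numpy as np

def rand_index(contingency):
    """Rand index: the fraction of point pairs that are either together
    in both partitions or apart in both. Illustrative sketch only."""
    A = np.asarray(contingency, dtype=int)
    n = int(A.sum())
    total_pairs = comb(n, 2)
    same_both = sum(comb(int(a), 2) for a in A.flatten())       # together in both
    same_class = sum(comb(int(r), 2) for r in A.sum(axis=1))
    same_cluster = sum(comb(int(c), 2) for c in A.sum(axis=0))
    n00 = total_pairs - same_class - same_cluster + same_both   # apart in both
    return (same_both + n00) / total_pairs

# Two classes of two points each, with one point placed in the wrong cluster.
print(rand_index([[2, 0], [1, 1]]))   # 0.5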
D07-1043
vmeasure a conditional entropybased external cluster evaluation measurewe present vmeasure an external entropybased cluster evaluation measurevmeasure provides an elegant solution to many problems that affect previously defined cluster evaluation measures including 1 dependence on clustering algorithm or data set 2 the problem of matching where the clustering of only a portion of data points are evaluated and 3 accurate evaluation and combination of two desirable aspects of clustering homogeneity and completenesswe compare vmeasure to a number of popular cluster evaluation measures and demonstrate that it satisfies several desirable properties of clustering solutions using simulated clustering resultsfinally we use vmeasure to evaluate two clustering tasks document clustering and pitch accent type clusteringf score is not suitable for comparing results with different cluster numbersthe vmeasure is an information theoretic metric that reports the harmonic mean of homogeneity and completeness a significant limitation of fscore is that it does not evaluate the make up of clusters beyond the majority class
lexical semantic relatedness with random graph walks many systems for tasks such as question answering multidocument summarization and infor mation retrieval need robust numerical measures of lexical relatedness standard thesaurusbased measures of word pair similarity are based on only a single path between those words in the thesaurus graph by contrast we propose a newmodel of lexical semantic relatedness that incorporates information from every explicit or implicit path connecting the two words in the en tire graph our model uses a random walk over nodes and edges derived from wordnet links and corpus statistics we treat the graph as amarkov chain and compute a wordspecific sta tionary distribution via a generalized pagerank algorithm semantic relatedness of a word pair is scored by a novel divergence measure zkl that outperforms existing measures on certain classes of distributions in our experiments the resultingrelatedness measure is the wordnetbased measure most highly correlated with human similar ity judgments by rank ordering at 90 several kinds of natural language processing systems need measures of semantic relatedness for arbitrary wordpairsfor example document summarization and ques tion answering systems often use similarity scores to evaluate candidate sentence alignments and informationretrieval systems use relatedness scores for query expan sionseveral popular algorithms calculate scores from information contained in wordnet an electronic dictionary where word senses are explicitly connected by zero or more semantic relationshipsthecentral challenge of these algorithms is to compute rea sonable relatedness scores for arbitrary word pairs given that few pairs are directly connectedmost pairs in wordnet share no direct semantic link and for some the shortest connecting path can be surprisingeven pairs that seem intuitively related such furnaceand stoveshare a lowest common ancestor in the hypernymy taxonomy all the way upat artifactseveral existing algorithms compute relatedness only by traversing the hyper nymy taxonomy and find that furnaceand stoveare relatively unrelatedhowever wordnet provides other types of semantic links in addition to hypernymy such as meronymy antonymy andverb entailment as well as implicit links defined by over lap in the text of definitional glossesthese links can provide valuable relatedness informationif we assume that relatedness is transitive across a wide variety of such links then it is natural to follow paths such as furnacecrematorygas ovenovenkitchen appliancestove and find a higher degree of relatedness between furnaceand stovethis paper presents the application of random walkmarkov chain theory to measuring lexical semantic re latednessa graph of words and concepts is constructedfrom wordnetthe random walk model posits the exis tence of a particle that roams this graph by stochastically following local semantic relational linksthe particle is biased toward exploring the neighborhood around a target word and is allowed to roam until the proportion of time it visits each node in the limit converges to a stationarydistributionin this way we can compute distinct word specific probability distributions over how often a particle visits all other nodes in the graph when startingfrom a specific wordwe compute the relatedness of two words as the similarity of their stationary distributionsthe random walk brings with it two distinct advan tagesfirst it enables the similarity measure to have a principled means of combination of multiple types of 
edges from wordnetsecond by traversing all links thewalk aggregates local similarity statistics across the en tire graphthe similarity scores produced by our method are to our knowledge the wordnetbased scores most highly correlated with human judgments581budanitsky and hirst provide a survey of many wordnetbased measures of lexical similarity based on paths in the hypernym taxonomyas an example one of the best performing is the measure proposed by jiang and conrath which finds the shortest path in the taxonomic hi erarchy between two candidate words before computing similarity as a function of the information content of thetwo words and their lowest common subsumer in the hi erarchywe note the distinction between word similarityand word relatednesssimilarity is a special case of relat edness in that related words such as catand furshare some semantic relationships but do not express the same likeness of form as would similarwords such as catand lionthe jiangconrath mea sure and most other measures that primarily make use of of hypernymy in the wordnet graph are better categorized as measures of similarity than of relatednessother measures have been proposed that utilize the text in wordnets definitional glosses such as extended lesk and later the gloss vectors methodthese ap proaches are primarily based on comparing the bag of wordsof two synsetsgloss text concatenated with the text of neighboring wordsglosses in the taxonomyas a result these glossbased methods measure relatednessour model captures some of this relatedness information by including weighted links based on gloss texta variety of other measures of semantic relatedness have been proposed including distributional similarity measures based on cooccurrence in a body of textsee for a surveyother measures make use of alternative structured information resources than wordnet such as rogets thesaurus more recently measures incorporating information from wikipedia have reported stronger results on some tasks than have been achieved by existing measures based on shallower lexical resourcesthe results of our algorithm are competitivewith some wikipedia algorithms while using only word net 21 as the underlying lexical resourcethe approach presented here is generalizable to construction from any underlying semantic resourcepagerank is the most wellknown example of a random walk markov chainsee for a sur veyit uses the local hyperlink structure of the web to define a graph which it walks to aggregate popularityinformation for different pagesrecent work has ap plied random walks to nlp tasks such as pp attachment word sense disambiguation and query expansion however to ourknowledge the literature in nlp has only considered us ing one stationary distribution per speciallyconstructedgraph as a probability estimatorin this paper we in troduce a measure of semantic relatedness based on thedivergence of the distinct stationary distributions result ing from random walks centered at different positions in the word graphwe believe we are the first to define such a measureour model is based on a random walk of a particle through a simple directed graphg whose nodes v and edges e are extracted from wordnet version 21formally we define the probability ni of finding the particle at node ni v at time t as the sum of all ways in which the particle could have reached ni from any other node at the previous timestep ni njv nj p where p is the conditional probability of moving to ni given that the particle is at nj in partic ular we construct the transition 
distribution such that p 0 whenever wordnet specifies a local link relationship of the form j i note that this randomwalk is a markov chain because the transition probabilities at time t are independent of the particles past trajec torythe subsections that follow present the construction of the graph for our random walk from wordnet and the mathematics of computing the stationary distribution for a given word31 graph constructionwordnet is itself a graph over synsetsa synset is best thought of as a concept evoked by one sense of one or more wordsfor instance different senses of the word banktake part in different synsets and a single synset can be represented by multiple synonymous words such asmiddleand centerwordnet explicitly marks seman tic relationships between synsets but we are additionally interested in representing relatedness between wordswetherefore extract the following types of nodes fromword net synset each wordnet synset has a corresponding nodefor example one node corresponds to the synset re ferred to by dogn3the third sense of dog as noun whose meaning is an informal term for a manthere are 117597 synset nodes582tokenpos one node is allocated to every word coupled with a part of speech such as dogn mean ing dog as a nounthese nodes link to all the synsets they participate in so that dogn links the synset nodes for canine hound hot dog etc collocationsmultiword expressions such as hot dogthat take part in a synsets are also represented by these nodesthere are 156588 tokenpos nodestoken every tokenpos is connected to a token node corresponding to the word when no part of speech information is presentfor example doglinks to dogn and dogvthere are 148646 token nodessynset nodes are connected with edges corresponding to many of the relationship types in word netwe use these wordnet relationships to form edges hypernymhyponym instanceinstance of all holonymmeronym links antonym entailsentailed by adjective satellite causescaused by participle pertainsto derivesderived from attributehas attribute and topical domain linksby con struction each edge created from a wordnet relationshipis guaranteed to have a corresponding edge in the oppo site directionedges that connect a tokenpos to the synsets using it are weighted based on a bayesian estimate drawn from the semcor frequency counts included in wordnet but with a nonuniform dirichlet priorour edge weights are the semcor frequency counts for each target synset with pseudocounts of 1 for all synsets 1 for first sense ofeach word and 1 for the first word in each synsetintuitively this causes the particle to have a higher probabil ity of moving to more common senses of a tokenpos for example the edges from dogn to dogn1 and dogn5 have unnormalized weights of432 and 01 respectivelythe edges connecting a to ken to the tokenpos nodes in which it can occur are alsoweighted by the sum of the weights of the outgoing tokenpossynset linkshence a walk starting at a com mon word like catis far more likely to follow a link to catn than to rarities like catvthese edges are unidirectional no edges are created from a synset to a tokenpos that can represent the synsetin order for our graph construction to incorporatetextual glossbased information we also create uni directional edges from synset nodes to the tokenpos nodes for the words and collocations used in that synsets gloss definitionthis requires partofspeech tagging the glosses for which we use the stanford maximum entropytagger it is important to correctly weight these edges because 
highfrequency stopwords such as byand hedo not convey much in formation and might serve only to smear the probability mass across the whole graphglossbased links to these nodes should therefore be downweighted or removedon the other hand upweighting extremely rare words such as by tfidf scoring might also be inappropriate because such rare words would get extremely high scoreswhich is an undesirable trait in similarity search and others have shown that a non monotonic document frequency weighting can be more effective in such a settingbecause the frequency of words in the glosses is distributed by a powerlaw we weight each word by its distance from the mean word count in log spaceformally the weight wi for a word appearing ri times is wi exp 2 22 where and are the mean and standard deviation ofthe logs of all word countsthis is a smooth approximation to the high and low frequency stop lists used effectively by other measures such as we believe that because nonmonotonic frequency scaling has no parameters and is datadrivenit could stand to be more widely adopted among gloss based lexical similarity measureswe also add bidirectional edges between synsets whose word senses overlap with a common tokenposthese edges have raw weights given by the number oftokenpos nodes shared by the synsetsthe intuition be hind adding these edges is that wordnet often divides the meanings of words into finegrained senses with similarmeanings so there is likely to be some semantic relation ship between synsets sharing a common tokenposthe final graph has 422831 nodes and 5133281 edgesthis graph is very sparse fewer than 1 in 10000node pairs are directly connectedwhen only the un weighted wordnet relationship edges are considered the largest degree of any node is cityn1with 667 edges followed by lawn2with 602 edges and each node is on aver age connected to 17 other nodeswhen the glossbased edges are considered separately the highest degree nodesare those with the longest definitions the maximum out degree is 56 and the average outdegree is 62for the edges linking tokenpos nodes to the synsets in which they participate tokenpos nodes with many senses are the most connected breakv with 59 outgoing edges and makev with 49 outgoing edges have the highest outdegrees with the average outdegree being 1332 computing the stationary distributioneach of the k edge types presented above can be repre sented as separate transition matrix ek right nown where 583n is the total number of nodesfor each matrix column j contains contains a normalized outgoing proba bility distribution1 so the weight in cell contains pk the conditional probability of moving from node nj to node ni in edge type k for many of the edge types this is either 0 or 1 but for the weighted edges these are real valuedthe full transition matrixm is then the column normalized sum of all of the edge types m k ek m 1 m m is a distillation of relevant relatedness informationabout all nodes extracted from wordnet and is not tailored for computing a stationary distribution for any specific wordin order to compute the stationary distribu tion vdogn for a walk centered around the tokenpos dognwe first define an initial distribution vdogn that places all the probability mass in the single vector entry corresponding to dognthen at every step of the walk we will return to v with probability intuitively this return probability captures the notion that nodes close todogn should be given higher weight and also guaran tees that the stationary distribution exists and is 
uniquethe stationary distribution v is com puted via an iterative update algorithm v v mv because the walk may return to the initial distribution v at any step with probability we found that v converges to its unique stationary distribution vin anumber of steps roughly proportional to 1we experi mented with a range of return probabilities and found that our results were relatively insensitive to this parameterour convergence criteria was v v 1 0 and where pj is exactly one order of magnitude greater than qj ie pj 2 qj for such a term in the standard kl divergence we would get pj log pj qj pj log pj therefore the term in skew divergence implicitly defines a parameter stating how many orders of magnitude smaller than pj to count qj if qj 0we define the zerokl divergence with respect to 2in lees original presentation skew divergence isdefined not as s but rather as swe reverse the ar gument order for consistency with the other measures discussed here586 gamma zkl i pi log piqi qi 6 0 qi 0note that this is exactly kldivergence when kldivergence is defined and like skew divergence approx imates kl divergence in the limit as a similar analysis of the skew divergence terms for when 0 qi pi shows that such a term in the skew divergence sum is again approximated by pizkl does not have this propertybecause zkl is a better approximation to kl divergence and because they havethe same behavior in the limit we expect zkls performance to dominate that of skew divergence in many distributionshowever if there is a wide range in the ex ponent of noisy terms the maximum possible penalty tosuch terms ascribed by skew divergence may be benefi cialfigure 3 shows the relative performance of zkl versus jensenshannon skew divergence cosine similarity and the jaccard score for correlations with human judgment on the markovlink modelzkl consistently outperforms the other measures on distributions resulting from this model but zkl is not optimal on distributions generated by our other modelsthe next section explores this topic in more detailtraditionally there have been two primary types of evaluation for measures of semantic relatedness one is correlation to human judgment the other is the relative per formance gains of a taskdriven system when it uses the measurethe evaluation here focuses on correlation with human judgments of relatednessfor consistency with previous literature we use rank correlation rather than linear correlation when comparing sets of relatedness judgments because the rank corre lation captures information about the relative ordering ofthe scoreshowever it is worth noting that many applica tions that make use of lexical relatedness scores would better be served by scores on a linear scale with human judgmentsrubenstein and goodenough solicited humanjudgments of semantic similarity for 65 pairs of com mon nouns on a scale of zero to fourmiller and charles repeated their experiment on a subset of 29 nounpairs and found that although indi viduals varied among their judgments in aggregate thescores were highly correlated with those found by ruben stein and goodenough resnik replicated the miller and charles experiment and reported that the average persubject linear cor relation on the dataset was around are 090 providing a rough upper bound on any systems linear correlation performance with respect to the miller and charles datafigure 3 shows that the zkl measure on the markovlink model has linear correlation coefficient are 903at the limit of human interannotator agreementrecently a larger set of 
word relatedness judg ments was obtained by in the wordsimilarity353 collectiondespite the collections name the study instructed participants to score word pairs for relatedness which is in contrast to the similarity judgments re quested of the miller and charles and rubenstein and goodenough participantsfor this reason the wordsimilarity353 data contains many pairs that are not semantically similar but still receive high scores such as computersoftwareat 881ws353 contains pairs that include nonnouns such as eatdrinkone proper noun not appearing in wordnet and some pairs potentially subject to political biasagainthe aggregate human judgments correlate well with ear lier data sets where they overlapthe 30 judgments that wordsimilarity353 shares with the miller and charles data have 939 and the 29 shared with rubenstein and goodenough have 904 we generated similarity scores for word pairs in all three data sets using the three variants of our walk model and with multiple distributional distance measureswe used the wordnetsimilarity package to compute baseline scores for several existing measures noting that one word pair was not processed in ws353 because one of the words was missing from wordnetthe results are summarized in table 2these num bers differ slightly from previously reported scores due tovariations in the exact experimental setup wordnet ver sion and the method of breaking ties when computingit is worth noting that in their experiments report that the vector method has rank correlation coefficients of 91 and 90 for mc and rg respectively which are also top performing valuesin our experiments the markovlink model with zkldistance measure was the best performing model over allmarkovgloss and markovjoined were also strong contenders but with the cosine measure instead of zklone reason for this distinction is that the stationary distributions resulting from the markovlink model are non zero for all but the initial word nodes consequently zkls reestimate for the zero terms adds little informationby contrast themarkovgloss andmarkovjoined models include linksthat traverse from synset nodes to tokenpos nodes re 587 figure 3 correlation with the miller charles data sets by linear correlation and rank correlation for the markovlink modelall data points were based on one set of stationary distributions over the graph only the divergence measure between those distributions is variednote that zkldominates both graphs but skew divergence does well for increasing gamma is swept over the range 0 to 1 then 1 through 20 then 20 through 40 at equal resolutionsmodel mc rank rg rank ws353 rank markovlink 904 817 552 markovgloss 841 762 467 markovjoined 841 838 547 gloss vectors 888 789 445 extended lesk 869 829 511 jiangconrath 653 584 195 lin 625 599 216 table 2 spearmans rank correlation coefficients withhuman judgments using 20 for zklnote that fig ure 3 demonstrates zkls insensitivity with regard to the parameter setting for the markovlink modelsulting in a final stationary distribution with more zerononzero pairshence the proper setting of gamma is of greater importancezkls performance improves with tuning of gamma but cosine similarity remained the more robust measure for these distributionsin this paper we have introduced a new measure oflexical relatedness based on the divergence of the sta tionary distributions computed from random walks overgraphs extracted wordnetwe have explored the structural properties of extracted semantic graphs and characterized the distinctly different types of 
stationary distribu tions that resultwe explored several distance measures on these distributions including zkl a novel variant of kldivergenceour best relatedness measure is at the limit of human interannotator agreement and is one of the strongest measures of semantic relatedness that uses only wordnet as its underlying lexical resourcein future work we hope to integrate other lexical resources such as wikipedia into the walkincorporating more types of links from more resources will underline the importance of determining appropriate rela tive weights for all of the types of edges in the walks matrixeven for wordnet we believe that certain link types such as antonyms may be more or less appropriate for certain tasks and should weighted accordinglyand while our measure of lexical relatedness correlates well with human judgments we hope to show performance gains in a realword task from the use of our measureacknowledgments thanks to christopher d manning and dan jurafsky for their helpful comments and suggestionswe are also grateful to siddharth patwardhan and ted pedersen for assistance in comparing against their systemthanks to sushant prakash rion snow and varun ganapathi fortheir advice on pursuing some of the ideas in this paper and to our anonymous reviewers for their helpful cri tiquesdaniel ramage was funded in part by an ndseg fellowshipthis work was also supported in part by the dto aquaint program the darpa gale program and the onr 588
D07-1061
lexical semantic relatedness with random graph walksmany systems for tasks such as question answering multidocument summarization and information retrieval need robust numerical measures of lexical relatednessstandard thesaurusbased measures of word pair similarity are based on only a single path between those words in the thesaurus graphby contrast we propose a new model of lexical semantic relatedness that incorporates information from every explicit or implicit path connecting the two words in the entire graphour model uses a random walk over nodes and edges derived from wordnet links and corpus statisticswe treat the graph as a markov chain and compute a wordspecific stationary distribution via a generalized pagerank algorithmsemantic relatedness of a word pair is scored by a novel divergence measure zkl that outperforms existing measures on certain classes of distributionsin our experiments the resulting relatedness measure is the wordnetbased measure most highly correlated with human similarity judgments by rank ordering at 90we use random walks over wordnet incorporating information such as meronymy and dictionary glosses
online learning of relaxed ccg grammars for parsing to logical form we consider the problem of learning toparse sentences to lambdacalculus repre sentations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar a key idea is to introduce nonstandard ccg combinators that relax certain parts of the grammarfor example allowing flexible word order or insertion of lexical items with learned costs we also present a new online algorithm for inducing a weighted ccg results for the approach on atis data show 86 fmeasure in recovering fully correct semantic analyses and 959 fmeasure by a partialmatch criterion a more than 5 improvement over the 903 partialmatch figure reported by he and young recent work has developed learn ing algorithms for the problem of mapping sentences to underlying semantic representationsin one such approach the input to the learning algorithm is a training set consisting of sentences paired with lambdacalculus expressionsfor instance the training data might contain the following example sentence list flights to boston logical form xflight to in this case the lambdacalculus expression denotes the set of all flights that land in bostonin zc05 it is assumed that training examples do not include additional information for example parse trees or a on may four atlanta to denver delta flight 257 xmonth day numberfrom toairline flightflight number b show me information on american airlines from fort worth texas to philadelphia xairlinefrom to c okay that ones great too now were going to go on april twenty second dallas to washington the latest nighttime departure one way argmax fromto monthday number duringone way ydepart time figure 1 three sentences from the atis domainother derivationsthe output from the learning algo rithm is a combinatory categorial grammar together with parameters that define a loglinear distribution over parses under the grammarexperi ments show that the approach gives high accuracy on two databasequery problems introduced by zelle and mooney and tang and mooney the use of a detailed grammatical formalism such as ccg has the advantage that it allows a system tohandle quite complex semantic effects such as co ordination or scoping phenomenain particular it allows us to leverage the considerable body of work on semantics within these formalisms for example see carpenter however a grammar based on a formalism such as ccg can be somewhat rigid and this can cause problems when a system is faced with spontaneous unedited natural language input as is commonly seen in natural language interface applicationsfor example consider the sentences shown in figure 1 which were taken from the atis travelplanning domain thesesentences exhibit characteristics which present significant challenges to the approach of zc05for ex 678 ample the sentences have quite flexible word order and include telegraphic language where some words are effectively omittedin this paper we describe a learning algorithm that retains the advantages of using a detailed grammar but is highly effective in dealing with phenomenaseen in spontaneous natural language as exempli fied by the atis domaina key idea is to extendthe approach of zc05 by allowing additional nonstandard ccg combinatorsthese combinators relax certain parts of the grammarfor example al lowing flexible word order or insertion of lexical itemswith learned costs for the new operationsthis approach has the advantage that it can be seam lessly integrated into ccg learning algorithms 
such as the algorithm described in zc05a second contribution of the work is a new online algorithm for ccg learningthe approach in volves perceptron training of a model with hidden variablesin this sense it is related to the algorithmof liang et al however it has the addi tional twist of also performing grammar induction in an online mannerin our experiments we show that the new algorithm is consid erably more efficient than the zc05 algorithm this is important when training on large training sets for example the atis data used in this paperresults for the approach on atis data show 86fmeasure accuracy in recovering fully correct semantic analyses and 959 fmeasure by a partial match criterion described by he and young the latter figure contrasts with a figure of 903 for the approach reported by he and young 1results on the geo880 domain also show an i am provement in accuracy with 889 fmeasure for the new approach compared to 870 fmeasure for the method in zc0521 semanticstraining examples in our approach consist of sen tences paired with lambdacalculus expressionswe use a version of the lambda calculus that is closely related to the one presented by carpenter there are three basic types t the type of truth val 1he and young do not give results for recovering fully correct parsesues e the type for entities and r the type for realnumbersfunctional types are defined by specify ing their input and output types for example e it is the type of a function from entities to truth val uesin general declarative sentences have a logical form of type t question sentences generally have functional types2 each expression is constructed from constants logical connectors quantifiers and lambda functions22 combinatory categorial grammarscombinatory categorial grammar is a syn tactic theory that models a wide range of linguistic phenomena the core of a ccg grammar is a lexicon for example consider the lexicon flights n xflight to np yfxf to boston np boston each entry in the lexicon is a pair consisting of aword and an associated categorythe category con tains both syntactic and semantic informationfor example the first entry states that the word flightscan have the category n xflightthis cat egory consists of a syntactic type n together withthe semantics xflightin general the seman tic entries for words in the lexicon can consist of anylambdacalculus expressionsyntactic types can ei ther be simple types such as n np or s or can be more complex types that make use of slash notation for example np ccg makes use of a set of combinators which are used to combine categories to form larger pieces of syntactic and semantic structurethe simplest such rules are the functional application rules ab f b g a f b g ab f a f fxf to or b because g ab f ac xf snp xx sn fxf figure 2 rules used in genlexeach row represents a rulethe first column lists the triggers that identify some substructure within a logical formthe second column lists the category that is createdthe third column lists categories that are created when the rule is applied to the logical form at the top of this columnwe use the 10 rules described in zc05 and add two new rules listed in the last two rows abovethis first new rule is instantiated for greater than and less than fromthe new rule effectively allows the preposi tional phrase from new york to typeshift to an entry with syntactic type n and semantics xfrom representing the set of all things from new york7 we introduce a single additional feature which counts the number of times this rule is used33 
crossed composition rulesfinally we include crossed functional composition rules ab f because g ac xf because g ab f ac xf from fxf to fxf day b tn nn n fxf from to xday b npn f argmax from to ydepart time np argmax from to ydepart time figure 3 a parse with the flexible parserproblem by introducing features that count the num ber of times they are used in a parse8 34 an exampleas a final point to see how these rules can interact in practice see figure 3this example demonstrates the use of the relaxed application and composition rules as well as the new typeraising rulesthis section describes an approach to learning in ourmodelwe first define the features used and then de scribe a new online learning algorithm for the task41 features in the modelsection 23 described the use of a function f which maps a sentence x together with a ccg parse y to a feature vectoras described in section 3we introduce features for the new ccg combinatorsin addition we follow zc05 in defining fea tures which track the number of times each lexical item in is usedfor example we would have one feature tracking the number of times the lexical entry flights n xflights is used in a parse and similar features for all other members of finally we introduce new features which directly consider the semantics of a parsefor each predicate f seen in training data we introduce a feature that counts the number of times f is conjoined with itself at some level in the logical formfor example the expression xflight from from would trigger the new feature for 8in general applications of the crossed composition rules can be lexically governed as described in work on multimodalccg in the future we would like to incorpo rate more finegrained lexical distinctions of this typethe from predicate signaling that the logicalform describes flights with more than one origin citywe introduce similar features which track disjunction as opposed to conjunction42 an online learning algorithmfigure 4 shows a learning algorithm that takes a training set of pairs as input and returns a weighted ccg as its outputthe algorithm is online in that it visits each example in turn and updates both w and if neces saryin step 1 on each example the input xi isparsedif it is parsed correctly the algorithm i am mediately moves to the next examplein step 2the algorithm temporarily introduces all lexical en tries seen in genlex and finds the highest scoring parse that leads to the correct semantics zia small subset of genlexnamely only those lexical entries that are contained in the highest scoring parseare added to in step 3 a simple perceptron update is performedthe hypothesis is parsed again with the new lexicon andan update to the parameters w is made if the result ing parse does not have the correct logical formthis algorithm differs from the approach in zc05in a couple of important respectsfirst the zc05 al gorithm performed learning of the lexicon at each iteration in a batch method requiring a pass over the entire training setthe new algorithm is fully online learning both and w in an examplebyexample fashionthis has important consequences for the efficiency of the algorithmsecond the parameter estimation method in zc05 was based on stochasticgradient descent on a loglikelihood objective func tionthe new algorithm makes use of perceptron 683 inputs training examples i 1 n where each xi is a sentence each zi is a logical forman initial lexicon 0number of training iterations t definitions genlex takes as input a sentence x anda logical form z and returns a 
set of lexical items as de scribed in section 24gen is the set of all parses for x with lexicon gen is the set of all parses for x with lexicon which have logical form z thefunction f represents the features described in sec tion 41the function l maps a parse tree y to its associated logical forminitialization set parameters w to initial values described in section 62set 0algorithm for t 1 t i 1 n step 1 let y argmaxygenw f if l zi go to the next examplestep 2 set genlex let y argmaxygenw f define i to be the set of lexical entries in yset lexicon to i step 3 let y argmaxygenw f if l 6 zi set w w ffoutput lexicon together with parameters w figure 4 an online learning algorithmupdates which are simpler and cheaper to computeas in zc05 the algorithm assumes an initial lex icon 0 that contains two types of entriesfirst we compile entries such as boston np boston for entities such as cities times and monthnames that occur in the domain or underlying databasein practice it is easy to compile a list of these atomic entitiessecond the lexicon has entries for some function words such as whwords and determiners9there has been a significant amount of previous work on learning to map sentences to under lying semantic representationsa wide variety 9our assumption is that these entries are likely to be domain independent so it is simple enough to compile a list that can be reused in new domainsanother approach which we may consider in the future would be to annotate a small subset of the training examples with full ccg derivations from which these frequently occurring entries could be learnedof techniques have been considered including ap proaches based on machine translation techniques parsing techniques tech niques that use inductive logic programming andideas from string kernels and support vector ma chines in our experiments we compare to he and young on the atis domain and zettlemoyerand collins on the geo880 domain because these systems currently achieve the best per formance on these problemsthe approach of zettlemoyer and collins was presented in section 24he and young describe an algorithm that learns a probabilisticpushdown automaton that models hierarchical de pendencies but can still be trained on a data set that does not have full treebankstyle annotationsthisapproach has been integrated with a speech recog nizer and shown to be robust to recognition errors there is also related work in the ccg litera tureclark and curran present a method forlearning the parameters of a loglinear ccg pars ing model from fully annotated normalform parse treeswatkinson and manandhar present an unsupervised approach for learning ccg lexiconsthat does not represent the semantics of the training sentencesbos et al present an al gorithm that learns ccg lexicons with semantics but requires fullyspecified ccg derivations in thetraining databozsahin presents work on us ing ccg to model languages with free word orderin addition there is related work that focuses on modeling child language learningsiskind presents an algorithm that learns wordtomeaning mappings from sentences that are paired with a set of possible meaning representationsvillavicencio describes an approach that learns a categorial grammar with syntactic and semantic informationboth of these approaches use sentences from child directed speech which differ significantly from the natural language interface queries we considerfinally there is work on manually developing parsing techniques to improve robustness in contrast our ap proach is integrated into 
a learning frameworkthe main focus of our experiments is on the atis travel planning domainfor development we used4978 sentences split into a training set of 4500 ex amples and a development set of 478 examplesfortest we used the atis nov93 test set which con tains 448 examplesto create the annotations wecreated a script that maps the original sql annotations provided with the data to lambdacalculus ex pressionshe and young previously reported results on the atis domain using a learning approachwhich also takes sentences paired with semantic annotations as inputin their case the semantic struc tures resemble contextfree parses with semantic nonterminal labelsin our experiments we have used the same split into training and test data as he and young ensur ing that our results are directly comparablehe and young report partial match figures for their parser based on precision and recall in recovering attributevalue pairsit is sim ple for us to map from lambdacalculus expressions to attributevalue entries of this form for example the expression to would be mapped to destination bostonhe and young gave us their data and annotations so we can directly compare results on the partialmatch criterionwealso report accuracy for exact matches of lambda calculus expressions which is a stricter criterionin addition we report results for the method on the geo880 domainthis allows us to compare directly to the previous work of zettlemoyer and collins using the same split of the data intotraining and test sets of sizes 600 and 280 respec tivelywe use crossvalidation of the training set asopposed to a separate development set for optimiza tion of parameters61 improving recallthe simplest approach to the task is to train the parser and directly apply it to test sentencesin our experiments we will see that this produces resultswhich have high precision but somewhat lower recall due to some test sentences failing to parse a simple strategy to alle viate this problem is as followsif the sentence failsto parse we parse the sentence again this time al lowing parse moves which can delete words at some costthe cost of this deletion operation is optimizedon development datathis approach can significantly improve fmeasure on the partialmatch cri terion in particularwe report results both with and without this second pass strategy62 parameters in the approachthe algorithm in figure 4 has a number of param eters the set t which we now describethe values of these parameters were chosen to op timize the performance on development datat is the number of passes over the training set and was set to be 4each lexical entry in the initial lexicon0 has an associated feature which counts the num ber of times this entry is seen in a parsethe initial parameter value in w for all features of this form was chosen to be some value each of the newccg rulesthe application composition crossedcomposition and typeraising rules described in sec tion 3has an associated parameterwe set al of these parameters to the same initial value finally when new lexical entries are added to their initial weight is set to some value in practice optimization on developmentdata led to a positive value for and negative val ues for and 63 resultstable 1 shows accuracy for the method by the exact match criterion on the atis test setthe two passstrategy actually hurts fmeasure in this case al though it does improve recall of the methodtable 2 shows results under the partialmatch cri terionthe results for our approach are higher than those reported by he and 
young even without the second highrecall strategywith the twopass strategy our method has more than halved the fmeasure error rate giving improvements from 903 fmeasure to 959 fmeasuretable 3 shows results on the geo880 domainthe 685 precision recall f1 singlepass parsing 9061 8192 8605 twopass parsing 8575 846 8516 table 1 exactmatch accuracy on the atis test setprecision recall f1 singlepass parsing 9676 8689 9156 twopass parsing 9511 9671 959 he and young 903 table 2 partialcredit accuracy on the atis test setnew method gives improvements in performance both with and without the two pass strategy showingthat the new ccg combinators and the new learn ing algorithm give some improvement on even this domainthe improved performance comes from aslight drop in precision which is offset by a large in crease in recalltable 4 shows ablation studies on the atis data where we have selectively removed various aspectsof the approach to measure their impact on performanceit can be seen that accuracy is seriously de graded if the new ccg rules are removed or if the features associated with these rules are removedfinally we report results concerning the effi ciency of the new online algorithm as compared to the zc05 algorithmwe compared running times for the new algorithm and the zc05 algorithm on the geography domain with both methods making 4 passes over the training datathe new algorithm took less than 4 hours compared to over 12 hours for the zc05 algorithmthe main explanation for this improved performance is that on many trainingexamples10 in step 1 of the new algorithm a cor rect parse is found and the algorithm immediately moves on to the next examplethus genlex is not required and in particular parsing the example with the large set of entries generated by genlex is not requiredwe presented a new online algorithm for learning a combinatory categorial grammar together with parameters that define a loglinear pars ing modelwe showed that the use of nonstandardccg combinators is highly effective for parsing sen 10measurements on the geo880 domain showed that in the 4 iterations 833 of all parses were successful at step 1precision recall f1 singlepass parsing 9549 832 8893 twopass parsing 9163 8607 8876 zc05 9625 7929 8695 table 3 exactmatch accuracy on the geo880 test setprecision recall f1 full online method 8726 7444 8035 without control features 7033 4245 5295 without relaxed word order 8281 6398 7219 without word insertion 7731 5694 6558 table 4 exactmatch accuracy on the atis development setfor the full algorithm and restricted versions of itthe sec ond row reports results of the approach without the featuresdescribed in section 3 that control the use of the new combi natorsthe third row presents results without the combinators from section 31 that relax word orderthe fourth row reports experiments without the typeraising combinators presented in section 32tences with the types of phenomena seen in spontaneous unedited natural languagethe resulting sys tem achieved significant accuracy improvements in both the atis and geo880 domainsacknowledgements wewould like to thank yulan he and steve young for their help with obtaining the atis data setwe also acknowledge the support for this researchluke zettlemoyer was funded by a microsoft graduateresearch fellowship and michael collins was sup ported by the national science foundation under grants 0347631 and dms0434222
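as an illustration of the online learning procedure described above (figure 4), the following python sketch reproduces only the loop structure: parse, skip if correct, expand the lexicon with genlex-derived entries from the best correct parse, then apply a perceptron update. the functions parse, constrained_parse, genlex, lexical_entries, features and logical_form are hypothetical placeholders for the parser, the genlex procedure, the feature function f and the semantics extractor l; this is a sketch of the algorithm's shape under those assumptions, not the authors' implementation.

```python
from collections import defaultdict

def train(examples, initial_lexicon, T,
          parse, constrained_parse, genlex,
          lexical_entries, features, logical_form):
    """examples: list of (sentence, target logical form) pairs; all callables are placeholders."""
    lexicon = set(initial_lexicon)
    w = defaultdict(float)                     # sparse parameter vector
    for _ in range(T):
        for x, z in examples:
            # step 1: if the current model already yields the right semantics, move on
            y = parse(x, lexicon, w)
            if y is not None and logical_form(y) == z:
                continue
            # step 2: temporarily add all GENLEX entries, find the best parse with the
            # correct semantics, and keep only the lexical entries used in that parse
            expanded = lexicon | set(genlex(x, z))
            y_good = constrained_parse(x, z, expanded, w)
            if y_good is None:
                continue                       # no correct parse reachable yet
            lexicon |= set(lexical_entries(y_good))
            # step 3: simple perceptron update if the updated lexicon still parses x wrongly
            y_hat = parse(x, lexicon, w)
            if y_hat is None or logical_form(y_hat) != z:
                for k, v in features(x, y_good).items():
                    w[k] += v
                if y_hat is not None:
                    for k, v in features(x, y_hat).items():
                        w[k] -= v
    return lexicon, w
```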
D07-1071
online learning of relaxed ccg grammars for parsing to logical form. we consider the problem of learning to parse sentences to lambdacalculus representations of their underlying semantics and present an algorithm that learns a weighted combinatory categorial grammar (ccg). a key idea is to introduce nonstandard ccg combinators that relax certain parts of the grammar, for example allowing flexible word order or insertion of lexical items, with learned costs. we also present a new online algorithm for inducing a weighted ccg. results for the approach on atis data show 86% fmeasure in recovering fully correct semantic analyses and 95.9% fmeasure by a partialmatch criterion, a more than 5% improvement over the 90.3% partialmatch figure reported by he and young. we develop the atis dataset for semantic parsing. we develop a feature set which includes features that are sensitive to lexical choices and to the structure of the logical form that is constructed. we introduce the standard application, composition and coordination combinators, as well as typeshifting rules, to model spontaneous unedited text.
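the partialmatch criterion mentioned above reduces a logical form to a set of attribute-value pairs and scores precision, recall and fmeasure over those pairs; the pair extraction itself is domain specific and not shown here. below is a minimal, self-contained illustration of the scoring step only, with invented example pairs.

```python
def partial_match(gold_pairs, predicted_pairs):
    """Score two sets of (attribute, value) pairs with precision, recall and F1."""
    gold, pred = set(gold_pairs), set(predicted_pairs)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

print(partial_match({("from", "new york"), ("to", "boston")},
                    {("to", "boston")}))       # (1.0, 0.5, 0.666...)
```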
the infinite pcfg using hierarchical dirichlet processes we present a nonparametric bayesian model of tree structures based on the hierarchical dirichlet process our hdppcfg model allows the complexity of the grammar to grow as more training data is available in addition to presenting a fully bayesianmodel for the pcfg we also develop an ef ficient variational inference procedure onsynthetic data we recover the correct grammar without having to specify its complexity in advance we also show that our tech niques can be applied to fullscale parsingapplications by demonstrating its effective ness in learning statesplit grammars probabilistic contextfree grammars havebeen a core modeling technique for many aspects of linguistic structure particularly syntac tic phrase structure in treebank parsing an important question when learning pcfgs is how many grammar symbols to allocate to the learning algorithm based on the amount of available datathe question of how many clusters has been tackled in the bayesian nonparametricsliterature via dirichlet process mixture mod els dp mixture models have since been extended to hierarchical dirichlet processes and hdphmms and applied to many different types of clusteringinduction problems in nlp in this paper we present the hierarchical dirich let process pcfg a nonparametric bayesian model of syntactic tree structures based on dirichlet processesspecifically an hdppcfg is defined to have an infinite number of symbols the dirichlet process prior penalizes the use of more symbols than are supported by the training datanote that nonparametricdoes not mean noparameters rather it means that the effective num ber of parameters can grow adaptively as the amount of data increases which is a desirable property of a learning algorithmas models increase in complexity so does the un certainty over parameter estimatesin this regime point estimates are unreliable since they do not take into account the fact that there are different amountsof uncertainty in the various components of the pa rametersthe hdppcfg is a bayesian model which naturally handles this uncertaintywe present an efficient variational inference algorithm for thehdppcfg based on a structured meanfield ap proximation of the true posterior over parametersthe algorithm is similar in form to them and thus inherits its simplicity modularity and efficiencyun like them however the algorithm is able to take theuncertainty of parameters into account and thus in corporate the dp priorfinally we develop an extension of the hdp pcfg for grammar refinement since treebanks generally consist of coarselylabeled contextfree tree structures the maximum likelihood treebank grammar is typically a poormodel as it makes overly strong independence as sumptionsas a result many generative approaches to parsing construct refinements of the treebankgrammar which are more suitable for the model ing tasklexical methods split each preterminal symbol into many subsymbols one for each wordand then focus on smoothing sparse lexical statis 688 tics unlexicalized methods refine the grammar in a more conservative fashion splitting each nonterminal or preterminal symbol into a much smaller number of subsymbols we apply our hdppcfggrmodel to automatically learn the number of subsym bols for each symbolat the heart of the hdppcfg is the dirichlet pro cess mixture model which isthe nonparametric bayesian counterpart to the clas sical finite mixture modelin order to build up an understanding of the hdppcfg we first review the bayesian treatment 
of the finite mixture model we then consider the dp mixture model and use it as a building block for developing nonparametric structured versions of the hmm and pcfg our presentation highlights the similarities between these models so that each step along this progression reflects only the key differences21 bayesian finite mixture modelwe begin by describing the bayesian finite mixture model to establish basic notation that will carry over the more complex models we consider laterbayesian finite mixture model dirichlet draw component probabilities for each component z 1 k z g0 draw component parameters for each data point i 1 n zi multinomialchoose component xi f generate data pointthe model has k components whose prior dis tribution is specified by the dirichlet hyperparameter controls how uniformthis distribution is as increases it becomes in creasingly likely that the components have equal probabilityfor each mixture component z 1 k the parameters of the component z aredrawn from some prior g0given the model param eters the data points are generated iid by first choosing a component and then generating from a data model f parameterized by that componentin document clustering for example each datapoint xi is a document represented by its term frequency vectoreach component z has multinomial parameters z which specifies adistribution f over wordsit is custom ary to use a conjugate dirichlet prior g0 dirichlet of the dirichlet process instead of the stochastic process definition or the chinese restaurant process the stick breaking representation captures the dp prior most explicitly and allows us to extend the finite mixture model with minimal changeslater it will enable us to readily define structured models in a form similar to their classical versionsfurthermore an efficient variational inference algorithm can be developed in this representation the key difference between the bayesian finite mixture model and the dp mixture model is that the latter has a countably infinite number of mixture components while the former has a predefined k note that if we have an infinite number of mixture components it no longer makes sense to consider a symmetric prior over the component probabilities the prior over component probabilities must decay in some waythe stickbreaking distribution achieves this as followswe write gemto mean that is distributed according to the stickbreaking distributionhere the concentrationparameter controls the number of effective com ponentsto draw gem we first generatea countably infinite collection of stickbreaking pro portions u1 u2 where each uz betathe stickbreaking weights are then defined in terms of the stick proportions z uz z k while the posterior grammar does havean infinite number of symbols the exponential decay of the dp prior ensures that most of the probability mass is contained in the first few symbols 2 while our variational ap proximation q is truncated the actual pcfg model is notask increases our approximation improves28 coordinatewise ascentthe optimization problem defined by equation is intractable and nonconvex but we can use a simple coordinateascent algorithm that iteratively op timizes each factor of q in turn while holding the others fixedthe algorithm turns out to be similar in form to them for an ordinary pcfg optimizing q is the analogue of the estep and optimizing qis the analogue of the mstep however optimizing qhas no analogue in themwe summarize each of these updates below for complete derivationsparse trees q the distribution over parse 
treesq can be summarized by the expected suffi cient statistics which we denote as c for binary productions and c for emissionswe can compute these expected counts using dynamic programming as in the estep of themwhile the classical estep uses the current ruleprobabilities our meanfield approximation in volves an entire distribution qfortunately wecan still handle this case by replacing each rule probability with a weight that summarizes the uncer tainty over the rule probability as represented by q we define this weight in the sequelit is a common perception that bayesian inference is slow because one needs to compute integralsour meanfield inference algorithm is a counterexamplebecause we can represent uncertainty over rule prob abilities with single numbers much of the existing pcfg machinery based on them can be modularly imported into the bayesian frameworkrule probabilities q for an ordinary pcfg the mstep simply involves taking ratios of expected2in particular the variational distance between the stickbreaking distribution and the truncated version decreases expo nentially as the truncation level k increases693 counts bz c c for the variational hdppcfg the optimal qis given by the standard posterior update for dirichlet distributions3 q dirichlet where c is the matrix of counts of rules with lefthand side z these distributions can then be summa rized with multinomial weights which are the onlynecessary quantities for updating q in the next it eration wbz def expeqlogbz ebzlzr eb where is the digamma functionthe emission parameters can be defined similarlyinspection of equations and reveals that the only difference between the maximum likelihood and the meanfieldupdate is that the latter applies the exp func tion to the counts when the truncation k is large bzlzr is near 0 for most righthand sides so exp hasthe effect of downweighting countssince this sub traction affects large counts more than small countsthere is a richgetricher effect rules that have al ready have large counts will be preferredspecifically consider a set of rules with the same lefthand sidethe weights for all these rules only differ in the numerator so applying exp creates a local preference for righthand sides with larger countsalso note that the rule weights are not normalized they always sum to at most one and are equal to one exactly when qisdegeneratethis lack of normalization gives an extra degree of freedom not present in maximum like lihood estimation it creates a global preference for lefthand sides that have larger total countstoplevel symbol probabilities q recall that we restrict q so optimizing is equivalent to finding a single best unlike q3because we have truncated the toplevel symbol weights the dp prior on bz reduces to a finite dirichlet distribution0 05 1 15 2 0 05 1 15 2 x exp x figure 5 the exp function which is used in computing the multinomial weights for meanfieldinferenceit has the effect of reducing a larger frac tion of small counts than large countsand q there is no closed form expression forthe optimal and the objective function is not convex in nonetheless we canapply a standard gradient projection method to improve to a local maximathe part of the objective function in equation that depends on is as follows l loggem kz1 eqlogdirichletsee liang et al for the derivation of the gradientin practice this optimization has very little ef fect on performancewe suspect that this is because the objective function is dominated by p andp while the contribution of p is mi norwe now present an 
empirical evaluation of the hdppcfg model and variational inference tech niqueswe first give an illustrative example of theability of the hdppcfg to recover a known gram mar and then present the results of experiments on largescale treebank parsing31 recovering a synthetic grammarin this section we show that the hdppcfggr can recover a simple grammar while a standard 694 s x1x1 x2x2 x3x3 x4x4 x1 a1 b1 c1 d1 x2 a2 b2 c2 d2 x3 a3 b3 c3 d3 x4 a4 before c4 d4 s xi xi ai bi ci di ai bi ci di figure 6 a synthetic grammar with a uniform distribution over rules the grammar generates trees of the form shown on the rightpcfg fails to do so because it has no builtin con trol over grammar complexityfrom the grammar in figure 6 we generated 2000 treesthe two terminalsymbols always have the same subscript but we col lapsed xi to x in the training datawe trained the hdppcfggr with truncation k 20 for boths and x for 100 iterationswe set al hyperparame ters to 1figure 7 shows that the hdppcfggr recoversthe original grammar which contains only 4 sub symbols leaving the other 16 subsymbols unusedthe standard pcfg allocates all the subsymbols to fit the exact cooccurrence statistics of left and right terminalsrecall that a rule weight as defined in equation is analogous to a rule probability for stan dard pcfgswe say a rule is effective if its weight is at least 106 and its left handside has posterior is also at least 106in general rules with weightsmaller than 106 can be safely pruned without af fect parsing accuracythe standard pcfg uses all 20 subsymbols of both s and x to explain the data resulting in 8320 effective rules in contrast the hdppcfg uses only 4 subsymbols for x and 1 fors resulting in only 68 effective rulesif the thresh old is relaxed from 106 to 103 then only 20 rules are effective which corresponds exactly to the true grammar32 parsing the penn treebankin this section we show that our variational hdp pcfg can scale up to realworld data setswe ranexperiments on the wall street journal por tion of the penn treebankwe trained on sections 221 used section 24 for tuning hyperparameters and tested on section 22we binarize the trees in the treebank as followsfor each nonterminal node with symbol x we in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 025 subsymbol pos ter ior 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 025 subsymbol pos ter ior standard pcfg hdppcfg figure 7 the posteriors over the subsymbols of thestandard pcfg is roughly uniform whereas the pos teriors of the hdppcfg is concentrated on four subsymbols which is the true number of symbols in the grammartroduce a rightbranching cascade of new nodes with symbol x the end result is that each node has at most two childrento cope with unknown words we replace any word appearing fewer than 5 timesin the training set with one of 50 unknown word to kens derived from 10 wordform featuresour goal is to learn a refined grammar where eachsymbol in the training set is split into k subsym bolswe compare an ordinary pcfg estimated with maximum likelihood andthe hdppcfg estimated using the variational in ference algorithm described in section 26to parse new sentences with a grammar we com pute the posterior distribution over rules at each spanand extract the tree with the maximum expected cor rect number of rules 321 hyperparametersthere are six hyperparameters in the hdppcfg gr model which we set in the following manner 1 t 1 e 1 you b 1n wheren is the number of different unary righthand sides of rules with lefthand side s in the 
treebank grammarthe two most important hyperpa rameters are you and b which govern the sparsity of the righthand side for unary and binary ruleswe set you b although more performance could probably be gained by tuning these individuallyit turns out that there is not a single b that works for all truncation levels as shown in table 1if the toplevel distribution is uniform the value of b corresponding to a uniform prior over pairs ofchildren subsymbols is k2interestingly the opti mal b appears to be superlinear but subquadratic 695 truncation k 2 4 8 12 16 20 best b 16 12 20 28 48 80 uniform b 4 16 64 144 256 400 table 1 for each truncation level we report the b that yielded the highest f1 score on the development setk pcfg pcfg hdppcfg f1 size f1 size f1 size 1 6047 2558 6036 2597 605 2557 2 6953 3788 6938 4614 7108 4264 8 7432 4262 7926 120598 7915 50629 12 7099 7297 788 160403 7894 86386 16 6699 19616 792 261444 7824 131377 20 6444 27593 7927 369699 7781 202767 table 2 shows development f1 and grammar sizes as we increase the truncation k in k we used these values of b in the following experiments322 results the regime in which bayesian inference is most important is when training data is scarce relative tothe complexity of the modelwe train on just sec tion 2 of the penn treebanktable 2 shows how the hdppcfggr can produce compact grammars that guard against overfittingwithout smoothingordinary pcfgs trained using them improve as k in creases but start to overfit around k 4simple add101 smoothing prevents overfitting but at thecost of a sharp increase in grammar sizesthe hdp pcfg obtains comparable performance with a much smaller number of ruleswe also trained on sections 221 to demon strate that our methods can scale up and achievebroadly comparable results to existing stateofthe art parserswhen using a truncation level of k 16 the standard pcfg with smoothing obtains an f1 score of 8836 using 706157 effective rules whilethe hdppcfggr obtains an f1 score of 8708 us ing 428375 effective ruleswe expect to see greaterbenefits from the hdppcfg with a larger trunca tion levelthe question of how to select the appropriate gram mar complexity has been studied in earlier workit is well known that more complex models nec essarily have higher likelihood and thus a penaltymust be imposed for more complex grammarsexamples of such penalized likelihood procedures in clude stolcke and omohundro which used an asymptotic bayesian model selection criterion and petrov et al which used a splitmerge algorithm which procedurally determines when to switch between grammars of various complexitiesthese techniques are model selection techniquesthat use heuristics to choose among competing sta tistical models in contrast the hdppcfg relies on the bayesian formalism to provide implicit control over model complexity within the framework of a single probabilistic modeljohnson et al also explored nonparametric grammars but they do not give an inference algorithm for recursive grammars eg grammars in cluding rules of the form a because and b darecursion is a crucial aspect of pcfgs and our inference algorithm does handle itfinkel et al independently developed another nonpara metric model of grammarsthough their model is also based on hierarchical dirichlet processes and issimilar to ours they present a different inference al gorithm which is based on samplingkurihara and sato and kurihara and sato applied variational inference to pcfgstheir algorithm issimilar to ours but they did not consider nonpara metric modelswe 
have presented the hdppcfg a nonparametric bayesian model for pcfgs along with an efficient variational inference algorithmwhile our primarycontribution is the elucidation of the model and algorithm we have also explored some important empirical properties of the hdppcfg and also demon strated the potential of variational hdppcfgs on a fullscale parsing task696
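to make the rule-weight computation in the variational m-step concrete, the sketch below computes exp(E_q[log theta]) for a dirichlet posterior whose pseudo-counts are a prior plus the expected rule counts for one left-hand side; as noted above, these weights sum to at most one rather than exactly one. the prior value and counts are invented for illustration, and scipy is assumed to be available for the digamma function.

```python
import numpy as np
from scipy.special import digamma

def meanfield_rule_weights(expected_counts, prior):
    """exp(digamma(prior + count_i) - digamma(total)) over rules sharing a left-hand side."""
    posterior = np.asarray(expected_counts, dtype=float) + prior
    return np.exp(digamma(posterior) - digamma(posterior.sum()))

counts = np.array([10.0, 1.0, 0.1])            # expected counts from the E-step (toy values)
w = meanfield_rule_weights(counts, prior=0.5)
print(w, w.sum())                              # large counts are favored; the sum is below 1
```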
D07-1072
the infinite pcfg using hierarchical dirichlet processes. we present a nonparametric bayesian model of tree structures based on the hierarchical dirichlet process (hdp). our hdppcfg model allows the complexity of the grammar to grow as more training data is available. in addition to presenting a fully bayesian model for the pcfg, we also develop an efficient variational inference procedure. on synthetic data we recover the correct grammar without having to specify its complexity in advance. we also show that our techniques can be applied to fullscale parsing applications by demonstrating their effectiveness in learning statesplit grammars. we find that because the latent variable grammars are not explicitly regularized, em keeps fitting the training data and eventually begins overfitting.
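a small sketch of the truncated stick-breaking (gem) construction that underlies the dp prior discussed here: stick proportions are drawn from beta(1, alpha) and converted to weights, with the leftover mass folded into the last of the K truncated components. the concentration value and truncation level below are arbitrary choices for illustration.

```python
import numpy as np

def sample_gem(alpha, K, rng=None):
    rng = rng or np.random.default_rng(0)
    u = rng.beta(1.0, alpha, size=K)           # stick-breaking proportions
    u[-1] = 1.0                                # close the stick at the truncation level
    leftover = np.concatenate(([1.0], np.cumprod(1.0 - u[:-1])))
    return u * leftover                        # weights decay roughly geometrically

beta = sample_gem(alpha=1.0, K=20)
print(beta.round(3), beta.sum())               # sums to 1 by construction
```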
tree kernelbased relation extraction with contextsensitive structured parse tree information this paper proposes a tree kernel with contextsensitive structured parse tree information for re lation extraction it resolves two critical problems in previous tree kernels for relation extraction in two ways first it automatically determines a dynamic contextsensitive tree span for relation ex traction by extending the widelyused shortest pathenclosed tree to include necessary context information outside spt second it pro poses a contextsensitive convolution tree kernel which enumerates both contextfree and context sensitive subtrees by considering their ancestor node paths as their contexts moreover this paper evaluates the complementary nature between our tree kernel and a stateoftheart linear kernel evaluation on the ace rdc corpora shows that our dynamic contextsensitive tree span is much more suitable for relation extraction than spt and our tree kernel outperforms the stateoftheart collins and duffys convolution tree kernel it also shows that our tree kernel achieves much bet ter performance than the stateoftheart linear kernels finally it shows that featurebased and tree kernelbased methods much complement each other and the composite kernel can well integrate both flat and structured features relation extraction is to find various predefined se mantic relations between pairs of entities in textthe research in relation extraction has been promoted by the message understanding conferences and the nist automatic content extraction program ac cording to the ace program an entity is an object or a set of objects in the world and a relation is an explicitly or implicitly stated relationship among enti tiesfor example the sentence bill gates is the chairman and chief software architect of microsoft corporationconveys the acestyle relation employmentexecbetween the entities bill gates and microsoft corporationextraction of semantic relations between entities can be very useful in many applica tions such as question answering eg to answer the query who is the president of the united states and information retrieval eg to expand the query george w bushwith the president of the united statesvia his relationship with the united statesmany researches have been done in relation extractionamong them featurebased methods achieve certain success by employing a large amount of diverse linguistic features varying from lexical knowledge entityrelated information to syntactic parse trees depend ency trees and semantic informationhowever it is difficult for them to effectively capture structured parse tree information which is critical for further performance improvement in rela tion extractionas an alternative to featurebased methods tree kernelbased methods provide an elegant solution to explore implicitly structured features by directly computing the similarity between two treesalthough earlier researches only achieve success on simple tasks and fail on complex tasks such as the ace rdc task tree kernelbased methods achieve much progress recentlyas the stateoftheart zhang et al applied the conversation lution tree kernel and achieved comparable performance with a stateofthe art linear kernel on the 5 relation types in the ace rdc 2003 corpushowever there are two problems in collins and duffys convolution tree kernel for relation extractionthe first is that the subtrees enumerated in the tree kernel computation are contextfreethat is each subtree enumerated in the tree kernel computation 728 does not 
consider the context information outside the subtreethe second is to decide a proper tree span in relation extractionzhang et al explored five tree spans in relation extraction and it was a bit sur prising to find that the shortest pathenclosed tree performed bestthis is contrast to our intuitionfor ex ample got marriedis critical to determine the relationship between johnand maryin the sen tence john and mary got marriedas shown in figure 1it is obvious that the information con tained in spt is not enough to determine their relationshipthis paper proposes a contextsensitive convolu tion tree kernel for relation extraction to resolve the above two problemsit first automatically determines a dynamic contextsensitive tree span for relation ex traction by extending the shortest pathenclosed tree to include necessary context information outside sptthen it proposes a contextsensitive convolution tree kernel whic h not only enumerates context free subtrees but also contextsensitive subtrees by considering their ancestor node paths as their contextsmoreover this paper evaluates the complementary nature of different linear kernels and tree kernels via a composite kernelthe layout of this paper is as followsin section 2 we review related work in more detailsthen the dynamic contextsensitive tree span and the contextsensitive convolution tree kernel are proposed in sec tion 3 while section 4 shows the experimental resultsfinally we conclude our work in sec tion 5the relation extraction task was first introduced as part of the template element task in muc6 and then formulated as the template relation task in muc7since then many methods such as featurebased tree ker nelbased and composite kernelbased have been proposed in lit eraturefor the featurebased methods kambhatla employed maximum entropy models to combine diverse lexical syntactic and semantic features in rela tion extraction and achieved the fmeasure of 528 on the 24 relation subtypes in the ace rdc 2003 corpuszhou et al further systematically ex plored diverse features through a linear kernel and support vector machines and achieved the f measures of 680 and 555 on the 5 relation types and the 24 relation subtypes in the ace rdc 2003 cor pus respectivelyone problem with the featurebased methods is that they need extensive feature engineeringanother problem is that although they can ex plore some structured information in the parse tree used the nonterminal path connecting the given two entities in a parse tree while zhou et al introduced additional chunking features to enhance the performance it is found dif ficult to well preserve structured information in the parse trees using the featurebased methodszhou et al further improved the performance by ex ploring the commonality among related classes in a class hierarchy using hierarchical learning strategyas an alternative to the featurebased methods the kernelbased methods have been proposed to implicitly explore various features in a high dimensional space by employing a kernel to cal culate the similarity between two objects directlyin particular the kernelbased methods could be very effective at reducing the burden of feature engineer ing for structured objects in nlp researches eg the tree structure in relation extractionzelenko et al proposed a kernel between two parse trees which recursively matches nodes from roots to leaves in a topdown mannerfor each pair of matched nodes a subsequence kernel on their child nodes is invokedthey achieved quite success on two simple relation extraction 
tasksculotta and sorensen extended this work to estimate simi larity between augmented dependency trees and achieved the fmeasure of 458 on the 5 relation types in the ace rdc 2003 corpusone problem with the above two tree kernels is that matched nodes must be at the same height and have the same path to the root nodebunescu and mooney pro posed a shortest path dependency tree kernel which just sums up the number of common word classes at each position in the two paths and achieved the fmeasure of 525 on the 5 relation types in the ace rdc 2003 corpusthey argued that the information to model a relationship between two entities can be typically captured by the shortest path between them in the dependency graphwhile the shortest path may not be able to well preserve structured de pendency tree information another problem with their kernel is that the two paths should have same lengththis makes it suffer from the similar behavior with that of culotta and sorensen high preci sion but very low recallas the stateoftheart tree kernelbased method zhang et al explored various structured feature 729 spaces and used the convolution tree kernel over parse trees to model syntac tic structured information for relation extractionthey achieved the fmeasures of 619 and 636 on the 5 relation types of the ace rdc 2003 corpus and the 7 relation types of the ace rdc 2004 corpus respectively without entityrelated information while the f measure on the 5 relation types in the ace rdc 2003 corpus reached 687 when entityrelated infor mation was included in the parse treeone problem with collins and duffys convolution tree kernel is that the subtrees involved in the tree kernel computa tion are contextfree that is they do not consider the information outside the subtreesthis is different from the tree kernel in culota and sorensen where the subtrees involved in the tree kernel com putation are contextsensitive zhang et al also showed that the widelyused shortest pathenclosed tree performed bestone problem with spt is that it fails to capture the contextual information outside the shortest path which is important for relation extraction in many casesour random selection of 100 pos i tive training instances from the ace rdc 2003 training corpus shows that 25 of the cases need contextual information outside the shortest pathamong other kernels bunescu and mooney proposed a subsequence kernel and applied it in pro tein interaction and ace relation extraction tasksin order to integrate the advantages of featurebased and tree kernelbased methods some research ers have turned to composite kernelbased methodszhao and grishman defined several feature based composite kernels to integrate diverse features for relation extraction and achieved the fmeasure of 704 on the 7 relation types of the ace rdc 2004 corpuszhang et al proposed two composite kernels to integrate a linear kernel and collins and duffys convolution tree kernelit achieved the fmeasure of 709572 on the 5 relation types24 rela tion subtypes in the ace rdc 2003 corpus and the fmeasure of 721636 on the 7 relation types23 relation subtypes in the ace rdc 2004 corpusthe above discussion suggests that structured in formation in the parse tree may not be fully utilized in the previous works regardless of featurebased tree kernelbased or composite kernelbased methodscompared with the previous works this paper pro poses a dynamic contextsensitive tree span trying to cover necessary structured information and a context sensitive convolution tree kernel considering both 
contextfree and contextsensitive subtreesfurther more a composite kernel is applied to combine our tree kernel and a stateoftheart linear kernel for in tegrating both flat and structured features in relation extraction as well as validating their complementary naturekernel for relation extraction in this section we first propose an algorithm to dy namically determine a proper contextsensitive tree span and then a contextsensitive convolution tree kernel for relation extraction31 dynamic contextsensitive tree span inrelation extraction a relation instance between two entities is encaps you lated by a parse treethus it is critical to understand which portion of a parse tree is important in the tree kernel calculationzhang et al systematically explored seven different tree spans including the shortest pathenclosed tree and a context sensitive pathenclosed tree1 and found that spt per formed bestthat is spt even outperforms csptthis is contrary to our intuitionfor example got marriedis critical to determine the relationship between johnand maryin the sentence john and mary got marriedas shown in figure 1 and the information contained in spt is not enough to determine their relationshipobviously contextsensitive tree spans should have the potential for better performanceone problem with the contextsensitive tree span explored in zhang et al is that it only considers the availability of entitiessiblings and fails to consider following two factors 1 whether is the information contained in spt enough to determine the relationship between two entitiesit dependsin the embedded cases spt is enoughfor example johns wifeis enough to determine the relationship between johnand johns wifein the sentence johns wife got a good jobas shown in figure 1 however spt is not enough in the coordinated cases eg to determine the relationship between johnand maryin the sentence john and mary got marriedas shown in figure 11 cspt means spt extending with the 1st left sibling ofthe node of entity 1 and the 1st right sibling of the node of entity 2in the case of no available sibling it moves to the parent of current node and repeat the same proc ess until a sibling is available or the root is reached730 2 how can we extend spt to include necessary context information if there is no enough infor mation in spt for relation extractionto answer the above two questions we randomly chose 100 positive instances from the ace rdc 2003 training data and studied their necessary tree spansit was observed that we can classify them into 5 categories 1 embedded where one entity is embedded in another entity eg johnand johns wifeas shown in figure 1 2 pplinked where one entity is linked to another entity via pp attachment eg ceoand microsoftin the sentence ceo of microsoft announced as shown in figure 1 3 semistructured where the sentence consists of a sequence of noun phrases eg janeand abc newsin the sentence jane abc news californiaas shown in figure 1 4 de scriptive eg the citizenship between his motherand lebanesein the sentence his mother lebanese landed at as shown in figure 1 5 predicatelinked and others where the predicate information is necessary to determine the relationship between two entities eg johnand maryin the sentence john and mary got marriedas shown in figure 1 based on the above observations we implement an algorithm to determine the necessary tree span for the relation extract taskthe idea behind the algorithm is that the necessary tree span for a relation should be determined dynamically according to its tree 
span category and contextgiven a parsed tree and two entities in consideration it first determin es the tree span category and then extends the tree span accordinglyby default we adopt the shortest pathenclosed tree as our tree spanwe only ex pand the tree span when the tree span belongs to the predicatelinkedcategorythis is based on our observation that the tree spans belonging to the predi catelinkedcategory vary much syntactically and majority of them need information outside spt while it is quite safe to use spt as the tree span for the remaining categoriesin our algo rithm the expansion is done by first moving up until a predicateheaded phrase is found and then moving down along the predicatedheaded path to the predi cate terminal nodefigure 1 shows an example for the predicatelinkedcategory where the lines with arrows indicate the expansion pathe predicatelinked spt and the dynamic contextsensitive tree span figure 1 different tree span categories with spt and an ex ample of the dynamic contextsensitive tree span figure 2 examples of contextfree and contextsensitive sub trees related with figure 1note the bold node is the root for a subtreea problem with our algorithm is how to determine whether an entity pair belongs to the predi catelinkedcategoryin this paper a simple method is applied by regarding the predicate linkedcategory as the default categorythat is those entity pairs which do not belong to the four well defined and easily detected categories are classified into the predicatelinkedcate goryhis mother lebanese landed prp nnp vbd in npe1per npe2gpe pp s d descriptive np nn at vp jane abc news nnp nnp nns nnp np npe1per npe2org np c semistructured california np of microsoft in nnp npe2org ppsubroot b context sensitive np of microsoft in nnp npe2org s ppsubroot c context sensitive ppsubtoot npe2org of microsoft in nnp a context free np john and mary got nnp cc nnp vbd married npe1per npe2per vp s vp vbn john and mary got nnp cc nnp vbd married npe1per npe2per vp np vp np ceo of microsoft announced nn in nnp vbd npe1per npe2org vp s b pp linked pp john s wife found a job nnp pos nn vbd dt jj nn np npe1per npe2per vp s a embedded good 731 since predicate linkedinstances only occupy 20 of cases this explains why spt performs better than the contextsensitive pathenclosed tree as described in zhang et al consistently adopting cspt may introduce too much noiseunnecessary information in the tree kernel32 contextsensitive convolution tree kernelgiven any tree span eg the dynamic context sensitive tree span in the last subsection we now study how to measure the similarity between two trees using a convolution tree kernela convolution kernel aims to capture structured information in terms of substructures as a specialized convolution kernel collins and duffys convolu tion tree kernel counts the number of common subtrees as the syntactic structure similarity be tween two parse trees t1 and t2 d 2211 2121 where nj is the set of nodes in tree tj and 1 2n nd evaluates the common subtrees rooted at n1 and n2 2 and is computed recursively as follows 1 if the contextfree productions rules at 1n and 2n are different 1 2 0n nd otherwise go to 22 if both 1n and 2n are pos tags 1 2 1n n ld otherwise go to 33 calculate 1 2n nd recursively as dd where is the decay factor in order to make the kernel value less variable with respect to different subtree sizesthis convolution tree kernel has been successfully applied by zhang et al in relation extractionhowever there is one problem with this tree 
kernel the subtrees involved in the tree kernel computation are contextfree this is contrast to 2 that is each node n encodes the identity of a subtree rooted at n and if there are two nodes in the tree with the same label the summation will go over both of themthe tree kernel proposed in culota and sorensen which is contextsensitive that is it considers the path from the tree root node to the subtree root nodein order to integrate the advantages of both tree kernels and resolve the problem in collins and duffys convolution tree kernel this paper proposes a contextsensitive convolution tree kernelit works by taking ancestral information of subtrees into consideration d m i nnnn ii c iiii nnttk 1 2211 11 1111 2121 where 1 jn i is the set of root node paths with length i in tree tj while the maximal length of a root node path is defined by m 21 our tree ker nel computes 21 if the contextsensitive productions rules with root node paths as their left hand sides rooted at 11in and 21 in are different return 21 if both 11n and 21n are pos tags ld 21 calculate 2112121 where is the decay factor in order to make the kernel value less variable with respect to different sizes of the contextsensitive subtreesit is worth comparing our tree kernel with previous tree kernelsobviously our tree kernel is an exten sion of collins and duffys convolution tree kernel which is a special case of our tree kernel our tree kernel not only counts the occurrence of each contextfree subtree which does not consider its ancestors but also counts the occurrence of each contextsensitive subtree which con siders its ancestorsas a result our tree kernel is not limited by the constraints in previous tree kernels such as collins and duffy zhang et al culotta and sorensen and bunescu and mooney finally let us study the computational issue with our tree kernelalthough our tree kernel takes the context sensitive subtrees into consideration it only slightly increases the computational burden compared with collins and duffys convolution tree kernelthis is due to that 021 and that computation for contextsensitive sub tree pairs is necessary only when 021this paper uses the ace rdc 2003 and 2004 cor pora provided by ldc in all our experiments41 experimental settingthe ace rdc corpora are gathered from various newspapers newswire and broadcastsin the 2003 corpus the training set consists of 674 documents and 9683 positive relation instances w hile the test set consists of 97 documents and 1386 positive relation in stancesthe 2003 corpus defines 5 entity types 5 major relation types and 24 relation subtypesall the reported performances in this paper on the ace rdc 2003 corpus are evaluated on the test datathe 2004 corpus contains 451 documents and 5702 positive relation instancesit redefines 7 entity types 7 major relation types and 23 relation subtypesfor compari son we use the same setting as zhang et al by applying a 5fold crossvalidation on a subset of the 2004 data containing 348 documents and 4400 rela tion instancesthat is all the reported performances in this paper on the ace rdc 2004 corpus are evalu ated using 5fold cross validation on the entire corpus both corpora are parsed using charniaks parser with the boundaries of all the entity mentions kept 4 we iterate over all pairs of entity mentions occurring in the same sentence to generate potential relation instances5in our experimentation svm is selected as our classifierfor efficiency we apply the one vs others strategy which builds k classifiers so as to separate one 
class from all othersthe training parameters are chosen using crossvalidation on the ace rdc 2003 training datain particular l in our tree kernel is finetuned to 05this suggests that about 50 dis count is done as our tree kernel moves down one level in computing 21moreover we only model explicit relations and explicitly model the argument order of the two mentions in volved733finally we compare our system with the stateof theart systems in the literaturecontextsensitive convolution tree kernel in this paper the m parameter of our contextsensitive convolution tree kernel as shown in equation indicates the maximal length of root node paths and is optimized to 3 using 5fold cross validation on the ace rdc 2003 training datatable 1 compares the impact of different m in contextsensitive convolution tree kernels using the shortest pathenclosed tree on the major relation types of the ace rdc 2003 and 2004 corpora in detailsit also shows that our tree kernel achieves best performance on the test data using spt with m 3 which outperforms the one with m 1 by 23 in fmeasurethis suggests the parent and grandparent nodes of a subtree contains much information for relation extraction while considering more ancestral nodes may not helpthis may be due to that although our experimentation on the training data indicates that more than 80 of subtrees has a root node path longer than 3 including a root node path longer than 3 may be vulnerable to the full parsing errors and have negative impacttable 1 also evaluates the impact of entityrelated information in our tree kernel by attaching entity type information into both entity nodesit shows that such information can significantly improve the performance by 60 in fmeasurein all the following experiments we will apply our tree kernel with m3 and entityrelated information by defaulttable 2 compares the dynamic contextsensitive tree span with spt using our tree kernelit shows that the dynamic tree span can futher improve the performance by 12 in fmeasure6this suggests the usefulness of extending the tree span beyond spt for the predicatelinkedtree span categoryin the future work we will further explore expanding the dynamic tree span beyond spt for the remaining tree span categories6 significance test shows that the dynamic tree span performs s tatistically significantly better than spt with p values smaller than 005m p r f 1 723 566 635 2 749 579 653 3 757 583 659 4 760 583 660 a without entityrelated information m p r f 1 772 635 697 2 791 650 713 3 796 656 719 4 794 656 718 b with entityrelated information table 1 evaluation of contextsensitive convolution tree kernels using spt on the major relation types of the ace rdc 2003 and 2004 corporatree span p r f shortest path enclosed tree 796 656 719 dynamic context sensitive tee 811 667 732 table 2 comparison of dynamic contextsensitive tree span with spt using our contextsensitive convolution tree kernel on the major relation types of the ace rdc 2003 and 2004 corpora18 of positive instances in the ace rdc 2003 test data belong to the predicatelinked categorycomposite kernel in this paper a composite kernel via polynomial interpolation as described zhang et al is ap plied to integrate the proposed contextsensitive convolution tree kernel with a stateoftheart linear kernel 7 1 here pk is the poly nomial expansion of k with degree d2 ie 2 1pk k and a is the coefficient 7 here we use the same set of flat features as zhou et al734 table 3 evaluates the performance of the composite kernelit shows that the composite 
kernel much further improves the performance beyond that of either the stateoftheart linear kernel or our tree kernel and achieves the fmeasures of 741 and 758 on the major relation types of the ace rdc 2003 and 2004 corpora respectivelythis suggests that our tree kernel and the stateoftheart linear kernel are quite complementary and that our composite kernel can effectively integrate both flat and structured featuressystem p r f linear kernel 782 634 701 contextsensitive con volution tree kernel 811 667 732 composite kernel 822 702 758 table 3 performance of the compos ite kernel via polynomial interpolation on the major relation types of the ace rdc 2003 and 2004 corpora comparison with other systems ace rdc 2003 p r f ours composite kernel 808 684 741 zhang et al composite kernel 773 656 709 ours contextsensitive convolution tree kernel 801 638 710 zhang et al convolution tree kernel 761 626 687 bunescu et al shortest path dependency kernel 655 438 525 culotta et al dependency kernel 671 350 458 zhou et al featurebased 772 607 680 kambhatla featurebased table 4 comparison of difference systems on the ace rdc 2003 corpus over both 5 types and 24 subtypes ace rdc 2004 p r f ours composite kernel 822 702 758 zhang et al composite kernel 761 684 721 zhao et al8 composite kernel 692 705 704 ours contextsensitive convolution tree kernel 811 667 732 zhang et al convolution tree kernel 725 567 636 table 5 comparison of difference systems on the ace rdc 2004 corpus over both 7 types and 23 subtypes finally tables 4 and 5 compare our system with other stateoftheart systems9 on the ace rdc 2003 and 2004 corpora respectivelythey show that our tree kernelbased system outperforms previous tree kernelbased systemsthis is largely due to the con textsensitive nature of our tree kernel which resolves the limitations of the previous tree kernelsthey also show that our tree kernelbased system outperforms the stateoftheart featurebased systemthis proves the great potential inherent in the parse tree structure for relation extraction and our tree kernel takes a big stride towards the right directionfinally they also show that our composite kernelbased system outper forms other composite kernelbased systemsstructured parse tree information holds great potential for relation extractionthis paper proposes a contextsensitive convolution tree kernel to resolve two critical problems in previous tree kernels for relation ex traction by first automatically determining a dynamic contextsensitive tree span and then applying a con textsensitive convolution tree kernelmoreover this paper evaluates the complementary nature between our tree kernel and a stateoftheart linear kernelevaluation on the ace rdc corpora shows that our dynamic contextsensitive tree span is much more suitable for relation extraction than the widely used shortest pathenclosed tree and our tree kernel outperforms the stateoftheart collins and duffys con volution tree kernelit also shows that featurebased 8 there might be some typing errors for the performancereported in zhao and grishman since p r and f do not match9 all the stateoftheart systems apply the entityrelatedinformationit is not supervising our experiments show that using the entityrelated information gives a large performance improvement735 and tree kernelbased methods well complement each other and the composite kernel can effectively inte grate both flat and structured featuresto our knowledge this is the first research to dem onstrate that without extensive feature engineer ing an 
individual tree kernel can achieve much better performance than the stateoftheart linear kernel in relation extraction. this shows the great potential of structured parse tree information for relation extraction, and our tree kernel takes a big stride in the right direction. for future work we will focus on improving the contextsensitive convolution tree kernel by exploring more useful context information. moreover, we will explore more entityrelated information in the parse tree; our preliminary work of including the entity type information already improves the performance significantly. finally, we will study how to resolve the data imbalance and sparseness issues from the learning algorithm viewpoint. acknowledgement: this research is supported by project 60673041 under the national natural science foundation of china and project 2006aa01z147 under the 863 national hightech research and development of china. we would also like to thank the four anonymous reviewers for their critical and insightful comments.
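the kernel computation described above builds on the collins and duffy convolution tree kernel, which counts common subtrees of two parse trees with a decay factor lambda (tuned to 0.5 here) that discounts each additional level of depth. the following is a minimal python sketch of that standard recursion, offered only as an illustration: the node representation and function names are ours, and the contextsensitive extension of the paper (summing over ancestor node paths up to length m = 3) is not shown.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def production(n: Node) -> tuple:
    # the grammar rule expanded at this node, e.g. ('NP', 'DT', 'NN')
    return (n.label,) + tuple(c.label for c in n.children)

def nodes(t: Node):
    yield t
    for c in t.children:
        yield from nodes(c)

def delta(n1: Node, n2: Node, lam: float = 0.5) -> float:
    # discounted number of common subtrees rooted at n1 and n2
    if not n1.children or not n2.children:          # terminal leaves carry no subtree
        return 0.0
    if production(n1) != production(n2):
        return 0.0
    if all(not c.children for c in n1.children):    # matching pre-terminals
        return lam
    prod = lam
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + delta(c1, c2, lam)
    return prod

def tree_kernel(t1: Node, t2: Node, lam: float = 0.5) -> float:
    return sum(delta(a, b, lam) for a in nodes(t1) for b in nodes(t2))

in practice the raw kernel value is usually normalized by dividing by the square root of tree_kernel(t1, t1) * tree_kernel(t2, t2) before it is used in the classifier.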
D07-1076
tree kernelbased relation extraction with contextsensitive structured parse tree information. this paper proposes a tree kernel with contextsensitive structured parse tree information for relation extraction. it resolves two critical problems in previous tree kernels for relation extraction in two ways. first, it automatically determines a dynamic contextsensitive tree span for relation extraction by extending the widely used shortest pathenclosed tree (spt) to include necessary context information outside the spt. second, it proposes a contextsensitive convolution tree kernel which enumerates both contextfree and contextsensitive subtrees by considering their ancestor node paths as their contexts. moreover, this paper evaluates the complementary nature of our tree kernel and a stateoftheart linear kernel. evaluation on the ace rdc corpora shows that our dynamic contextsensitive tree span is much more suitable for relation extraction than the spt and that our tree kernel outperforms the stateoftheart collins and duffy convolution tree kernel. it also shows that our tree kernel achieves much better performance than the stateoftheart linear kernels. finally, it shows that featurebased and tree kernelbased methods complement each other well and that the composite kernel can effectively integrate both flat and structured features. our composite kernel depends partially on a full parse and partially on a collection of shallow syntactic features.
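the composite kernel mentioned above combines a flat linear kernel over lexical and syntactic features with the tree kernel by polynomial interpolation, following zhang et al.: the linear kernel is first expanded polynomially with degree d = 2 and then interpolated with the tree kernel using a coefficient alpha. the sketch below only illustrates that combination; the kernel callables, the normalization helper and the alpha value are placeholders, not the exact settings used in the paper.

import math

def normalized(k):
    # standard kernel normalization, commonly applied before interpolation
    def k_hat(x, y):
        return k(x, y) / math.sqrt(k(x, x) * k(y, y))
    return k_hat

def composite_kernel(x, y, k_linear, k_tree, alpha=0.4, degree=2):
    # polynomial expansion of the linear kernel: k_poly(a, b) = (k_linear(a, b) + 1) ** degree,
    # then interpolation: alpha * k_poly + (1 - alpha) * k_tree
    def k_poly(a, b):
        return (k_linear(a, b) + 1.0) ** degree
    k_p, k_t = normalized(k_poly), normalized(k_tree)
    return alpha * k_p(x, y) + (1.0 - alpha) * k_t(x, y)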
chinese syntactic reordering for statistical machine translation syntactic reordering approaches are an effective method for handling wordorder differences between source and target lan guages in statistical machine translation systems this paper introduces a reordering approach for translation from chinese to english we describe a set of syntac tic reordering rules that exploit systematic differences between chinese and english word order the resulting system is used as a preprocessor for both training and test sentences transforming chinese sentences to be much closer to english in terms of their word order we evaluated the reordering approach within the moses phrasebased smt system the reordering approach improved the bleu score for the moses system from 2852 to 3086 on the nist 2006 evaluation data wealso conducted a series of experiments to an alyze the accuracy and impact of different types of reordering rules syntactic reordering approaches are an effective method for handling systematic differences in word order between source and target languages within the context of statistical machine translation systems in reordering approaches sentences in the source language are first parsed for example using a treebanktrained parsera series of transformations is then applied to the resulting parse tree with the goal of transforming the source language sentence into a word order that is closer to that of the targetlanguagethe reordering process is used to prepro cess both the training and test data used within an existing smt systemreordering approaches have given significant improvements in performance fortranslation from french to english and from german to english this paper describes a syntactic reordering approach for translation from chinese to englishfigure 1 gives an example illustrating some of the dif ferences in word order between the two languagesthe example shows a chinese sentence whose literal translation in english is this is french delegation at winterolympics on achieve dec best accom plishment and where a natural translation would be this is the best accomplishment that the french delegation achieved at the winter olympics as exemplified by this sentence chinese differsfrom english in several important respects for ex ample relative clauses appear before the noun being modified prepositional phrases often appear before the head they modify and so onit can be seen that some significant reordering of the input is requiredto produce a good english translationfor this ex ample application of reordering rules leads to a newchinese string whose wordbyword english para phrase is 737 before syntactic reordering after syntactic reordering ip np pn vp vc np cp ip np nr i nn l vp pp p 3 lcp np nn g nr lc vpa vv dec adjp jj npb nn 1 ip np pn vp vc np adjp jj npb nn 1 cp dec ip np nr i nn l vp vpa vv pp p 3 lcp lc np nn g nr figure 1 original and reordered parse trees for the chinese sentence il3 g 1which translates into this is the best accomplishment that the french delegation achieved at the winter olympicsin englishthis is best accomplishment dec french delegation achieve at on winter olympics this reordering is relatively easy to express using syntactic transformationsfor example it is simpleto move the entire relative clause french delegation at winter olympics on achieve dec to a posi tion that is after the noun phrase it modifies namely best accomplishmentphrasebased systems arequite limited in their ability to perform transformations of this typemore recently developed hier 
archical systems may be betterequipped to deal with reordering of this type how ever in this example they would effectively have to first identify the span of the relative clause and then move it into the correct position without any explicit representation of the source language syntaxin this paper we describe a set of syntactic reordering rules that exploit systematic differences between chinese and english word orderthe result ing system is used as a preprocessor for both training and test sentences transforming chinese sentences to be much closer to englishwe report results forthe method on the nist 2006 evaluation data us ing the moses phrasebased smt system the reordering rules give an improve ment in accuracy from 2852 to 3086 bleu scorea concern for methods that make use of chinese parsers is that these parsers are typically of relativelylow accuracy particularly given that chinese re quires a wordsegmentation step that is not required in languages such as englishour results show that chinese parses are useful in smt in spite of this problemwe report results showing the precisionof the reordering rulesessentially testing how of ten the chinese sentences are correctly reorderedto give more insight into this issuewe also report experiments which assess the impact of each type of reordering rule on translation accuracya number of researchers have described ap proaches that preprocess the source language input in smt systemswe are not however aware of work on this topic for translation from chinese to englishbrown et al describe an analysis component for french which moves phrases around so the source and target sentences are closer to each other in word orderberger et al describe an approach for french that reorders phrases of the form noun1 denoun2xia and mccord describe an approach for french where reordering rules that operate on contextfree rule productions are acquired au 738tomaticallyniessen and ney describe an ap proach for translation from german to english that combines verbs with associated particles and alsoreorders questionscollins et al also describe an approach for german concentrating on re ordering german clauses which have quite different word order from clauses in englishour approach is most similar to that of collins et al most smt systems employ some mechanism that allows reordering of the source language during translation themoses phrasebased system that we use has a rel atively simple reordering model which has a fixed penalty for reordering moves in the decodermoresophisticated models include reordering parameters that are sensitive to lexical information the model of chiang employsa synchronous contextfree grammar to allow hierarchical approaches to reorderingthe syntax based models of yamada and knight andmarcu et al build a full parse tree in the target language again effectively allowing hierarchi cal reordering based on synchronous grammarsitis worth noting that none of these approaches to re ordering make use of explicit syntactic information in the source languagefor example none of the methods make use of an existing sourcelanguage parser and marcu et al make use of a parser in the target language ie englishfinally note that a number of statistical mt systems make use of source language syntax in transducerstyle approaches see in contrast to the prepro cessing approach they attempt to incorporate syntax directly into the decoding stagewe used the penn chinese treebank guidelines in searching for a suitable set of reordering ruleswe examined all phrase 
types in the treebank potentially phrases of any type could be can didates for reordering rulestable 1 provides a list of treebank phrase tags for easy referencewe ruled out several phrase types as not requiring reordering adjp adjective phrase advp adverbial phrase headed by ad clp classifier phrase cp clause headed by c dnp phrase formed by xpdegdp determiner phrase dvp phrase formed by xpdevfrag fragment ip simple clause headed by i lcp phrase formed by xplclst list marker np noun phrase pp preposition phrase prn parenthetical qp quantifier phrase ucp unidentical coordination phrase vp verb phrase table 1 penn chinese treebank phrase tagsrulesfor example chinese adjps advps dpsqps and pps all have similar internal word order ing to their english counterpartsalso similar are a group of special structures such as lst frag and prnwe identified three categories that we consideredto be the most prominent candidates for reorder ingthese phrases include vps nps and lcps in the following we discuss each of the three main categories in more detail31 verb phrasesin chinese verb phrase modifiers typically occur in preverbal positionvp modifiers can be advps temporal and spatial nps qp pps cps ips dvps and lcpsthe advps are simple adverbs which can occur both preverbal and postverbal in an english verb phrase so we do not attempt to move themsimilarly the cp ip and dvp modifiers are typically adverbial phrases which do not have afixed position in english verb phrasesin the follow ing we only consider cases involving pps lcps temporal and spatial nps and qpspps and lcps figure 2 shows an example verb phrase with a pp modifier which translates literally 739 vp pp p 3 npa npb nn nn vpa vv qp od 1 figure 2 example vp with pp modifierthe phrase translates into ranks 10th in the eastern divisionvp np npb nt you nt vpa vv ul npa npb nn figure 3 example vp with temporal np modifierthe phrase translates into issued a statement that morninginto at eastern division rank 10threcognizingthat pps in english verb phrases almost always occur after the verb we use a simple vp re ordering rule which states that a pp in a parent vp needs to be repositioned after the sibling vplcps are similar to pps and typically map to prepositional phrases in englishthus they are handled similarly to pps ie lcps in a parent vp are repositioned after the sibling vpnps figure 3 gives an example of a verb phrasewith a temporal np modifier which literally trans lates into same day morning issue statementinenglish temporal phrases such as these almost al ways occur after the head verbconveniently the chinese treebank uses the part of speech tag nt for temporal nounsthus we use a rule whichstates that a preverbal np will be repositioned af ter the sibling vp if there is at least one nt in the np subtreea similar rule might apply to locative nps however there is no special pos tag in the treebank marking locations1 so we do not have a syntaxbased reordering rule to handle locative npsqps qp modifiers in verb phrases often corre spond to timerelated concepts such as duration and frequencyfigure 4 shows an example verb phrase with a qp modifier literally translating into many time injuredsince temporal phrases almost alwaysoccur after the verb in english verb phrases we han 1one can argue that nr in that context are likely to be placeshowever there also exist many exceptions and so we decided not to exploit the nr tagvp qp cd clp m g vpa vv figure 4 example vp with qp modifierthe phrase translates into injured many timesnpa dnp pp p npa npb 
nr 9n deg npb nn l nn figure 5 an example chinese np with a dnp mod ifier headed by a ppthe phrase translates into the financial aid to zimbabwein englishdle such cases by a simple rule which states that theqp in a parent vp will be repositioned after the sib ling vp32 noun phrasesnoun phrases in chinese can take several types of modifiers for example phrases of type qp dp adjp np dnp and cpthe placement of qp dp and adjp modifiers is somewhat similar to english in that these phrases typically occur before the noun they modifythe case of np modifiers in nps is very limited in the chinese treebank since most nounnoun sequences form compounds in a single nphence we only developed reordering rules to handle dnp and clausal modifiersdnps dnps are formed by xpdegwhere xp can be a phrase of the type adjp qp pp lcp or npwhen the xp is an adjp or a qp no reordering is needed because the word order is the same as that of englishwhen the xp is a pp or an lcp the dnp essentially corresponds to a prepositional phrase in en glish which almost always appears after the noun being modifiedfigure 5 shows an example where the xp in the dnp is a ppthe reordering rule to handle these two cases states that if a parent np has a child dnp which in turn has a child pp or lcp then the dnp is repositioned after the last sibling npfigure 6 shows an example noun phrase for which the xp in the dnp is npon the surface the chinesenp1 deg np2sequence is analogous to the en glish possessive structure of np1s np2and does 740 npa dnp np dp dt t clp m npb nn e deg npb nn figure 6 an example chinese np phrase with a dnp modifier headed by a npthe phrase translates into the mastery of this techniquein englishnot require reordering for example lin chinese and sues friendin en glishhowever the chinese possessive structurenp1 deg np2can express more sophisticated re lationships which are inappropriate for the np1snp2expressionfor example the phrase in fig ure 6 can only be translated into the mastery of this techniquebut not this techniques masterywe decide to reorder dnps of the npdegfor mat because they often can only map to the np2 of np1expression in englishadditionally the np2 of np1expression is more general and can replace np1s np2in many casesone exception is when the np is a pronoun eg iin which case the dnp acts simply like a possessive pronounour reordering rule thus states that if a parent np has a child dnpwhich in turn hasa child np that is not a pn then the dnp is reposi tioned after the last sibling npcps relative clauses correspond to the cp cate gory in the treebankfigure 7 shows an examplenoun phrase with two nested cp modifiersas illus trated in the figure relative clauses in chinese also occur before the noun they modify which makes the word order of this sentence quite different from that of the english translationsuch distortions in the word reordering will be quite difficult for the word or phrasebased alignment model to capturehowever with the application of a reordering ruleto reposition the child cp after its sibling np un der a parent np and the pp vp reordering rule for vp introduced previously the sentence can be easily transformed into french delegation participate 8th handicap people winter olympics hold at us salt lake citya sentence whose word order is much closer to that of englishcp is typically formed by ipdec in which decs only function is to mark the ip as a relative np cp ip vp vv np cp ip vp pp p 3 np nr i nr vp vv 1 dec qp od 1l clp m 3 npb nn nr dec npb nr i npb nn lfigure 7 an example 
with two nested cp modifiers. the phrase translates into the french delegation participating in the 8th special winter olympics held in salt lake city, us.
(figure residue: parse tree of an example lcp, an ip consisting of an np (nn) and a vp (vv) followed by the localizer lc)
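the vp rules described above move preverbal pp, lcp, temporal np and qp modifiers to a position after the sibling vp. below is a small python sketch of that kind of transformation using the nltk tree class; the traversal code, the romanized example tokens and the simplified rule conditions are ours for illustration, not the implementation used in the paper.

from nltk import Tree

MOVABLE = {"PP", "LCP", "QP"}   # preverbal modifiers repositioned after the verb

def is_temporal_np(node):
    # an np modifier is moved only if it contains a temporal noun (pos tag NT)
    return node.label() == "NP" and any(t.label() == "NT" for t in node.subtrees())

def reorder_vp(tree):
    # move qualifying modifiers of a vp to positions after the sibling vp
    for node in [n for n in tree.subtrees() if n.label() == "VP"]:
        kids = list(node)
        if not any(isinstance(c, Tree) and c.label().startswith("VP") for c in kids):
            continue   # the rule only applies when a sibling vp is present
        movers = [c for c in kids
                  if isinstance(c, Tree) and (c.label() in MOVABLE or is_temporal_np(c))]
        if movers:
            node[:] = [c for c in kids if not any(c is m for m in movers)] + movers
    return tree

# the example of figure 2, romanized: "at eastern division rank 10th" -> "rank 10th at eastern division"
t = Tree.fromstring("(VP (PP (P zai) (NP (NN dongbu) (NN saiqu))) (VPA (VV paiming) (QP (OD di10))))")
print(reorder_vp(t))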
D07-1077
chinese syntactic reordering for statistical machine translation. syntactic reordering approaches are an effective method for handling wordorder differences between source and target languages in statistical machine translation systems. this paper introduces a reordering approach for translation from chinese to english. we describe a set of syntactic reordering rules that exploit systematic differences between chinese and english word order. the resulting system is used as a preprocessor for both training and test sentences, transforming chinese sentences to be much closer to english in terms of their word order. we evaluated the reordering approach within the moses phrasebased smt system; the reordering approach improved the bleu score for the moses system from 28.52 to 30.86 on the nist 2006 evaluation data. we also conducted a series of experiments to analyze the accuracy and impact of different types of reordering rules. our rule set substantially decreases the total number of rule applications, by about 60%, compared with a constituentbased approach. chinese ordering differs from english mainly in clause ordering.
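the np rules summarized above reposition a dnp modifier (when its first constituent is a pp, an lcp, or an np that is not a pronoun) and a cp relative clause so that they follow the noun they modify. a sketch in the same style as the vp rule above, again only illustrative and not the authors' implementation; the example reproduces the structure of figure 5 with romanized tokens.

from nltk import Tree

def dnp_should_move(dnp):
    # dnp = xp + DEG; it is moved when xp is a pp, an lcp, or an np that is not a pronoun
    xp = dnp[0] if len(dnp) and isinstance(dnp[0], Tree) else None
    if xp is None:
        return False
    if xp.label() in ("PP", "LCP"):
        return True
    return xp.label().startswith("NP") and not any(t.label() == "PN" for t in xp.subtrees())

def reorder_np(tree):
    # reposition qualifying dnp and cp children of an np after the last sibling np
    for node in [n for n in tree.subtrees() if n.label().startswith("NP")]:
        kids = list(node)
        movers = [c for c in kids if isinstance(c, Tree) and
                  ((c.label() == "DNP" and dnp_should_move(c)) or c.label() == "CP")]
        if movers:
            node[:] = [c for c in kids if not any(c is m for m in movers)] + movers
    return tree

# the example of figure 5, romanized: "to zimbabwe DE financial aid" -> "financial aid to zimbabwe"
t = Tree.fromstring("(NPA (DNP (PP (P dui) (NPA (NPB (NR jinbabuwei)))) (DEG de)) "
                    "(NPB (NN caizheng) (NN yuanzhu)))")
print(reorder_np(t))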
online largemargin training for statistical machine translation we achieved a state of the art performance in statistical machine translation by using a large number of features with an onlinelargemargin training algorithm the mil lions of parameters were tuned only on a small development set consisting of less than1k sentences experiments on arabicto english translation indicated that a modeltrained with sparse binary features outper formed a conventional smt system with a small number of features the recent advances in statistical machine translation have been achieved by discriminatively training a small number of realvalued features based ei ther on phrasebased translation orsyntaxbased translation how ever it does not scale well with a large number of features of the order of millionstillmann and zhang liang et al and bangalore et al introduced sparse binary features for statistical machine translation trained ona large training corpusin this framework the prob lem of translation is regarded as a sequential labeling problem in the same way as partofspeech tagging chunking or shallow parsinghowever the use of alarge number of features did not provide any signifi cannot improvements over a conventional small feature setbangalore et al trained the lexical choice model by using conditional random fields realized on a wfsttheir modeling was reduced tomaximum entropy markov model to han dle a large number of features which in turn faced the labeling bias problem tillmann and zhang trained their feature set using an online discriminative algorithmsince thedecoding is still expensive their online training approach is approximated by enlarging a merged k best list onebyone with a 1best outputlianget al introduced an averaged perceptron al gorithm but employed only 1best translationin watanabe et al binary features were trained only on a small development set using a variant of voted perceptron for reranking kbest translationsthus the improvement is merely relative to the baseline translation system namely whether or not there is a good translation in their kbestwe present a method to estimate a large num ber of parameters of the order of millions using an online training algorithmalthough itwas intuitively considered to be prone to overfit ting training on a small development set lessthan 1k sentences was sufficient to achieve improved performancein this method each train ing sentence is decoded and weights are updated atevery iteration when updating model parameters we employ a memorization variant of a local updating strategy in which parameters are optimized toward a set of good translations found in the kbest listacross iterationsthe objective function is an ap proximated bleu thatscales the loss of a sentence bleu to a document wise lossthe parameters are trained using the 764margin infused relaxed algorithm mira is successfully employed in dependency parsing or the jointlabelingchunking task experiments were carried out on an arabictoenglish translation task and we achieved significant improvements over conventional minimum er ror training with a small number of featuresthis paper is organized as follows first section 2 introduces the framework of statistical ma chine translationas a baseline smt system we use the hierarchical phrasebased translation with an efficient lefttoright generation originally proposed by chiang in section 3 a set of binary sparse features are defined including numeric features for our baseline systemsection 4 introduces an online largemargin training algorithm 
using mira with our key componentsthe experiments are presented in section 5 followed by discussion in section 6we use a loglinear approach in whicha foreign language sentence f is translated into an other language for example english e by seeking a maximum solution e argmax e wt h where h is a largedimension feature vectorw is a weight vector that scales the contribution from each featureeach feature can take any real value such as the log of the ngram language model to represent fluency or a lexicon model to capture the word or phrasewise correspondence21 hierarchical phrasebased smtchiang introduced the hierarchical phrase based translation approach in which nonterminalsare embedded in each phrasea translation is gener ated by hierarchically combining phrases using the nonterminalssuch a quasisyntactic structure can naturally capture the reordering of phrases that is notdirectly modeled by a conventional phrasebased approach the nonterminal them bedded phrases are learned from a bilingual corpuswithout a linguistically motivated syntactic struc turebased on hierarchical phrasebased modeling we adopted the lefttoright target generation method this method is able to generate translations efficiently first by simplifyingthe grammar so that the target side takes a phrase prefixed form namely a target normalized formsecond a translation is generated in a lefttoright manner similar to the phrasebased approach using earleystyle topdown parsing on the source sidecoupled with the target normalized form ngram language models are efficiently integrated during the search even with a higher order of n 22 target normalized formin chiang each production rule is restrictedto a rank2 or binarized form in which each rule contains at most two nonterminalsthe target normal ized form further imposes a constraint whereby the target side of the aligned righthand side is restricted to a greibach normal form like structure x b where x is a nonterminal is a source side string ofarbitrary terminals andor nonterminalsb is a corresponding target side where b is a string of termi nals or a phrase and is a stringof nonterminalsdefines onetoone mapping be tween nonterminals in and the use of phraseb as a prefix maintains the strength of the phrase base frameworka contiguous english side with a discontiguous foreign language side pre serves phrasebounded local word reorderingat the same time the target normalized framework stillcombines phrases hierarchically in a restricted man ner23 lefttoright target generationdecoding is performed by parsing on the source side and by combining the projected target sidewe applied an earleystyle topdown parsing approachthe basic idea is to perform topdown parsing so that the projected target side is generated in a lefttoright mannerthe search is guided with a pushdown automaton which keeps track of the span of uncovered source 765word positionscombined with the restcost esti mation aggregated in a bottomup way our decoder efficiently searches for the most likely translationthe use of a target normalized form further sim plifies the decoding proceduresince the rule formdoes not allow any holes for the target side the integration with an ngram language model is straightforward the prefixed phrases are simply concate nated and intersected with ngram31 baseline featuresthe hierarchical phrasebased translation system employs standard numeric value features ngram language model to capture the fluency of the target sidehierarchical phrase translation probabilities in both directions h and h 
estimated by relative counts countwordbased lexically weighted models ofhlex and hlexusing lexical transla tion modelswordbased insertiondeletion penalties that penalize through the low probabilities of the lexical translation models wordhierarchicalphrase length penaltiesbacktrackbased penalties inspired by the dis tortion penalties in phrasebased modeling 32 sparse featuresin addition to the baseline features a large number of binary features are integrated in our mt systemwe may use any binary features such as h 1 english word violateand arabicword tnthkappeared in e and f 0 otherwisethe features are designed by considering the decod ing efficiency and are based on the word alignmentstructure preserved in hierarchical phrase translation pairs when hierarchical phrases are extracted the word alignment is pre servedif multiple word alignments are observed ei1 ei ei1 ei2 ei3 ei4 f j1 f j f j1 f j2 f j3 figure 1 an example of sparse features for a phrase translationwith the same source and target sides only the fre quently observed word alignment is kept to reduce the grammar size321 word pair featuresword pair features reflect the word correspon dence in a hierarchical phrasefigure 1 illustratesan example of sparse features for a phrase trans lation pair f j f j2 and ei ei3 1from theword alignment encoded in this phrase we can ex tract word pair features of and the bigrams of word pairs are also used to capture the contextual dependencywe assumethat the word pairs follow the target side order ingfor instance we define and in dicated by the arrows in figure 1extracting bigram word pair features followingthe target side ordering implies that the corresponding source side is reordered according to the tar get sidethe reordering of hierarchical phrases is represented by using contextually dependent word pairs across their boundaries as with the feature in figure 1322 insertion features the above features are insufficient to capture the translation because spurious words are sometimesinserted in the target sidetherefore insertion fea tures are integrated in which no word alignment isassociated in the targetthe inserted words are asso ciated with all the words in the source sentence such as for the nonaligned word ei1 with the source sentence f j1 in figure 1in the 1for simplicity we show an example of phrase translation pairs but it is trivial to define the features over hierarchical phrases766 f j1 f j f j1 f j2 f j3 x 1 x 2 x 3 figure 2 example hierarchical featuressame way we will be able to include deletion fea tures where a nonaligned source word is associated with the target sentencehowever this would lead to complex decoding in which all the translated wordsare memorized for each hypothesis and thus not in tegrated in our feature set323 target bigram features target side bigram features are also included todirectly capture the fluency as in the ngram language model for instance bi gram features of are observed in figure 1324 hierarchical features in addition to the phrase motivated features weincluded features inspired by the hierarchical struc turefigure 2 shows an example of hierarchical phrases in the source side consisting of x 1 f j1x 2 f j3 x 2 f j f j1x 3 and x 3 f j2 hierarchical features capture the dependency of the source words in a parent phrase to the source words in child phrases such as and as in dicated by the arrows in figure 2the hierarchical features are extracted only for those source wordsthat are aligned with the target side to limit the fea ture size33 
normalizationin order to achieve the generalization capability the following normalized tokens are introduced for each surface form word class or pos4letter prefix and suffixfor instance the word algorithm 1 online training algorithm training data t tt1 mbest oracles o tt1 i 0 1 for n 1 n do 2 for t 1 t do 3 ct bestk 4 ot oraclem 5 wi1 update wi using ct wrt ot 6 i i 1 7 end for 8 end for 9 return nt i1 w i nt violateis normalized to violand lateby taking the prefix and suffix respectivelydigits replaced by a sequence of for ex ample the word 2007627is represented as we consider all possible combination of those token typesfor example the word pair feature is normalized and expanded to etc using the 4letter prefix token typealgorithm 1 is our generic online training algo rithmthe algorithm is slightly different from other online training algorithms in that we keep and update oracle translations which is a set of good translations reachable by a decoder according to a met ric ie bleu in line 3a kbest list is generated by bestkusing the cur rent weight vector wi for the training instance ofeach training instance has multiple reference translations et for the source sentence f t using the kbest list mbest oracletranslations ot is updated by oraclemfor every it eration usually a decoder cannot generatetranslations that exactly match the reference transla tions due to its beam search pruning and oovthus we cannot always assign scores for each reference translationtherefore possible oracle translations are maintained according to an objective function 767 ie bleutillmann and zhang avoided the problem by precomputing the oracle translations inadvanceliang et al presented a similar up dating strategy in which parameters were updated toward an oracle translation found in ct but ignored potentially better translations discovered in the past iterationsnew wi1 is computed using the kbest list ct with respect to the oracle translations ot after n iterations the algorithm returns an averaged weight vector to avoid overfitting the key to thisonline training algorithm is the selection of the up dating scheme in line 541 margin infused relaxed algorithmthe margin infused relaxed algorithm is an online version of thelargemargin training algorithm for structured classification that has been suc cessfully used for dependency parsing and jointlabelingchunking the basic idea is to keep the norm of the updates to the weight vector as small as possible considering a margin at least as large as the loss of the incorrect classificationline 5 of the weight vector update procedure in algorithm 1 is replaced by the solution of wi1 argmin wi1 wi1 wi c ee subject to si1si1 l 0 e ote ct where si wit his a non negative slack variable and c 0 is a constant to control the influence to the objective functionalarger c implies larger updates to the weight vec torlis a loss function for instance difference of bleu that measures the difference between eand eaccording to the reference translations et in thisupdate a margin is created for each correct and in correct translation at least as large as the loss of the incorrect translationa larger error means a largerdistance between the scores of the correct and incor rect translationsfollowing mcdonald et al only kbest translations are used to form the margins in order to reduce the number of constraints in eq3in the translation task multiple translations are ac ceptablethus margins for moracle translation arecreated which amount to m k largemargin con straintsin this online 
training only active features constrained by eq3 are kept and updated unlike offline training in which all possible features have to be extracted and selected in advancethe lagrange dual form of eq3 is max0 1 2 ee h 2 ee l ee si subject to ee c with the weight vector update wi1 wi ee h equation 4 is solved using a qpsolver such as a co ordinate ascent algorithm by heuristically selecting and by updating iteratively max l si hh2 c is used to clip the amount of updatesa single oracle with 1best translation is analyti cally solved without a qpsolver and is represented as the following perceptronlike update max 0 min c l si hh2 intuitively the update amount is controlled by themargin and the loss between the correct and incorrect translations and by the closeness of two transla tions in terms of feature vectorsindeed liang et al employed an averaged perceptron algorithm in which value was always set to onetillmann and zhang used a different update style based on a convex loss function l max si 768 table 1 experimental results obtained by varying normalized tokens used with surface form features 2003 2004 2005 nist bleu nist bleu nist bleu surface form 492k 1132 5411 1057 4901 1077 4805 w prefixsuffix 4204k 1238 6387 1042 4874 1058 4718 w word class 2689k 1087 4959 1063 4955 1089 4879 w digits 576k 1101 5072 1066 4967 1084 4839 all token types 13759k 1124 5285 1066 4981 1085 4841 where 0 is a learning rate for controlling the convergence42 approximated bleuwe used the bleu score as the loss function computed by bleu exp 1 n n n1 log pn bp where pnis the ngram precision of hypothesized translations e ettt1 given reference translations e ettt1 and bp1 is a brevity penaltybleuis computed for a set of sentences not for a single sentenceour algorithm requires frequent up dates on the weight vector which implies higher cost in computing the documentwise bleutillmann and zhang and liang et al solved the problem by introducing a sentencewise bleuhowever the use of the sentencewise scoring does not translate directly into the documentwise score because of the ngram precision statistics and the brevity penalty statistics aggregated for a sentence setthus we use an approximated bleu score that basically computes bleu for a sentence set but accumulates the difference for a particular sentence the approximated bleu is computed as follows given oracle translations o for t we maintain the best oracle translations ot1 e1 et the approximated bleu for a hypothesized translation efor the training instance is computed over ot1 except for et which is replaced by e bleuthe loss computed by the approximated bleu measures the documentwise loss of substituting the cor rect translation et into an incorrect translation ethe score can be regarded as a normalization which scales a sentencewise score into a documentwise scorewe employed our online largemargin training pro cedure for an arabictoenglish translation taskthe training data were extracted from the ara bicenglish newsun bilingual corpora supplied by ldcthe data amount to nearly 38m sentencesthe arabic part of the bilingual data is tokenized by isolating arabic scripts and punctuation marksthedevelopment set comes from the mt2003 arabic english nist evaluation test set consisting of 663 sentences in the news domain with four reference translationsthe performance is evaluated by the news domain mt2004mt2005 test set consisting of 707 and 1056 sentences respectivelythe hierarchical phrase translation pairs are ex tracted in a standard way first the bilingual data are 
word alignment annotated byrunning giza in two di rectionssecond the word alignment is refined by a growdiagfinal heuristic third phrase translation pairs are extracted together with hierarchical phrases by considering holesin the last step the hierarchical phrases are constrainedso that they follow the target normalized form con strainta 5gram language model is trained on the english side of the bilingual data combined with the english gigaword from ldcfirst the use of normalized token types in sec tion 33 is evaluated in table 1in this setting all the structural features in section 32 are used but differentiated by the normalized tokens combinedwith surface formsour online largemargin train ing algorithm performed 50 iterations constrained 769 table 2 experimental results obtained by incrementally adding structural features features 2003 2004 2005 nist bleu nist bleu nist bleu word pairs 11042k 1105 5163 1043 4869 1073 4772 target bigram 11230k 1119 5349 1040 4860 1066 4747 insertion 13489k 1121 5220 1077 5033 1093 4808 hierarchical 13759k 1124 5285 1066 4981 1085 4841 table 3 experimental results for varying kbest and moracle translations features 2003 2004 2005 nist bleu nist bleu nist bleu baseline 1064 4647 1083 4933 1090 4703 1oracle 1best 8735k 1125 5263 1082 5077 1093 4811 1oracle 10best 10480k 1124 5345 1055 4910 1082 4849 10oracle 1best 8416k 1070 4763 1083 4888 1076 4600 10oracle 10best 13759k 1124 5285 1066 4981 1085 4841 sentencebleu 14587k 1110 5117 1082 4997 1086 4704 by 10oracle and 10best listwhen decoding a 1000best list is generated to achieve better oracle translationsthe training took nearly 1 day using 8cores of opteronthe translation quality is eval uated by casesensitive nist and bleu 2the table alsoshows the number of active features in which non zero values were assigned as weightsthe addition of prefixsuffix tokens greatly increased the number of active featuresthe setting severely overfit to the development data and therefore resulted in worse results in open teststhe word class3 with surfaceform avoided the overfitting problemthe digit sequence normalization provides a similar generaliza tion capability despite of the moderate increase in the active feature sizeby including all token types we achieved better nistbleu scores for the 2004and 2005 test setsthis set of experiments indi cates that a token normalization is useful especially trained on a small datasecond we used all the normalized token typesbut incrementally added structural features in ta ble 2target bigram features account for only the fluency of the target side without considering thesourcetarget correspondencetherefore the in 2we used the tool available at httpwwwnistgov speechtestsmt 3we induced 50 classes each for english and arabicclusion of target bigram features clearly overfit to the development datathe problem is resolved byadding insertion features which can take into ac count an agreement with the source side that is notdirectly captured by word pair featureshierarchi cal features are somewhat effective in the 2005 test set by considering the dependency structure of the source sidefinally we compared our online training algo rithm with sparse features with a baseline system in table 3the baseline hierarchical phrasebased system is trained using standard maxbleu training without sparse features table 3 shows the results obtained by varying the moracle and kbest size using all structural features and all token typeswe also experimentedsentencewise bleu as an objective function constrained 
by 10oracle and 10best listeven the 1oracle 1best configuration achieved significant i am provements over the baseline systemthe use ofa larger kbest list further optimizes to the devel opment set but at the cost of degraded translation quality in the 2004 test setthe larger moracle size seems to be harmful if coupled with the 1best listas indicated by the reduced active feature size 1 best translation seems to be updated toward worse translations in 10oracles that are closein terms of featureswe achieved significant improvements 770 table 4 twofold cross validation experimentsclosed test open test nist bleu nist bleu baseline 1071 4479 1068 4444 online 1158 5342 1090 4764 when the kbest list size was also increasedthe use of sentencewise bleu as an objective provides almost no improvement in the 2005 test set but is comparable for the 2004 test setas observed in three experiments the 20042005 test sets behaved differently probably because ofthe domain mismatchthus we conducted a two fold cross validation using the 200320042005 test sets to observe the effect of optimization as shown in table 44the mert baseline system performedsimilarly both in closed and open testsour online largemargin training with 10oracle and 10 best constraints and the approximated bleu lossfunction significantly outperformed the baseline sys tem in the open testthe development data is almost doubled in this settingthe mert approach seems to be confused with the slightly larger data and with the mixed domains from different epochsin this work the translation model consisting of millions of features are successfully integratedin or der to avoid poor overfitting features are limited to wordbased features but are designed to reflect the structures inside hierarchical phrasesone of the benefit of mira is its flexibilitywe may includeas many constraints as possible like moracle con straints in our experimentsalthough we describedexperiments on the hierarchical phrasebased trans lation the online training algorithm is applicable toany translation systems such as phrasebased trans lations and syntaxbased translationsonline discriminative training has already been studied by tillmann and zhang and lianget al in their approach training was per formed on a large corpus using the sparse features ofphrase translation pairs target ngrams andor bag ofword pairs inside phrasesin tillmann and zhang 4we split data by document not by sentence kbest list generation is approximated by a stepbystep onebest merging method that separates the decoding and training stepsthe weight vector update scheme is very similar to mira but basedon a convex loss functionour method directly them ploys the kbest list generated by the fast decoding method at every iterationone of the benefits is that we avoid the rather expen sive cost of merging the kbest list especially when handling millions of featuresliang et al employed an averaged percep tron algorithmthey decoded each training instance and performed a perceptron update to the weight vectoran incorrect translation was updated towardan oracle translation found in a kbest list but discarded potentially better translations in the past iter ationsan experiment has been undertaken using a small development set together with sparse features for the reranking of a kbest translation they relied on a variant of a voted perceptron and achieved significant improvementshow ever their work was limited to reranking thus the improvement was relative to the performance of the baseline system whether or not there 
was a good translation in a list. in our work, the sparse features are directly integrated into the dpbased search. the design of the sparse features was inspired by zens and ney: they exploited the word alignment structure inside the phrase translation pairs for discriminatively training a reordering model in their phrasebased translation. their reordering model simply classifies whether to perform monotone decoding or not, and the trained model is treated as a single feature function integrated in eq. 1. our approach differs in that each sparse feature is individually integrated in eq. 1. we exploited a large number of binary features for statistical machine translation. the model was trained on a small development set. the optimization was carried out by mira, which is an online version of the largemargin training algorithm. millions of sparse features are intuitively considered prone to overfitting, especially when trained on a small development set. however, our algorithm with millions of features achieved very significant improvements over a conventional method with a small number of features. this result indicates that we can easily experiment with many alternative features even with a small data set, but we believe that our approach can scale well to a larger data set for further improved performance. future work involves scaling up to larger data and more features. acknowledgements: we would like to thank the reviewers and our colleagues for useful comments and discussion.
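for a single oracle and a single incorrect hypothesis, the mira update of section 4.1 reduces to a perceptron-like step whose size is clipped by the constant c. the following sketch works on sparse feature vectors represented as python dicts; the function name, the representation and the value of c are ours, not the authors' code.

def mira_update(w, h_oracle, h_hyp, loss, C=0.01):
    # w, h_oracle, h_hyp: sparse feature vectors as {feature: value} dicts
    # loss: difference in (approximated) bleu between oracle and hypothesis
    # C: cap on the step size (illustrative value only)
    diff = dict(h_oracle)                  # h(e*) - h(e')
    for f, v in h_hyp.items():
        diff[f] = diff.get(f, 0.0) - v
    norm_sq = sum(v * v for v in diff.values())
    if norm_sq == 0.0:
        return w
    margin = sum(w.get(f, 0.0) * v for f, v in diff.items())   # s(e*) - s(e')
    eta = max(0.0, min(C, (loss - margin) / norm_sq))
    for f, v in diff.items():
        w[f] = w.get(f, 0.0) + eta * v
    return w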
D07-1080
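the overall procedure of algorithm 1 decodes each development sentence with the current weights, refreshes an m-best set of oracle translations against the references, updates the weights toward those oracles, and finally returns the weights averaged over all updates. an outline in python is given below; the decoder, scoring and update functions are stand-ins meant only to show the control flow, not the system described in the paper.

def online_train(train, decode_kbest, score, update, epochs=50, k=10, m=10):
    # train:        list of (source, references) pairs
    # decode_kbest: callable (w, source, k) -> k-best candidate translations
    # score:        callable (candidate, references) -> objective (e.g. approximated bleu)
    # update:       callable (w, candidates, oracles, references) -> new weights (e.g. mira)
    w = {}                        # sparse weight vector
    w_sum = {}                    # running sum for weight averaging
    oracles = [[] for _ in train]
    updates = 0
    for _ in range(epochs):
        for t, (src, refs) in enumerate(train):
            cands = decode_kbest(w, src, k)
            pool = oracles[t] + cands
            pool.sort(key=lambda e: score(e, refs), reverse=True)
            oracles[t] = pool[:m]               # keep the m best translations found so far
            w = update(w, cands, oracles[t], refs)
            for f, v in w.items():
                w_sum[f] = w_sum.get(f, 0.0) + v
            updates += 1
    return {f: v / updates for f, v in w_sum.items()}   # averaged weights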
online largemargin training for statistical machine translation. we achieved stateoftheart performance in statistical machine translation by using a large number of features with an online largemargin training algorithm. the millions of parameters were tuned only on a small development set consisting of less than 1k sentences. experiments on arabictoenglish translation indicated that a model trained with sparse binary features outperformed a conventional smt system with a small number of features. we perform bleu computations in the context of a set o of previously translated sentences. we find a possibility of overfitting to the dataset, especially when domain differences are present.
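the approximated bleu used as the loss above scores a hypothesis by substituting it into a document of oracle translations maintained during training and measuring the resulting change in corpuslevel bleu. the sketch below uses a generic corpuslevel bleu implementation for illustration; it is not the scorer used by the authors.

import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hyps, refs, max_n=4):
    # hyps: list of token lists; refs: list of lists of reference token lists
    match, total = [0] * max_n, [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref_set in zip(hyps, refs):
        hyp_len += len(hyp)
        ref_len += min((abs(len(r) - len(hyp)), len(r)) for r in ref_set)[1]
        for n in range(1, max_n + 1):
            clipped = Counter()
            for ref in ref_set:
                clipped |= ngram_counts(ref, n)          # max count over references
            h = ngram_counts(hyp, n)
            match[n - 1] += sum(min(c, clipped[g]) for g, c in h.items())
            total[n - 1] += max(len(hyp) - n + 1, 0)
    if min(match) == 0 or hyp_len == 0:
        return 0.0
    log_prec = sum(math.log(m / t) for m, t in zip(match, total)) / max_n
    brevity = min(1.0, math.exp(1.0 - ref_len / hyp_len))
    return brevity * math.exp(log_prec)

def approx_bleu_loss(hypothesis, t, oracles, refs):
    # documentwise loss of replacing the t-th oracle translation with the hypothesis
    doc = list(oracles)
    before = corpus_bleu(doc, refs)
    doc[t] = hypothesis
    return before - corpus_bleu(doc, refs)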
large language models in machine translation this paper reports on the benefits of largescale statistical language modeling in machine translation a distributed infrastruc ture is proposed which we use to train on up to 2 trillion tokens resulting in language models having up to 300 billion ngrams itis capable of providing smoothed probabilities for fast singlepass decoding we in troduce a new smoothing method dubbed stupid backoff that is inexpensive to train on large data sets and approaches the quality of kneserney smoothing as the amount of training data increases given a sourcelanguage sentence f the problem of machine translation is to automatically produce a targetlanguage translation ethe mathematics of the problem were for malized by and reformulated by in terms of the optimization e arg max e m m1 mhm where hm is a set of m feature functions andm a set of weightsone or more feature func tions may be of the form h h in which case it is referred to as a language modelwe focus on ngram language models which are trained on unlabeled monolingual textas a generalrule more data tends to yield better language mod elsquestions that arise in this context include how might one build a language model that allows scaling to very large amounts of training data how much does translation performance improve as the size of the language model increases is there a point of diminishing returns in performance as a function of language model sizethis paper proposes one possible answer to the first question explores the second by providinglearning curves in the context of a particular statis tical machine translation system and hints that thethird may yet be some time in answeringin particu lar it proposes a distributed language model training and deployment infrastructure which allows direct and efficient integration into the hypothesissearch algorithm rather than a followon rescoring phasewhile it is generally recognized that twopass de coding can be very effective in practice singlepass decoding remains conceptually attractive because it eliminates a source of potential information losstraditionally statistical language models have been designed to assign probabilities to strings of words let wl1 denote a string of l tokens over a fixed vocabularyan ngram language model assigns a probability to wl1 according to p l i1 p l i1 p where the approximation reflects a markov assumption that only the most recent n 1 tokens are rele vant when predicting the next word858 for any substring wji of wl1 let f denote the frequency of occurrence of that substring in another given fixed usually very long targetlanguage string called the training datathe maximumlikelihood probability estimates for the ngrams are given by their relative frequencies are f f while intuitively appealing eq is problematic because the denominator and or numerator mightbe zero leading to inaccurate or undefined probability estimatesthis is termed the sparse data problemfor this reason the ml estimate must be mod ified for use in practice see for a discussion of ngram models and smoothingin principle the predictive accuracy of the language model can be improved by increasing the order of the ngramhowever doing so further exac erbates the sparse data problemthe present work addresses the challenges of processing an amount of training data sufficient for higherorder ngram models and of storing and managing the resulting values for efficient use by the decodermodels the topic of large distributed language models is relatively newrecently a 
twopass approach hasbeen proposed wherein a lower order ngram is used in a hypothesisgeneration phase then later the kbest of these hypotheses are rescored using a largescale distributed language modelthe resulting translation performance was shown to improve appreciably over the hypothesis deemed best by the firststage systemthe amount of data used was 3 billion wordsmore recently a largescale distributed language model has been proposed in the contexts of speech recognition and machine translation the underlying architecture is similar tothe difference is that they integrate the distributed language model into their machine translation decoderhowever they do not re port details of the integration or the efficiency of the approachthe largest amount of data used in the experiments is 4 billion wordsboth approaches differ from ours in that they store corpora in suffix arrays one subcorpus per workerand serve raw countsthis implies that all work ers need to be contacted for each ngram requestin our approach smoothed probabilities are stored and served resulting in exactly one worker beingcontacted per ngram for simple smoothing tech niques and in exactly two workers for smoothing techniques that require contextdependent backofffurthermore suffix arrays require on the order of 8 bytes per tokendirectly storing 5grams is more efficient and allows applying count cutoffs further reducing the size of the modelstateoftheart smoothing uses variations of con textdependent backoff with the following scheme p if is found p otherwise where are precomputed and stored probabili ties and are backoff weightsas examples kneserney smoothing katz backoff and linear interpola tion can be expressed inthis scheme the recursion ends at either unigrams or at the uniform distri bution for zerogramswe introduce a similar but simpler schemenamed stupid backoff 1 that does not generate nor malized probabilitiesthe main difference is that we do not apply any discounting and instead directly use the relative frequencies s f f if f 0 s otherwise 1the name originated at a time when we thought that such a simple scheme cannot possibly be goodour view of the scheme changed but the name stuck859in general the backoff factor may be made to depend on k here a single value is used and heuris tically set to 04 in all our experiments2 the recursion ends at unigrams s f n with n being the size of the training corpusstupid backoff is inexpensive to calculate in a dis tributed environment while approaching the quality of kneserney smoothing for large amounts of datathe lack of normalization in eq does not affect the functioning of the language model in the presentsetting as eq depends on relative rather than ab solute featurefunction valueswe use the mapreduce programming model to train on terabytes of data and to generate terabytes of language modelsin this programming model a userspecified map function processes an input keyvalue pair to generate a set of intermediate keyvalue pairs and a reduce function aggregates all intermediate values associated withthe same keytypically multiple map tasks operate independently on different machines and on different parts of the input datasimilarly multiple re duce tasks operate independently on a fraction of the intermediate data which is partitioned according to the intermediate keys to ensure that the same reducer sees all values for a given keyfor additional detailssuch as communication among machines data struc tures and application examples the reader is referred to our system generates 
language models in three main steps as described in the following sections51 vocabulary generationvocabulary generation determines a mapping ofterms to integer ids so ngrams can be stored us ing idsthis allows better compression than theoriginal termswe assign ids according to term fre quency with frequent terms receiving small ids for efficient variablelength encodingall words that 2the value of 04 was chosen empirically based on good results in earlier experimentsusing multiple values depending on the ngram order slightly improves resultsoccur less often than a predetermined threshold are mapped to a special id marking the unknown wordthe vocabulary generation map function reads training text as inputkeys are irrelevant values are textit emits intermediate data where keys are terms and values are their counts in the current section of the texta sharding function determines which shard the pair is sent tothis ensures that all pairs withthe same key are sent to the same shardthe re duce function receives all pairs that share the same key and sums up the countssimplified the map sharding and reduce functions do the following map keydocid ignored valuedocument array words tokenize hash_map histo for i 1 words histowordsi for iter in histo emit int shardforkey return hash nshards reduce keyterm valuescounts int sum 0 for each v in values sum parseint emit note that the reduce function emits only the aggregated valuethe output key is the same as the intermediate key and automatically written by mapreducethe computation of counts in the map func tion is a minor optimization over the alternative of simply emitting a count of one for each tokenized word in the arrayfigure 1 shows an example for3 input documents and 2 reduce shardswhich re ducer a particular term is sent to is determined by ahash function indicated by text colorthe exact par titioning of the keys is irrelevant important is that all pairs with the same key are sent to the same reducer52 generation of ngramsthe process of ngram generation is similar to vo cabulary generationthe main differences are thatnow words are converted to ids and we emit n grams up to some maximum order instead of single 860 figure 1 distributed vocabulary generationwordsa simplified map function does the follow ing map keydocid ignored valuedocument array ids toids for i 1 ids for j 0 maxorder1 emit again one may optimize the map function by first aggregating counts over some section of the data and then emit the aggregated counts instead of emitting 1each time an ngram is encounteredthe reduce function is the same as for vocabu lary generationthe subsequent step of language model generation will calculate relative frequencies r in order to make that step efficient we use a sharding function that places the values needed for the numerator and denominator into the same shardcomputing a hash function on just the first wordsof ngrams achieves this goalthe required n grams wiin1 and wi1in1 always share the same first word win1 except for unigramsfor that we need to communicate the total count n to all shardsunfortunately sharding based on the first word only may make the shards very imbalancedsometerms can be found at the beginning of a huge num ber of ngrams eg stopwords some punctuation marks or the beginningofsentence markeras an example the shard receiving ngrams starting with the beginningofsentence marker tends to be several times the average sizemaking the shards evenly sized is desirable because the total runtime of the process is determined by the 
largest shardthe shards are made more balanced by hashing based on the first two words int shardforkey string prefix firsttwowords return hash nshards this requires redundantly storing unigram counts inall shards in order to be able to calculate relative fre quencies within shardsthat is a relatively smallamount of information 53 language model generationthe input to the language model generation step is the output of the ngram generation step ngramsand their countsall information necessary to calculate relative frequencies is available within individ ual shards because of the sharding functionthat is everything we need to generate models with stupid backoffmore complex smoothing methods require additional steps backoff operations are needed when the full n gram is not foundif r is not found then we will successively look for r r etc the language model generation step shards ngrams on their last two words so all backoff operations can be done within the same shard 54 other smoothing methodsstateoftheart techniques like kneserney smoothing or katz backoff require additional more expensive stepsat runtime the client needs to additionally request up to 4 backoff factors for each 5gram requested from the servers thereby multiplying network trafficwe are not aware of a method that always stores the history backoff factors on the same shard as the longer ngram without duplicating a large fraction of the entriesthis means one needs to contact two shards per ngram instead of just one for stupid backofftraining requires additional iterations over the data861 step 0 step 1 step 2 context counting unsmoothed probs and interpolweights interpolated probabilities input key wiin1 input value f intermediate key wiin1 wi1in1 win1i sharding wiin1 wi1in1 w in2 in1 unigrams duplicated intermediate value fkn wifkn fkn d fkn output value fkn wi fkn d fkn pkn table 1 extra steps needed for training interpolated kneserney smoothingkneserney smoothing counts lowerorder n grams differentlyinstead of the frequency of the gram it uses the number of unique single word contexts the gram appears inwe use fknto jointly denote original frequencies for the highest order and context counts for lower ordersafter the ngram counting step we process the n grams again to produce these quantitiesthis can be done similarly to the ngram counting using a mapreduce the most commonly used variant of kneserney smoothing is interpolated kneserney smoothing defined recursively as pkn max d 0 fkn pkn where d is a discount constant and are interpolation weights that ensure probabilities sumto onetwo additional major mapreduces are re quired to compute these values efficientlytable 1 describes their input intermediate and output keys and valuesnote that output keys are always the same as intermediate keysthe map function of mapreduce 1 emits ngramhistories as intermediate keys so the reduce func tion gets all ngrams with the same history at the same time generating unsmoothed probabilities and interpolation weightsmapreduce 2 computes theinterpolationits map function emits reversed n grams as intermediate keys all unigrams are duplicated in every reduce shardbecause the reducer function receives intermediate keys in sorted order it can com pute smoothed probabilities for all ngram orders with simple bookkeepingkatz backoff requires similar additional stepsthe largest models reported here with kneserney smoothing were trained on 31 billion tokensfor stupid backoff we were able to use more than 60 times of that amountour goal is to use 
distributed language models in tegrated into the first pass of a decoderthis mayyield better results than nbest list or lattice rescoring doing that for lan guage models that reside in the same machine as the decoder is straightforwardthe decoder accesses ngrams whenever necessarythis is inefficient in a distributed system because network latency causes aconstant overhead on the order of millisecondson board memory is around 10000 times fasterwe therefore implemented a new decoder archi tecturethe decoder first queues some number of requests eg 1000 or 10000 ngrams and thensends them together to the servers thereby exploit ing the fact that network requests with large numbers of ngrams take roughly the same time to complete as requests with single ngramsthe nbest search of our machine translation de coder proceeds as followsit maintains a graph of the search space up to some pointit then extends each hypothesis by advancing one word position inthe source language resulting in a candidate extension of the hypothesis of zero one or more addi tional targetlanguage words in a traditional setting with a local languagemodel the decoder immediately obtains the nec essary probabilities and then decides which hypotheses to keep in the search graphwhen using a distributed language model the decoder first tentatively extends all current hypotheses taking note of which ngrams are required to score themthese are queued up for transmission as a batch requestwhen the scores are returned the decoder revisits all of these tentative hypotheses assigns scores and reprunes the searchgraphit is then ready for the next round of exten sions again involving queuing the ngrams waiting for the servers and pruningthe process is illustrated in figure 2 assuming a trigram model and a decoder policy of pruning tothe four most promising hypothesesthe four ac tive hypotheses at time t are there is there may there are and there werethe decoder extends these to form eight new nodes at time t 1note that one of the arcs is labeled indicating that no targetlanguage word was gener ated when the sourcelanguage word was consumedthe ngrams necessary to score these eight hypothe ses are there is lots there is many there may be there are lots are lots of etc these are queued up and their languagemodel scores requested in a batch mannerafter scoring the decoder prunes this set as indicated by the four black disks at time t 1 then extends these to form five new nodes at time t 2the ngrams necessary to score these hypotheses are lots of people lots of reasons there are onlookers etc again these are sent to the server together and again after scoring the graph is pruned to four active hypothesesthe alternating processes of queuing waiting and scoringpruning are done once per word position in a source sentencethe average sentence length in our test data is 22 words thus wehave 23 rounds3 per sentence on averagethe num ber of ngrams requested per sentence depends onthe decoder settings for beam size reordering win dow etc as an example for larger runs reported in the experiments section we typically request around150000 ngrams per sentencethe average network latency per batch is 35 milliseconds yielding a total latency of 08 seconds caused by the dis tributed language model for an average sentence of22 wordsif a slight reduction in translation qual ity is allowed then the average network latency perbatch can be brought down to 7 milliseconds by reducing the number of ngrams requested per sen tence to around 10000as a result our system 
can efficiently use the large distributed language model at decoding timethere is no need for a second pass nor for nbest list rescoringwe focused on machine translation when describ ing the queued language model accesshowever it is general enough that it may also be applicable to speech decoders and optical character recognition systemswe trained 5gram language models on amounts of text varying from 13 million to 2 trillion tokensthe data is divided into four sets language mod els are trained for each set separately4 for eachtraining data size we report the size of the result ing language model the fraction of 5grams from the test data that is present in the language model and the bleu score obtainedby the machine translation systemfor smaller train ing sizes we have also computed testset perplexityusing kneserney smoothing and report it for com parison71 data setswe compiled four language model training data sets listed in order of increasing size 3one additional round for the sentence end marker4experience has shown that using multiple separately trained language models as feature functions in eq yields better results than using a single model trained on all data863 1e07 1e08 1e09 1e10 1e11 1e12 10 100 1000 10000 100000 1e06 01 1 10 100 1000 n um be r o f n g ra m s ap pr ox l m s ize in g b lm training data size in million tokens x18x2 x18x2 x18x2 x16x2 target ldcnews webnews web figure 3 number of ngrams for varying amounts of training datatarget the english side of arabicenglish parallel data provided by ldc5 ldcnews this is a concatenation of several english news data sets provided by ldc6 webnews data collected over several years up todecember 2005 from web pages containing predominantly english news articles web general web data which was collected in jan uary 2006 for testing we use the nistpart of the 2006 arabicenglish nist mt evaluation set which is not included in the training data listed above7it consists of 1797 sentences of newswire broadcastnews and newsgroup texts with 4 reference translations eachthe test set is used to calculate transla tion bleu scoresthe english side of the set is also used to calculate perplexities and ngram coverage72 size of the language modelswe measure the size of language models in total number of ngrams summed over all orders from 1 to 5there is no frequency cutoff on the ngrams5httpwwwnistgovspeechtestsmtdoc ldclicensemt06pdf contains a list of parallel resources provided by ldc6the bigger sets included are ldc2005t12 ldc93t3a and ldc2002t31 plus many smaller sets7the test data was generated after 1feb2006 all training data was generated before that datetarget webnews web tokens 237m 31g 18t vocab size 200k 5m 16m ngrams 257m 21g 300g lm size 2g 89g 18t time 20 min 8 hours 1 day time 25 hours 2 days machines 100 400 1500 table 2 sizes and approximate training times for 3 language models with stupid backoff and kneserney smoothing there is however a frequency cutoff on the vocabularythe minimum frequency for a term to be in cluded in the vocabulary is 2 for the target ldcnews and webnews data sets and 200 for the web data setall terms below the threshold are mapped to a spe cial term unk representing the unknown wordfigure 3 shows the number of ngrams for language models trained on 13 million to 2 trillion to kensboth axes are on a logarithmic scalethe right scale shows the approximate size of the served language models in gigabytesthe numbers above the lines indicate the relative increase in languagemodel size x18x2 means that the number of n 
grams grows by a factor of 18 each time we doublethe amount of training datathe values are simi lar across all data sets and data sizes ranging from 16 to 18the plots are very close to straight lines in the loglog space linear leastsquares regression finds r2 099 for all four data setsthe web data set has the smallest relative increasethis can be at least partially explained by the highervocabulary cutoffthe largest language model gen erated contains approx300 billion ngramstable 2 shows sizes and approximate training times when training on the full target webnews and web data setsthe processes run on standard currenthardware with the linux operating systemgen erating models with kneserney smoothing takes 6 7 times longer than generating models withstupid backoffwe deemed generation of kneser ney models on the web data as too expensive andtherefore excluded it from our experimentsthe es timated runtime for that is approximately one week on 1500 machines864 50 100 150 200 250 300 350 10 100 1000 10000 100000 1e06 0 01 02 03 04 05 06 pe rp le xit y fr ac tio n of c ov er ed 5 g ra m s lm training data size in million tokens 022x2 035x2 038x2 026x2 target kn pp ldcnews kn pp webnews kn pp target c5 ldcnews c5 webnews c5 web c5 figure 4 perplexities with kneserney smoothing and fraction of covered 5grams 73 perplexity and ngram coveragea standard measure for language model quality is perplexityit is measured on test data t wt 1 pp e 1t t
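The perplexity definition at the end of the passage above is garbled and cut off ("pp e 1t t ..."). A plausible reconstruction, matching the standard definition of perplexity over a test corpus T = w_1^{|T|}, is

\[
\mathrm{PP}(T) \;=\; e^{-\frac{1}{|T|}\sum_{i=1}^{|T|} \ln p\left(w_i \mid w_{i-n+1}^{i-1}\right)}
\]

i.e. the exponential of the negative average log-probability that the n-gram model assigns to each token of the test data. This is presumably also why the passage reports perplexity only for the Kneser-Ney models: stupid backoff yields relative-frequency scores rather than normalized probabilities, so a perplexity for it would not be directly comparable.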
D07-1090
large language models in machine translation. this paper reports on the benefits of largescale statistical language modeling in machine translation. a distributed infrastructure is proposed which we use to train on up to 2 trillion tokens resulting in language models having up to 300 billion ngrams. it is capable of providing smoothed probabilities for fast singlepass decoding. we introduce a new smoothing method dubbed stupid backoff that is inexpensive to train on large data sets and approaches the quality of kneserney smoothing as the amount of training data increases. 5gram word language models in english are trained on a variety of monolingual corpora. in the case of language models we often have to remove lowfrequency words because of a lack of computational resources since the feature space of kgrams tends to be so large that we sometimes need cutoffs even in a distributed environment. to scale lms to larger corpora with higherorder dependencies we consider distributed language models that scale more readily. stupid backoff smoothing is significantly more efficient to train and deploy in a distributed framework than a context dependent smoothing scheme such as kneserney. we show that each doubling of the training data from the news domain leads to improvements of approximately 0.5 bleu points. we used 1500 machines for a day to compute the relative frequencies of ngrams from 1.8 tb of web data
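The summary above names stupid backoff as the smoothing scheme. As a rough single-machine illustration of how such a score can be computed from raw n-gram counts, here is a minimal sketch; the class name, the in-memory count table and the toy data are assumptions made for illustration (the real system shards counts across many servers), while the backoff factor of 0.4 is the empirically chosen value quoted in the text.

```python
class StupidBackoffLM:
    """Minimal sketch of stupid backoff scoring over in-memory n-gram counts."""

    def __init__(self, ngram_counts, total_tokens, alpha=0.4):
        # ngram_counts: dict mapping word tuples (orders 1..n) to counts
        self.counts = ngram_counts
        self.N = total_tokens      # corpus size, used for the unigram base case
        self.alpha = alpha         # backoff factor (0.4 in the text)

    def score(self, words):
        """words = (w_{i-k+1}, ..., w_i); returns the score S(w_i | context)."""
        if len(words) == 1:
            return self.counts.get(words, 0) / self.N
        full = self.counts.get(words, 0)
        context = self.counts.get(words[:-1], 0)
        if full > 0 and context > 0:
            return full / context                     # relative frequency if observed
        return self.alpha * self.score(words[1:])     # back off to shorter context


# toy usage (invented counts)
counts = {("there",): 1, ("are",): 1, ("lots",): 1, ("of",): 1,
          ("there", "are"): 1, ("are", "lots"): 1, ("lots", "of"): 1}
lm = StupidBackoffLM(counts, total_tokens=4)
print(lm.score(("there", "are", "lots")))   # 0.4: backs off to ("are", "lots")
```

Unlike Kneser-Ney, nothing here requires context-dependent discounts or normalization, which is what makes the counts easy to shard and serve in a distributed setting.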
factored translation models we present an extension of phrasebased statistical machine translation models that enables the straightforward integration of additional annotation at the wordlevel may it be linguistic markup or automatically generated word classes in a num ber of experiments we show that factoredtranslation models lead to better translation performance both in terms of auto matic scores as well as more grammatical coherence the current stateoftheart approach to statistical machine translation socalled phrasebased modelsis limited to the mapping of small text chunks with out any explicit use of linguistic information may it be morphological syntactic or semanticsuch additional information has been demonstrated to bevaluable by integrating it in preprocessing or post processing stepshowever a tighter integration of linguistic infor mation into the translation model is desirable for two reasonstranslation models that operate on more gen eral representations such as lemmas instead of surface forms of words can draw on richer statistics and overcome the data sparseness problems caused by limited training datamany aspects of translation can be best explained on a morphological syntactic or semantic levelhaving such information avail able to the translation model allows the directmodeling of these aspectsfor instance re ordering at the sentence level is mostly driven word word partofspeech outputinput morphology partofspeech morphology word class lemma word class lemma figure 1 factored representations of input and out put words incorporate additional annotation into the statistical translation modelby general syntactic principles local agreement constraints show up in morphology etctherefore we extended the phrasebased ap proach to statistical translation to tightly integrate additional informationthe new approach allows additional annotation at the word levela word in our framework is not only a token but a vector of factors that represent different levels of annotation we report on experiments with factors such assurface form lemma partofspeech morphological features such as gender count and case automatic word classes true case forms of words shallow syntactic tags as well as dedicated factors to en sure agreement between syntactically related itemsthis paper describes the motivation the modeling aspects and the computationally efficient decoding methods of factored translation modelswe presentbriefly results for a number of language pairshow ever the focus of this paper is the description of theapproachdetailed experimental results will be de scribed in forthcoming papers868many attempts have been made to add richer in formation to statistical machine translation modelsmost of these focus on the preprocessing of the in put to the statistical system or the postprocessing of its outputour framework is more general and goes beyond recent work on models that back off to representations with richer statistics by keeping a more complex repre sentation throughout the translation processrich morphology often poses a challenge to sta tistical machine translation since a multitude of word forms derived from the same lemma fragmentthe data and lead to sparse data problemsif the input language is morphologically richer than the out put language it helps to stem or segment the input in a preprocessing step before passing it on to the translation system structural problems have also been addressed bypreprocessing collins et al reorder the in put to a statistical system to closer match the 
word order of the output languageon the other end of the translation pipeline addi tional information has been used in postprocessingoch et al report minor improvements with linguistic features on a chineseenglish task koehn and knight show some success in rerankingnoun phrases for germanenglishin their approaches first an nbest list with the best transla tions is generated for each input sentencethen the nbest list is enriched with additional features for instance by syntactically parsing each candidate translation and adding a parse scorethe additional features are used to rescore the nbest list resulting possibly in a better best translation for the sentencethe goal of integrating syntactic informationinto the translation model has prompted many re searchers to pursue treebased transfer models with increasingly encouraging resultsour goal is complementary to these effortswe are less interested in recursive syntactic struc ture but in richer annotation at the word levelin future work these approaches may be combinedlemma lemma partofspeech outputinput morphology partofspeech word word morphologyfigure 2 example factored model morphologi cal analysis and generation decomposed into three mapping steps one example to illustrate the shortcomings of thetraditional surface word approach in statistical machine translation is the poor handling of morphologyeach word form is treated as a token in it selfthis means that the translation model treats say the word house completely independent of the word housesany instance of house in the training data does not add any knowledge to the translation of housesin the extreme case while the translation of house may be known to the model the word housesmay be unknown and the system will not be able to translate itwhile this problem does not show up as stronglyin english due to the very limited morphological inflection in english it does constitute a sig nificant problem for morphologically rich languages such as arabic german czech etc thus it may be preferably to model translation between morphologically rich languages on the levelof lemmas and thus pooling the evidence for differ ent word forms that derive from a common lemmain such a model we would want to translate lemmaand morphological information separately and com bine this information on the output side to ultimately generate the output surface wordssuch a model can be defined straightforward as a factored translation modelsee figure 2 for an illustration of this model in our frameworknote that while we illustrate the use of factored translation models on such a linguistically motivated 869 example our framework also applies to models that incorporate statistically defined word classes or any other annotationthe translation of factored representations of input words into the factored representations of out put words is broken up into a sequence of mapping steps that either translate input factors into output factors or generate additional output factors from existing output factorsrecall the example of a factored model motivated by morphological analysis and generationin this model the translation process is broken up into the following three mapping steps 1translate input lemmas into output lemmas2translate morphological and pos factors3generate surface forms given the lemma andlinguistic factorsfactored translation models build on the phrase based approach that breaks up the translation of a sentence into the translation of small text chunks this approach implicitly defines a segmentation of the 
input and output sentences into phrasessee an example in figure 3our current implementation of factored translation models follows strictly the phrasebased ap proach with the additional decomposition of phrasetranslation into a sequence of mapping stepstrans lation steps map factors in input phrases to factors in output phrasesgeneration steps map outputfactors within individual output wordsto reiter ate all translation steps operate on the phrase level while all generation steps operate on the word levelsince all mapping steps operate on the same phrase segmentation of the input and output sentence into phrase pairs we call these synchronous factored modelslet us now take a closer look at one example thetranslation of the oneword phrase hauser into en glishthe representation of hauser in german is surfaceform hauser lemma haus partofspeech nn count plural case nominative gender neutralneue huser werden gebaut new houses are builtfigure 3 example sentence translation by a standard phrase modelfactored models extend this ap proachthe three mapping steps in our morphologicalanalysis and generation model may provide the fol lowing applicable mappings 1translation mapping lemmashaus house home building she will 2translation mapping morphologynnpluralnominativeneutral nnplural nnsingular 3generation generating surface formshousennplural houses housennsingular house homennplural homes we call the application of these mapping stepsto an input phrase expansiongiven the multiple choices for each step each input phrase may be ex panded into a list of translation optionsthe german hauserhausnnpluralnominativeneutral may be expanded as follows 1translation mapping lemmas house home building she will 2translation mapping morphology housennplural homennplural buildingnnplural she willnnplural housennsingular 3generation generating surface forms houseshousennplural homeshomennplural buildingsbuildingnnplural shellsshe willnnplural househousennsingular 870factored translation models follow closely the sta tistical modeling approach of phrasebased models the main difference lies in thepreparation of the training data and the type of mod els learned from the data51 trainingthe training data has to be anno tated with the additional factorsfor instance if we want to add partofspeech information on the input and output side we need to obtain partofspeech tagged training datatypically this involves runningautomatic tools on the corpus since manually anno tated corpora are rare and expensive to producenext we need to establish a wordalignmentfor all the sentences in the parallel training cor pushere we use the same methodology as in phrasebased models the word alignment methods may operate on the surface forms of words or on anyof the other factorsin fact some preliminary ex periments have shown that word alignment based on lemmas or stems yields improved alignment qualityeach mapping step forms a component of the overall modelfrom a training point of view thismeans that we need to learn translation and gener ation tables from the wordaligned parallel corpus and define scoring methods that help us to choose between ambiguous mappingsphrasebased translation models are acquired from a wordaligned parallel corpus by extracting allphrasepairs that are consistent with the word align mentgiven the set of extracted phrase pairs with counts various scoring functions are estimated such as conditional phrase translation probabilities based on relative frequency estimation or lexical translation probabilities based on the 
words in the phrasesin our approach the models for the translationsteps are acquired in the same manner from a word aligned parallel corpusfor the specified factors in the input and output phrase mappings are extractedthe set of phrase mappings is scored based on relative counts and wordbased translation probabilitiesthe generation distributions are estimated on the output side onlythe word alignment plays no role herein fact additional monolingual data may be usedthe generation model is learned on awordforword basisfor instance for a genera tion step that maps surface forms to partofspeech a table with entries such as is constructedone or more scoring functions may be defined overthis table in our experiments we used both condi tional probability distributions eg p andp obtained by maximum likelihood esti mationan important component of statistical machinetranslation is the language model typically an n gram model over surface forms of wordsin theframework of factored translation models such se quence models may be defined over any factor or any set of factorsfor factors such as partofspeech tags building and using higher order ngram models is straightforward52 combination of componentsas in phrasebased models factored translation models can be seen as the combination of several components these compo nents define one or more feature functions that are combined in a loglinear model p 1 z exp ni1 ihi z is a normalization constant that is ignored in practiceto compute the probability of a translation e given an input sentence f we have to evaluate eachfeature function hifor instance the feature func tion for a bigram language model component is hlm plm p pp let us now consider the feature functions introduced by the translation and generation steps of fac tored translation modelsthe translation of the input sentence f into the output sentence e breaks down to a set of phrase translations for a translation step component each feature function ht is defined over the phrase pairs 871 given a scoring function ht j for a generation step component each feature function hg given a scoring function is defined over the output words ek only hg k the feature functions follow from the scoring functions acquired during the training oftranslation and generation tablesfor instance re call our earlier example a scoring function for a generation model component that is a conditional probability distribution between input and output factors eg pthe feature weights i in the loglinear model are determined using a minimum error rate training method typically powells method 53 efficient decodingcompared to phrasebased models the decomposi tion of phrase translation into several mapping stepscreates additional computational complexityin stead of a simple table lookup to obtain the possible translations for an input phrase now multiple tables have to be consulted and their content combinedin phrasebased models it is easy to identify the entries in the phrase table that may be used for a specific input sentencethese are called translation optionswe usually limit ourselves to the top 20 translation options for each input phrasethe beam search decoding algorithm starts withan empty hypothesisthen new hypotheses are gen erated by using all applicable translation optionsthese hypotheses are used to generate further hypotheses in the same manner and so on until hypotheses are created that cover the full input sentencethe highest scoring complete hypothesis in dicates the best translation according to the modelhow do we 
adapt this algorithm for factored translation modelssince all mapping steps operate on the same phrase segmentation the expansions of these mapping steps can be efficiently precomputed prior to the heuristic beam search and stored astranslation optionsfor a given input phrase all pos sible translation options are thus computed before word word partofspeech outputinput 3 g r a m 7 g r a mfigure 4 syntactically enriched output by gener ating additional linguistic factors on the output sidehighorder sequence models over these factors sup port syntactical coherence of the outputdecoding this means that the fundamental search algorithm does not changehowever we need to be careful about combina torial explosion of the number of translation options given a sequence of mapping stepsin other wordsthe expansion may create too many translation op tions to handleif one or many mapping steps result in a vast increase of expansions thismay be become unmanageablewe currently ad dress this problem by early pruning of expansions and limiting the number of translation options per input phrase to a maximum number by default 50this is however not a perfect solutionwe are cur rently working on a more efficient search for the top50 translation options to replace the current brute force approachwe carried out a number of experiments using the factored translation model framework incorporatingboth linguistic information and automatically gener ated word classesthis work is implemented as part of the open source moses1 system we used the default settings for this system61 syntactically enriched outputin the first set of experiments we translate surfaceforms of words and generate additional output fac tors from them byadding morphological and shallow syntactic infor 1available at httpwwwstatmtorgmoses 872 englishgerman model bleu best published result 1815 baseline 1804 surface pos 1815 surface pos morph 1822 englishspanish model bleu baseline 2341 surface morph 2466 surface pos morph 2425 englishczech model bleu baseline 2582 surface all morph 2704 surface casenumbergender 2745 surface cngverbprepositions 2762table 1 experimental results with syntactically en riched output mation we are able to use highorder sequence mod els in order to support syntactic coherence of the outputtable 1 summarizes the experimental resultsthe englishgerman systems were trained on the full 751088 sentence europarl corpus and evaluated on the wmt 2006 test set adding partofspeech and morphological factors on the output side and exploiting them with 7gram sequence models results in minor improvements in bleuthe model that incorporates both pos and morphology ensures better local grammatical coherencethe baseline system produces often phrases such as zur zwischenstaatlichen methoden with a mismatch between the determiner and the noun while the adjective is ambiguousin a manual evaluation of intranp agreement we found that the factored model reduced the disagreement error within noun phrases of length 3 from 15 to 4englishspanish systems were trained on a 40000 sentence subset of the europarl corpusherewe also used morphological and partofspeech fac tors on the output side with an 7gram sequence model resulting in absolute improvements of 125 and 084 improve ments on the full europarl corpus are smallerenglishczech systems were trained on a 20000sentence wall street journal corpusmorphologi cal features were exploited with a 7gram languagemodelexperimentation suggests that it is beneficial to carefully consider which morphological fea 
tures to be usedadding all features results inlower performance than consider ing only case number and gender or additionally verbial and prepositional morphology all these models score well above the baseline of 2582 bleuan extended description of these experiments is in the jhu workshop report 62 morphological analysis and generationthe next model is the one described in our motivat ing example in section 4 instead of translating surface forms of words we translateword lemma and morphology separately and gener ate the surface form of the word on the output sidewe carried out experiments for the language pair germanenglish using the 52185 sentence newscommentary corpus2we report results on the de velopment test set which is also the outofdomain test set of the wmt06 workshop shared task german morphological analysis and pos tagging was done using lopar schmidt and schulte i am walde english pos tagging was done with brills tagger followed by a simple lemmatizer based on tagging resultsexperimental results are summarized in table 2for this data set we also see an improvement when using a partofspeech language model the bleuscore increases from 1819 to 1905 consis tent with the results reported in the previous sectionhowever moving from a surface word translation mapping to a lemmamorphology mapping leads to a deterioration of performance to a bleu score of 1446note that this model completely ignores the sur face forms of input words and only relies on the 2made available for the wmt07 workshop shared task httpwwwstatmtorgwmt07 873 germanenglish model bleu baseline 1819 pos lm 1905 pure lemmamorph model 1446 backoff lemmamorph model 1947 table 2 experimental results with morphological analysis and generation model using news commentary corpus more general lemma and morphology informationwhile this allows the translation of word forms withknown lemma and unknown surface form on balance it seems to be disadvantage to throw away sur face form informationto overcome this problem we introduce an al ternative path model translation options in this model may come either from the surface form modelor from the lemmamorphology model we just de scribedfor surface forms with rich evidence in the training data we prefer surface form mappings and for surface forms with poor or no evidence in the training data we decompose surface forms into lemma and morphology information and map theseseparatelythe different translation tables form dif ferent components in the loglinear model whose weights are set using standard minimum error rate training methodsthe alternative path model outperforms the sur face form model with pos lm with an bleu scoreof 1947 vs 1905the test set has 3276 un known word forms vs 2589 unknown lemmas hence the lemmamorph model is able to translate 687 additional words63 use of automatic word classesfinally we went beyond linguistically motivatedfactors and carried out experiments with automatically trained word classesby clustering words to gether by their contextual similarity we are able to find statistically similarities that may lead to more generalized and robust modelswe trained models on the iwslt 2006 task compared to a baseline englishchinese system adding word classes on theoutput side as additional factors 1954 surface word class 2110 table 3 experimental result with automatic word classes obtained by word clustering chineseenglish recase method bleu standard twopass smt recase 2065 integrated factored model 2108 outputinput mixedcased lowercased lowercasedtable 4 experimental 
result with integrated recas ing viously illustrated in figure 4 to be exploited by a 7gram sequence model we observe a gain 15 bleu absolutefor more on this experiment see 64 integrated recasingto demonstrate the versatility of the factored translation model approach consider the task of recas ing typically in statistical machine translation the training data islowercased to generalize over differently cased surface forms say the the the which neces sitates a postprocessing step to restore case in the outputwith factored translation models it is possible to integrate this step into the model by adding a generation stepsee table 4 for an illustration of this model and experimental results on the iwslt2006 task the integrated recas ing model outperform the standard approach with an bleu score of 2108 to 2065for more on this experiment see 874 65 additional experimentsfactored translation models have also been used for the integration of ccg supertags domain adaptation and for the improvement of englishczech translation we presented an extension of the stateoftheartphrasebased approach to statistical machine trans lation that allows the straightforward integration of additional information may it come from linguistic tools or automatically acquired word classeswe reported on experiments that showed gains over standard phrasebased models both in terms of automatic scores as well as a measure of grammatical coherencethese experiments demonstrate that within the frameworkof factored translation models additional informa tion can be successfully exploited to overcome someshortcomings of the currently dominant phrase based statistical approachthe framework of factored translation models is very generalmany more models that incorporatedifferent factors can be quickly built using the ex isting implementationwe are currently exploringthese possibilities for instance use of syntactic in formation in reordering and models with augmented input informationwe have not addressed all computational problems of factored translation modelsin fact compu tational problems hold back experiments with morecomplex factored models that are theoretically pos sible but too computationally expensive to carry outour current focus is to develop a more efficient i am plementation that will enable these experimentsmoreover we expect to overcome the constraints of the currently implemented synchronous factored models by developing a more general asynchronous framework where multiple translation steps mayoperate on different phrase segmentations acknowledgments this work was supported in part under the gale program of the defense advanced research projects agency contract no nr001106c0022 and inpart under the euromatrix project funded by the eu ropean commission we also benefited greatly from a 2006 summer workshop hosted by the johns hopkins uni versity and would like thank the other workshop participants for their support and insights namelynicola bertoldi ondrej bojar chris callison burch alexandra constantin brooke cowan chris dyer marcello federico evan herbst christine moran wade shen and richard zens
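Several formulas in the passage above were lost in extraction (e.g. "p 1 z exp ni1 ihi" and the language model feature). A plausible reconstruction of the log-linear model and the feature functions the text describes, in standard phrase-based notation, is

\[
p(\mathbf{e}\mid \mathbf{f}) \;=\; \frac{1}{Z}\,\exp\Bigl(\sum_{i=1}^{n}\lambda_i\, h_i(\mathbf{e},\mathbf{f})\Bigr)
\]

with Z a normalization constant that is ignored in practice; for a bigram language model component

\[
h_{\mathrm{LM}}(\mathbf{e},\mathbf{f}) \;=\; \log p_{\mathrm{LM}}(\mathbf{e}) \;=\; \log\bigl(p(e_1)\,p(e_2\mid e_1)\,p(e_3\mid e_2)\cdots\bigr);
\]

for a translation step with scoring function \(\tau\) over the phrase pairs \((\bar f_j, \bar e_j)\)

\[
h_{\mathrm{T}}(\mathbf{e},\mathbf{f}) \;=\; \sum_{j}\tau(\bar f_j,\bar e_j);
\]

and for a generation step with scoring function \(\gamma\) over the output words \(e_k\)

\[
h_{\mathrm{G}}(\mathbf{e},\mathbf{f}) \;=\; \sum_{k}\gamma(e_k).
\]

The weights \(\lambda_i\) are then tuned by minimum error rate training, as the text notes.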
D07-1091
factored translation models. we present an extension of phrasebased statistical machine translation models that enables the straightforward integration of additional annotation at the wordlevel may it be linguistic markup or automatically generated word classes. in a number of experiments we show that factored translation models lead to better translation performance both in terms of automatic scores as well as more grammatical coherence. any way to enforce linguistic constraints will result in a reduced need for data and ultimately in more complete models given the same amount of data. we also propose frameworks for the simultaneous use of different wordlevel representations. we propose a tight integration of morphosyntactic information into the translation model where lemma and morphological information are translated separately and this information is combined on the output side to generate the translation. we generalise the phrasebased model representation of the word from a string to a vector allowing additional features such as partofspeech and morphology to be associated with or even to replace surface forms during search. factored translation models facilitate a more dataoriented approach to agreement modeling
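To make the expansion of factored translation options concrete, here is a minimal sketch that follows the häuser example from the text above (where the garbled "she will" appears to be the lemma "shell"). The toy mapping tables and function name are illustrative assumptions, not the actual Moses implementation, and real tables carry scores learned from a word-aligned corpus.

```python
from itertools import product

# translation step 1: input lemma -> candidate output lemmas
lemma_table = {"haus": ["house", "home", "building", "shell"]}
# translation step 2: input morphology -> candidate output POS/morphology
morph_table = {"NN|plural|nominative|neutral": ["NN|plural", "NN|singular"]}
# generation step: (output lemma, output POS) -> surface forms
gen_table = {
    ("house", "NN|plural"): ["houses"],
    ("house", "NN|singular"): ["house"],
    ("home", "NN|plural"): ["homes"],
    ("building", "NN|plural"): ["buildings"],
    ("shell", "NN|plural"): ["shells"],
}

def expand(input_lemma, input_morph):
    """Expand a one-word input phrase into factored output options."""
    options = []
    for lemma, pos in product(lemma_table.get(input_lemma, []),
                              morph_table.get(input_morph, [])):
        for surface in gen_table.get((lemma, pos), []):
            options.append((surface, lemma, pos))
    return options

print(expand("haus", "NN|plural|nominative|neutral"))
# [('houses', 'house', 'NN|plural'), ('house', 'house', 'NN|singular'),
#  ('homes', 'home', 'NN|plural'), ('buildings', 'building', 'NN|plural'),
#  ('shells', 'shell', 'NN|plural')]
```

Because the options are a cross-product over mapping steps, the text's point about pruning matters: without an early cap (e.g. the default of 50 options per input phrase mentioned above) the expansion can blow up combinatorially.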
the conll 2007 shared task on dependency parsing the conference on computational natural language learning features a shared task inwhich participants train and test their learn ing systems on the same data sets in 2007 as in 2006 the shared task has been devoted to dependency parsing this year with both a multilingual track and a domain adaptation track in this paper we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages in addition we characterize the different approaches of the participating systems report the test results and provide a first analysis of these results previous shared tasks of the conference on compu tational natural language learning havebeen devoted to chunking clause iden tification named entity recognition and semantic role labeling in 2006 the shared task was multilingual dependency parsing where participants had to train a single parser on data from thirteen different languages which enabled a comparison not only of parsing and learning methods but also of the performance that can be achieved for different languages in dependencybased syntactic parsing the task is to derive a syntactic structure for an input sentence by identifying the syntactic head of each word in the sentencethis defines a dependency graph where the nodes are the words of the input sentence and the arcs are the binary relations from head to dependentoften but not always it is assumed that all words except one have a syntactic head which means that the graph will be a tree with the single independent word as the rootin labeled dependency parsing we additionally require the parser to assign a specific type to each dependency relation holding between a head word and a dependent wordin this years shared task we continue to explore datadriven methods for multilingual dependencyparsing but we add a new dimension by also intro ducing the problem of domain adaptationthe way this was done was by having two separate tracks a multilingual track using essentially the same setup as last year but with partly different languages and a domain adaptation track where the task was to usemachine learning to adapt a parser for a single lan guage to a new domainin total test results weresubmitted for twentythree systems in the multilin gual track and ten systems in the domain adaptationtrack not everyone submitted papers describ ing their system and some papers describe more than one system which explains why there are only twentyone papers in the proceedingsin this paper we provide task definitions for the two tracks describe data sets extracted from available treebanks report results for all systems in both tracks give an overview of approaches used provide a first analysis of the results and conclude with some future directions 915in this section we provide the task definitions that were used in the two tracks of the conll 2007 shard task the multilingual track and the domain adaptation track together with some background and motivation for the design choices madefirst of all we give a brief description of the data format and evaluation metrics which were common to the two tracks21 data format and evaluation metricsthe data sets derived from the original treebanks were in the same columnbased format as for the 2006 shared task in this format sentences are separated by ablank line a sentence consists of one or more to kens each one starting on a new line and a token consists of the following ten fields separated by a single tab character 1id 
token counter starting at 1 for each newsentence2form word form or punctuation symbolunderscore if not available4cpostag coarsegrained partofspeech tagwhere the tagset depends on the language5postag finegrained partofspeech tagwhere the tagset depends on the language or identical to the coarsegrained partofspeech tag if not available6feats unordered set of syntactic andor morphological features separated by a vertical bar or an underscore if not available7head head of the current token which is either a value of id or zero note that depending on the original treebank annotation there may be multiple tokens with head08 deprel dependency relation to the headthe set of dependency relations depends on the particular languagenote that dependingon the original treebank annotation the dependency relation when head0 may be mean ingful or simply root9phead projective head of current tokenwhich is either a value of id or zero or an underscore if not available10pdeprel dependency relation to thephead or an underscore if not availablethe phead and pdeprel were not used at all in this years data sets but were maintained for compatibilitywith last years data setsthis means that in prac tice the first six columns can be considered as input to the parser while the head and deprel fields are the output to be produced by the parserlabeled training sets contained all ten columns blind test sets only contained the first six columns and gold standard test sets again contained all ten columnsall data files were encoded in utf8the official evaluation metric in both tracks wasthe labeled attachment score ie the per centage of tokens for which a system has predicted the correct head and deprel but results were also reported for unlabeled attachment score ie the percentage of tokens with correct head and the label accuracy ie the percentage oftokens with correct deprelone important difference compared to the 2006 shared task is that all to kens were counted as scoring tokens including inparticular all punctuation tokensthe official eval uation script eval07pl is available from the shared task website1 22 multilingual trackthe multilingual track of the shared task was organized in the same way as the 2006 task with an notated training and test data from a wide range of languages to be processed with one and the same parsing systemthis system must therefore be able to learn from training data to generalize to unseentest data and to handle multiple languages possibly by adjusting a number of hyperparameterspar ticipants in the multilingual track were expected to submit parsing results for all languages involved1httpdepparseuvtnldepparsewikisoftwarepage 916 one of the claimed advantages of dependency parsing as opposed to parsing based on constituent analysis is that it extends naturally to languages with free or flexible word orderthis explains the interest in recent years for multilingual evaluation of dependency parserseven before the 2006 shared task the parsers of collins and charniak originally developed for english had been adapted for dependency parsing of czech and theparsing methodology proposed by kudo and mat sumoto and yamada and matsumoto had been evaluated on both japanese and englishthe parser of mcdonald and pereira had been applied to english czech and danish and theparser of nivre et al to ten different languagesbut by far the largest evaluation of mul tilingual dependency parsing systems so far was the2006 shared task where nineteen systems were eval uated on data from thirteen languages one of the 
conclusions from the 2006 shared task was that parsing accuracy differed greatly between languages and that a deeper analysis of the factors involved in this variation was an important problem for future researchin order to provide an extended empirical foundation for such research we tried to select the languages and data sets for this years task based on the following desideratathe selection of languages should be typolog ically varied and include both new languages and old languages the creation of the data sets should involve as little conversion as possible from the original treebank annotation meaning that preference should be given to treebanks with dependency annotationthe training data sets should include at least 50000 tokens and at most 500000 tokens2 the final selection included data from arabic basque catalan chinese czech english greek hungarian italian and turkishthe treebanks from 2the reason for having an upper bound on the training set size was the fact that in 2006 some participants could not train on all the data for some languages because of time limitationssimilar considerations also led to the decision to have a smaller number of languages this year which the data sets were extracted are described in section 323 domain adaptation trackone well known characteristic of datadriven pars ing systems is that they typically perform muchworse on data that does not come from the training domain due to the large over head in annotating text with deep syntactic parse trees the need to adapt parsers from domains withplentiful resources to domains with little resources is an important problemthis prob lem is commonly referred to as domain adaptation where the goal is to adapt annotated resources from a source domain to a target domain of interestalmost all prior work on domain adaptation as sumes one of two scenariosin the first scenario there are limited annotated resources available in the target domain and many studies have shown thatthis may lead to substantial improvementsthis includes the work of roark and bacchiani flo rian et al chelba and acero daumeand marcu and titov and henderson of these roark and bacchiani and titov and henderson deal specifically with syntactic parsingthe second scenario assumes that there are no annotated resources in the target domainthis is a more realistic situation and is considerably more difficultrecent work by mcclosky et al and blitzer et al have shown that the exis tence of a large unlabeled corpus in the new domain can be leveraged in adaptationfor this sharedtaskwe are assuming the latter setting no annotated re sources in the target domainobtaining adequate annotated syntactic resourcesfor multiple languages is already a challenging prob lem which is only exacerbated when these resources must be drawn from multiple and diverse domainsas a result the only language that could be feasibly tested in the domain adaptation track was englishthe setup for the domain adaptation track was asfollowsparticipants were provided with a large an notated corpus from the source domain in this case sentences from the wall street journalparticipants were also provided with data from three different target domains biomedical abstracts chemical abstracts and parent child dialogues additionally a large 917unlabeled corpus for each data set was providedthe goal of the task was to use the annotated source data plus any unlabeled data to produce a parser that is accurate for each of the test sets from the target domains3 participants could submit systems in 
either the openor closedclass the closed classrequires a system to use only those resources provided as part of the shared taskthe open class al lows a system to use additional resources provided those resources are not drawn from the same domain as the development or test setsan example might be a partofspeech tagger trained on the entire penntreebank and not just the subset provided as train ing data or a parser that has been handcrafted or trained on a different training setin this section we describe the treebanks used in the shared task and give relevant information about the data sets created from them31 multilingual trackarabic the analytical syntactic annotation of the prague arabic dependency treebank can be considered a pure dependency annotationthe conversion done by otakar smrz from the original format to the columnbased format described in section 21 was therefore relatively straightforward although not all the information in the original annotation could be transfered to the new formatpadt was one of the treebanks used in the 2006 shared task but then only contained about 54000 tokenssince then the size of the treebank has more than doubled with around 112000 tokensin addition the morphological annotation has been made more informativeit is also worth noting that the parsing units in this treebank are in many cases larger than conventional sentences which partly explains the high average number of tokens per sentence3note that annotated development data for the target domainwas only provided for the development domain biomedical abstractsfor the two test domains chemical abstracts and parentchild dialogues the only annotated data sets were the gold stan dard test sets released only after test runs had been submittedbasque for basque we used the 3lb basquetreebank at present the tree bank consists of approximately 3700 sentences 334of which were used as test datathe treebank com prises literary and newspaper textsit is annotated in a dependency format and was converted to the conll format by a team led by koldo gojenolacatalan the catalan section of the cessecesyntactically and semantically annotated cor pora is annotated with among other things constituent structure and grammatical functionsa head percolation table was used for automatically converting the constituent trees into dependency treesthe original data only contains functions related to the verb and a function tablewas used for deriving the remaining syntactic func tionsthe conversion was performed by a team led by llus marquez and antonia martchinese the chinese data are taken from thesinica treebank which contains both syntactic functions and semantic func tionsthe syntactic head was used in the conversion to the conll format carried out by yuming hsieh and the organizers of the 2006 shared task and thesyntactic functions were used wherever it was pos siblethe training data used is basically the sameas for the 2006 shared task except for a few correc tions but the test data is new for this years shared taskit is worth noting that the parsing units in this treebank are sometimes smaller than conventionalsentence units which partly explains the low aver age number of tokens per sentenceczech the analytical syntactic annotation of the prague dependency treebank is a pure dependency annotation just as for padtit was also used in the shared task 2006 but there are two important changes compared tolast yearfirst version 20 of pdt was used in stead of version 10 and a conversion script wascreated by zdenek zabokrtsky 
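As a concrete illustration of the column format and the labeled/unlabeled attachment scores defined earlier in this passage, here is a minimal sketch. It is not the official eval07.pl script; the function names and file handling are assumptions. Field indices follow the ten-column layout described above (HEAD is column 7, DEPREL is column 8), and, as the text specifies, all tokens including punctuation are counted.

```python
def read_conll(path):
    """Yield one sentence at a time as a list of (head, deprel) pairs."""
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line separates sentences
                if sentence:
                    yield sentence
                    sentence = []
                continue
            cols = line.split("\t")
            sentence.append((cols[6], cols[7]))   # HEAD, DEPREL
    if sentence:
        yield sentence

def attachment_scores(gold_path, system_path):
    """Return (LAS, UAS) as percentages over all scoring tokens."""
    total = las = uas = 0
    for gold, system in zip(read_conll(gold_path), read_conll(system_path)):
        for (g_head, g_rel), (s_head, s_rel) in zip(gold, system):
            total += 1
            if g_head == s_head:
                uas += 1
                if g_rel == s_rel:
                    las += 1
    return 100.0 * las / total, 100.0 * uas / total
```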
using the new xml based format of pdt 20secondly due to the upper bound on training set size only sections 13 of pdt constitute the training data which amounts to some 450000 tokensthe test data is a small subset of the development test set of pdt918english for english we used the wall street jour nal section of the penn treebank in particular we used sections 211 for training and a subset of section 23 for testingas a pre processing stage we removed many functions tagsfrom the nonterminals in the phrase structure repre sentation to make the representations more uniformwith outofdomain test sets for the domain adapta tion track the resulting data set was then converted to dependency structures using the procedure described in johansson and nugues this work was done by ryan mcdonaldgreek the greek dependency treebank adopts a de pendency structure annotation very similar to those of pdt and padt which means that the conversionby prokopis prokopidis was relatively straightfor wardgdt is one of the smallest treebanks in this years shared task and contains sentences of modern greekjust like pdt and padt the treebank contains more than one level of annotation but we only used the analytical level of gdthungarian for the hungarian data the szegedtreebank was usedthe tree bank is based on texts from six different genres ranging from legal newspaper texts to fictiontheoriginal annotation scheme is constituentbased fol lowing generative principlesit was converted into dependencies by zoltan alexin based on heuristicsitalian the data set used for italian is a subsetof the balanced section of the italian syntactic semantic treebank and consists of texts from the newspaper cor riere della sera and from periodicalsa team led by giuseppe attardi simonetta montemagni and maria simi converted the annotation to the conllformat using information from two different anno tation levels the constituent structure level and the dependency structure levelturkish for turkish we used the metusabancturkish treebank which was also used in the 2006 shared taska new test set of about 9000 tokens was provided by gulsen eryigit who also handled the conversion to the conll format which means that we could use all the approximately 65000 tokens of the originaltreebank for trainingthe rich morphology of turkish requires the basic tokens in parsing to be inflec tional groups rather than wordsigs of a single word are connected to each other deterministically using dependency links labeled deriv referred to as wordinternal dependencies in the following and the form and the lemma fields may be empty sentences do not necessarily have a unique root most internal punctuation and a few foreign words also have head032 domain adaptation trackas mentioned previously the source data is drawn from a corpus of news specifically the wall street journal section of the penn treebank this data set is identical to the english train ing set from the multilingual track for the target domains we used three different labeled data setsthe first two were annotated as part of the pennbioie project and consist of sentences drawn from either biomedical or chemical research abstractslike the source wsj corpus this data is annotated using thepenn treebank phrase structure schemeto con vert these sets to dependency structures we used the same procedure as before additional care was taken to remove sen tences that contained nonwsj partofspeech tagsor nonterminals furthermore the annotation scheme for gaps and traces was made consistent with the penn 
treebank wherever possibleas already mentioned the biomedical data set was distributed as a development set for the training phase while the chemical data set was only used for final testingthe third target data set was taken from thechildes database in partic ular the eve corpus which has beenannotated with dependency structuresunfortu nately the dependency labels of the childes datawere inconsistent with those of the wsj biomedi cal and chemical data sets and we therefore opted to only evaluate unlabeled accuracy for this data setfurthermore there was an inconsistency in how main and auxiliary verbs were annotated for this data set relative to othersas a result of this submitting 919 multilingual domain adaptation ar ba ca ch cz en gr hu it tu pchem childes language family semisolromsinslagerhelfyouromturgerannotation d d cf cf d cf d cf cf d cf d training data development data tokens 112 51 431 337 432 447 65 132 71 65 5 sentences 29 32 150 570 254 186 27 60 31 56 02 tokenssentence 383 158 288 59 170 240 242 218 229 116 251 lemma yes yes yes no yes no yes yes yes yes no nocpostag 15 25 17 13 12 31 18 16 14 14 25 nopostag 21 64 54 294 59 45 38 43 28 31 37 nofeats 21 359 33 0 71 0 31 50 21 78 0 nodeprel 29 35 42 69 46 20 46 49 22 25 18 nodeprel h0 18 17 1 1 8 1 22 1 1 1 1 head0 87 97 35 169 116 42 83 46 54 128 40 head left 792 445 600 247 469 490 448 274 650 38 500 head right 121 458 365 584 415 469 469 680 296 834 460 head0sentence 33 15 10 10 20 10 20 10 12 15 10 nonprojarcs 04 29 01 00 19 03 11 29 05 55 04 nonprojsent101 262 29 00 232 67 203 264 74 333 80 puncattached s s a s s a s a a s a deprels for punc10 13 6 29 16 13 15 1 10 12 8 test data pchem childes tokens 5124 5390 5016 5161 4724 5003 4804 7344 5096 4513 5001 4999 sentences 131 334 167 690 286 214 197 390 249 300 195 666 tokenssentence 391 161 300 75 165 234 244 188 205 150 256 129 new words 1244 2498 435 970 1258 313 1243 2610 1507 3629 3133 610 new lemmas 282 1113 336 na 528 na 582 1480 824 995 na na table 1 characteristics of the data sets for the 10 languages of the multilingual track and the development set and the two test sets of the domain adaptation track920results for the childes data was considered op tionallike the chemical data set this data set was only used for final testingfinally a large corpus of unlabeled indomaindata was provided for each data set and made avail able for trainingthis data was drawn from thewsj pubmedcom and the childes data basethe data was tokenized to be as consistent as pos sible with the wsj training set33 overviewtable 1 describes the characteristics of the data setsfor the multilingual track we provide statistics over the training and test sets for the domain adaptationtrack the statistics were extracted from the develop ment setfollowing last years shared task practice we use the following definition of projectivity an arc is projective iff all nodes occurring between i and j are dominated by i in the table the languages are abbreviated to their first two letterslanguage families are semitic isolate romance sinotibetan slavic germanic hellenic finnougric and turkicthe type of the original annotation is either constituents plus functions or dependencies for the train ing data the number of words and sentences are given in multiples of thousands and the averagelength of a sentence in words the following rows contain information about whether lemmas are available the num ber of coarse and finegrained partofspeech tags the number of feature components and the number of dependency 
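The non-projectivity rows in Table 1 above (non-projective arcs and sentences) can be computed with a simple check. The sketch below assumes the arc-projectivity definition used in the CoNLL shared tasks — an arc from head i to dependent j is projective iff every token between i and j is dominated by i — and uses an invented toy head map for illustration.

```python
def is_dominated_by(node, ancestor, heads):
    """heads[k] is the head of token k (1-based); 0 marks the artificial root."""
    while node != 0:
        if node == ancestor:
            return True
        node = heads[node]
    return False

def nonprojective_arcs(heads):
    """Return the dependents whose incoming arc is non-projective."""
    bad = []
    for dep, head in heads.items():
        if head == 0:
            continue
        lo, hi = sorted((dep, head))
        for k in range(lo + 1, hi):
            if not is_dominated_by(k, head, heads):
                bad.append(dep)
                break
    return bad

# toy sentence with one crossing arc: 4 -> 2 spans token 3, which is not dominated by 4
heads = {1: 0, 2: 4, 3: 1, 4: 3}
print(nonprojective_arcs(heads))   # [2]
```

The percentage of non-projective sentences then follows by flagging any sentence for which this list is non-empty.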
labelsthen information is given on how many different dependency labels can cooccurwith head0 the percentage of head0 depen dencies and the percentage of heads preceding or succeeding a token this is followed by the average numberof head0 dependencies per sentence and the per centage of nonprojective arcs and sentencesthe last two rows show whether punctuation tokens are attached as dependents of other tokens and specify the number of depen dency labels that exist for punctuation tokensnote that punctuation is defined as any token belonging to the utf8 category of punctuationthis means for example that any token having an underscore in the form field is also counted as punctuation herefor the test sets the number of words and sen tences as well as the ratio of words per sentence are listed followed by the percentage of new words and lemmas for the domain adaptation sets the percentage of new words is computed with regard to the training set as already stated in the introduction test runs weresubmitted for twentythree systems in the multilin gual track and ten systems in the domain adaptationtrack in the result tables below systems are identified by the last name of the teammember listed first when test runs were uploaded for evaluationin general this name is also the first author of a paper describing the system in the proceedings but there are a few exceptions and complicationsfirst of all for four out of twentyseven systems no paper was submitted to the proceedingsthis is the case for the systems of jia maes et al nash and zeman which is indicated by the fact that these names appear initalics in all result tablessecondly two teams sub mitted two systems each which are described in a single paper by each teamthus the systems called nilssonand hall jare both described in hall et al while the systems called duan and duan are both described in duan et al finally please pay attention to the fact that there are two teams where the first authors last name is halltherefore we use hall jand hall k to disambiguate between the teams involving johan hall and keith hall respectivelytables 2 and 3 give the scores for the multilingual track in the conll 2007 shared taskthe averagecolumn contains the average score for all ten lan guages which determines the ranking in this tracktable 4 presents the results for the domain adapta tion track where the ranking is determined based on the pchem results only since the childes data set was optionalnote also that there are no labeled 921 team average arabic basque catalan chinese czech english greek hungarian italian turkish nilsson 8032 7652 7694 8870 7582 7798 8811 7465 8027 8440 7979 nakagawa 8029 7508 7256 8790 8384 8019 8841 7631 7674 8361 7822 titov 7990 7412 7549 8740 8214 7794 8839 7352 7794 8226 7981 sagae 7990 7471 7464 8816 8469 7483 8901 7358 7953 8391 7591 hall j 7980 7475 7499 8774 8351 7722 8581 7421 7809 8248 7924 carreras 7909 7020 7575 8760 8086 7860 8961 7356 7542 8346 7585 attardi 7827 7266 6948 8686 8150 7737 8585 7392 7681 8134 7687 chen 7806 7465 7239 8666 8124 7369 8381 7442 7534 8204 7631 duan 7770 6991 7126 8495 8258 7534 8583 7429 7706 8075 7503 hall k 7691 7340 6981 8238 8277 7227 8193 7421 7420 8069 7742 schiehlen 7618 7008 6677 8575 8004 7386 8621 7229 7390 8046 7248 johansson 7578 7176 7508 8333 7630 7098 8029 7277 7131 7755 7846 mannem 7454 7155 6564 8447 7376 7068 8155 7169 7094 7867 7642 wu 7302 6616 7071 8144 7469 6672 7949 7063 6908 7879 7252 nguyen 7253 6358 5818 8323 7977 7254 8673 7042 6812 7506 6763 maes 7066 6512 6905 
7921 7097 6738 6968 6859 6893 7363 7403 canisius 6699 5913 6317 7544 7045 5614 7727 6035 6431 7557 6809 jia 6300 6337 5761 2335 7636 5495 8293 6545 6661 7465 6468 zeman 5487 4606 5061 6294 5449 5021 5359 5529 5524 6213 5810 marinov 5455 5400 5124 6942 4987 5347 5211 5433 4447 5975 5688 duan 2462 8264 8669 7689 nash 865 8649 shimizu 720 7202 table 2 labeled attachment score for the multilingual track in the conll 2007 shared taskteams are denoted by the last name of their first member with italics indicating that there is no corresponding paper in the proceedingsthe number in parentheses next to each score gives the ranka star next to a score in the average column indicates a statistically significant difference with the next lower rankteam average arabic basque catalan chinese czech english greek hungarian italian turkish nakagawa 8655 8609 8104 9286 8888 8628 9013 8408 8249 8791 8577 nilsson 8571 8581 8284 9312 8452 8359 8893 8122 8355 8777 8577 titov 8562 8318 8193 9340 8791 8419 8973 8120 8218 8626 8622 sagae 8529 8404 8119 9334 8894 8127 8987 8037 8351 8768 8272 carreras 8479 8148 8111 9246 8620 8516 9063 8137 7992 8719 8241 hall j 8474 8421 8061 9220 8760 8235 8677 8066 8171 8626 8504 attardi 8396 8253 7688 9141 8673 8340 8699 8075 8181 8554 8356 chen 8322 8349 7865 9087 8591 8014 8491 8116 7925 8591 8192 hall k 8308 8345 7855 8780 8791 7847 8321 8204 7934 8481 8518 duan 8277 7904 7759 8971 8688 8082 8697 8077 8066 8420 8103 schiehlen 8242 8107 7330 9079 8545 8173 8891 8047 7861 8454 7933 johansson 8113 8091 8043 8834 8130 7739 8143 7958 7553 8155 8480 mannem 8030 8156 7288 8981 7884 7720 8281 7889 7539 8291 8274 nguyen 8000 7346 6915 8812 8405 8091 8801 7756 7813 8040 8019 jia 7846 7420 7024 9083 8339 7041 8437 7565 7719 8236 7596 wu 7844 7705 7577 8585 7971 7307 8169 7812 7239 8257 7815 maes 7660 7547 7527 8435 7657 7403 7162 7519 7293 7832 8221 canisius 7483 7689 7017 8164 7481 7212 7823 7246 6780 7908 7514 zeman 6202 5855 5742 6850 6293 5919 5833 6289 5978 6827 6430 marinov 6083 6427 5855 7422 5609 5957 5433 6118 5039 6552 6413 duan 2553 8694 8787 8053 nash 877 8771 shimizu 779 7791 table 3 unlabeled attachment scores for the multilingual track in the conll 2007 shared taskteams are denoted by the last name of their first member with italics indicating that there is no correspond ing paper in the proceedingsthe number in parentheses next to each score gives the ranka star next to a score in the average column indicates a statistically significant difference with the next lower rank922 las uas team pchemc pchemo pchemc pchemo childesc childeso sagae 8106 8342 attardi 8040 8308 5867 dredze 8022 8338 6137 nguyen 7950 8204 jia 7648 7892 5743 bick 7181 7848 7471 8162 5807 6249 shimizu 6415 6349 7125 7001 zeman 5061 5457 5889 schneider 6301 6653 6027 watson 5547 6279 4561 wu 5289 table 4 labeled and unlabeled attachment scores for the closed and open classes of the domain adaptation track in the conll 2007 shared taskteams are denoted by the last name of their first member with italics indicating that there is no corresponding paper in the proceedingsthe number in parentheses next to each score gives the ranka star next to a score in the pchem columns indicates a statistically significant difference with the next lower rankattachment scores for the childes data set for reasons explained in section 32the number in paren theses next to each score gives the ranka star next to a score indicates that the difference with the nextlower rank is significant at the 5 level using a z test for 
proportionsa more complete presentation of the results including the significance results for all the tasks and their pvalues can be found on the shared task website4 looking first at the results in the multilingual track we note that there are a number of systems performing at almost the same level at the top of the rankingfor the average labeled attachment score the difference between the top score andthe fifth score is no more than half a percentage point and there are generally very few significant differences among the five or six best sys tems regardless of whether we consider labeled or unlabeled attachment scorefor the closed class of the domain adaptation track we see a very similar pattern with the top system being followed very closely by two other systemsfor the open class the results are more spread out but then thereare very few results in this classit is also worth not ing that the top scores in the closed class somewhat unexpectedly are higher than the top scores in the 4httpnextensuvtnldepparsewikiallscores open classbut before we proceed to a more detailed analysis of the results we will make an attempt to characterize the approaches represented by the different systemsin this section we give an overview of the models inference methods and learning methods used in theparticipating systemsfor obvious reasons the dis cussion is limited to systems that are described bya paper in the proceedingsbut instead of describ ing the systems one by one we focus on the basic methodological building blocks that are often foundin several systems although in different combina tionsfor descriptions of the individual systems we refer to the respective papers in the proceedingssection 51 is devoted to system architectureswe then describe the two main paradigms for learning and inference in this years shared task as well as in last years which we call transitionbased parsers and graphbased parsers adopting the terminology of mcdonald and nivre 5 finally we give an overview of the domain adaptation methods that were used 5this distinction roughly corresponds to the distinction made by buchholz and marsi between stepwiseand allpairsapproaches923 51 architecturesmost systems perform some amount of pre andpostprocessing making the actual parsing compo nent part of a sequential workflow of varying lengthand complexityfor example most transition based parsers can only build projective dependencygraphsfor languages with nonprojective depen dencies graphs therefore need to be projectivized for training and deprojectivized for testing instead of assigning head and deprel in a single step some systems use a twostage approach for attaching and labeling dependencies in the first step unlabeled dependencies are generated in the second step these are labeledthis is particularly helpful for factored parsing models in which label decisions cannot be easily conditioned on larger parts of the structure due to the increased complexity of inferenceonesystem extends this twostage ap proach to a threestage architecture where the parser and labeler generate an nbest list of parses which in turn is reranked6 in ensemblebased systems several base parsers provide parsing decisions which are added together for a combined score for each potential dependencyarcthe tree that maximizes the sum of these com bined scores is taken as the final output parsethis technique is used by sagae and tsujii and in the nilsson system it is worthnoting that both these systems combine transition based base parsers with a graphbased method 
for parser combination as first described by sagae and lavie datadriven grammarbased parsers such as bick schneider et al and watson andbriscoe need pre and postprocessing in order to map the dependency graphs provided as train ing data to a format compatible with the grammar used and vice versa52 transitionbased parserstransitionbased parsers build dependency graphs by performing sequences of actions or transitionsboth learning and inference is conceptualized in 6they also flip the order of the labeler and the rerankerterms of predicting the correct transition based onthe current parser state andor historywe can fur ther subclassify parsers with respect to the model they adopt the inference method they use and the learning method they employ521 models the most common model for transitionbased parsers is one inspired by shiftreduce parsing where a parser state contains a stack of partially processed tokens and a queue of remaining input tokens and where transitions add dependency arcs and perform stack and queue operationsthis type of model is used by the majority of transitionbased parsers sometimes it is combined with an explicit probability model for transition sequences which may be conditional or generative an alternative model is based on the listbased parsing algorithm described by covington which iterates over the input tokens in a sequen tial manner and evaluates for each preceding token whether it can be linked to the current token or notthis model is used by marinov and in com ponent parsers of the nilsson ensemble system finally two systems use models based on lr parsing 522 inferencethe most common inference technique in transition based dependency parsing is greedy deterministic search guided by a classifier for predicting the next transition given the current parser state and historyprocessing the tokens of the sentence in sequen tial lefttoright order7 optionally multiple passes over the input are conducted until no tokens are left unattached as an alternative to deterministic parsing several parsers use probabilistic models and maintain a heap or beam of partial transition sequences in order to pick the most probable one at the end of the sentence 7for diversity in parser ensembles righttoleft parsers are also used924 one system uses as part of their parsing pipeline a neighborparserthat attaches adjacent words and a rootparserthat identifies the root word of asentence in the case of grammar based parsers a classifier is used to disambiguate in cases where the grammar leaves some ambiguity 523 learning transitionbased parsers either maintain a classifierthat predicts the next transition or a global proba bilistic model that scores a complete parseto train these classifiers and probabilitistic models several approaches were used svms modified finite newton svms maximum entropy models multiclassaveraged perceptron and max imum likelihood estimation in order to calculate a global score or probabil ity for a transition sequence two systems used a markov chain approach here probabilities from the output of a classifier are multiplied over the whole sequence of actionsthis results in a locally normalized modeltwo other entries used mira or online passiveaggressive learning to train a globally normalized modeltitov and henderson used an in cremental sigmoid bayesian network to model the probability of a transition sequence and estimated model parameters using neural network learning53 graphbased parserswhile transitionbased parsers use training data to learn a process for 
deriving dependency graphs graphbased parsers learn a model of what it meansto be a good dependency graph given an input sen tencethey define a scoring or probability function over the set of possible parsesat learning timethey estimate parameters of this function at pars ing time they search for the graph that maximizes this functionthese parsers mainly differ in the type and structure of the scoring function the search algorithm that finds the best parse and the method to estimate the functions pa rameters 531 models the simplest type of model is based on a sum oflocal attachment scores which themselves are cal culated based on the dot product of a weight vector and a feature representation of the attachmentthistype of scoring function is often referred to as a first order model8 several systems participating in this years shared task used firstorder models canisius and tjong kim sang cast the same type ofarcbased factorization as a weighted constraint sat isfaction problemcarreras extends the firstorder model to incorporate a sum over scores for pairs of adjacent arcs in the tree yielding a secondorder modelin contrast to previous work where this was constrained to sibling relations of the dependent here headgrandchild relations can be taken into accountin all of the above cases the scoring function isdecomposed into functions that score local proper ties of the graphby contrast the model of nakagawa considersglobal properties of the graph that can take multi ple arcs into account such as multiple siblings and children of a node532 inference searching for the highest scoring graph in a model depends on the factorization cho sen and whether we are looking for projective ornonprojective treesmaximum spanning tree algorithms can be used for finding the highest scor ing nonprojective tree in a firstorder model while eisners dynamic programming algorithm solves the problem for a firstorder factoriza tion in the projective case carreras employs his own exten sion of eisners algorithm for the case of projectivetrees and secondorder models that include head grandparent relations8it is also known as an edgefactored model925 the methods presented above are mostly efficient and always exacthowever for models that takeglobal properties of the tree into account they can not be appliedinstead nakagawa uses gibbssampling to obtain marginal probabilities of arcs be ing included in the tree using his global model and then applies a maximum spanning tree algorithm to maximize the sum of the logs of these marginals and return a valid cyclefree parse533 learning most of the graphbased parsers were trained usingan online inferencebased method such as passive aggressive learning averaged perceptron or mira while some systems instead used methods based on maximum conditional likelihood 54 domain adaptation541 featurebased approachesone way of adapting a learner to a new domain without using any unlabeled data is to only include fea tures that are expected to transfer well in structural correspondence learning a transformation from features in the source domain to features of the target domain is learnt the original source features along with their transformed versions are then used to train a discriminative parser542 ensemblebased approaches dredze et al trained a diverse set of parsers in order to improve crossdomain performance byincorporating their predictions as features for an other classifiersimilarly two parsers trained with different learners and search directions were used in the colearning approach 
of sagae and tsujii unlabeled target data was processed with both parserssentences that both parsers agreed on were then added to the original training datathis combined data set served as training data for one of the original parsers to produce the final systemin a similar fashion watson and briscoe used a variant of selftraining to make use of the unlabeled target data543 other approaches attardi et al learnt tree revision rules for the target domain by first parsing unlabeled target data using a strong parser this data was then combined with labeled source data a weak parser was applied to this new dataset finally tree correction rules are collected based on the mistakes of the weak parser with respect to the gold data and the output of the strong parseranother technique used was to filter sentences of the outofdomain corpus based on their similarity to the target domain as predicted by a classifier only if a sentence was judged similar to target domain sentences was it included in the training setbick used a hybrid approach where a data driven parser trained on the labeled training data was given access to the output of a constraint grammar parser for english run on the same datafinallyschneider et al learnt collocations and rela tional nouns from the unlabeled target data and used these in their parsing algorithmhaving discussed the major approaches taken in the two tracks of the shared task we will now return tothe test resultsfor the multilingual track we com pare results across data sets and across systems and report results from a parser combination experiment involving all the participating systems for the domain adaptation track we sum up the most important findings from the test results 61 multilingual track611 across data sets the average las over all systems varies from 6807 for basque to 8095 for englishtop scores varyfrom 7631 for greek to 8961 for englishin gen eral there is a good correlation between the top scores and the average scoresfor greek italian and turkish the top score is closer to the average score than the average distance while for czech the distance is higherthe languages that produced themost stable results in terms of system ranks with re spect to las are hungarian and italianfor uas catalan also falls into this groupthe language that 926 setup arabic chinese czech turkish 2006 without punctuation 669 900 802 657 2007 without punctuation 755 849 800 716 2006 with punctuation 670 900 802 738 2007 with punctuation 765 847 802 798 table 5 a comparison of the las top scores from 2006 and 2007official scoring conditions in boldfacefor turkish scores with punctuation also include wordinternal dependenciesproduced the most unstable results with respect to las is turkishin comparison to last years languages the lan guages involved in the multilingual track this year can be more easily separated into three classes with respect to top scores low arabic basque greek medium czech hungarian turkish high catalan chinese english italian it is interesting to see that the classes are more easilydefinable via language characteristics than via char acteristics of the data setsthe split goes across training set size original data format sentence length percentage of unknown words number of dependency labels and ra tio of postags and dependency labelsthe class with the highest top scores contains languages with a rather impoverished morphologymediumscores are reached by the two agglutinative lan guages hungarian and turkish as well as by czechthe most difficult languages are 
those that combinea relatively free word order with a high degree of in flectionbased on these characteristics one would expect to find czech in the last classhowever theczech training set is four times the size of the train ing set for arabic which is the language with the largest training set of the difficult languageshowever it would be wrong to assume that train ing set size alone is the deciding factora closer look at table 1 shows that while basque and greekin fact have small training data sets so do turkish and italiananother factor that may be associated with the above classification is the percent age of new words in the test setthus theexpectation would be that the highly inflecting lan guages have a high pnw while the languages with little morphology have a low pnwbut again thereis no direct correspondencearabic basque cata lan english and greek agree with this assumption catalan and english have the smallest pnw and arabic basque and greek have a high pnwbut the pnw for italian is higher than for arabic and greek and this is also true for the percentage of new lemmasadditionally the highest pnw can be found in hungarian and turkish which reach higherscores than arabic basque and greekthese con siderations suggest that highly inflected languages with free word order need more training data a hypothesis that will have to be investigated furtherthere are four languages which were included inthe shared tasks on multilingual dependency parsing both at conll 2006 and at conll 2007 arabic chinese czech and turkishfor all four lan guages the same treebanks were used which allows a comparison of the resultshowever in some cases the size of the training set changed and at least one treebank turkish underwent a thorough correction phasetable 5 shows the top scores for lassince the official scores excluded punctuation in 2006 but includes it in 2007 we give results both with and without punctuation for both yearsfor arabic and turkish we see a great improve ment of approximately 9 and 6 percentage pointsfor arabic the number of tokens in the training set doubled and the morphological annotation was made more informativethe combined effect ofthese changes can probably account for the substan tial improvement in parsing accuracyfor turkish the training set grew in size as well although only by600 sentences but part of the improvement for turkish may also be due to continuing efforts in error cor 927 rection and consistency checkingwe see that the choice to include punctuation or not makes a large difference for the turkish scores since nonfinal igs of a word are counted as punctuation which means that wordinternal dependency links are included if punctuation is included9 however regardless of whether we compare scores with or without punctuation we see a genuine improvement of approximately 6 percentage pointsfor chinese the same training set was usedtherefore the drop from last years top score to thisyears is surprisinghowever last years top scor ing system for chinese which did not participate this year had a score that wasmore than 3 percentage points higher than the sec ond best system for chinesethus if we comparethis years results to the second best system the dif ference is approximately 2 percentage pointsthis final difference may be attributed to the properties of the test setswhile last years test set was taken from the treebank this years test set contains texts from other sourcesthe selection of the textual basis also significantly changed average sentence length the chinese 
training set has an average sentence lengthof 59last years test set alo had an average sen tence length of 59however this year the average sentence length is 75 tokens which is a significant increaselonger sentences are typically harder to parse due to the increased likelihood of ambiguous constructionsfinally we note that the performance for czech is almost exactly the same as last year despite the fact that the size of the training set has been reduced to approximately one third of last years training setit is likely that this in fact represents a relative i am provement compared to last years results612 across systems the las over all languages ranges from 8032 to5455the comparison of the system ranks averaged over all languages with the ranks for single lan 9the decision to include wordinternal dependencies in thisway can be debated on the grounds that they can be parsed de terministicallyon the other hand they typically correspond toregular dependencies captured by function words in other lan guages which are often easy to parse as wellit is thereforeunclear whether scores are more inflated by including word internal dependencies or deflated by excluding themguages show considerably more variation than last years systemsbuchholz and marsi report that for most parsers their ranking differs at most a few places from their overall rankingthis yearfor all of the ten best performing systems with re spect to las there is at least one language for which their rank is at least 5 places different from theiroverall rankthe most extreme case is the top per forming nilsson system which reached rank 1 for five languages and rank 2 fortwo more languagestheir only outlier is for chi nese where the system occupies rank 14 with a las approximately 9 percentage points below the top scoring system for chinese however hall et al point out that the official results for chinese contained a bug and the true performance of their system was actuallymuch higherthe greatest improvement of a system with respect to its average rank occurs for en glish for which the system by nguyen et al improved from the average rank 15 to rank 6twomore outliers can be observed in the system of jo hansson and nugues which improves from its average rank 12 to rank 4 for basque and turkishthe authors attribute this high performance to their parsers good performance on small training setshowever this hypothesis is contradicted by their re sults for greek and italian the other two languages with small training setsfor these two languages the systems rank is very close to its average rank613 an experiment in system combination having the outputs of many diverse dependencyparsers for standard data sets opens up the interest ing possibility of parser combinationto combine the outputs of each parser we used the method of sagae and lavie this technique assigns to each possible labeled dependency a weight that isequal to the number of systems that included the de pendency in their outputthis can be viewed as an arcbased voting schemeusing these weightsit is possible to search the space of possible depen dency trees using directed maximum spanning tree algorithms the maximum spanning tree in this case is equal to the tree that on average contains the labeled dependencies that most systems voted forit is worth noting that variants of this scheme were used in two of the participating 928 5 10 15 20number of systems 80 82 84 86 88 accu racy unlabeled accuracylabeled accuracy figure 1 system combination systems the nilsson system and the 
system of sagae and tsujii figure 1 plots the labeled and unlabeled accuracies when combining an increasing number of sys temsthe data used in the plot was the output of allcompeting systems for every language in the multilingual trackthe plot was constructed by sorting the systems based on their average labeled accuracy scores over all languages and then incremen tally adding each system in descending order10 we can see that both labeled and unlabeled accuracy are significantly increased even when just the top three systems are includedaccuracy begins to degrade gracefully after about ten different parsers have been addedfurthermore the accuracy never falls below the performance of the top three systems62 domain adaptation trackfor this task the results are rather surprisinga lookat the las and uas for the chemical research ab stracts shows that there are four closed systems that outperform the best scoring open systemthe best system reaches an las of 8106 consider ing that approximately one third of the words of the chemical test set are new the results are noteworthythe next surprise is to be found in the relatively low uas for the childes dataat a first glance this data set has all the characteristics of an easy 10the reason that there is no data point for two parsers is that the simple voting scheme adopted only makes sense with at least three parsers votingset the average sentence is short and the percentage of new words is also small despite these characteristics the top uas reaches 6249 and is thus more than 10 percentage points below the top uas for the chemical data setone major reason for this is that auxiliary and main verb dependencies are annotated differently in the childes data than in the wsj training setas aresult of this discrepancy participants were not re quired to submit results for the childes datathe best performing system on the childes corpus is an open system but the distance tothe top closed system is approximately 1 percent age pointin this domain it seems more feasible touse general language resources than for the chemi cal domainhowever the results prove that the extra effort may be unnecessarytwo years of dependency parsing in the conll shared task has brought an enormous boost to thedevelopment of dependency parsers for multiple lan guages but even though nineteen languages have been covered by almost as many different parsing and learn ing approaches we still have only vague ideas about the strengths and weaknesses of different methodsfor languages with different typological characteris ticsincreasing our knowledge of the multicausal relationship between language structure annotation scheme and parsing and learning methods probablyremains the most important direction for future re search in this areathe outputs of all systems for alldata sets from the two shared tasks are freely avail able for research and constitute a potential gold mine for comparative error analysis across languages and systemsfor domain adaptation we have barely scratched the surface so farbut overcoming the bottleneckof limited annotated resources for specialized do mains will be as important for the deployment of human language technology as being able to handle multiple languages in the futureone result fromthe domain adaptation track that may seem surprising at first is the fact that closed class systems outperformed open class systems on the chemical ab stractshowever it seems that the major problem in 929 adapting preexisting parsers to the new domain was not the domain as such 
but the mapping from the native output of the parser to the kind of annotationprovided in the shared task data setsthus find ing ways of reusing already invested development efforts by adapting the outputs of existing systemsto new requirements without substantial loss in ac curacy seems to be another line of research that may be worth pursuingacknowledgmentsfirst and foremost we want to thank all the peo ple and organizations that generously provided us with treebank data and helped us prepare the data sets and without whom the shared task would have been literally impossible otakar smrz charlesuniversity and the ldc maxux aranzabe kepa bengoetxea larraitz uria koldo gojenola and the university of the basque coun try maantonia martantonn llus marquez manuel bertran mariona taule difdamonterde eli comelles and clicub shihmin li kehjiann chen yuming hsieh and academia sinica jan hajic zdenek zabokrtsky charles university and the ldc brian macwhinney eric davis the childes project the penn bioie project and the ldc prokopis prokopidis and ilsp csirik janos and zoltan alexin giuseppe attardi simonetta montemagni maria simi isidoro barraco patrizia topi kiril ribarov alessandro lenci nicoletta calzolari ilc and elra gulsen eryigit kemal oflazer and ruket cakcsecondly we want to thank the organizers of last years shared task sabine buchholz amit dubey erwin marsi and yuval krymolowski who solved all the really hard problems for us and answered all our questions as well as our colleagues who helped review papers jason baldridge sabine buchholzjames clarke gulsen eryigit kilian evang ju lia hockenmaier yuval krymolowski erwin marsi beata megyesi yannick versley and alexander yehspecial thanks to bertjan busser and erwin marsi for help with the conll shared task website and many other things and to richard johansson for letting us use his conversion tool for englishthirdly we want to thank the program chairs for emnlpconll 2007 jason eisner and taku kudo the publications chair eric ringger the signll officers antal van den bosch hwee tou ng and erik tjong kim sang and members of the ldc staff tony castelletto and ilya ahtaridis for great cooperation and supportfinally we want to thank the following peoplewho in different ways assisted us in the organi zation of the conll 2007 shared task giuseppe attardi eckhard bick matthias buchkromannxavier carreras tomaz erjavec svetoslav mari nov wolfgang menzel xue nianwen gertjan van noord petya osenova florian schiel kiril simov zdenka uresova and heike zinsmeister
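The system combination experiment reported above weights each candidate labeled arc by the number of systems that propose it and then extracts a maximum spanning tree over the weighted arcs. The sketch below illustrates only the vote-accumulation step together with a deliberately simplified head selection; names such as `parses` and `combine_by_voting` are illustrative, and a faithful reimplementation would replace the greedy per-token choice with the Chu-Liu-Edmonds maximum spanning tree search so that the output is guaranteed to be a well-formed tree.

```python
from collections import defaultdict

def combine_by_voting(parses):
    """Arc-based voting over the outputs of several dependency parsers.

    `parses` is a list of parses for the same sentence; each parse maps a
    token index to its predicted (head, label) pair.  Every labeled arc
    receives one vote per system that proposed it (unweighted voting, as in
    the shared-task combination experiment).
    """
    votes = defaultdict(float)          # (dependent, head, label) -> vote count
    for parse in parses:
        for dep, (head, label) in parse.items():
            votes[(dep, head, label)] += 1.0

    # Simplified extraction: pick the best-scoring (head, label) per token.
    # NOTE: unlike the maximum-spanning-tree extraction used in the paper,
    # this greedy step does not guarantee an acyclic, single-rooted tree.
    combined = {}
    tokens = {dep for parse in parses for dep in parse}
    for dep in tokens:
        candidates = [(score, head, label)
                      for (d, head, label), score in votes.items() if d == dep]
        score, head, label = max(candidates)
        combined[dep] = (head, label)
    return combined

# Toy example with three hypothetical parsers disagreeing on token 3.
parses = [
    {1: (2, "SBJ"), 2: (0, "ROOT"), 3: (2, "OBJ")},
    {1: (2, "SBJ"), 2: (0, "ROOT"), 3: (2, "OBJ")},
    {1: (2, "SBJ"), 2: (0, "ROOT"), 3: (1, "NMOD")},
]
print(combine_by_voting(parses))   # token 3 keeps head 2 with label OBJ
```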
D07-1096
The CoNLL 2007 Shared Task on Dependency Parsing. The Conference on Computational Natural Language Learning features a shared task in which participants train and test their learning systems on the same data sets. In 2007, as in 2006, the shared task has been devoted to dependency parsing, this year with both a multilingual track and a domain adaptation track. In this paper, we define the tasks of the different tracks and describe how the data sets were created from existing treebanks for ten languages. In addition, we characterize the different approaches of the participating systems, report the test results, and provide a first analysis of these results. We note that languages with free word order and high morphological complexity are the most difficult for dependency parsing. Morphologically rich languages present new challenges, as the use of state-of-the-art parsers for more configurational and non-inflected languages like English does not reach similar performance levels in languages like Basque, Greek or Turkish.
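Since labeled and unlabeled attachment scores are the evaluation measures used throughout the shared task, a minimal sketch of how they can be computed from gold and predicted (head, label) assignments may be useful. The function name and data layout below are my own; this is not the official evaluation script.

```python
def attachment_scores(gold, predicted):
    """Compute labeled (LAS) and unlabeled (UAS) attachment scores.

    Both arguments are lists over the tokens of a test set, where each
    element is a (head, label) pair.  LAS counts tokens whose head and
    dependency label are both correct; UAS only requires the correct head.
    The official CoNLL 2007 scoring works per token in the same way (and,
    unlike CoNLL 2006, includes punctuation tokens in the score).
    """
    assert len(gold) == len(predicted)
    correct_heads = correct_both = 0
    for (g_head, g_label), (p_head, p_label) in zip(gold, predicted):
        if g_head == p_head:
            correct_heads += 1
            if g_label == p_label:
                correct_both += 1
    n = len(gold)
    return correct_both / n, correct_heads / n

gold      = [(2, "SBJ"), (0, "ROOT"), (2, "OBJ"), (2, "ADV")]
predicted = [(2, "SBJ"), (0, "ROOT"), (2, "NMOD"), (3, "ADV")]
las, uas = attachment_scores(gold, predicted)
print(f"LAS = {las:.2f}, UAS = {uas:.2f}")   # LAS = 0.50, UAS = 0.75
```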
single malt or blended a study in multilingual parser optimization we describe a twostage optimization of the maltparser system for the ten languages in the multilingual track of the conll 2007 shared task on dependency parsing the first stage consists in tuning a singleparsersystem for each language by optimizing parameters of the parsing algorithm the fea ture model and the learning algorithm thesecond stage consists in building an ensemble system that combines six different parsing strategies extrapolating from the opti mal parameters settings for each language when evaluated on the official test sets the ensemble system significantly outperforms the singleparser system and achieves the highest average labeled attachment score in the multilingual track of the conll 2007 shared task on dependency parsing a single parser must be trained to handle data from ten different languages arabic basque catalan chinese czech english greek hungarian italian and turkish 1 our contribution is a study in multilingual parser optimization using the freely available maltparser system which performs 1for more information about the task and the data sets see nivre et al deterministic classifierbased parsing with history based feature models and discriminative learning and which was one of the top performing systems in the conll 2006 shared task in order to maximize parsing accuracy optimiza tion has been carried out in two stages leading to two different but related parsersthe first of these is a singleparser system similar to the one describedin nivre et al which parses a sentence deterministically in a single lefttoright pass with post processing to recover nonprojective dependencies and where the parameters of the maltparser system have been tuned for each language separatelywe call this system single malt to emphasize the fact that it consists of a single instance of maltparserthe second parser is an ensemble system which combines the output of six deterministic parsers each of which is a variation of the single malt parser with parameter settings extrapolated from the first stage of optimizationit seems very natural to call this system blendedsection 2 summarizes the work done to optimize the single malt parser while section 3 explains how the blended parser was constructed from the single malt parsersection 4 gives a brief analysis of the experimental results and section 5 concludesthe parameters available in the maltparser system can be divided into three groups parsing algorithmparameters feature model parameters and learn ing algorithm parameters2 our overall optimization 2for a complete documentation of these parameters see httpw3msivxuseusersnivreresearchmaltparserhtml933 strategy for the single malt parser was as follows 1define a good baseline system with the sameparameter settings for all languages2tune parsing algorithm parameters once andfor all for each language parameters in an interleaved fashion for each languagewe used ninefold crossvalidation on 90 of the training data for all languages with a training set sizesmaller than 300000 tokens and an 8010 train devtest split for the remaining languages the remaining 10 of the data was in both cases saved for a final dry run where the parser was trained on 90 of the data for each language and tested on the remaining 10we consistently used the labeled attachment score as the single optimization criterionbelow we describe the most important parameters in each group define baseline settings and report notable improvements for different languages 
during developmentthe improvements for each language from step 1 to step 2 and step 3 can be tracked in table 13 21 parsing algorithmmaltparser implements several parsing algorithms but for the single malt system we stick to the one used by nivre et al which performs labeled projective dependency parsing in linear time using a stack to store partially processed tokens and an input queue of remaining tokensthere are three basic parameters that can be varied for this algorithm 1arc order the baseline algorithm is arceager in the sense that right dependents are attached to their head as soon as possible but there is also an arcstandard version where theattachment of right dependents has to be postponed until they have found all their own de pendentsthe arcstandard order was found 3complete specifications of all parameter settings for all languages for both single malt and blended are available at httpw3msivxuseusersjhaconll07to improve parsing accuracy for chinese while the arceager order was maintained for all other languages2stack initialization in the baseline versionthe parser is initialized with an artificial root node on the stack so that arcsoriginating from the root can be added explicitly during parsingbut it is also possible to ini tialize the parser with an empty stack in whichcase arcs from the root are only added implicitly empty stack initialization led to improved accuracy for cata lan chinese hungarian italian and turkish4a single lefttoright pass over the input but it is possible to allow a second pass where onlyunattached tokens are processed5 such post processing was found to improve results for basque catalan czech greek and hungariansince the parsing algorithm only produces projective dependency graphs we may use pseudoprojective parsing to recover nonprojective dependencies ie projectivize training data and encode information about these transformations in extended arc labels to support deprojectivization of the parser outputpseudoprojective parsing was found to have a positive effect on over all parsing accuracy only for basque czech greek and turkishthis result can probably be explainedin terms of the frequency of nonprojective depen dencies in the different languagesfor basque czech greek and turkish more than 20 of the sentences have nonprojective dependency graphs for all the remaining languages the corresponding4for arabic basque czech and greek the lack of i am provement can be explained by the fact that these data sets allow more than one label for dependencies from the artificial rootwith empty stack initialization all such dependencies are assigned a default label which leads to a drop in labeled attach ment scorefor english however empty stack initialization did not improve accuracy despite the fact that dependencies from the artificial root have a unique label5this technique is similar to the one used by yamada and matsumoto but with only a single postprocessing pass parsing complexity remains linear in string length934 attributes tokens form lemma cpostag postag feats deprel s top s top1 i next i next1 i next2 i next3 g head of top g leftmost dependent of top g rightmost dependent of top g leftmost dependent of next figure 1 baseline feature model figure is 10 or less6 the cumulative improvement after optimization of parsing algorithm parameters was a modest 032 percentage points on average over all ten languages with a minimum of 000 and a maximum of 083 22 feature modelmaltparser uses a historybased feature model for predicting the next parsing actioneach 
feature of this model is an attribute of a token defined relative to the current stack s input queue i or partially built dependency graph g where the attribute can be anyof the symbolic input attributes in the conll for mat form lemma cpostag postag and feats as well as the deprel attribute of tokens in the graph g the baseline feature model is depicted in figure 1 where rows denote tokens columns denote attributes and each cell containing a plus sign represents a model feature7 this model is an extrapolation from many previous experiments on different languages and usually represents a good starting point for further optimizationthe baseline model was tuned for each of the ten languages using both forward and backward feature 6in fact for arabic which has about 10 sentences with nonprojective dependencies it was later found that with anoptimized feature model it is beneficial to projectivize the train ing data without trying to recover nonprojective dependencies in the parser outputthis was also the setting that was used for arabic in the dry run and final test7the names top and next refer to the token on top of the stack s and the first token in the remaining input i respectivelyselectionthe total number of features in the tuned models varies from 18 to 56 but is typically between 20 and 30this feature se lection process constituted the major development effort for the single malt parser and also gave the greatest improvements in parsing accuracy but since feature selection was to some extent interleaved with learning algorithm optimization we only report the cumulative effect of both together in table 123 learning algorithmmaltparser supports several learning algorithms butthe best results have so far been obtained with sup port vector machines using the libsvm package we use a quadratic kernel k 2 and libsvms builtin oneversusone strategy for multiclass classifica tion converting symbolic features to numerical ones using the standard technique of binarizationas our baseline settings we used 02 and are 0 forthe kernel parameters c 05 for the penalty para meter and 10 for the termination criterionin order to reduce training times during development we also split the training data for each language intosmaller sets and trained separate multiclass classi fiers for each set using the postag of next as the defining feature for the splitthe time spent on optimizing learning algorithm parameters varies between languages mainly due to lack of timefor arabic basque and catalan the baseline settings were used also in the dry run and final testfor chinese greek and hungarian 935 development dry run test test uas language base pa fl sm b sm b sm b arabic 7031 7031 7167 7093 7309 7475 7652 8421 8581 basque 7386 7444 7699 7718 8012 7497 7692 8061 8284 catalan 8543 8551 8688 8665 8800 8774 8870 9220 9312 chinese 8385 8439 8764 8761 8861 8351 8467 8760 8870 czech 7500 7583 7774 7791 8217 7722 7798 8235 8359 english 8544 8544 8635 8635 8874 8581 8811 8677 8893 greek 7267 7304 7442 7489 7817 7421 7465 8066 8122 hungarian 7462 7464 7740 7781 8004 7809 8027 8171 8355 italian 8142 8164 8250 8337 8516 8248 8440 8626 8777 turkish 7512 7580 7649 7587 7709 7924 7979 8504 8577 average 7778 7810 7981 7986 8212 7980 8120 8474 8613 table 1 development results for single malt dry run and test results for single malt and blended all scores are labeled attachment scores except the last two columns which report unlabeled attachment scores on the test setsslightly better results were obtained by not splittingthe 
training data into smaller sets for the remain ing languages accuracy was improved by using the cpostag of next as the defining feature for the split with respect to the svmparameters arabic basque catalan greek and hungarian retain the baseline settings while the other languages have slightly dif ferent values for some parametersthe cumulative improvement after optimization of feature model and learning algorithm parameters was 171 percentage points on average over all ten languages with a minimum of 069 and a maximum of 325 3 the blended parserthe blended parser is an ensemble system based on the methodology proposed by sagae and lavie given the output dependency graphs gi of m different parsers for an input sen tence x we construct a new graph containing all the labeled dependency arcs proposed by some parser and weight each arc a by a score s reflecting its popularity among the m parsersthe output of the ensemble system for x is the maximum spanning tree of this graph which can be extracted using the chuliuedmonds algorithm as shown by mcdonald et al following sagae and lavie we let s m i1 w c iai where wci is the average labeled attachment score of parser i for the word class c8 of the dependent of a and ai is 1 if a gi and 0 otherwisethe blended parser uses six component parsers with three different parsing algorithms each of which is used to construct one lefttoright parser and one righttoleft parserthe parsing algorithmsused are the arceager baseline algorithm the arc standard variant of the baseline algorithm and the incremental nonprojective parsing algorithm first described by covington and recently used for deterministic classifierbased parsing by nivre all of which are available in maltparserthus the six component parsers for each language were instances of the following 1arceager projective lefttoright2arceager projective righttoleft3arcstandard projective lefttoright6covington nonprojective righttoleft8we use cpostag to determine the part of speech936 root 1 2 36 7 parser r p r p r p r p r p single malt 8701 8036 9508 9487 8628 8667 7797 8023 6898 7106 blended 9209 7420 9571 9492 8755 8812 7866 8302 6529 7814 table 2 recall and precision of single malt and blended for dependencies of different length averaged over all languages the final blended parser was constructed by reusingthe tuned single malt parser for each language and training five additional parsers with the same parameter settings except for the following mechanical adjustments 1pseudoprojective parsing was not used for thetwo nonprojective parsers2feature models were adjusted with respect tothe most obvious differences in parsing strategy 3learning algorithm parameters were adjustedto speed up training having trained all parsers on 90 of the training data for each language the weights wci for each parser i and coarse part of speech c was determined by the labeled attachment score on the remaining 10 of the datathis means that the results obtained in the dry run were bound to be overly optimistic for the blended parser since it was then evaluated on the same data set that was used to tune the weightsfinally we want to emphasize that the time for developing the blended parser was severely limited which means that several shortcuts had to be taken such as optimizing learning algorithm parametersfor speed rather than accuracy and using extrapolation rather than proper tuning for other important parametersthis probably means that the per formance of the blended system can be improved considerably by optimizing 
parameters for all six parsers separately4 results and discussiontable 1 shows the labeled attachment score results from our internal dry run and the official test runs for both of our systemsit should be pointed out that the test score for the blended parser on chinese is different from the official one which was much lower than expected dueto a corrupted specification file required by malt parserrestoring this file and rerunning the parser on the chinese test set without retraining the parser or changing any parameter settings resulted in thescore reported herethis also improved the aver age score from 8032 to 8120 the former being the highest reported official scorefor the single malt parser the test results are on average very close to the dry run results indicating that models have not been overfitted for the blended parser there is a drop of almost one percentage point which can be explained by the fact that weights could not be tuned on heldout data for the dry run comparing the results for different languages we see a tendency that languages with rich morphology usually accompanied by flexible word order get lower scoresthus the labeled attachment score isbelow 80 for arabic basque czech greek hungarian and turkishby comparison the more con figurational languages all have scores above 80linguis tic properties thus seem to be more important than for example training set size which can be seen by comparing the results for italian with one of the smallest training sets and czech with one of the largestthe development of parsing methods that are better suited for morphologically rich languages with flexible word order appears as one of the most important goals for future research in this areacomparing the results of our two systems we see that the blended parser outperforms the singlemalt parser for all languages with an average i am 937 provement of 140 percentage points a minimum of 044 and a maximum of 240 as shown by mcdonald and nivre the single malt parser tends to suffer from two problems errorpropagation due to the deterministic parsing strat egy typically affecting long dependencies more thanshort ones and low precision on dependencies orig inating in the artificial root node due to fragmented parses9 the question is which of these problems isalleviated by the multiple views given by the compo nent parsers in the blended systemtable 2 throwssome light on this by giving the precision and recall for dependencies of different length treating de pendents of the artificial root node as a special caseas expected the single malt parser has lower preci sion than recall for root dependents but the blendedparser has even lower precision indicating that the fragmentation is even more severe in this case10 by contrast we see that precision and recall for other dependencies improve across the board especially for longer dependencieswhich probably means that the effect of error propa gation is mitigated by the use of an ensemble systemeven if each of the component parsers is determinis tic in itselfwe have shown that deterministic classifierbased dependency parsing with careful optimization can give highly accurate dependency parsing for a wide range of languages as illustrated by the performanceof the single malt parserwe have also demonstrated that an ensemble of deterministic classifier based dependency parsers built on top of a tuned singleparser system can give even higher accuracy as shown by the results of the blended parser whichhas the highest labeled attachment score for five lan 
guages roots to the artificial root nodehence children of the root node in the final output may not have been predicted as such by the treebankinduced classifier10this conclusion is further supported by the observation that the single most frequent frame confusionof the blended parser over all languages is to attach two dependents with the label root to the root node instead of only onethe frequency of this error is more than twice as high for the blended parser as for the single malt parser italian as well as the highest multilingual average scoreacknowledgements we want to thank all treebank providers for making the data available for the shared task and the organizers for their efforts in organizing itspecial thanks to ryan mcdonald for fruitful discussions and assistance with the error analysis and to kenji sagae for showing us how to produce a good blendthanks also to two reviewers for useful comments
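As described above, the Blended parser scores every candidate labeled arc by summing, over the six component parsers, a weight equal to that parser's labeled accuracy for the coarse part of speech of the dependent, measured on held-out data. The sketch below shows one plausible way to estimate those weights and accumulate the arc scores; function and variable names are illustrative, and the final step (extracting the maximum spanning tree over the scored arcs with the Chu-Liu-Edmonds algorithm) is only indicated by a comment.

```python
from collections import defaultdict

def estimate_weights(tuning_gold, tuning_outputs, cpostags):
    """Per-parser, per-CPOSTAG labeled accuracy on a held-out tuning set.

    `tuning_gold` maps token index -> gold (head, label); `tuning_outputs`
    is one such mapping per component parser; `cpostags` maps token index
    -> coarse part-of-speech tag of that token.
    """
    weights = []                                # one dict per parser
    for output in tuning_outputs:
        hits, totals = defaultdict(int), defaultdict(int)
        for dep, gold_arc in tuning_gold.items():
            cpos = cpostags[dep]
            totals[cpos] += 1
            if output.get(dep) == gold_arc:
                hits[cpos] += 1
        weights.append({c: hits[c] / totals[c] for c in totals})
    return weights

def score_arcs(test_outputs, cpostags, weights):
    """Score s(a) = sum_i w_i[cpos(dependent)] * [a in G_i] for every arc a."""
    scores = defaultdict(float)
    for output, w in zip(test_outputs, weights):
        for dep, (head, label) in output.items():
            scores[(dep, head, label)] += w.get(cpostags[dep], 0.0)
    # The Blended parser then extracts the maximum spanning tree of the graph
    # defined by these scores (Chu-Liu-Edmonds), which is omitted here.
    return scores
```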
D07-1097
Single Malt or Blended? A Study in Multilingual Parser Optimization. We describe a two-stage optimization of the MaltParser system for the ten languages in the multilingual track of the CoNLL 2007 shared task on dependency parsing. The first stage consists in tuning a single-parser system for each language by optimizing parameters of the parsing algorithm, the feature model and the learning algorithm. The second stage consists in building an ensemble system that combines six different parsing strategies, extrapolating from the optimal parameter settings for each language. When evaluated on the official test sets, the ensemble system significantly outperforms the single-parser system and achieves the highest average labeled attachment score. We extend the two-stage approach to a three-stage architecture where the parser and labeler generate an n-best list of parses, which in turn is reranked. We point out that the official results for Chinese contained a bug and that the true performance of our system is actually much higher. We implement a left-to-right arc-eager parsing model in such a way that the parser scans through the input sequence from left to right and right dependents are attached to their heads as soon as possible.
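Both the Single Malt system summarized above and the baseline it builds on use an arc-eager transition system over a stack and an input queue, in which right dependents are attached to their heads as soon as possible. The following is a minimal sketch of that transition system, with a hand-written transition sequence standing in for the classifier that MaltParser would learn; it is an illustration, not MaltParser's actual implementation.

```python
class ArcEagerParser:
    """Minimal arc-eager transition system (stack, buffer, arc set)."""

    def __init__(self, n_tokens):
        self.stack = [0]                         # 0 is the artificial root
        self.buffer = list(range(1, n_tokens + 1))
        self.arcs = set()                        # (head, label, dependent)
        self.head = {}                           # dependent -> head

    def shift(self):
        self.stack.append(self.buffer.pop(0))

    def left_arc(self, label):
        s, b = self.stack[-1], self.buffer[0]
        assert s != 0 and s not in self.head     # precondition
        self.arcs.add((b, label, s))
        self.head[s] = b
        self.stack.pop()

    def right_arc(self, label):
        s, b = self.stack[-1], self.buffer.pop(0)
        self.arcs.add((s, label, b))
        self.head[b] = s
        self.stack.append(b)

    def reduce(self):
        assert self.stack[-1] in self.head       # precondition
        self.stack.pop()

# "They sold cars": token indices 1, 2, 3.  A trained classifier would
# normally predict each transition from features over the stack and buffer.
parser = ArcEagerParser(3)
parser.shift()                 # push "They"
parser.left_arc("SBJ")         # sold -> They
parser.right_arc("ROOT")       # root -> sold
parser.right_arc("OBJ")        # sold -> cars
print(sorted(parser.arcs))     # [(0, 'ROOT', 2), (2, 'OBJ', 3), (2, 'SBJ', 1)]
```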
Experiments with a Higher-Order Projective Dependency Parser

We present experiments with a dependency parsing model defined on rich factors. Our model represents dependency trees with factors that include three types of relations between the tokens of a dependency and their children. We extend the projective parsing algorithm of Eisner for our case, and train models using the averaged perceptron. Our experiments show that considering higher-order information yields significant improvements in parsing accuracy, but comes at a high cost in terms of both time and memory consumption. In the multilingual exercise of the CoNLL-2007 shared task, our system obtains the best accuracy for English, and the second best accuracies for Basque and Czech.

Structured prediction problems usually involve models that work with factored representations of structures. The information included in the factors determines the type of features that the model can exploit. However, richer representations translate into higher complexity of the inference algorithms associated with the model. In dependency parsing, the basic first-order model is defined by a decomposition of a tree into head-modifier dependencies. Previous work extended this basic model to include second-order relations, i.e. dependencies that are adjacent to the main dependency of the factor. Specifically, these approaches considered sibling relations of the modifier token. In this paper we extend the parsing model with other types of second-order relations. In particular, we incorporate relations between the head and modifier tokens and the children of the modifier.

One paradigmatic case where the relations we consider are relevant is PP-attachment. For example, in "They sold 1,210 cars in the U.S.", the ambiguity problem is to determine whether the preposition "in" is modifying "sold" or "cars", the former being correct in this case. It is generally accepted that, to solve the attachment decision, it is necessary to look at the head noun within the prepositional phrase, which has a grandparental relation with the two candidate tokens that the phrase may attach to (see e.g.). Other ambiguities in language may also require consideration of grandparental relations in the dependency structure.

We present experiments with higher-order models trained with the averaged perceptron. The second-order relations that we incorporate in the model yield significant improvements in accuracy. However, the inference algorithms for our factorization are very expensive in terms of time and memory consumption, and become impractical when dealing with many labels or long sentences.

A dependency parser receives a sentence x of n tokens and outputs a labeled dependency tree y. In the tree, a labeled dependency is a triple <h, m, l>, where h in [0, n] is the index of the head token, m in [1, n] is the index of the modifier token, and l in [1, L] is the label of the dependency. The value h = 0 is used for dependencies where the head is the special root symbol of the sentence. We denote by T(x) the set of all possible dependency structures for a sentence x. In this paper we restrict to projective dependency trees. The dependency tree computed by the parser for a given sentence is

    y* = argmax_{y in T(x)} sum_{f in y} score(x, f)

(Figure 1: a factor in the higher-order parsing model.)

The parsing model represents a structure y as a set of factors, f in y, and scores each factor using parameters w. In a first-order model, a factor corresponds to a single labeled dependency, i.e. f = <h, m, l>. The features of the model are defined through a feature function phi_1(x, h, m), which maps a sentence together with an unlabeled dependency to a feature vector in R^{d_1}. The parameters of the model are a collection of vectors w^1_l in R^{d_1}, one for each possible label. The first-order model scores a factor as

    score_1(x, h, m, l) = phi_1(x, h, m) . w^1_l
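To make the first-order scoring concrete, here is a small sketch that evaluates score_1(x, h, m, l) = phi_1(x, h, m) . w^1_l with sparse feature vectors represented as dictionaries; the particular feature templates and weight values are invented for illustration and are not the paper's feature set.

```python
def phi1(sentence, h, m):
    """A toy first-order feature map phi_1(x, h, m): sparse binary features."""
    head = sentence[h] if h > 0 else "<root>"
    mod = sentence[m]
    return {f"hw={head}": 1.0,
            f"mw={mod}": 1.0,
            f"hw,mw={head},{mod}": 1.0,
            f"dist={m - h}": 1.0}

def score1(sentence, h, m, label, weights):
    """score_1(x, h, m, l) = phi_1(x, h, m) . w^1_l (sparse dot product)."""
    w_l = weights.get(label, {})
    return sum(v * w_l.get(feat, 0.0) for feat, v in phi1(sentence, h, m).items())

# Invented weights for two labels; a real model learns one vector per label,
# e.g. with the averaged perceptron used in the paper.
weights = {"OBJ":  {"hw=sold": 0.5, "hw,mw=sold,cars": 1.2},
           "NMOD": {"hw=sold": -0.3}}
sentence = {1: "They", 2: "sold", 3: "1,210", 4: "cars", 5: "in", 6: "the", 7: "U.S."}
print(score1(sentence, 2, 4, "OBJ", weights))    # 1.7
print(score1(sentence, 2, 4, "NMOD", weights))   # -0.3
```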
vector in rd1 the parameters of the model are acollection of vectors wl1 rd1 one for each pos sible labelthe firstorder model scores a factor as score1 1 wl1the higherorder model defined in this paper de composes a dependency structure into factors that include children of the head and the modifierin particular a factor in our model is represented by the signature f h m l ch cmi cmowhere as in the firstorder model h m and l are respectively the head modifier and label of the main dependency of the factor ch is the child of h in h m that is closest to m cmi is child of m inside h mthat is furthest from m cmo is the child of m outside h m that is furthest from m figure 1 de picts a factor of the higherorder model and table 1 lists the factors of an example sentencenote that a factor involves a main labeled dependency and threeadjacent unlabeled dependencies that attach to chil dren of h and m special values are used when either of these children are nullthe higherorder model defines additional m h ch cmi cmothey 1 2 sold 2 0 1 51200 3 4 cars 4 2 3 in 5 2 4 7the 6 7 yous 7 5 6 table 1 higherorder factors for an example sentencefor simplicity labels of the factors have been omitteda firstordermodel considers only h mthe secondorder model of mcdonald and pereira considers h m chfor the pp attachment decision the higherorder model allows us to define features that relate the verb with the content word of the prepositional phrase secondorder features through a function 2 which maps a head a modifierand a child in a feature vector in rd2 the param eters of the model are a collection of four vectors for each dependency label wl1 rd1 as in the firstorder model and wlhwlmi and wlmo all three in rd2 and each associated to one of the adjacent dependencies in the factorthe score of a factor is score2 1 wl1 2 wlh 2 wlmi 2 wlmonote that the model uses a common feature func tion for secondorder relations but features could be defined specifically for each type of relationnote also that while the higherorder factors includefour dependencies our modelling choice only ex ploits relations between the main dependency andsecondary dependenciesconsidering relations between secondary dependencies would greatly in crease the cost of the associated algorithms21 parsing algorithmin this section we sketch an extension of the projective dynamic programming algorithm of eisner for the higherorder model defined abovethe time complexity of the algo rithm is o and the memory requirements areoas in the eisner approach our algo rithm visits sentence spans in a bottom up fashion and constructs a chart with two types of dynamic programming structures namely open and closedstructuressee figure 2 for a diagramthe dy namic programming structures are 958 h m h m ecmo l micr1rhc lfigure 2 dynamic programming structures used in the pars ing algorithmthe variables in boldface constitute the index of the chart entry for a structure the other variables constitute the backpointer stored in the chart entryleft an open structure for the chart entry h m lo the algorithm looks for the r ch and cmi that yield the optimal score for this structureright a closed structure for the chart entry h e mc the algorithm looks for the l and cmo that yield the optimal scoreopen structures for each span from s to e and each label l the algorithm maintains a chart entry s e lo associated to the dependency s e lfor each entry the algorithm looks for the optimal splitting point r sibling ch and grandchild cmi using parameters wl1 wlh and wlmithis can be done 
in o because our features do not consider interactions betweench and cmisimilar entries e s lo are main tained for dependencies headed at e closed structures for each span from s to e and each token m s e the algorithm maintains an entry s e mc associated to a partial dependency tree rooted at s in which m is the last modifier of s the algorithm chooses the optimal dependency label l and grandchild cmo in o using parameters wlmosimilarentries e s mc are maintained for dependen cies headed at e we implemented two variants of the algorithmthe first forces the root token to participate in exactly one dependencythe second allows many dependencies involving the root tokenfor the singleroot case it is necessary to treat the root token dif ferently than other tokensin the experiments weused the singleroot variant if sentences in the train ing set satisfy this propertyotherwise we used the multiroot variant22 featuresthe firstorder features 1 are the exactsame implementation as in previous conll sys tem in turn those featureswere inspired by successful previous work in first order dependency parsing the most basic feature patterns consider the surface form partofspeech lemma and other morpho syntactic attributes of the head or the modifier of adependencythe representation also considers com plex features that exploit a variety of conjunctionsof the forms and partofspeech tags of the follow ing items the head and modifier the head modifier and any token in between them the head modifier and the two tokens following or preceding themas for the secondorder features we again base our features with those of mcdonald and pereira who reported successful experiments with secondorder modelswe add some patterns to their featureslet dir be rightif h m and leftotherwise let form and cpos return the surface form and coarse partofspeech of token xi respectivelythe definition of 2 is dir cpos cpos cpos dir cpos cpos dir cpos cpos dir form form dir form form dir cpos form dir cpos form dir form cpos dir form cposwe report experiments with higherorder models for the ten languages in the multilingual track of the conll2007 shared task 1in all experiments we trained our models us ing the averaged perceptron following the extension of collins for structured prediction problemsto train models weused projectivizedversions of the training depen dency trees21we are grateful to the providers of the treebanks that con stituted the data for the shared task 2we obtained projective trees for training sentences by run ning the projective parser with an oracle model 959 catalan czech english firstorder no averaging 8207 6898 8375 firstorder 8615 7596 8754 higherorder ch 8750 7715 8870 higherorder ch cmo 8768 7762 8928 higherorder ch cmi cmo 8804 7809 8959 table 2 labeled attachment scores on validation data for different models that ex ploit increasing orders of factorizations31 impact of higherorder factorizationour first set of experiments looks at the performanceof different factorizationswe selected three lan guages with a large number of training sentences namely catalan czech and englishto evaluate models we held out the training sentences that cover the first 10000 tokens the rest was used for trainingwe compared four models at increasing orders of factorizationsthe first is a firstorder modelthe second model is similar to that of mcdonald and pereira a factor consists of a main labeleddependency and the head child closest to the modifier the third model incorporates the modifier child outside the main dependency in the 
factorization finally the last model incorpo rates the modifier child inside the dependency span thus corresponding to the complete higher order model presented in the previous sectiontable 2 shows the accuracies of the models on validation dataeach model was trained for up to 10 epochs and evaluated at the end of each epoch we report the best accuracy of these evaluationsclearly the accuracy increases as the factors in clude richer information in terms of secondorderrelationsthe richest model obtains the best accu racy in the three languages being much better than that of the firstorder modelthe table also reportsthe accuracy of an unaveraged firstorder model il lustrating the benefits of parameter averaging32 results on the multilingual trackwe trained a higherorder model for each language using the averaged perceptronin the experiments presented above we observed that the algorithmdoes not overfit and that after two or three train ing epochs only small variations in accuracy occurbased on this fact we designed a criterion to train models we ran the training algorithm for up to three training test sentminmemuas las arabic 121 18gb 8148 7020 basque 3315 12gb 8108 7573 catalan 550 17gb 9246 8760 chinese 146166 60mb 8620 8086 czech 1819 18gb 8516 7860 english 1557 10gb 9063 8961 greek 810 250mb 8137 7356 hungarian 565 16gb 7992 7542 italian 1244 900mb 8719 8346 turkish 11655 600mb 8241 7585 average 8479 7909 table 3 performance of the higherorder projective models on the multilingual track of the conll2007 taskthe first twocolumns report the speed and mem ory requirements of the training algorithmthese evaluationswere made on the first 1000 training sentences with a dual core amd opterontm processor 256 at 18ghz with 4gb of memorythe last two columns report unlabelled and labelled attachment scores on test datadays of computation or a maximum of 15 epochsfor basque chinese and turkish we could complete the 15 epochsfor arabic and catalan we couldonly complete 2 epochstable 3 reports the perfor mance of the higherorder projective models on the ten languages of the multilingual trackwe have presented dependency parsing models that exploit higherorder factorizations of treessuch factorizations allow the definition of secondorder features associated with sibling and grandparental relationsfor some languages our models obtain stateoftheart resultsone drawback of our approach is that the inference algorithms for higherorder models are very expensivefor languages with many dependency la bels or long sentences training and parsing becomes impractical for current machinesthus a promising line of research is the investigation of methods toefficiently incorporate higherorder relations in dis criminative parsingacknowledgments i am grateful to terry koo amir globerson and michael collins for their helpful comments relating this work and to the anonymous reviewers for their suggestionsa significant part of the system and the code was based on my previous system in the conllx task developed with mihai surdeanu and llus marquez at the upcthe author was supported by the catalan ministry of innovation universities and enterprise960
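as a concrete reading of the factor scoring described in the model section above, here is a small sketch in python; the feature extractors `phi1` and `phi2` and the sparse-dict feature representation are hypothetical stand-ins for the paper's feature functions, and `w[label]` holds the four per-label weight vectors (main dependency, head child, inside modifier child, outside modifier child).

```python
# illustrative sketch of scoring a higher-order factor f = (h, m, l, ch, cmi, cmo).
# feature vectors are sparse dicts (feature name -> value).

def dot(features, weights):
    return sum(value * weights.get(name, 0.0) for name, value in features.items())

def score_factor(x, h, m, label, ch, cmi, cmo, phi1, phi2, w):
    """score a factor of sentence x under the higher-order model.

    w[label] maps 'dep' (main dependency), 'h' (head child), 'mi' (modifier
    child inside the span) and 'mo' (modifier child outside the span) to
    sparse weight vectors, mirroring the four per-label parameter vectors."""
    score = dot(phi1(x, h, m), w[label]["dep"])
    for child, part, parent in ((ch, "h", h), (cmi, "mi", m), (cmo, "mo", m)):
        if child is not None:    # None plays the role of the special null value
            score += dot(phi2(x, parent, child), w[label][part])
    return score
```

a decoder would call such a scoring function once per candidate factor inside the dynamic program sketched in the parsing-algorithm section.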
D07-1101
experiments with a higher-order projective dependency parser. we present experiments with a dependency parsing model defined on rich factors. our model represents dependency trees with factors that include three types of relations between the tokens of a dependency and their children. we extend the projective parsing algorithm of eisner for our case and train models using the averaged perceptron. our experiments show that considering higher-order information yields significant improvements in parsing accuracy but comes at a high cost in terms of both time and memory consumption. in the multilingual exercise of the conll 2007 shared task, our system obtains the best accuracy for english and the second best accuracies for basque and czech. we extend the first-order model to incorporate a sum over scores for pairs of adjacent arcs in the tree, yielding a second-order model. our second-order models include head-grandparent relations. our second-order algorithm uses the child of the dependent occurring in the sentence between the head and the dependent, as well as the edge from the dependent to a grandchild. we introduce the leftmost and rightmost grandchild as factors.
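the summary above mentions training with the averaged perceptron; the sketch below shows the basic structured-perceptron update with naive averaging, where `decode` (argmax inference under the current weights) and `features` (sparse feature map of a sentence/tree pair) are hypothetical stand-ins for the parser's components. real implementations average lazily rather than summing all weights at every step.

```python
# minimal sketch of averaged-perceptron training for a structured parser.
from collections import defaultdict

def train_averaged_perceptron(data, decode, features, epochs=3):
    w = defaultdict(float)        # current weights
    total = defaultdict(float)    # running sum of weights, for averaging
    steps = 0
    for _ in range(epochs):
        for sentence, gold_tree in data:
            predicted = decode(sentence, w)
            if predicted != gold_tree:
                # standard perceptron update toward the gold structure
                for feat, val in features(sentence, gold_tree).items():
                    w[feat] += val
                for feat, val in features(sentence, predicted).items():
                    w[feat] -= val
            for feat, val in w.items():   # naive averaging; real code does this lazily
                total[feat] += val
            steps += 1
    return {feat: val / steps for feat, val in total.items()}
```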
improving translation quality by discarding most of the phrasetable it is possible to reduce the bulk of phrasetables for statistical machine translation us ing a technique based on the significance testing of phrase pair cooccurrence in the parallel corpus the savings can be quitesubstantial and because no reduction in bleu score in some cases an i am provement in bleu is obtained at the same time although the effect is less pronounced if stateoftheart phrasetable smoothing is employed an important part of the process of statistical ma chine translation involves inferring a large table of phrase pairs that are translations of each other from a large corpus of aligned sentencesthese phrase pairs together with estimates of con ditional probabilities and useful feature weights called collectively a phrasetable are used to match a source sentence to produce candidate translationsthe choice of the best translation is made based on the combination of the probabilities and feature weights and much discussion has been made of how to make the estimates of probabilites how to smooth these estimates and what features are most useful for discriminating among the translationshowever a cursory glance at phrasetables pro duced often suggests that many of the translations are wrong or will never be used in any translationon the other hand most obvious ways of reducing the bulk usually lead to a reduction in translation quality as measured by bleu scorethis has led toan impression that these pairs must contribute some thing in the grand scheme of things and certainly more data is better than lessnonetheless this bulk comes at a costlarge ta bles lead to large data structures that require moreresources and more time to process and more i am portantly effort directed in handling large tablescould likely be more usefully employed in more fea tures or more sophisticated searchin this paper we show that it is possible to prune phrasetables using a straightforward approach based on significance testing that this approach does notadversely affect the quality of translation as mea sured by bleu score and that savings in terms ofnumber of discarded phrase pairs can be quite substantialeven more surprising pruning can actu ally raise the bleu score although this phenomenon is less prominent if state of the art smoothing of phrasetable probabilities is employedsection 2 reviews the basic ideas of statisticalmachine translation as well as those of testing sig nificance of associations in two by two contingency tables departing from independencefrom this a filtering algorithm will be described that keeps only phrase pairs that pass a significance testsection 3 outlines a number of experiments that demonstratethe phenomenon and measure its magnitudesec tion 4 presents the results of these experimentsthe paper concludes with a summary of what has been learned and a discussion of continuing work that builds on these ideas96721 our approach to statistical machinetranslationwe define a phrasetable as a set of source phrases sand their translations t along with associated translation probabilities p and pthese conditional distributions are derivedfrom the joint frequencies cof source tar get nmgrams observed in a wordaligned parallel corpusthese joint counts are estimated using the phrase induction algorithm described in with symmetrized word alignments gen erated using ibm model 2 phrases are limited to 8 tokens in length given a source sentence s our phrasebased smt system tries to find the target sentence tthat is the 
most likely translation of s to make search more efficient we use the viterbi approximation and seek the most likely combination of t and its alignment a with s rather than just the most likely t t argmax t p argmax ta pwhere a tk are tar get phrases such that t t1tk sk are sourcephrases such that s sj1 sjk and sk is the trans lation of the kth target phrase tk to model p we use a standard loglinear approach p exp i ifi where each fi is a feature function and weights i are set using ochs algorithm to maximize the systems bleu score on a development corpusthe features used are the length of t a singleparameterdistortion penalty on phrase reordering in a as de scribed in phrase translation model probabilities and 4gram language modelprobabilities log p using kneserney smooth ing as implemented in the srilm toolkit phrase translation model probabilities are features of the form log p k k1 log p ie we assume that the phrases sk specified by a are conditionally independent and depend only on their aligned phrases tk the forwardphrase probabilities p are not used as features but only as a filter on the set of possible translations for each source phrase sthat matches some ngram in s only the 30 topranked translations taccording to p are retainedoneof the reviewers has pointed out correctly that tak ing only the top 30 translations will interact with the subject under study however this pruning technique has been used as a way of controlling the width of our beam search and rebalancing search parameters would have complicated this study and taken it away from our standard practicethe phrase translation model probabilities are smoothed according to one of several techniques as described in and identified in the discussion below22 significance testing using two by twocontingency tables each phrase pair can be thought of as am nmgram where sis an ngram from the source side of the corpus and it is an mgram from the target side of the corpuswe then define cas the number of parallel sentences that contain one or more occurrences of son the source side and ton the target side c the number of parallel sentences that contain one or more occurrences of son the source side and c the number of parallel sentences that contain one or more occurrences of ton the target sidetogether with n the number of parallel sentences we haveenough information to draw up a two by two contingency table representing the unconditional relation ship between sand tthis table is shown in table 1a standard statistical technique used to assess theimportance of an association represented by a con tingency table involves calculating the probability that the observed table or one that is more extremecould occur by chance assuming a model of independencethis is called a significance testintro ductory statistics texts describe one such test called the chisquared testthere are other tests that more accurately apply to our small tables with only two rows and columns968 table 1 two by two contingency table for sand tcccc ccn cc cn c c n c nin particular fishers exact test calculates probabil ity of the observed table using the hypergeometric distibutionph c cc the pvalue associated with our observed table is then calculated by summing probabilities for tables that have a larger cpvalue kc ph this probability is interpreted as the probability of observing by chance an association that is at least as strong as the given one and hence its significanceagresti provides an excellent introduction tothis topic and the general ideas of significance 
test ing in contingency tablesfishers exact test of significance is considered agold standard since it represents the precise proba bilities under realistic assumptionstests such as the chisquared test or the loglikelihoodratio test depend on asymptotic assumptions that are often not valid for small countsnote that the count ccan be larger or smaller than cdiscussed abovein most cases it will be larger because it counts all cooccurrences of swith trather than just those that respect the word alignmentit can be smaller though because multiple cooccurrences can occur within a single aligned sentence pair and be counted multiple times in con the other hand cwill not count all of the possible ways that an nmgrammatch can occur within a single sentence pair it will count the match only once per sentence pair in which it occursmoore discusses the use of significance testing of word associations using the log likelihoodratio test and fishers exact testhe shows that fishers exact test is often a practical method if a number of techniques are followed1approximating the logarithms of factorials using commonly available numerical approxima tions to the log gamma function2using a wellknown recurrence for the hyperge ometic distribution 3noting that few terms usually need to be summed and 4observing that convergence is usually rapid23 significance pruningthe idea behind significance pruning of phrasetables is that not all of the phrase pairs in a phrasetable are equally supported by the data and that many of the weakly supported pairs could be removed because 1the chance of them occurring again might be low and 2their occurrence in the given corpus may be the result of an artifact this concept is usually re ferred to as overfit since the model fits aspects of the training data that do not lead to improved predictionphrase pairs that cannot stand on their own bydemonstrating a certain level of significance are sus pect and removing them from the phrasetable may 969 be beneficial in terms of reducing the size of data structuresthis will be shown to be the case in rather general termsnote that this pruning may and quite often will remove all of the candidate translations for a source phrasethis might seem to be a bad idea but it mustbe remembered that deleting longer phrases will al low combinations of shorter phrases to be used and these might have more and better translations from the corpushere is part of the intuition about how phrasetable smoothing may interact with phrasetable pruning both are discouraging longer but infrequent phrases from the corpus in favour of combinations of more frequent shorter phrasesbecause the probabilities involved below will beso incredibly tiny we will work instead with the neg ative of the natural logs of the probabilitiesthus instead of selecting phrase pairs with a pvalue less than exp we will select phrase pairs with a negativelogpvalue greater than 20this has the advantage of working with ordinarysized numbers and the happy convention that bigger means more pruning24 c 1 111 tables and the threshold an important special case of a table occurs when a phrase pair occurs exactly once in the corpus and each of the component phrases occurs exactly once in its side of the parallel corpusthese phrase pairs will be referred to as 111 phrase pairs and the corresponding tables will be called 111 contingency tables because c 1 c 1 and c 1moore comments that the pvalue for these tables under fishers exact test is 1n since we areusing thresholds of the negative logarithm of the 
p value the value log is a useful threshold to considerin particular is the smallest threshold thatresults in none of the 111 phrase pairs being in cludedsimilarly is the largest threshold that results in all of the 111 phrase pairs being includedbecause 111 phrase pairs can make up a large part of the phrase table this is important observation for its own sakesince the contingency table with c 1 hav ing the greatest significance is the 111 table using the threshold of can be used to exclude all of the phrase pairs occurring exactly once 1the common strategy of deleting all of the 1 count phrase pairs is very similar in effect to the use of the threshold31 wmt06the corpora used for most of these experiments arepublicly available and have been used for a number of comparative studies provided as part of the materials for the shared task are parallel corpora for frenchenglish spanishenglish and germanenglish as well as language models for english french spanish and germanthese are all based on the europarl resources the only change made to these corpora was to convert them to lowercase and to unicode utf8phrasetables were produced by symmetrizing ibm2 conditional probabilities as described abovethe phrasetables were then used as a list of nmgrams for which counts c c and c were obtainednegativelogpvalues under fishers exact test were computed for each of the phrase pairs in the phrasetable and the entry was censored if the negativelogpvalue for the test was below the pruning thresholdthe entries that are kept are ones that are highly significanta number of combinations involving many differ ent pruning thresholds were considered no pruning10 15 20 25 50 100 and 1000in ad dition a number of different phrasetable smoothing algorithms were used no smoothing goodturing smoothing kneserney 3 parameter smoothing and the loglinear mixture involving two features called zensney 32 chineseto test the effects of significance pruning on larger corpora a series of experiments was run on a much larger corpus based on that distributed for mt06chineseenglish since the ob jective was to assess how the method scaled we used our preferred phrasetable smoothing technique of 970 1000100101 bleu by pruning threshold no smoothing 3 3 333 3 3 3 3 gt kn3 2 2222 2 2 2 2 zn 107 106 105 1000100101 phrasetable size by pruning threshold size3 3 333 3 3 3 3 107106105 bleu by phrasetable size no smoothing 3 3 3333 3 3 3 gt kn3 222222 2 2 2 zn figure 1 wmt06 results for french englishto separate the curves graphs for smoothed meth ods are shifted by 1 2 or 3 bleu points table 2 corpus sizes and values number of parallel sentences wmt06 fren 688031 134415892 wmt06 esen 730740 13501813 wmt06 deen 751088 135292781 chineseenglish best 3164228 149674197 chineseenglish unv2 4979345 154208089zensney and separated our corpus into two phrase tables one based on the un corpus and the other based on the best of the remaining parallel corpora available to usdifferent pruning thresholds were considered no pruning 14 16 18 20 and 25in addition another more aggressive method of pruning was attemptedmoore points out correctly that phrase pairs that oc cur in only one sentence pair 1 are less reliable and might require more special treatmentthese are all pruned automatically at thresholds of16 and above but not at threshold of 14a spe cial series of runs was done for threshold 14 with all of these singletons removed to see whether at thesethresholds it was the significance level or the prun ing of phrase pairs with 1 that was more 
importantthis is identified as 14in the resultsthe results of the experiments are described in ta bles 2 through 6table 2 presents the sizes of the various parallel corpora showing the number of parallel sentences n for each of the experiments together with the thresholds table 3 shows the sizes of the phrasetables that result from the various pruning thresholds described for the wmt06 datait is clear that this is extremely aggressive pruning at the given levelstable 4 shows the corresponding phrasetable sizes for the large corpus chineseenglish datathe pruning is not as aggressive as for the wmt06 data but still quite sizeabletables 5 and 6 show the main results for thewmt06 and the chineseenglish large corpus experimentsto make these results more graphic fig ure 1 shows the french english data from the wmt06 results in the form of three graphsnote 971 table 3 wmt06 distinct phrase pairs by pruning threshold threshold fren esen deen none 9314165 100 11591013 100 6954243 100 10 7999081 859 10212019 881 5849593 841 6014294 646 7865072 679 4357620 627 1435576 154 1592655 137 1163296 167 15 1377375 148 1533610 132 1115559 160 20 1152780 124 1291113 111 928855 134 25 905201 97 1000264 86 732230 105 50 446757 48 481737 42 365118 53 100 235132 25 251999 22 189655 27 1000 22873 02 24070 02 16467 02 table 4 chineseenglish distinct phrase pairs by pruning threshold threshold best unv2 none 18858589 100 20228273 100 14 7666063 407 13276885 656 16 4280845 227 7691660 380 18 4084167 217 7434939 368 20 3887397 206 7145827 353 25 3403674 180 6316795 312 also pruning c 1 144477920 237 7917062 391 that an artificial separation of 1 bleu point has been introduced into these graphs to separate themwithout this they lie on top of each other and hide the essential pointin compensation the scale for the bleu coordinate has been removedthese results are summarized in the following subsections41 bleu as a function of thresholdin tables 5 and 6 the largest bleu score for each set of runs has been marked in bold fontin addition to highlight that there are many near ties for largest bleu all bleu scores that are within 01 of the best are also marked in boldwhen this is done it becomes clear that pruningat a level of 20 for the wmt06 runs would not re duce bleu in most cases and in many cases wouldactually increase ita pruning threshold of 20 corresponds to discarding roughly 90 of the phrase tablefor the chineseenglish large corpus runs a levelof 16 seems to be about the best with a small in crease in bleu and a 60 70 reduction in the size of the phrasetable42 bleu as a function of depth of pruninganother view of this can be taken from tables 5 and 6the fraction of the phrasetable retained is a more or less simple function of pruning thresholdas shown in tables 3 and 4by including the per centages in tables 5 and 6 we can see that bleu goes up as the fraction approaches between 20 and 30this seems to be a relatively stable observation across the experimentsit is also easily explained by its strong relationship to pruning threshold43 large corporatable 6 shows that this is not just a small corpus phenomenonthere is a sizeable benefit both in phrase table reduction and a modest improvement to bleu even in this case44 is this just the same as phrasetablesmoothingone question that occurred early on was whether this improvement in bleu is somehow related to the improvement in bleu that occurs with phrasetable smoothing972it appears that the answer is in the main yes al though there is definitely something else going 
onit is true that the benefit in terms of bleu is less ened for better types of phrasetable smoothing but the benefit in terms of the reduction in bulk holdsit is reassuring to see that no harm to bleu is done by removing even 80 of the phrasetable45 comment about c 1another question that came up is the role of phrasepairs that occur only once c 1in particu lar as discussed above the most significant of these are the 111 phrase pairs whose components also only occur once c 1 and c 1thesephrase pairs are amazingly frequent in the phrasetables and are pruned in all of the experiments ex cept when pruning threshold is equal to 14the chineseenglish large corpus experiments give us a good opportunity to show that significance level seems to be more an issue than the case that c 1note that we could have kept the phrase pairs whose marginal counts were greater than one but most of these are of lower significance and likelyare pruned already by the thresholdthe given con figuration was considered the most likely to yield a benefit and its poor performance led to the whole idea being put asideto sum up the main conclusions are five in number 1phrasetables produced by the standard diagandmethod can be aggressively pruned using significance pruning with out worsening bleu2if phrasetable smoothing is not done the bleuscore will improve under aggressive signifi cance pruning3if phrasetable smoothing is done the improvement is small or negligible but there is still no loss on aggressive pruning4the preservation of bleu score in the presence of largescale pruning is a strong effect insmall and moderate size phrasetables but oc curs also in much larger phrasetables5in larger phrasetables based on larger corporathe percentage of the table that can be dis carded appears to decreasethis is plausible since a similar effect has been noted with phrasetable smoothing together these results suggest that for these corpus sizes the increase in the number of strongly supported phrase pairs is greater than the increase in the number of poorly supported pairs which agrees with intuitionalthough there may be other approaches to prun ing that achieve a similar effect the use of fishers exact test is mathematically and conceptually one of the simplest since it asks a question separately for each phrase pair considering this phase pair in isolation of any other analysis on the corpus could it have occurred plausibly by purely random processes inherent in the corpus constructionif the answer is yes then it is hard to argue that the phrase pair is an association of general applicability from the evidence in this corpus alonenote that the removal of 1count phrase pairs is subsumed by significance pruning with a thresholdgreater than and many of the other simple ap proaches are more difficult to justify as simply as the above significance testnonetheless there remains workto do in determining if computationally simpler ap proaches do as wellmoores work suggests that loglikelihoodratio would be a cheaper and accurate enough alternative for examplewe will now return to the interaction of the se lection in our beam search of the top 30 candidates based on forward conditional probabilitiesthis will affect our results but most likely in the following manner 1for very small thresholds the beam will become much wider and the search will take much longerin order to allow the experiments to complete in a reasonable time other means will need to be employed to reduce the choicesthis reduction will also interact with the sig nificance 
pruning but in a less understandable manner2for large thresholds there will not be 30973 choices and so there will be no effect3for intermediate thresholds the extra pruning might reduce bleu score but by a small amount because most of the best choices are included in the searchusing thresholds that remove most of the phrase table would no doubt qualify as large thresholds so the question is addressing the true shape of the curvefor smaller thresholds and not at the expected operat ing levelsnonetheless this is a subject for further study especially as we consider alternatives to our filter 30approach for managing beam widththere are a number of important ways that this work can and will be continuedthe code base fortaking a list of nmgrams and computing the re quired frequencies for signifance evaluation can beapplied to related problemsfor example skipngrams may be studied better using this approachleading to insight about methods that weakly ap proximate patternsthe original goal of this work was to better understand the character of phrasetables and it remains a useful diagnostic techniqueit will hope fully lead to more understanding of what it takes to make a good phrasetable especially for languages that require morphological analysis or segmentation to produce good tables using standard methodsthe negativelogpvalue promises to be a useful feature and we are currently evaluating its meritsthis material is based upon work supported by the defense advanced research projects agency under contract nohr001106c0023any opinions findings and conclusions or recom mendations expressed in this material are those of the authors and do not necessarily reflect the views of the defense advanced research projects agency
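to make the pruning criterion concrete, here is a sketch of the negative-log-p-value computation under fisher's exact test as described above, using log-gamma factorials and a stable log-sum; the helper names and the exact threshold handling are illustrative assumptions rather than the authors' code.

```python
# significance of a phrase pair from its 2x2 sentence-count contingency table.
import math

def log_choose(n, k):
    # log of the binomial coefficient, via the log-gamma function
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_add(a, b):
    # log(exp(a) + exp(b)), computed stably
    if b > a:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))

def neg_log_p_value(c_st, c_s, c_t, n):
    """one-sided fisher's exact test for the table of sentence counts:
    c_st joint co-occurrences, c_s and c_t marginals, n parallel sentences."""
    log_p = None
    for k in range(c_st, min(c_s, c_t) + 1):
        if c_t - k > n - c_s:
            continue                       # impossible table, probability zero
        term = (log_choose(c_s, k)
                + log_choose(n - c_s, c_t - k)
                - log_choose(n, c_t))
        log_p = term if log_p is None else log_add(log_p, term)
    return -log_p

def keep_phrase_pair(c_st, c_s, c_t, n, threshold):
    # e.g. a threshold just above math.log(n) discards every 1-1-1 phrase pair,
    # consistent with the alpha/epsilon discussion above
    return neg_log_p_value(c_st, c_s, c_t, n) > threshold
```

for a 1-1-1 phrase pair this returns exactly log(n), matching the 1/n p-value noted in the paper.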
D07-1103
improving translation quality by discarding most of the phrasetable. it is possible to reduce the bulk of phrasetables for statistical machine translation using a technique based on significance testing of phrase pair co-occurrence in the parallel corpus. the savings can be quite substantial and cause no reduction in bleu score; in some cases an improvement in bleu is obtained at the same time, although the effect is less pronounced if state-of-the-art phrasetable smoothing is employed. we use fisher's exact test. we filter out statistically unreliable translation pairs.
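the significance test above is driven by sentence-level co-occurrence counts; the following sketch shows one way such counts could be collected from a parallel corpus, with `source_phrases` and `target_phrases` as hypothetical helpers that enumerate the phrases of interest in a sentence (a real system would restrict the joint counts to pairs that actually occur in the phrasetable).

```python
# sentence-level counts c(s), c(t) and c(s,t) feeding the contingency table.
from collections import Counter

def contingency_counts(bitext, source_phrases, target_phrases):
    c_s, c_t, c_st = Counter(), Counter(), Counter()
    n = 0
    for src_sentence, tgt_sentence in bitext:
        n += 1
        src_set = set(source_phrases(src_sentence))   # count each phrase once per sentence
        tgt_set = set(target_phrases(tgt_sentence))
        c_s.update(src_set)
        c_t.update(tgt_set)
        c_st.update((s, t) for s in src_set for t in tgt_set)
    return c_s, c_t, c_st, n
```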
hierarchical phrasebased translation with suffix arrays a major engineering challenge in statistical machine translation systems is the efficient representation of extremely large translationrulesets in phrasebased models this prob lem can be addressed by storing the training data in memory and using a suffix array asan efficient index to quickly lookup and extract rules on the fly hierarchical phrasebased translation introduces the added wrin kle of source phrases with gaps lookup algorithms used for contiguous phrases nolonger apply and the best approximate pat tern matching algorithms are much too slow taking several minutes per sentence wedescribe new lookup algorithms for hierar chical phrasebased translation that reduce the empirical computation time by nearly two orders of magnitude making onthefly lookup feasible for source phrases with gaps current statistical machine translation systems rely on very large rule setsin phrasebased systems rules are extracted from parallel corpora containingtens or hundreds of millions of wordsthis can result in millions of rules using even the most conser vative extraction heuristicsefficient algorithms for rule storage and access are necessary for practical decoding algorithmsthey are crucial to keeping up with the everincreasing size of parallel corpora as well as the introduction of new data sources such as webmined and comparable corporauntil recently most approaches to this probleminvolved substantial tradeoffsthe common practice of test set filtering renders systems impracti cal for all but batch processingtight restrictions on phrase length curtail the power of phrasebasedmodelshowever some promising engineering so lutions are emergingzens and ney use a diskbased prefix tree enabling efficient access to phrase tables much too large to fit in main memoryan alternative approach introduced independently by both callisonburch et al and zhang and vogel is to store the training data itself inmemory and use a suffix array as an efficient in dex to look up extract and score phrase pairs on the flywe believe that the latter approach has several important applications so far these techniques have focused on phrase based models using contiguous phrases some recent models permit discontiguous phrases of particular in terest to us is the hierarchical phrasebased model ofchiang which has been shown to be supe rior to phrasebased modelsthe ruleset extractedby this model is a superset of the ruleset in an equivalent phrasebased model and it is an order of magnitude largerthis makes efficient rule representa tion even more criticalwe tackle the problem using the online rule extraction method of callisonburch et al and zhang and vogel the problem statement for our work is given an input sentence efficiently find all hierarchical phrasebased translation rules for that sentence in the training corpus976 we first review suffix arrays and hierarchicalphrasebased translation we show that the obvious approach using stateoftheart pattern match ing algorithms is hopelessly inefficient we then describe a series of algorithms to address thisinefficiency our algorithms reduce computa tion time by two orders of magnitude making the approach feasible we close with a discussion that describes several applications of our work a suffix array is a data structure representing all suf fixes of a corpus in lexicographical order formally for a text t the ith suffix of t is the substring of the text beginning atposition i and continuing to the end of t this suf fix can be uniquely 
identified by the index i of itsfirst wordthe suffix array sat of t is a permuta tion of 1 t arranged by the lexicographical order of the corresponding suffixesthis representationenables fast lookup of any contiguous substring us ing binary searchspecifically all occurrences of a lengthm substring can be found in o time 1 callisonburch et al and zhang and vogel use suffix arrays as follows1load the source training text f the suffix arraysaf the target training text e and the align ment a into memory2for each input sentence look up each substring fof the sentence in the suffix arrayaligned phrase eusing the phrase extraction method of koehn et al 4compute the relative frequency score p ofeach pair using the count of the extracted pair and the marginal count of f5compute the lexical weighting score of thephrase pair using the alignment that gives the best score1abouelhoda et al show that lookup can be done in optimal o time using some auxiliaray data structuresforour purposes o is practical since for the 27m word corpus used to carry out our experiments log t 256use the scored rules to translate the input sentence with a standard decoding algorithma difficulty with this approach is step 3 which canbe quite slowits complexity is linear in the num ber of occurrences of the source phrase fboth callisonburch et al and zhang and vogel solve this with samplingif a source phraseappears more than k times they sample only k oc currences for rule extractionboth papers reportthat translation performance is nearly identical to ex tracting all possible phrases when k 1002we consider the hierarchical translation model ofchiang formally this model is a syn chronous contextfree grammarthe lexicalizedtranslation rules of the grammar may contain a sin gle nonterminal symbol denoted x we will use a b c and d to denote terminal symbols and you v andw to denote sequences of these ter minalswe will additionally use and to denote sequences containing both termi nals and nonterminalsa translation rule is written x this rule states that a span of the input matching is replacedby in translationwe require that and con tain an equal number of coindexed nonterminalsan example rule with coindexes is x ux 1 vx 2wyou x 2 v x 1w when discussing only the source side of such rules we will leave out the coindexesfor instance the source side of the above rule will be written uxvxw3 for the purposes of this paper we adhere to therestrictions described by chiang for rules ex tracted from the training datarules can contain at most two nonterminalsrules can contain at most five terminalsrules can span at most ten words2a sample size of 100 is actually quite small for many phrases some of which occur tens or hundreds of thousands of timesit is perhaps surprising that such a small sample size works as well as the full datahowever recent work by och and federico and bertoldi has shown that the statistics used by phrasebased systems are not very precise3in the canonical representation of the grammar sourcesidecoindexes are always in sorted order making them unambigu ous977 nonterminals must span at least two wordsadjacent nonterminals are disallowed in the source side of a ruleexpressed more economically we say that our goal is to search for source phrases in the form you uxv or uxvxw where 1 uvw 5 and v 0 in the final casenote that the model also allows rules in the form xu ux xux xuxv and uxvx however these rules are lexically identical to other rules and thus will match the same locations in the source textonthefly lookup using 
suffix arrays involves an added complication when the rules are in form uxv or uxvxwbinary search enables fast lookup of contiguous substringshowever it cannot be used for discontiguous substringsconsider the rule axbxcif we search for this rule in the followinglogical suffix array fragment we will find the bold faced matches a c a c b a d c a d a c a d b a a d b d a d d b a a d a b c a d d b d a a b b a a d d b d d c a a a even though these suffixes are in lexicographicalorder matching suffixes are interspersed with non matching suffixeswe will need another algorithmto find the source rules containing at least onex surrounded by nonempty sequences of terminal sym bols41 baseline approachin the patternmatching literature words spanned by the nonterminal symbols of chiangs grammar are called do not cares and a nonterminal symbol in a query pattern that matches a sequence of do not caresis called a variable length gapthe search prob lem for patterns containing these gaps is a variant of approximate pattern matching which has receivedsubstantial attention the best algo rithm for pattern matching with variablelength gaps in a suffix array is a recent algorithm by rahman et al it works on a pattern w1xw2xwi consisting of i contiguous substrings w1 w2 wi each separated by a gapthe algorithm is straight forwardafter identifying all ni occurrences of each wi in o time collocations thatmeet the gap constraints are computed using an ef ficient data structure called a stratified tree 4 although we refer the reader to the source text for a full description of this data structure its salient characteristic is that it implements priority queue operations insert and nextelement in o timetherefore thetotal running time for an algorithm to find all con tiguous subpatterns and compute their collocations is owe can improve on the algorithm of rahman et al using a variation on the idea of hashingwe exploit the fact that our large text is actually acollection of relatively short sentences and that col located patterns must occur in the same sentence in order to be considered a ruletherefore we can use the sentence id of each subpattern occurrence as a kind of hash keywe create a hash table whosesize is exactly the number of sentences in our train ing corpuseach location of the partially matched pattern w1xxwi is inserted into the hash bucket with the matching sentence id to find collocated patterns wi1 we probe the hash table with each of the ni1 locations for that subpatternwhen amatch is found we compare the element with all el ements in the bucket to see if it is within the windowimposed by the phrase length constraintstheoreti cally the worst case for this algorithm occurs when all elements of both sets resolve to the same hash bucket and we must compare all elements of one set with all elements of the other setthis leads to a worst case complexity of ohowever for real language data the per formance for sets of any significant size will be o since most patterns will occur once in any given sentence42 analysisit is instructive to compare this with the complex ity for contiguous phrasesin that case total lookup time is o for a contiguous pattern w 4often known in the literature as a van emde boas tree or van emde boas priority queue978 the crucial difference between the contiguous and discontiguous case is the added term i i1 niforeven moderately frequent subpatterns this term dom inates complexityto make matters concrete consider the training corpus used in our experiments which contains27m source wordsthe 
three most frequent uni grams occur 148m 116m and 688k times thefirst two occur on average more than once per sen tencein the worst case looking up a contiguous phrase containing any number and combination ofthese unigrams requires no more than 25 compari son operationsin contrast the worst case scenario for a pattern with a single gap bookended on either side by the most frequent word requires over two million operations using our baseline algorithm and over thirteen million using the algorithm of rahman et al a single frequent word in an input sentence is enough to cause noticeable slowdowns since it can appear in up to 530 hierarchical rulesto analyze the cost empirically we ran our base line algorithm on the first 50 sentences of the nist chineseenglish 2003 test set and measured the cpu time taken to compute collocationswe foundthat on average it took 224125 seconds per sentence just to compute all of the needed collocationsby comparison decoding time persentence is roughly 10 seconds with moderately ag gressive pruning using the python implementation of chiang clearly looking up patterns in this way is not prac ticalto analyze the problem we measured the amount of cpu time per computationcumulative lookup time was dominated by a very small fraction of the computations as expected further analysis showed that these expensive computations all involved one or more very frequent subpatternsin the worst cases a single collocation took severalseconds to computehowever there is a silver liningpatterns follow a zipf distribution so the number of pattern types that because the problem is actu ally quite smallthe vast majority of patterns areraretherefore our solution focuses on computa tions where one or more of the component patternsis frequentassume that we are computing a collo computations c you m you l a t i v e t i m e 300k 150k figure 1 ranked computations vs cumulative timea small fraction of all computations account for most of the computational timecation of pattern w1xxwi and pattern wi1 and we know all locations of eachthere are three casesif both patterns are frequent we resort to a precomputed intersection we were notaware of any algorithms to substantially i am prove the efficiency of this computation when it is requested on the fly but precomputation can be done in a single pass over the text at decoder startupif one pattern is frequent and the other is rarewe use an algorithm whose complexity is de pendent mainly on the frequency of the rare pattern it can also be used for pairs of rare patterns when one pattern is much rarer than the otherif both patterns are rare no special algorithms are neededany linear algorithm will sufficehowever for reasons described in 53 our other collocation algorithms depend on sorted sets so we use a merge algorithmfinally in order to cut down on the number of un necessary computations we use an efficient method to enumerate the phrases to lookup this method also forms the basis of various caching strategies for additional speedupswe analyze the memory use of our algorithms in 5551 precomputationprecomputation of the most expensive collocationscan be done in a single pass over the textas in put our algorithm requires the identities of the k 979 most frequent contiguous patterns5 it then iterates over the corpuswhenever a pattern from the list is seen we push a tuple consisting of its identity and current location onto a queuewhenever the oldest item on the queue falls outside the maximum phrase length window with respect to the current positionwe 
compute that items collocation with all succeed ing patterns and pop it from the queuewe repeat this step for every item that falls outside the windowat the end of each sentence we compute collocations for any remaining items in the queue and then empty itour precomputation includes the most frequent ngram subpatternsmost of these are unigrams but in our experiments we found 5grams among the 1000 most frequent patternswe precompute the locations of source phrase uxv for any pair you and v that both appear on this listthere is alsoa small number of patterns uxv that are very frequentwe cannot easily obtain a list of these in ad vance but we observe that they always consist of apair you and v of patterns from near the top of the frequency listtherefore we also precompute the loca tions uxvxw of patterns in which both you and v are among these superfrequent patterns treating this as the collocation of the frequent pattern uxv and frequent pattern w we also compute the analagous case for you and vxw52 fast intersectionfor collocations of frequent and rare patterns we use a fast set intersection method for sorted sets called double binary search 6 it is based on the intuition that if one set in a pair of sorted sets is much smaller than the other thenwe can compute their intersection efficiently by per forming a binary search in the larger data set d for each element of the smaller query set q double binary search takes this idea a step furtherit performs a binary search in d for the median ele ment of q whether or not the element is found the 5these can be identified using a single traversal over alongest common prefix array an auxiliary data struc ture of the suffix array described by manber and myers since we do not need the lcp array at runtime we chose to do this computation once offline6minor modifications are required since we are computing collocation rather than intersectiondue to space constraints details and proof of correctness are available in lopez search divides both sets into two pairs of smaller sets that can be processed recursivelydetailed analysis and empirical results on an information retrieval task are reported in baezayates and baezayates and salinger if q log d d then theperformance is guaranteed to be sublinearin practice it is often sublinear even if q log d is somewhat larger than din our implementation we sim ply check for the condition q log d d to decide whether we should use double binary search or the merge algorithmthis check is applied in the recursive cases as well as for the initial inputsthevariable can be adjusted for performancewe de termined experimentally that a good value for this parameter is 0353 obtaining sorted setsdouble binary search requires that its input sets be in sorted orderhowever the suffix array returnsmatchings in lexicographical order not numeric or derthe algorithm of rahman et al deals with this problem by inserting the unordered items into a stratified treethis requires o time for n itemsif we used the same strategy our algorithm would no longer be sublinearan alternative is to precompute all ngram occur rences in order and store them in an inverted indexthis can be done in one pass over the data7 this approach requires a separate inverted index for each n up to the maximum n used by the modelthe memory cost is one lengtht array per indexin order to avoid the full nt cost in memory our implementation uses a mixed strategywe keep a precomputed inverted index only for unigramsfor bigrams and larger ngrams we generate the in dex on the 
fly using stratified treesthis results in a superlinear algorithm for intersectionhoweverwe can exploit the fact that we must compute col locations multiple times for each input ngram by caching the sorted set after we create it subsequent computations involving this ngram can then be done in lin ear or sublinear timetherefore the cost of building the inverted index on the fly is amortized over a large number of computations7we combine this step with the other precomputations that require a pass over the data thereby removing a redundant o term from the startup cost980 54 efficient enumerationa major difference between contiguous phrase based models and hierarchical phrasebased models is the number of rules that potentially apply to an input sentenceto make this concrete on our data with an average of 29 words per sentence there were on average 133 contiguous phrases of length 5 orless that appliedby comparison there were on av erage 7557 hierarchical phrases containing up to 5wordsthese patterns are obviously highly overlap ping and we employ an algorithm to exploit this factwe first describe a baseline algorithm used for con tiguous phrases we then introduce some improvements and describe a data structureused by the algorithm finally we dis cuss some special cases for discontiguous phrases 541 the zhangvogel algorithmzhang and vogel present a clever algorithm for contiguous phrase searches in a suffix ar rayit exploits the fact that for eachmlength source phrase that we want to look up we will also want to look up its length prefixthey observe that the region of the suffix array containing all suffixes prefixed by ua is a subset of the region containingthe suffixes prefixed by you therefore if we enumer ate the phrases of our sentence in such a way that we always search for you before searching for ua wecan restrict the binary search for ua to the range con taining the suffixes prefixed by you if the search for you fails we do not need to search for ua at allthey show that this approach leads to some time savings for phrase search although the gains are relatively modest since the search for contiguous phrases is not very expensive to begin withhowever the potential savings in the discontiguous case are much greater542 improvements and extensions we can improve on the zhangvogel algorithman mlength contiguous phrase aub depends not only on the existence of its prefix au but also on the existence of its suffix ubin the contiguous case we cannot use this information to restrict the starting range of the binary search but we can check for the existence of ub to decide whether we even need to search for aub at allthis can help us avoid searches that are guaranteed to be fruitlessnow consider the discontiguous caseas in the analogous contiguous case a phrase ab will onlyexist in the text if its maximal prefix aand maxi mal suffix b both exist in the corpus and overlap at specific positions8 searching for ab is potentially very expensive so we put all available information to workbefore searching we require that both aand b existadditionally we compute the location of ab using the locations of both maximal sub phrasesto see why the latter optimization is useful consider a phrase abxcdin our baseline algorithm we would search for ab and cd and then perform a computation to see whether these subphrases were collocated within an elastic windowhowever if weinstead use abxc and bxcd as the basis of the com putation we gain two advantagesfirst the number elements of each set is likely to be smaller 
then in the former casesecond the computation becomes simpler because we now only need to check to see whether the patterns exactly overlap with a starting offset of one rather than checking within a window of locationswe can improve efficiency even further if we con sider cases where the same substring occurs morethan once within the same sentence or even in mul tiple sentencesif the computation required to look up a phrase is expensive we would like to performthe lookup only oncethis requires some mecha nism for cachingdepending on the situation we might want to cache only certain subsets of phrases based on their frequency or difficulty to computewe would also like the flexibility to combine on thefly lookups with a partially precomputed phrase table as in the onlineoffline mixture of zhang and vogel we need a data structure that provides this flex ibility in addition to providing fast access to both the maximal prefix and maximal suffix of any phrase that we might consider543 prefix trees and suffix links our search optimizations are easily captured in a prefix tree data structure augmented with suffix linksformally a prefix tree is an unminimized determin istic finitestate automaton that recognizes all of thepatterns in some seteach node in the tree repre8except when x in which case a and b must be collo cated within a window defined by the phrase length constraints981 ab b c cx x d d a b b c cx x d d a b b c cx x d d a b b c cx x d d x e a c d case 1 case 2 figure 2 illustration of prefix tree construction showing a partial prefix tree including suffix linkssuppose we are interested in pattern abxcd represented by node its prefix is represented by node and node s suffix is represented by node therefore node s suffix is represented by the node pointed to by the dedge from node which is node there are two casesin case 1 node is inactive so we can mark node inactive and stopin case 2 node is active so we compute the collocation of abxc and bxcd with information stored at nodes and using either a precomputed intersection double binary search or merge depending on the size of the setsif the result is empty we mark the node inactiveotherwise we store the results at node and add its successor patterns to the frontier for the next iterationthis includes all patterns containing exactly one more terminal symbol than the current patternsents the prefix of a unique pattern from the set that is specified by the concatenation of the edge labels along the path from the root to that nodea suffix link is a pointer from a node representing path ato the node representing path we will use this data structure to record the set of patterns that we have searched for and to cache information for those that were found successfullyour algorithm generates the tree breadthsearch along a frontierin the mth iteration we only searchfor patterns containingm terminal symbolsregardless of whether we find a particular pattern we cre ate a node for it in the treeif the pattern was found in the corpus its node is marked activeotherwise it is marked inactivefor found patterns we storeeither the endpoints of the suffix array range con taining the phrase or the list oflocations at which the phrase is found we can also store the extracted rules9 whenever a pattern is successfully found we add all patterns with m 1 terminals that are prefixed by it 9conveniently the implementation of chiang uses aprefix tree grammar encoding as described in klein and manning our implementation decorates this tree with addi tional information 
required by our algorithmsto the frontier for processing in the next iterationto search for a pattern we use location infor mation from its parent node which represents its maximal prefixassuming that the node representsphrase b we find the node representing its max imal suffix by following the bedge from the node pointed to by its parent nodes suffix linkif the node pointed to by this suffix link is inactive we can mark the node inactive without running a searchwhen a node is marked inactive we discontinue search for phrases that are prefixed by the path it representsthe algorithm is illustrated in figure 2544 special cases for phrases with gapsa few subtleties arise in the extraction of hierarchical patternsgaps are allowed to occur at the be ginning or end of a phrasefor instance we mayhave a source phrase xu or ux or even xux al though each of these phrases requires its own path in the prefix tree they are lexically identical to phrase you an analogous situation occurs with the patterns xuxv uxvx and uxvthere are two cases that we are concerned withthe first case consists of all patterns prefixed with x the paths to nodes representing these patterns 982 will all contain the xedge originating at the rootnodeall of these paths form the shadow subtreepath construction in this subtree proceeds dif ferentlybecause they are lexically identical to theirsuffixes they are automatically extended if their suffix paths are active and they inherit location infor mation of their suffixesthe second case consists of all patterns suffixedwith x whenever we successfully find a new pat tern we automatically extend it with an x edgeprovided that x is allowed by the model con straintsthe node pointed to by this edge inheritsits location information from its parent node note that both special cases occur for patterns in the form xux 55 memory requirementsas shown in callisonburch et al we must keep an array for the source text f its suffix arraythe target text e and alignment a in memoryas suming that a and e are roughly the size of f thecost is 4t if we assume that all data use vocabu laries that can be represented using 32bit integers then our 27m word corpus can easily be represented in around 500mb of memoryadding the inverted index for unigrams increases this by 20the main additional cost in memory comes from the storage of the precomputed collocationsthis is dependentboth on the corpus size and the number of colloca tions that we choose to precomputeusing detailed timing data from our experiments we were able to simulate the memoryspeed tradeoff if we include a trigram model trained on our bitext and the chinese gigaword corpus the overall storage costs for our system are approximately 2gball of our experiments were performed on chineseenglish in the news domainwe used a large train ing set consisting of over 1 million sentences from various newswire corporathis corpus is roughly the same as the one used for largescale experiments by chiang et al to generate alignmentswe used giza we symmetrized bidirectional alignments using the grow diagfinal heuristic 0 0 0 1000 0 number of frequent subpatterns insert text here 41 secsent 41 seconds 405 secsent 0 mb725mb figure 3 effect of precomputation on memory useand processing timehere we show only the mem ory requirements of the precomputed collocationswe used the first 50 sentences of the nist 2003test set to compute timing resultsall of our algo rithms were implemented in python 2410 timingresults are reported for machines with 8gb of mem ory and 4 3ghz 
xeon processors running red hat linux 269in order to understand the contributions of various improvements we also ran the system with with various ablationsin the default setting the prefix tree is constructed for each sentence to guide phrase lookup and then discardedto showthe effect of caching we also ran the algorithm without discarding the prefix tree between sentences re sulting in full intersentence cachingthe results are shown in table 111it is clear from the results that each of the op timizations is needed to sufficiently reduce lookuptime to practical levelsalthough this is still rela tively slow it is much closer to the decoding time of 10 seconds per sentence than the baseline10python is an interpreted language and our implementations do not use any optimization featuresit is therefore reasonable to think that a more efficient reimplementation would result in acrosstheboard speedups11the results shown here do not include the startup time re quired to load the data structures into memoryin our python implementation this takes several minutes which in principle should be amortized over the cost for each sentencehoweverjust as zens and ney do for phrase tables we could com pile our data structures into binary memorymapped files whichcan be read into memory in a matter of secondswe are cur rently investigating this option in a c reimplementation983 algorithms secssent collocations baseline 224125 325548 prefix tree 157877 69994 prefix tree precomputation 69635 69994 prefix tree double binary 40502 69994 prefix tree precomputation double binary 4077 69994 prefix tree with full caching precomputation double binary 3070 67712 table 1 timing results and number of collocations computed for various combinations of algorithmsthe runs using precomputation use the 1000 most frequent patternsour work solves a seemingly intractable problemand opens up a number of intriguing potential ap plicationsboth callisonburch et al and zhang and vogel use suffix arrays to relax the length constraints on phrasebased modelsourwork enables this in hierarchical phrasebased modelshowever we are interested in additional appli cationsrecent work in discriminative learning for manynatural language tasks such as partofspeech tagging and information extraction has shown that feature engineering plays a critical role in these approacheshowever in machine translation most fea tures can still be traced back to the ibm models of 15 years ago recently lopez and resnik showed that most of the features used in standard phrasebased models do not help very muchour algorithms enable us to look up phrasepairs in context which will allow us to compute interesting contextual features that can be used in discriminative learning algorithms to improve transla tion accuracyessentially we can use the training data itself as an indirect representation of whateverfeatures we might want to computethis is not pos sible with tablebased architecturesmost of the data structures and algorithms discussed in this paper are widely used in bioinformatics including suffix arrays prefix trees and suf fix links as discussed in 41 our problem is a variant of the approximate patternmatching problema major application of approx imate pattern matching in bioinformatics is queryprocessing in protein databases for purposes of se quencing phylogeny and motif identificationcurrent mt models including hierarchical mod els translate by breaking the input sentence intosmall pieces and translating them largely independentlyusing approximate pattern 
matching algorithms we imagine that machine translation could be treated very much like search in a protein database in this scenario the goal is to select training sentences that match the input sentence as closely as possible under some evaluation function that accounts for both matching and mismatched sequences as well as possibly other data features once we have found the closest sentences we can translate the matched portions in their entirety replacing mismatches with appropriate word phrase or hierarchical phrase translations as needed this model would bring statistical machine translation closer to convergence with socalled examplebased translation following current trends we intend to explore these ideas in future work acknowledgements i would like to thank philip resnik for encouragement thoughtful discussions and wise counsel david chiang for providing the source code for his translation system and nitin madnani smaranda muresan and the anonymous reviewers for very helpful comments on earlier drafts of this paper any errors are my own this research was supported in part by onr muri contract fcpo810548265 and the gale program of the defense advanced research projects agency contract no hr0011 062001 any opinions findings conclusions or recommendations expressed in this paper are those of the author and do not necessarily reflect the view of darpa
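the following sketch illustrates the caching idea behind the prefix tree with suffix links described above, simplified to contiguous patterns only: it omits the gap symbol x, the shadow subtree, the precomputed collocations and the double binary search, and the names (TrieNode, PatternMatcher, lookup) are invented for illustration rather than taken from the released implementation. each looked-up pattern gets a node; its occurrences are computed by intersecting the cached occurrences of its maximal prefix (its parent) with those of its maximal suffix (reached through the parent's suffix link), and an inactive prefix or suffix lets us mark the node inactive without running any search.

from collections import defaultdict

class TrieNode:
    """one node per looked-up pattern; the edge labels on the path from the
    root spell the pattern."""
    __slots__ = ("children", "suffix_link", "active", "locations")
    def __init__(self):
        self.children = {}       # next token -> TrieNode
        self.suffix_link = None  # node for pattern[1:], the maximal suffix
        self.active = False      # was the pattern found in the corpus?
        self.locations = []      # start positions of its occurrences

class PatternMatcher:
    """caches corpus lookups of contiguous token patterns in a prefix tree
    augmented with suffix links, so repeated substrings are searched only once."""
    def __init__(self, corpus):
        self.corpus = corpus
        self.root = TrieNode()
        self.root.active = True
        self.root.suffix_link = self.root
        self.unigram_index = defaultdict(list)   # inverted index for length-1 patterns
        for i, tok in enumerate(corpus):
            self.unigram_index[tok].append(i)

    def lookup(self, pattern):
        """return the start positions of `pattern` (a tuple of tokens), caching
        every intermediate result in the tree."""
        node = self.root
        for depth, tok in enumerate(pattern):
            parent = node
            node = parent.children.get(tok)
            if node is not None:
                continue                          # already searched earlier
            node = TrieNode()
            parent.children[tok] = node
            if depth == 0:                        # unigram: read the inverted index
                node.suffix_link = self.root
                node.locations = list(self.unigram_index.get(tok, []))
            else:
                # make sure the maximal suffix pattern[1:depth+1] has been cached
                if tok not in parent.suffix_link.children:
                    self.lookup(pattern[1:depth + 1])
                suffix = parent.suffix_link.children[tok]
                node.suffix_link = suffix
                if parent.active and suffix.active:
                    # intersection for contiguous patterns: the prefix occurs at
                    # position i and the suffix occurs at position i + 1
                    starts = set(suffix.locations)
                    node.locations = [i for i in parent.locations if i + 1 in starts]
                # otherwise we stop without searching, mirroring case 1 above
            node.active = bool(node.locations)
        return node.locations if node.active else []

corpus = "a b x c d a b b c".split()
matcher = PatternMatcher(corpus)
matcher.lookup(("a", "b"))   # -> [0, 5]
matcher.lookup(("b", "c"))   # -> [7]

the gap-pattern case described in the text differs mainly in how the intersection is computed (collocation within the phrase length constraints rather than adjacency) and in the special handling of patterns that begin or end with x.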
D07-1104
hierarchical phrasebased translation with suffix arrays a major engineering challenge in statistical machine translation systems is the efficient representation of extremely large translation rulesets in phrasebased models this problem can be addressed by storing the training data in memory and using a suffix array as an efficient index to quickly lookup and extract rules on the fly hierarchical phrasebased translation introduces the added wrinkle of source phrases with gaps lookup algorithms used for contiguous phrases no longer apply and the best approximate pattern matching algorithms are much too slow taking several minutes per sentence we describe new lookup algorithms for hierarchical phrasebased translation that reduce the empirical computation time by nearly two orders of magnitude making onthefly lookup feasible for source phrases with gaps the basis of our method is to look for the occurrences of continuous substrings using a suffix array and then intersect them to find the occurrences of discontinuous substrings
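as a companion to the summary above, here is a minimal sketch of the contiguous-phrase lookup that the gap-pattern algorithms build on: a suffix array over the token corpus plus a binary search for the block of suffixes that begin with a query phrase. the naive sorted() construction and the function names are illustrative assumptions, not the data structures of the actual system.

def build_suffix_array(corpus):
    """indices of all suffixes of the token list, sorted lexicographically;
    the naive sort is fine for a sketch but too slow for a 27m word bitext."""
    return sorted(range(len(corpus)), key=lambda i: corpus[i:])

def find_phrase(corpus, sa, phrase):
    """start positions of every occurrence of `phrase`, found by binary search
    for the contiguous block of suffixes whose first len(phrase) tokens match."""
    n, m = len(corpus), len(phrase)
    prefix = lambda i: corpus[i:i + m]
    lo, hi = 0, n                       # first suffix whose prefix is >= phrase
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix(sa[mid]) < phrase:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, n                   # first suffix whose prefix is > phrase
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix(sa[mid]) <= phrase:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(start, lo))

corpus = "it was pretty good it was ok".split()
sa = build_suffix_array(corpus)
find_phrase(corpus, sa, "it was".split())   # -> [0, 4]

looking up a phrase this way costs on the order of m log n token comparisons; the occurrences of a discontinuous pattern such as u x v are then obtained by intersecting the occurrence sets of u and v under the phrase length constraints, which is what the prefix tree with suffix links caches.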
a topic model for word sense disambiguation we develop latent dirichlet alocation with wordnet an unsupervised probabilistic topic model that includes word sense as a hidden variable we develop a probabilistic posterior inference algorithm for simultaneously disambiguating a corpusand learning the domains in which to consider each word using the wordnet hierarchy we embed the construction of ab ney and light in the topic model and show that automatically learned domainsimprove wsd accuracy compared to alter native contexts word sense disambiguation is the task of determining the meaning of an ambiguous word in its contextit is an important problem in natural language processing because effective wsd can improve systems for tasks such as information retrieval machine translation and summarizationin this paper we develop latent dirichlet alocation with wordnet a generative prob abilistic topic model for wsd where the sense of the word is a hidden random variable that is inferred from datathere are two central advantages to this approachfirst with ldawn we automatically learn the con text in which a word is disambiguatedrather than disambiguating at the sentencelevel or the documentlevel our model uses the other words that share the same hidden topic across many documentssecond ldawn is a fullyfledged generative modelgenerative models are modular and can beeasily combined and composed to form more complicated modelsthus developing a gen erative model for wsd gives other generative nlp algorithms a natural way to take advantage of the hidden senses of wordsin general topic models are statistical models of text that posit a hidden space of topics in which the corpus is embedded given a corpus posterior inference in topic models amounts to automatically discovering the underlying themesthat permeate the collectiontopic models have re cently been applied to information retrieval text classification and dialogue segmentation while topic models capture the polysemous use of words they do not carry the explicit notion of sense that is necessary for wsdldawn extends the topic modeling framework to include a hidden meaning in the word generation processin this case posterior inference discovers both the topics of the corpus and the meanings assigned to each of its wordsafter introducing a disambiguation scheme basedon probabilistic walks over the wordnet hierar chy we embed the wordnetwalk in a topic model where each topic is associated withwalks that prefer different neighborhoods of wordnet then we describe a gibbs sam pling algorithm for approximate posterior inference that learns the senses and topics that best explain a corpus finally we evaluate our system on realworld wsd data discuss the properties of the topics and disambiguation accuracy results and draw connections to other wsd algorithms from the research literature1024 1740 entity 1930 3122object 20846 15024 animal 1304946 1305277 artifact male 2354808 2354559 foalcolt 3042424 colt 4040311 revolver synset id word sixgun sixshooter 000 025 058 000 004 002 001016 005 004 069000 000 038100042 000 000057 100 038 007 figure 1 the possible paths to reach the word coltin wordnetdashed lines represent omitted linksall words in the synset containing revolverare shown but only one word from other synsets is shownedge labels are probabilities of transitioningfrom synset i to synset j note how this favors frequent terms such as revolverover ones like six shooterthe wordnetwalk is a probabilistic process ofword generation that is based on the hyponomy 
relationship in wordnet word net a lexical resource designed by psychologistsand lexicographers to mimic the semantic organiza tion in the human mind links synsets with myriad connectionsthe spe cific relation were interested in hyponomy points from general concepts to more specific ones and is sometimes called the isarelationshipas first described by abney and light we imagine an agent who starts at synset entity which points to every noun in wordnet 21 by some sequence of hyponomy relations and then chooses the next node in its random walk from the hyponyms of its current positionthe agent repeatsthis process until it reaches a leaf node which corre sponds to a single word for an example of all the paths that might generate the word coltsee figure 1the wordnet walk is parameterized by a set of distributions over children for each synset s in wordnet s symbol meaning k number of topics ks multinomial probability vector over the successors of synset s in topic k s scalar that when multiplied by s gives the prior for ks s normalized vector whose ith entry when multiplied by s gives the prior probability for going from s to i d multinomial probability vector over the topics that generate document d prior for z assignment of a word to a topic a path assignment through wordnet ending at a wordij one link in a path going from syn set i to synset jtable 1 a summary of the notation used in the paperbold vectors correspond to collections of vari ables 21 a topic model for wsdthe wordnetwalk has two important proper tiesfirst it describes a random process for word generationthus it is a distribution over words and thus can be integrated into any generative model of text such as topic modelssecond the synsetthat produces each word is a hidden random vari ablegiven a word assumed to be generated by a wordnetwalk we can use posterior inference to predict which synset produced the wordthese properties allow us to develop ldawn which is a fusion of these wordnetwalks and latent dirichlet alocation a probabilistic model of documents that is an i am provement to plsi lda assumes that there are k topicsmultinomial distributionsover words which describe a collectioneach docu ment exhibits multiple topics and each word in each document is associated with one of themalthough the term topicevokes a collection of ideas that share a common theme and although the topics derived by lda seem to possess semantic coherence there is no reason to believe this would 1025 be true of the most likely multinomial distributions that could have created the corpus given the assumed generative modelthat semantically similar words are likely to occur together is a byproduct of how language is actually usedin ldawn we replace the multinomial topic dis tributions with a wordnetwalk as described aboveldawn assumes a corpus is generated bythe following process 1for each topic k 1 k for each synset s randomly choose transition prob abilities ks dir2for each document d 1 d select a topic distribution d dir for each word n 1 nd i select a topic z mult iicreate a path dn starting with 0 as the root nodeiiifrom children of i a choose the next node in the walk i1 multb if i1 is a leaf node generate the associ ated wordotherwise repeatevery element of this process including thesynsets is hidden except for the words of the doc umentsthus given a collection of documents our goal is to perform posterior inference which is the task of determining the conditional distribution of the hidden variables given the observationsin thecase of ldawn the 
hidden variables are the parameters of the k wordnetwalks the topic assign ments of each word in the collection and the synset path of each wordin a sense posterior inference reverses the process described abovespecifically given a document collection w1d the full posterior is p d d1 p nd n1 pp where the constant of proportionality is the marginal likelihood of the observed datanote that by encoding the synset paths as a hid den variable we have posed the wsd problem asa question of posterior probabilistic inferencefur ther note that we have developed an unsupervised modelno labeled data is needed to disambiguate a corpuslearning the posterior distribution amounts to simultaneously decomposing a corpus into topics and its words into their synsetsthe intuition behind ldawn is that the words in a topic will have similar meanings and thus share paths within wordnetfor example wordnet has two senses for the word coltone referring to a young male horse and the other to a type of handgun although we have no a priori way of know ing which of the two paths to favor for a document we assume that similar concepts will also appear in the documentdocuments with unambiguous nouns such as sixshooterand smoothborewould make paths that pass through the synset firearm piecesmallarm more likely than those go ing through animal animate being beast brute creature faunain practice we hope to see a wordnetwalk that looks like figure 2 which points to the right sense of cancer for a medical contextldawn is a bayesian framework as each vari able has a prior distributionin particular the dirichlet prior for s specified by a scaling factor s and a normalized vector s fulfills two functionsfirst as the overall strength of s increases we place a greater emphasis on the priorthis is equivalent to the need for balancing as noted by abney and light the other function that the dirichlet prior serves is to enable us to encode any information we have about how we suspect the transitions to childrennodes will be distributedfor instance we might ex pect that the words associated with a synset will beproduced in a way roughly similar to the token prob ability in a corpusfor example even though mealmight refer to both ground cereals or food eaten ata single sitting and repastexclusively to the lat ter the synset meal repast food eatenat a single sitting still prefers to transi tion to mealover repastgiven the overall corpus counts by setting si the prior probability of transitioning from synset s to node i proportional to the to tal number of observed tokens in the children of i 1026we introduce a probabilistic variation on information content as in resniks defini tion this value for nonword nodes is equal to thesum of all the frequencies of hyponym wordsun like resnik we do not divide frequency among all senses of a word each sense of a word contributes its full frequency to as described above the problem of wsd corresponds to posterior inference determining the probability distribution of the hidden variables given ob served words and then selecting the synsets of themost likely paths as the correct sensedirectly com puting this posterior distribution however is not tractable because of the difficulty of calculating the normalizing constant in equation 1to approximate the posterior we use gibbs sampling which has proven to be a successful approx imate inference technique for lda in gibbs sampling like all markov chain monte carlo methods we repeatedly sample from amarkov chain whose stationary distribution is the posterior 
of interest even though we do not know the full posterior the samples can be used to form an empirical estimate of the target distributionin ldawn the samples contain a configuration of the latent semantic states of the system revealing the hidden topics and paths that likely led to the observed datagibbs sampling reproduces the posterior distri bution by repeatedly sampling each hidden variable conditioned on the current state of the other hidden variables and observationsmore precisely the state is given by a set of assignments where each wordis assigned to a path through one of k wordnet walk topics uth word wu has a topic assignment zu and a path assignment you we use zyou and you to represent the topic and path assignments of all words except for you respectivelysampling a new topic for the word wu requires us to consider all of the paths that wu can take in each topic and the topics of the other words in the document you is inthe probability of wu taking on topic i is proportional to p p1wu which is the probability of selecting z from d times the probability of a path generating wu from a path in the ith wordnetwalkthe first term the topic probability of the uth word is based on the assignments to the k topics for words other than you in this document p nyoui i j n youj k j1 j where nyouj is the number of words other than you in topic j for the document d that you appears inthe second term in equation 2 is a sum over the probabilities of every path that could have generatedthe word wuin practice this sum can be com puted using a dynamic program for all nodes that have unique parent although the probability ofa path is specific to the topic as the transition prob abilities for a synset are different across topics we will omit the topic index in the equation p l1 i1 you ii1 31 transition probabilitiescomputing the probability of a path requires us to take a product over our estimate of the probability from transitioning from i to j for all nodes i and j in the path the other path assignments within this topic however play an important role in shaping the transition probabilitiesfrom the perspective of a single node i only paths that pass through that node affect the probability of you also passing through that nodeits convenient tohave an explicit count of all of the paths that tran sition from i to j in this topics wordnetwalk so we use tuij to represent all of the paths that go from i to j in a topic other than the path currently assigned to you given the assignment of all other words to paths calculating the probability of transitioning from i to j with word you requires us to consider the prior and the observations tij in our estimate of the expected value of the probability of transitioning from i to j uij tuij siij si k t you ik 1027 as mentioned in section 21 we paramaterize the prior for synset i as a vector i which sums to one and a scale parameter s the next step once we have selected a topic is to select a path within that topicthis requires the computation of the path probabilities as specified in equation 4 for all of the paths wu can take in thesampled topic and then sampling from the path prob abilitiesthe gibbs sampler is essentially a randomized hill climbing algorithm on the posterior likelihood as a function of the configuration of hidden variablesthe numerator of equation 1 is proportional to that posterior and thus allows us to track the samplers progresswe assess convergence to a local mode of the posterior by monitoring this quantityin this section we describe 
the properties of the topics induced by running the previously described gibbs sampling method on corpora and how these topics improve wsd accuracyof the two data sets used during the course of our evaluation the primary dataset was semcor which is a subset of the brown corpus with many nouns manually labeled with the correct wordnet sensethe words in this dataset are lemmatized and multiword expressions that are present in wordnet are identifiedonly the wordsin semcor were used in the gibbs sampling pro cedure the synset assignments were only used for assessing the accuracy of the final predictionswe also used the british national corpus whichis not lemmatized and which does not have multi word expressionsthe text was first run through a lemmatizer and then sequences of words which matched a multiword expression in wordnet were joined together into a single wordwe took nouns that appeared in semcor twice or in thebnc at least 25 times and used the bnc to compute the informationcontent analog for individ ual nouns 41 topicslike the topics created by structures such as lda the topics in table 2 coalesce around reasonable themesthe word list was compiled by summingover all of the possible leaves that could have gen erated each of the words and sorting the words by decreasing probabilityin the vast majority of cases a single synsets high probability is responsible for the wordspositions on the listreassuringly many of the top senses for the present words correspond to the most frequent sense in semcorfor example in topic 4 the senses for spaceand functioncorrespond to the top sensesin semcor and while the top sense for setcorresponds to an abstract collection of numbers or symbolsrather than a group of the same kind that be long together and are so usedit makes sense given the mathbased words in the topicpointhowever corresponds to the sense used in the phrase i got to the point of boiling the waterwhich is neither the top semcor sense nor a sense which makes sense given the other words in the topicwhile the topics presented in table 2 resemble the topics one would obtain through models likelda they are not identicalbe because of the lengthy process of gibbs sampling we initially thought that using lda assignments as aninitial state would converge faster than a random initial assignmentwhile this was the case it con verged to a state that less probable than the randomlyinitialized state and no better at sense disambigua tion the topics presented in 2 represent words both that cooccur together in a corpus and cooccur on paths through wordnetbecause topics created through lda only have the first property they usually do worse in terms of both total probability and disambiguation accuracy another interesting property of topics in ldawn is that with higher levels of smoothing words that do not appear in a corpus but are in similar parts of wordnet might have relatively high probability in a topicfor example maturityin topic two in table 2 is sandwiched between footand centerboth of which occur about five timesmore than maturitythis might improve lda based information retrieval schemes 1028 1740 1930 023 076 3122 042 001 2236 010 000 000 000 000 7626 someone 000 9609711 000 9120316 1743824 000 cancer 7998922 genus004 004 8564599 star_sign 006 8565580 006 cancer 05 9100327 cancer 1 constellation 001 001 cancer 05 crab 05 13875408 058 019 14049094 14046733 tumor 097 14050958 000 malignancy 006 094 14051451 090 cancer 096 synset id transition prob word 1957888 10 figure 2 the possible paths to reach 
the word cancerin wordnet along with transition probabilities from the medicallythemed topic 2 in table 2 with the most probable path highlightedthe dashed lines represent multiple links that have been consolidated and synsets are represented by their offsets within wordnet 21some words for immediate hypernyms have also been included to give contextin all other topics the person animal or constellation senses were preferredtopic 1 topic 2 topic 3 topic 4 topic 5 topic 6 topic 7 president growth material point water plant music party age object number house change film city treatment color value road month work election feed form function area worker life administration day subject set city report time official period part square land mercer world office head self space home requirement group bill portion picture polynomial farm bank audience yesterday length artist operator spring farmer play court level art component bridge production thing meet foot patient corner pool medium style police maturity communication direction site petitioner year service center movement curve interest relationship show table 2 the most probable words from six randomly chosen wordnetwalks from a thirtytwo topic model trained on the words in semcorthese are summed over all of the possible synsets that generate the wordshowever the vast majority of the contributions come from a single synset1029 0275 028 0285 029 0295 03 0305 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 accuracy iteratio n unsee ded seede d with lda 96000 94000 92000 90000 88000 86000 84000 82000 80000 0 1000 2000 3000 4000 5000 6000 7000 8000 9000 10000 model probability iteratio n unsee ded seede d with lda figure 3 topics seeded with lda initially have a higher disambiguation accuracy but are quickly matched by unseeded topicsthe probability for the seeded topics starts lower and remains lower42 topics and the weight of the priorbecause the dirichlet smoothing factor in partdetermines the topics it also affects the disambiguationfigure 4 shows the modal disambigua tion achieved for each of the settings of s 01 1 5 10 15 20each line is one setting of k and each point on the line is a setting of s each data point is a run for the gibbs sampler for 10000 iterationsthe disambiguation taken at the modeimproved with moderate settings of s which sug gests that the data are still sparse for many of thewalks although the improvement vanishes if s dom inates with much larger valuesthis makes sense as each walk has over 100000 parameters there are fewer than 100000 words in semcor and each 024 026 028 03 032 034 036 038 s20 s15 s10 s5 s1 s01 accuracy smoot hing f actor 64 top ics 32 top ics 16 top ics 8 topic s 4 topic s 2 topic s 1 topic rando m figure 4 each line represents experiments with a setnumber of topics and variable amounts of smooth ing on the semcor corpusthe random baselineis at the bottom of the graph and adding topics i am proves accuracyas smoothing increases the prior becomes strongerac curacy is the percentage of correctly disambiguated polysemous words in semcor at the modeword only serves as evidence to at most 19 parame ters generally a greater number of topics increased the accuracy of the mode but after around sixteen topics gains became much smallerthe effect of is also related to the number of topics as a value of s for a very large number of topics might overwhelm the observed data while the same value of s might be the perfect balance for a smaller number of topicsfor comparison the method of using a wordnet walk 
applied to smaller contexts such as sentences or documents achieves an accuracy of between 26 and 30 depending on the level of smoothingthis method works well in cases where the delineation can be readily determined from the over all topic of the documentwords such as kidmayshearcoachincidentfencebeeand coltwere all perfectly disambiguated by this methodfigure 2 shows the wordnetwalk corresponding to a medical topic that correctly disambiguates cancerproblems arose however with highly frequent 1030 words such as man and timethat have many senses and can occur in many types of documentsfor example man can be associated with manypossible meanings island game equipment ser vant husband a specific mammal etc although we know that the adult malesense should be preferred the alternative meanings will also be likely if they can be assigned to a topicthat shares common paths in wordnet the doc uments contain however many other places jobs and animals which are reasonable explanations of how man was generatedunfortunately man is such a ubiquitous term that top ics which are derived from the frequency of wordswithin an entire document are ultimately uninfor mative about its usagewhile mistakes on these highly frequent terms significantly hurt our accuracy errors associated with less frequent terms reveal that wordnetsstructure is not easily transformed into a probabilis tic graphfor instance there are two senses ofthe word quarterbacka player in american footballone is position itself and the other is a per son playing that positionwhile one would expect cooccurrence in sentences such as quarterback is a easy position so our quarterback is happythe paths to both terms share only the root node thus making it highly unlikely a topic would cover both sensesbecause of wordnets breadth rare senses also impact disambiguationfor example the metonymical use of doorto represent a wholebuilding as in the phrase girl next dooris under the same parent as sixty other synsets contain ing bridgebalconybodyarchfloorand cornersurrounded by such common terms thatare also likely to cooccur with the more conventional meanings of door this very rare sense be comes the preferred disambiguation of doorabney and lights initial probabilistic wsd ap proach was further developed into a bayesian network model by ciaramita and johnson who likewise used the appearance of monosemous terms close to ambiguous ones to explain awaythe usage of ambiguous terms in selectional restrictionswe have adapted these approaches and put them into the context of a topic modelrecently other approaches have created ad hoc connections between synsets in wordnet and then considered walks through the newly created graphgiven the difficulties of using existing connections in wordnet mihalcea proposed creating links between adjacent synsets that might comprise a sentence initially setting weights to be equal to the lesk overlap between the pairs and then using the pagerank algorithm to determine the stationary distribution over synsets61 topics and domainsyarowsky was one of the first to contend that there is one sense for discoursethis has lead to the approaches like that of magnini that attempt to find the category of a text select the most appropriate synset and then assign the selected sense using domain annotation attached to wordnetldawn is different in that the categories are notan a priori concept that must be painstakingly annotated within wordnet and require no augmenta tion of wordnetthis technique could indeed be used with any hierarchyour 
concepts are the ones that best partition the space of documents and do the best job of describing the distinctions of diction that separate documents from different domains62 similarity measuresour approach gives a probabilistic method of using information content as a start ing point that can be adjusted to cluster words ina given topic together this is similar to the jiang conrath similarity measure which has beenused in many applications in addition to disambigua tionpatwardhan offers a broad evaluation of similarity measures for wsdour technique for combining the cues of topicsand distance in wordnet is adjusted in a way sim ilar in spirit to buitelaar and sacaleanu but we consider the appearance of a single term to be evidence for not just that sense and its immediate neighbors in the hyponomy tree but for all of the senses children and ancestorslike mccarthy our unsupervised system acquires a single predominant sense for a domain based on a synthesis of information derived from a 1031textual corpus topics and wordnetderived sim ilarity a probabilistic information content measureby adding syntactic information from a thesaurusderived from syntactic features mccarthy achieved 48 accuracy in a similar evaluation onsemcor ldawn is thus substantially less effec tive in disambiguation compared to stateoftheartmethodsthis suggests however that other meth ods might be improved by adding topics and that ourmethod might be improved by using more informa tion than word countsthe ldawn model presented here makes two contributions to research in automatic word sense disambiguationfirst we demonstrate a method for au tomatically partitioning a document into topics that includes explicit semantic informationsecond we show that at least for one simple model of wsdembedding a document in probabilistic latent struc ture ie a topiccan improve wsdthere are two avenues of research with ldawn that we will explorefirst the statistical nature ofthis approach allows ldawn to be used as a com ponent in larger models for other language tasksother probabilistic models of language could insert the ability to query synsets or paths of wordnetsimilarly any topic based information retrieval scheme could employ topics that include se mantically relevant termsincorporating this model in a larger syntactically aware model which could benefit from the local context as well as the document level context is an important component of future researchsecond the results presented here show a marked improvement in accuracy as more topics are added to the baseline model although the final result is not comparable to stateoftheart techniquesas most errors were attributable to the hyponomy structure of wordnet incorporating the novel use of topicmodeling presented here with a more mature unsu pervised wsd algorithm to replace the underlyingwordnetwalk could lead to advances in state oftheart unsupervised wsd accuracy
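a compact sketch of the gibbs update described above: the transition estimate combines path counts with the scaled dirichlet prior, a path probability is the product of its transitions, and a word's new (topic, path) pair is sampled proportionally to the document's topic probability times the path probability under that topic's wordnetwalk. the data structures and names here (topic_counts, doc_topic_counts, sample_topic_and_path) are illustrative assumptions, the counts passed in are understood to exclude the word being resampled, and prior maps each synset to a normalized distribution over its successors.

import random

def transition_prob(counts, prior, scale, i, j):
    """estimated probability of stepping from synset i to child j in one
    topic's walk: (t_ij + scale * prior_ij) / (sum_k t_ik + scale)."""
    t_i = counts.get(i, {})
    return (t_i.get(j, 0) + scale * prior[i][j]) / (sum(t_i.values()) + scale)

def path_prob(counts, prior, scale, path):
    """probability of a root-to-leaf path as the product of its transitions."""
    p = 1.0
    for i, j in zip(path, path[1:]):
        p *= transition_prob(counts, prior, scale, i, j)
    return p

def sample_topic_and_path(word_paths, topic_counts, doc_topic_counts,
                          prior, scale, alpha, num_topics):
    """one gibbs update for a single word token: score every (topic, path) pair
    by p(topic | other words in the document) * p(path | that topic's walk)
    and sample proportionally."""
    total = sum(doc_topic_counts.values())
    candidates, weights = [], []
    for k in range(num_topics):
        p_topic = (doc_topic_counts.get(k, 0) + alpha) / (total + num_topics * alpha)
        for path in word_paths:            # all wordnet paths ending in this word
            candidates.append((k, path))
            weights.append(p_topic * path_prob(topic_counts[k], prior, scale, path))
    return random.choices(candidates, weights=weights, k=1)[0]

the explicit enumeration over paths trades efficiency for clarity; the text notes that the sum over paths can instead be computed with a dynamic program when every node has a unique parent.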
D07-1109
a topic model for word sense disambiguation we develop latent dirichlet allocation with wordnet an unsupervised probabilistic topic model that includes word sense as a hidden variable we develop a probabilistic posterior inference algorithm for simultaneously disambiguating a corpus and learning the domains in which to consider each word using the wordnet hierarchy we embed the construction of abney and light in the topic model and show that automatically learned domains improve wsd accuracy compared to alternative contexts we use the documentlevel topics extracted with latent dirichlet allocation as indicators of meanings for word sense disambiguation we describe a related topic model ldawn for word sense disambiguation that adds an intermediate layer of latent variables z on which the markov model parameters are conditioned we integrate semantics into the topic model framework
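the generative story in the summary above can be written down in a few lines: starting at the root synset, repeatedly draw a successor from that synset's multinomial until a leaf is reached, and emit the word at the leaf. the toy hierarchy mirrors the colt example from the paper's figure 1, and the uniform transition probabilities are placeholders rather than learned values.

import random

def wordnet_walk(children, theta, root="entity"):
    """generate one word by a random walk down the hyponymy tree: at each
    internal synset pick a successor according to that synset's multinomial."""
    node, path = root, [root]
    while node in children:                       # still an internal synset
        succs = list(theta[node])
        probs = [theta[node][s] for s in succs]
        node = random.choices(succs, weights=probs, k=1)[0]
        path.append(node)
    return node, path                             # emitted word and its path

children = {
    "entity": ["animal", "artifact"],
    "animal": ["foal/colt"],
    "artifact": ["revolver"],
    "foal/colt": ["colt"],
    "revolver": ["colt", "six-gun", "six-shooter"],
}
theta = {s: {c: 1.0 / len(cs) for c in cs} for s, cs in children.items()}
word, path = wordnet_walk(children, theta)
# e.g. word == "colt", path == ["entity", "artifact", "revolver", "colt"]

posterior inference runs this process in reverse: given the observed word, the hidden path (and hence the sense) is inferred, with one such walk per topic.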
dependency parsing and domain adaptation with lr models and parser ensembles we present a datadriven variant of the lr algorithm for dependency parsing and extend it with a bestfirst search for probabil istic generalized lr dependency parsing parser actions are determined by a classifier based on features that represent the current state of the parser we apply this pars ing framework to both tracks of the conll 2007 shared task in each case taking ad vantage of multiple models trained with different learners in the multilingual track we train three lr models for each of the ten languages and combine the analyses obtained with each individual model with a maximum spanning tree voting scheme in the domain adaptation track we use two models to parse unlabeled data in the target domain to supplement the labeled outof domain training set in a scheme similar to one iteration of cotraining there are now several approaches for multilingual dependency parsing as demonstrated in the conll 2006 shared task the dependency parsing approach pre sented here extends the existing body of work mainly in four ways 1although stepwise 1 dependency parsing hascommonly been performed using parsing algo 1 stepwise parsing considers each step in a parsing algo rithm separately while allpairs parsing considers entire rithms designed specifically for this task such as those described by nivre and yamada and matsumoto we show that this can also be done using the well known lr parsing algorithm providing a connec tion between current research on shiftreduce dependency parsing and previous parsing work using lr and glr models wise framework to probabilistic parsing with the use of a bestfirst search strategy similar to the one employed in constituent parsing by rat naparkhi and later by sagae and lavie 3we provide additional evidence that the parserensemble approach proposed by sagae and lavie can be used to improve parsing accuracy even when only a single parsing algorithm is used as long as variation can be ob tained for example by using different learning techniques or changing parsing direction from forward to backward and finally 4we present a straightforward way to performparser domain adaptation using unlabeled data in the target domainwe entered a system based on the approach de scribed in this paper in the conll 2007 shared treesfor a more complete definition see the conll x shared task description paper 1044 task which differed from the 2006 edition by featuring two separate tracks one in multilingual parsing and a new track on domain adaptation for dependency parsersin the multi lingual parsing track participants train dependency parsers using treebanks provided for ten languages arabic basque catalan chinese czech eng lish greek hungarian italian and turkish in the domain adaptation track participants were pro vided with english training data from the wall street journal portion of the penn treebank converted to dependencies to train parsers to be evaluated on material in the biological and chemical domains and optionally on text from the childes database our systems accuracy was the highest in the domain adaptation track and only 043 below the top scoring system in the multilingual parsing track we first describe our approach to multilingual dependency parsing fol lowed by our approach for domain adaptationwe then provide an analysis of the results obtained with our system and discuss possible improve mentspendency parsing our overall parsing approach uses a bestfirst probabilistic shiftreduce algorithm 
based on the lr algorithm as such it follows a bottomup strategy or bottomuptrees as defined in buchholz and marsi in contrast to the shiftreduce dependency parsing algorithm described by nivre which is a bottomuptop down hybrid or bottomupspansit is unclear whether the use of a bottomuptrees algorithm has any advantage over the use of a bottomupspans algorithm in practice but the avail ability of different algorithms that perform the same parsing task could be advantageous in parser ensemblesthe main difference between our pars er and a traditional lr parser is that we do not use an lr table derived from an explicit grammar to determine shiftreduce actionsinstead we use a classifier with features derived from much of the same information contained in an lr table the top few items on the stack and the next few items of lookahead in the remaining input stringaddition ally following sagae and lavie we extend the basic deterministic lr algorithm with a bestfirst search which results in a parsing strategy sim ilar to generalized lr parsing except that we do not perform tomitas stackmerging operationsthe resulting algorithm is projective and nonprojectivity is handled by pseudoprojective trans formations as described in we use nivre and nilssons path scheme2for clarity we first describe the basic variant of the lr algorithm for dependency parsing which is a deterministic stepwise algorithmwe then show how we extend the deterministic parser into a best first probabilistic parser21 dependency parsing with a datadrivenvariant of the lr algorithm the two main data structures in the algorithm are a stack s and a queue q s holds subtrees of the fi nal dependency tree for an input sentence and q holds the words in an input sentences is initia lized to be empty and q is initialized to hold every word in the input in order so that the first word in the input is in the front of the queue3 the parser performs two main types of actions shift and reducewhen a shift action is taken a word is shifted from the front of q and placed on the top of s when a reduce action is taken the 2 the path scheme was chosen because it does not result in a potentially qua dratic increase in the number of dependency label types as observed with the head and headpath schemesunfortunately experiments comparing the use of the different pseudoprojectivity schemes were not performed due to time constraints3 we append a virtual rootword to the beginning of every sentence which is used as the head of every word in the dependency structure that does not have a head in the sentence1045 two top items in s are popped and a new item is pushed onto s this new item is a tree formed by making the root s1 of a dependent of the root of s2 or the root of s2 a dependent of the root of s1depending on which of these two cases oc cur we call the action reduceleft or reduceright according to whether the head of the new tree is to the left or to the right its new dependentin addi tion to deciding the direction of a reduce action the label of the newly formed dependency arc must also be decidedparsing terminates successfully when q is emp ty and s contains only a single tree if q is empty s contains two or more items and no further reduce ac tions can be taken parsing terminates and the input is rejectedin such cases the remaining items in s contain partial analyses for contiguous segments of the input22 a probabilistic lr model for dependen cy parsing in the traditional lr algorithm parser states are placed onto the stack and an lr table is consulted 
to determine the next parser actionin our case the parser state is encoded as a set of features de rived from the contents of the stack s and queue q and the next parser action is determined according to that set of featuresin the deterministic case described above the procedure used for determin ing parser actions returns a single actionif instead this procedure returns a list of several possible actions with corresponding probabilities we can then parse with a model simi lar to the probabilistic lr models described by briscoe and carroll where the probability of a parse tree is the product of the probabilities of each of the actions taken in its derivationto find the most probable parse tree according to the probabilistic lr model we use a bestfirst strategythis involves an extension of the deter ministic shiftreduce into a bestfirst shiftreduce algorithmto describe this extension we first in troduce a new data structure ti that represents a parser state which includes a stack si a queue qi and a probability pithe deterministic algorithm is a special case of the probabilistic algorithm where we have a single parser state t0 that contains s0 and q0 and the probability of the parser state is 1the bestfirst algorithm on the other handkeeps a heap h containing multiple parser states t0tmthese states are ordered in the heap ac cording to their probabilities which are determined by multiplying the probabilities of each of the parser actions that resulted in that parser statethe heap h is initialized to contain a single parser state t0 which contains a stack s0 a queue q0 and prob ability p0 10s0 and q0 are initialized in the same way as s and q in the deterministic algo rithmthe bestfirst algorithm then loops while h is nonemptyat each iteration first a state tcurrent is popped from the top of h if tcurrent corresponds to a final state we return the single item in scurrent as the dependency structure corresponding to the input sentenceotherwise we get a list of parser actions act0actn corresponding to state tcurrentfor each of these parser actions actj we create a new parser state tnew by applying actj to tcurrent and set the probability tnew to be pnew pcurrnet pactjthen tnew is inserted into the heap h once new states have been inserted onto h for each of the n parser actions we move on to the next iteration of the algorithmfor each of the ten languages for which training data was provided in the multilingual track of the conll 2007 shared task we trained three lr models as followsthe first lr model for each language uses maximum entropy classification to determine possible parser actions and their probabilities4to control overfitting in the maxent models we used boxtype in equality constraints the second lr model for each language also uses maxent classification but parsing is performed backwards which is accomplished simply by reversing the input string before parsing startssa gae and lavie and zeman and abokrtsk have observed that reversing the direction of stepwise parsers can be beneficial in parser combinationsthe third model uses support vector machines 5 using the polynomial 4 implementation by yoshimasa tsuruoka available at httpwwwtsujiiissyoutokyoacjptsuruokamaxent 5 implementation by taku kudo available at httpchasenorgtakusoftwaretinysvm and all vs all was used for multiclass classification1046 kernel with degree 2probabilities were estimated for svm outputs using the method described in but accuracy improvements were not observed during development when these esti mated 
probabilities were used instead of simply the single best action given by the classifier so in practice the svm parsing models we used were deterministicat test time each input sentence is parsed using each of the three lr models and the three result ing dependency structures are combined according to the maximumspanningtree parser combination scheme6 where each de pendency proposed by each of the models has the same weight the combined dependency tree is the final analysis for the input sentencealthough it is clear that finetuning could provide accuracy improvements for each of the models in each language the same set of meta parameters and features were used for all of the ten languages due to time constraints during system developmentthe features used were7 for the subtrees in s and s the number of children of the root word of the subtrees the number of children of the root word of the subtree to the right of the root word the number of children of the root word of the subtree to the left of the root word the pos tag and deprel of the rightmost and leftmost children the pos tag of the word immediately to the right of the root word of s the pos tag of the word immediately to the left of s 6 each dependency tree is deprojectivized before the combination occurs7 s denotes the nth item from the top of the stack is the item on top of the stack and q denotes the nth item in the queuefor a description of the features names in capital letters see the shared task description the previous parser action the features listed for the root words of the subtrees in table 1in addition the maxent models also used selected combinations of these featuresthe classes used to represent parser actions were designed to encode all aspects of an action simultaneouslyresults for each of the ten languages are shown in table 2 as labeled and unlabeled attachment scores along with the average labeled attachment score and highest labeled attachment score for all participants in the shared taskour results shown in boldface were among the top three scores for those particular languages s s s q q q word x x x x x lemma x x x pos x x x x x x cpos x x x feats x x x table 1 additional featureslanguage las uas average las top las arabic 7471 8404 6834 7652 basque 7464 8119 6806 7694 catalan 8816 9334 7985 8870 chinese 8469 8894 7659 8469 czech 7483 8127 7012 8019 english 8901 8987 8095 8961 greek 7358 8037 7022 7631 hungarian 7953 8351 7149 8027 italian 8391 8768 7806 8440 turkish 7591 8272 7006 7981 all 7990 8529 6550 8032 table 2 multilingual resultsin a similar way as we used multiple lr models in the multilingual track in the domain adaptation track we first trained two lr models on the outof 1047domain labeled training datathe first was a forward maxent model and the second was a back ward svm modelwe used these two models to perform a procedure similar to a single iteration of cotraining except that selection of the newly produced training instances was done by selecting sentences for which the two models produced identical analyseson the development data we verified that sentences for which there was perfect agreement between the two models had labeled attachment score just above 90 on average even though each of the models had accuracy be tween 78 and 79 over the entire development setour approach was as follows 1we trained the forward maxent and backwardsvm models using the outofdomain labeled training data 2we then used each of the models to parse thefirst two of the three sets of domainspecific unlabeled data that 
were provided 3we compared the output for the two modelsand selected only identical analyses that were produced by each of the two separate models 4we added those analyses to the original labeled training set the new larger training set and finally 6we used this model to parse the test datafollowing this procedure we obtained a labeled attachment score of 8106 and unlabeled attach ment score of 8342 both the highest scores for this trackthis was done without the use of any additional resources but these re sults are also higher than the top score for the open track where the use of certain additional resources was allowedsee one of the main assumptions in our use of differ ent models based on the same algorithm is that while the output generated by those models may often differ agreement between the models is an indication of correctnessin our domain adapta tion approach this was clearly truein fact the approach would not have worked if this assump tion was falseexperiments on the development set were encouragingas stated before when the parsers agreed labeled attachment score was over 90 even though the score of each model alone was lower than 79the domainadapted parser had a score of 821 a significant improvementinteres tingly the ensemble used in the multilingual track also produced good results on the development set for the domain adaptation data without the use of the unlabeled data at all with a score of 819 the different models used in each track were distinct in a few ways direction learner and search strategy of those differences the first one is particularly inter esting in singlestack shiftreduce models as oursin these models the context to each side of a dependency differs in a fundamental wayto one side we have tokens that have already been processed and are already in subtrees and to the other side we simply have a lookahead of the re maining input sentencethis way the context of the same dependency in a forward parser may differ significantly from the context of the same de pendency in a backward parserinterestingly the accuracy scores of the maxent backward models were found to be generally just below the accuracy of their corresponding forward models when tested on development data with two exceptions hunga rian and turkishin hungarian the accuracy scores produced by the forward and backward maxent lr models were not significantly differ ent with both labeled attachment scores at about 773 in turkish however the backward score was sig nificantly higher than the forward score 750 and 723 respectivelythe forward svm score was 731 and the combined score was 758in expe riments performed after the official submission of results we evaluated a backward svm model on the same development set and found it to be significantly more accurate than the forward model with a score of 757adding that score to the combination raised the combination score to 779 the likely reason for this difference is that over 80 of the dependencies in the turkish data set have the head to the right of 1048 the dependent while only less than 4 have the head to the leftthis means that the backward model builds much more partial structure in the stack as it consumes input tokens while the for ward model must consume most tokens before it starts making attachmentsin other words context in general in the backward model has more struc ture and attachments are made while there are still lookahead tokens while the opposite is generally true in the forward modelour results demonstrate the effectiveness of even 
small ensembles of parsers that are relatively similar there are several possible extensions and improvements to the approach we have described for example in section 3 we mention the use of different weighting schemes in dependency voting we list additional ideas that were not attempted due to time constraints but that are likely to produce improved results one of the simplest improvements to our approach is simply to train more models with no other changes to our setup as mentioned in section 5 the addition of a backward svm model did improve accuracy on the turkish set significantly and it is likely that improvements would also be obtained in other languages in addition other learning approaches such as memorybased language processing could be used a drawback of adding more models that became obvious in our experiments was the increased cost of both training and runtime a similar idea that may be more effective but requires more effort is to add parsers based on different approaches for example using mstparser a largemargin all pairs parser in our domain adaptation procedure results in significantly improved accuracy of course the use of different approaches used by different groups in the conll 2006 and 2007 shared tasks represents great opportunity for parser ensembles acknowledgements we thank the shared task organizers and treebank providers we also thank the reviewers for their comments and suggestions and yusuke miyao for insightful discussions this work was supported in part by grantinaid for specially promoted research 18002007
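the best-first probabilistic shift-reduce loop described earlier in the text can be sketched as below: parser states carry a stack of subtree roots, a queue of remaining token indices, the arcs built so far and a probability, and a heap always expands the most probable state next. the classifier is replaced by a stand-in scorer, the virtual root word and the pseudo-projective transformations are omitted, and all names are invented for the sketch.

import heapq

def apply_action(action, label, stack, queue, arcs):
    """shift moves the next token onto the stack; reduce_left / reduce_right pop
    the two top subtree roots and attach one to the other, the direction naming
    which side the new head is on."""
    if action == "shift":
        return stack + (queue[0],), queue[1:], arcs
    s1, s2 = stack[-1], stack[-2]                  # top two subtree roots
    if action == "reduce_left":                    # head is the left root s2
        return stack[:-2] + (s2,), queue, arcs + ((s2, s1, label),)
    return stack[:-2] + (s1,), queue, arcs + ((s1, s2, label),)   # reduce_right

def best_first_parse(n_tokens, action_scorer):
    """returns the arcs (head, dependent, label) of the most probable analysis
    under the product-of-action-probabilities model."""
    heap = [(-1.0, 0, ((), tuple(range(n_tokens)), ()))]
    tick = 1                                       # tie breaker for the heap
    while heap:
        neg_p, _, (stack, queue, arcs) = heapq.heappop(heap)
        if not queue and len(stack) == 1:          # a single tree remains: done
            return arcs
        for action, label, p in action_scorer(stack, queue, arcs):
            if action == "shift" and not queue:
                continue
            if action != "shift" and len(stack) < 2:
                continue
            state = apply_action(action, label, stack, queue, arcs)
            heapq.heappush(heap, (neg_p * p, tick, state))
            tick += 1
    return None                                    # input rejected

def toy_scorer(stack, queue, arcs):
    """stand-in for the maxent / svm classifier; the real parser predicts the
    action distribution from features of the stack, queue and parse history."""
    options = []
    if queue:
        options.append(("shift", None, 0.5))
    if len(stack) >= 2:
        options.append(("reduce_left", "dep", 0.3))
        options.append(("reduce_right", "dep", 0.2))
    z = sum(p for _, _, p in options)
    return [(a, l, p / z) for a, l, p in options]

best_first_parse(3, toy_scorer)   # arcs for a tiny three-token input

because action probabilities never exceed one, the first complete state popped from the heap is the most probable one, so the search can stop there.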
D07-1111
dependency parsing and domain adaptation with lr models and parser ensembles. we present a datadriven variant of the lr algorithm for dependency parsing and extend it with a bestfirst search for probabilistic generalized lr dependency parsing. parser actions are determined by a classifier based on features that represent the current state of the parser. we apply this parsing framework to both tracks of the conll 2007 shared task in each case taking advantage of multiple models trained with different learners. in the multilingual track we train three lr models for each of the ten languages and combine the analyses obtained with each individual model with a maximum spanning tree voting scheme. in the domain adaptation track we use two models to parse unlabeled data in the target domain to supplement the labeled outofdomain training set in a scheme similar to one iteration of cotraining. we use a combination of colearning and active learning by training two different parsers on the labeled training data parsing the unlabeled domain data with both parsers and adding parsed sentences to the training data only if the two parsers agreed on their analysis. we generalize the standard deterministic framework to probabilistic parsing by using a bestfirst search strategy
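The forward/backward contrast discussed for the multilingual track can be reproduced without modifying the parser itself, for instance by reversing each training sentence and remapping its head indices before training a second model. The helper below is only an illustration of that idea, assuming 1-based head indices with 0 for the artificial root; the paper does not specify how its backward models were actually implemented.

    # derive a "backward" training instance from a forward one by reversing the
    # token order and remapping head indices accordingly.

    def reverse_dependency_sentence(tokens, heads):
        """tokens: list of word forms; heads: 1-based head indices (0 = root)."""
        n = len(tokens)
        reversed_tokens = tokens[::-1]
        reversed_heads = []
        for i in range(n - 1, -1, -1):             # walk tokens in reverse order
            h = heads[i]
            # original position j becomes position n + 1 - j after reversal
            reversed_heads.append(0 if h == 0 else n + 1 - h)
        return reversed_tokens, reversed_heads

    # example: "economy shrank sharply" with heads [2, 0, 2]
    # becomes "sharply shrank economy" with heads [2, 0, 2] in the new order
    print(reverse_dependency_sentence(["economy", "shrank", "sharply"], [2, 0, 2]))

A model trained on the reversed data consumes the sentence from the other end, so the already-built structure and the lookahead around each attachment are swapped relative to the forward model, which is the source of the complementarity exploited in the ensemble.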
extracting aspectevaluation and aspectof relations in opinion mining the technology of opinion extraction allowsusers to retrieve and analyze peoples opinions scattered over web documents we define an opinion unit as a quadruple consist ing of the opinion holder the subject being evaluated the part or the attribute in which the subject is evaluated and the value of theevaluation that expresses a positive or neg ative assessment we use this definition as the basis for our opinion extraction task we focus on two important subtasks of opinion extraction extracting aspectevaluationrelations and extracting aspectof re lations and we approach each task usingmethods which combine contextual and sta tistical clues our experiments on japaneseweblog posts show that the use of contex tual clues improve the performance for both tasks the explosive increase in web communication hasattracted increasing interest in technologies for automatically mining personal opinions from web doc uments such as product reviews and weblogssuch technologies would benefit users who seek reviews on certain consumer products of interestprevious approaches to the task of mining a large scale document collection of customer opinions can be classified into two approaches doc ument classification and information extractiontheformer is the task of classifying documents or pas sages according to their semantic orientation such aspositive vs negativethis direction has been form ing the mainstream of research on opinionsensitive text processing the latter on the other hand focuses on the task of extracting opinions consisting of informationabout for example who feels how about which as pect of what productfrom unstructured text datain this paper we refer to this information extraction oriented task as opinion extractionin contrast to sentiment classification opinion extraction aims atproducing richer information and requires an in depth analysis of opinions which has only recently been attempted by a growing but still relatively small research community most previous work on customer opinion ex traction assumes the source of information to be customer reviews collected from customer review sites in contrast in this paper we con sider the task of extracting customer opinions fromunstructured weblog postscompared with extrac tion from review articles extraction from weblogs is more challenging because weblog posts tend toexhibit greater diversity in topics goals vocabulary style etc and are much more likely to include descriptions irrelevant to the subject in questionin this paper we first describe our task set ting of opinion extractionwe conducted a corpusstudy and investigated the feasibility of the task def 1065 inition by showing the statistics and interannotator agreement of our corpus annotationnext we showthat the crucial body of the above opinion extraction task can be decomposed into two kinds of relation extraction ie aspectevaluation relation extraction and aspectof relation extractionfor exam ple the passage i went out for lunch at the deli and ordered a curry with chickenit was pretty goodhas an aspectevaluation relation curry with chicken was goodand an aspectof relation the deli curry with the chickenthe former task can be regarded as a special type of predicateargument structure analysis or semantic role labelingthelatter on the other hand can be regarded as bridg ing reference resolution which is the task of identifying relations between definite noun phrases and discoursenew entities implicitly related to some 
previously mentioned entitiesmost of the previous work on customer opinionextraction however does not adopt the stateoftheart techniques in those fields relying only on sim ple proximitybased or patternbased methodsinthis context this paper empirically shows that incor porating machine learningbased techniques devisedfor predicateargument structure analysis and bridg ing reference resolution improve the performanceof both aspectevaluation and aspectof relation extractionfurthermore we also show that combin ing contextual clues with a common cooccurrencestatisticsbased technique for bridging reference resolution makes a significant improvement on aspect of relation extractionour present goal is to build a computational model to extract opinions from web documents in such a form as who feels how on which aspects of which subjectsgiven the passage presented in figure 1 for example the opinion we want to extract is the writer feels that the colors of pictures taken with powershot are beautifulas suggested by this example we consider it reasonable to start with an assumption that most evaluative opinionscan be structured as a frame composed of the fol lowing constituentsopinion holder the person who is making an eval uationan opinion holder is typically the first
D07-1114
extracting aspectevaluation and aspectof relations in opinion mining. the technology of opinion extraction allows users to retrieve and analyze people's opinions scattered over web documents. we define an opinion unit as a quadruple consisting of the opinion holder the subject being evaluated the part or the attribute in which the subject is evaluated and the value of the evaluation that expresses a positive or negative assessment. we use this definition as the basis for our opinion extraction task. we focus on two important subtasks of opinion extraction extracting aspectevaluation relations and extracting aspectof relations and we approach each task using methods which combine contextual and statistical clues. our experiments on japanese weblog posts show that the use of contextual clues improves the performance for both tasks. we analyze the labeled corpus for opinion expressions and observe that many opinion expressions are used in multiple domains. we adopt a supervised learning technique to search for useful syntactic patterns as contextual clues
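As a concrete illustration of the task definition and of combining contextual and statistical clues, the sketch below represents an opinion unit as a quadruple and interpolates a contextual (classifier-based) score with a co-occurrence statistic when ranking candidate subjects for an aspect. PMI and the interpolation weight lam are stand-ins chosen for the example; the paper's actual models are machine-learned and more elaborate.

    from dataclasses import dataclass
    from math import log

    @dataclass
    class OpinionUnit:
        holder: str        # who is making the evaluation
        subject: str       # what is being evaluated (e.g. "the deli")
        aspect: str        # the part or attribute (e.g. "curry with chicken")
        evaluation: str    # the evaluative expression (e.g. "pretty good")

    def pmi(aspect, subject, pair_counts, aspect_counts, subject_counts, total):
        """Pointwise mutual information as a simple co-occurrence statistic,
        with crude add-0.5 smoothing for unseen events."""
        p_pair = pair_counts.get((aspect, subject), 0.5) / total
        p_a = aspect_counts.get(aspect, 0.5) / total
        p_s = subject_counts.get(subject, 0.5) / total
        return log(p_pair / (p_a * p_s))

    def score_aspect_of(aspect, candidate_subject, contextual_score, stats, lam=0.5):
        """Interpolate a contextual clue score (e.g. from a classifier trained on
        syntactic-pattern features) with a corpus-level co-occurrence statistic."""
        statistical_score = pmi(aspect, candidate_subject, *stats)
        return lam * contextual_score + (1.0 - lam) * statistical_score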
indirecthmmbased hypothesis alignment for combining outputs from machine translation systems this paper presents a new hypothesis alignment method for combining outputs of multiple machine translation systems an indirect hidden markov model is proposed to address the synonym matching and word ordering issues in hypothesis alignment unlike traditional hmms whose parameters are trained via maximum likelihood estimation the of the ihmm are estimated a variety of sources including word semantic similarity word surface similarity and a distancebased distortion penalty the ihmmbased method significantly outperforms the stateoftheart terbased alignment model in our experiments on nist benchmark datasets our combined smt system using the proposed method achieved the best chinesetoenglish translation result in the constrained training track of the system combination has been applied successfully to various machine translation tasksrecently confusionnetworkbased system combination algorithms have been developed to combine outputs of multiple machine translation systems to form a consensus output a confusion network comprises a sequence of sets of alternative words possibly including nulls with associated scoresthe consensus output is then derived by selecting one word from each set of alternatives to produce the sequence with the best overall score which could be assigned in various ways such as by voting by using posterior probability estimates or by using a combination of these measures and other featuresconstructing a confusion network requires choosing one of the hypotheses as the backbone and other hypotheses are aligned to it at the word levelhigh quality hypothesis alignment is crucial to the performance of the resulting system combinationhowever there are two challenging issues that make mt hypothesis alignment difficultfirst different hypotheses may use different synonymous words to express the same meaning and these synonyms need to be aligned to each othersecond correct translations may have different word orderings in different hypotheses and these words need to be properly reordered in hypothesis alignmentin this paper we propose an indirect hidden markov model for mt hypothesis alignmentthe hmm provides a way to model both synonym matching and word orderingunlike traditional hmms whose parameters are trained via maximum likelihood estimation the parameters of the ihmm are estimated indirectly from a variety of sources including word semantic similarity word surface similarity and a distancebased distortion penalty without using large amount of training dataour combined smt system using the proposed method gave the best result on the chinesetoenglish test in the constrained training track of the 2008 nist open mt evaluation the current stateoftheart is confusionnetworkbased mt system combination as described by rosti and colleagues the major steps are illustrated in figure 1in fig1 hypotheses from different mt systems are first collectedthen in fig1 one of the hypotheses is selected as the backbone for hypothesis alignmentthis is usually done by a sentencelevel minimum bayes risk method which selects a hypothesis that has the minimum average distance compared to all hypothesesthe backbone determines the word order of the combined outputthen as illustrated in fig1 all other hypotheses are aligned to the backbonenote that in fig1 the symbol ε denotes a null word which is inserted by the alignment normalization algorithm described in section 34fig1 also illustrates the handling of synonym 
alignment and word reordering of the hypothesisthen in fig1 a confusion network is constructed based on the aligned hypotheses which consists of a sequence of sets in which each word is aligned to a list of alternative words in the same setthen a set of global and local features are used to decode the confusion networkin confusionnetworkbased system combination for smt a major difficulty is aligning hypotheses to the backboneone possible statistical model for word alignment is the hmm which has been widely used for bilingual word alignment in this paper we propose an indirecthmm method for monolingual hypothesis alignmentlet denote the backbone _ a hypothesis to be aligned to e and the alignment that specifies the position of the backbone word aligned to each hypothesis wordwe treat each word in the backbone as an hmm state and the words in the hypothesis as the observation sequencewe use a firstorder hmm assuming that the emission probability depends only on the backbone word and the transition probability p depends only on the position of the last state and the length of the backbonetreating the alignment as hidden variable the conditional probability that the hypothesis is generated by the backbone is given by as in hmmbased bilingual word alignment we also associate a null with each backbone word to allow generating hypothesis words that do not align to any backbone wordin hmmbased hypothesis alignment emission probabilities model the similarity between a backbone word and a hypothesis word and will be referred to as the similarity modelthe transition probabilities model word reordering and will be called the distortion modelthe similarity model which specifies the emission probabilities of the hmm models the similarity between a backbone word and a hypothesis wordsince both words are in the same language the similarity model can be derived based on both semantic similarity and surface similarity and the overall similarity model is a linear interpolation of the two where and reflect the semantic and surface similarity between and e respectively and α is the interpolation factorsince the semantic similarity between two target words is sourcedependent the semantic similarity model is derived by using the source word sequence as a hidden layer where is the source sentencemoreover in order to handle the case that two target words are synonyms but neither of them has counterpart in the source sentence a null is introduced on the source side which is represented by f0the last step in assumes that first ei generates all source words including nullthen ej is generated by all source words including nullin the common smt scenario where a large amount of bilingual parallel data is available we can estimate the translation probabilities from a source word to a target word and vice versa via conventional bilingual word alignmentthen both p and in can be derived where is the translation model from the sourcetotarget word alignment model and p which enforces the sumto1 constraint over all words in the source sentence takes the following form where a2s is the translation model from the targettosource word alignment modelin our method a2s for all target words is simply a constant pnull whose value is optimized on heldout data 1the surface similarity model can be estimated in several waysa very simple model could be based on exact match the surface similarity model per or the length of the longest common subsequence between themthen this raw similarity measure is transformed to a surface similarity 
score between 0 and 1 through an exponential mapping where is computed as and is the raw similarity measure of ej ei which is the length of the lmp or lcs of ej and ei and p is a smoothing factor that characterizes the mapping thus as p approaches infinity backs off to the exact match modelwe found the smoothed similarity model of yields slightly better results than the exact match modelboth lmp and lcs based methods achieve similar performance but the computation of lmp is fastertherefore we only report results of the lmpbased smoothed similarity modelthe distortion model which specifies the transition probabilities of the hmm models the firstorder dependencies of word orderingin bilingual hmmbased word alignment it is commonly assumed that transition probabilities following och and ney we use a fixed value p0 for the probability of jumping to a null state which can be optimized on heldout data and the overall distortion model becomes as suggested by liang et al we can group the distortion parameters c d i i into a few bucketsin our implementation 11 buckets are used for c c c c cthe probability mass for transitions with jump distance larger than 6 and less than 4 is uniformly dividedby doing this only a handful of c parameters need to be estimatedalthough it is possible to estimate them using the them algorithm on a small development set we found that a particularly simple model described below works surprisingly well in our experimentssince both the backbone and the hypothesis are in the same language it seems intuitive that the distortion model should favor monotonic alignment and only allow nonmonotonic alignment with a certain penaltythis leads us to use a distortion model of the following form where k is a tuning factor optimized on heldout dataas shown in fig2 the value of distortion score peaks at d1 ie the monotonic alignment and decays for nonmonotonic alignments depending on how far it diverges from the monotonic alignmentgiven an hmm the viterbi alignment algorithm can be applied to find the best alignment between the backbone and the hypothesis however the alignment produced by the algorithm cannot be used directly to build a confusion networkthere are two reasons for thisfirst the alignment produced may contain 1n mappings between the backbone and the hypothesis whereas 11 mappings are required in order to build a confusion networksecond if hypothesis words are aligned to a null in the backbone or vice versa we need to insert actual nulls into the right places in the hypothesis and the backbone respectivelytherefore we need to normalize the alignment produced by viterbi searchfirst whenever more than one hypothesis words are aligned to one backbone word we keep the link which gives the highest occupation probability computed via the forwardbackward algorithmthe other hypothesis words originally aligned to the backbone word will be aligned to the null associated with that backbone wordsecond for the hypothesis words that are aligned to a particular null on the backbone side a set of nulls are inserted around that backbone word associated with the null such that no links cross each otheras illustrated in fig3 if a hypothesis word e2 is aligned to the backbone word e2 a null is inserted in front of the backbone word e2 linked to the hypothesis word e1 that comes before e2nulls are also inserted for other hypothesis words such as e3 and e4 after the backbone word e2if there is no hypothesis word aligned to that backbone word all nulls are inserted after that backbone word 3 for 
a backbone word that is aligned to no hypothesis word a null is inserted on the hypothesis side right after the hypothesis word which is aligned to the immediately preceding backbone wordan example is shown in fig3 the two main hypothesis alignment methods for system combination in the previous literature are giza and terbased methodsmatusov et al proposed using giza to align words between different mt hypotheses where all hypotheses of the test corpus are collected to create hypothesis pairs for giza trainingthis approach uses the conventional hmm model bootstrapped from ibm model1 as implemented in giza and heuristically combines results from aligning in both directionssystem combination based on this approach gives an improvement over the best single systemhowever the number of hypothesis pairs for training is limited by the size of the test corpusalso mt hypotheses from the same source sentence are correlated with each other and these hypothesis pairs are not iid data samplestherefore giza training on such a data set may be unreliablebangalore et al used a multiple stringmatching algorithm based on levenshtein edit distance and later sim et al and rosti et al extended it to a terbased method for hypothesis alignmentter measures the minimum number of edits including substitution insertion deletion and shift of blocks of words that are needed to modify a hypothesis so that it exactly matches the other hypothesisthe best alignment is the one that gives the minimum number of translation editsterbased confusion network construction and system combination has demonstrated superior performance on various largescale mt tasks however when searching for the optimal alignment the terbased method uses a strict surface hard match for counting editstherefore it is not able to handle synonym matching wellmoreover although terbased alignment allows phrase shifts to accommodate the nonmonotonic word ordering all nonmonotonic shifts are penalized equally no matter how short or how long the move is and this penalty is set to be the same as that for substitution deletion and insertion editstherefore its modeling of nonmonotonic word ordering is very coarsegrainedin contrast to the gizabased method our ihmmbased method has a similarity model estimated using bilingual word alignment hmms that are trained on a large amount of bitext datamoreover the surface similarity information is explicitly incorporated in our model while it is only used implicitly via parameter initialization for ibm model1 training by matusov et al on the other hand the terbased alignment model is similar to a coarsegrained nonnormalized version of our ihmm in which the similarity model assigns no penalty to an exact surface match and a fixed penalty to all substitutions insertions and deletions and the distortion model simply assigns no penalty to a monotonic jump and a fixed penalty to all other jumps equal to the nonexactmatch penalty in the similarity modelthere have been other hypothesis alignment methodskarakos et al proposed an itgbased method for hypothesis alignment rosti et al proposed an incremental alignment method and a heuristicbased matching algorithm was proposed by jayaraman and lavie in this section we evaluate our ihmmbased hypothesis alignment method on the chinesetoenglish test in the constrained training track of the 2008 nist open mt evaluation we compare to the terbased method used by rosti et al in the following experiments the nist bleu score is used as the evaluation metric which is reported as a percentage in 
the following sectionsin our implementation the backbone is selected with mbronly the top hypothesis from each single system is considered as a backbonea uniform posteriori probability is assigned to all hypothesester is used as loss function in the mbr computationsimilar to each word in the confusion network is associated with a word posterior probabilitygiven a system s each of its hypotheses is assigned with a rankbased score of 1η where r is the rank of the hypothesis and η is a rank smoothing parameterthe system specific rankbased score of a word w for a given system s is the sum of all the rankbased scores of the hypotheses in system s that contain the word w at the given position this score is then normalized by the sum of the scores of all the alternative words at the same position and from the same system s to generate the system specific word posteriorthen the total word posterior of w over all systems is a sum of these system specific posteriors weighted by system weightsbeside the word posteriors we use language model scores and a word count as features for confusion network decodingtherefore for an mway system combination that uses n lms a total of mn1 decoding parameters including m1 system weights one rank smoothing factor n language model weights and one weight for the word count feature are optimized using powells method to maximize bleu score on a development set4 two language models are used in our experimentsone is a trigram model estimated from the english side of the parallel training data and the other is a 5gram model trained on the english gigaword corpus from ldc using the msrlm toolkit 4 the parameters of ihmm are not tuned by maximumbleu trainingin order to reduce the fluctuation of bleu scores caused by the inconsistent translation output length an unsupervised length adaptation method has been devisedwe compute an expected length ratio between the mt output and the source sentences on the development set after maximumbleu trainingthen during test we adapt the length of the translation output by adjusting the weight of the word count feature such that the expected outputsource length ratio is metin our experiments we apply length adaptation to the system combination output at the level of the whole test corpusthe development set used for system combination parameter training contains 1002 sentences sampled from the previous nist mt chinesetoenglish test sets 35 from mt04 55 from mt05 and 10 from mt06newswirethe test set is the mt08 chinesetoenglish current test set which includes 1357 sentences from both newswire and webdata genresboth dev and test sets have four references per sentenceas inputs to the system combination 10best hypotheses for each source sentence in the dev and test sets are collected from each of the eight single systemsall outputs on the mt08 test set were truecased before scoring using a loglinear conditional markov model proposed by toutanova et al however to save computation effort the results on the dev set are reported in case insensitive bleu score insteadin our main experiments outputs from a total of eight single mt systems were combinedas listed in table 1 sys1 is a treetostring system proposed by quirk et al sys2 is a phrasebased system with fast pruning proposed by moore and quirk sys3 is a phrasebased system with syntactic source reordering proposed by wang et al sys4 is a syntaxbased preordering system proposed by li et al sys5 is a hierarchical system proposed by chiang sys6 is a lexicalized reordering system proposed by xiong 
et al sys7 is a twopass phrasebased system with adapted lm proposed by foster and kuhn and sys8 is a hierarchical system with twopass rescoring using a parserbased lm proposed by wang et al all systems were trained within the confines of the constrained training condition of nist mt08 evaluationthese single systems are optimized with maximumbleu training on different subsets of the previous nist mt test datathe bilingual translation models used to compute the semantic similarity are from the worddependent hmms proposed by he which are trained on two million parallel sentencepairs selected from the training corpus allowed by the constrained training condition of mt08in the ihmmbased method the smoothing factor for surface similarity model is set to ρ 3 the interpolation factor of the overall similarity model is set to α 03 and the controlling factor of the distancebased distortion parameters is set to k2these settings are optimized on the dev setindividual system results and system combination results using both ihmm and ter alignment on both the dev and test sets are presented in table 1the terbased hypothesis alignment tool used in our experiments is the publicly available ter java program tercom default settings of tercom are used in the following experimentson the dev set the case insensitive bleu score of the ihmmbased 8way system combination output is about 58 points higher than that of the best single systemcompared to the terbased method the ihmmbased method is about 15 bleu points betteron the mt08 test set the ihmmbased system combination gave a case sensitive bleu score of 3089it outperformed the best single system by 47 bleu points and the terbased system combination by 10 bleu pointsnote that the best single system on the dev set and the test set are differentthe different single systems are optimized on different tuning sets so this discrepancy between dev set and test set results is presumably due to differing degrees of mismatch between the dev and test sets and the various tuning setsin order to evaluate how well our method performs when we combine more systems we collected mt outputs on mt08 from seven additional single systems as summarized in table 2these systems belong to two groupssys9 to sys12 are in the first groupthey are syntaxaugmented hierarchical systems similar to those described by shen et al using different chinese word segmentation and language modelsthe second group has sys13 to sys15sys13 is a phrasal system proposed by koehn et al sys14 is a hierarchical system proposed by chiang and sys15 is a syntaxbased system proposed by galley et alall seven systems were trained within the confines of the constrained training condition of nist mt08 evaluationwe collected 10best mt outputs only on the mt08 test set from these seven extra systemsno mt outputs on our dev set are available from them at presenttherefore we directly adopt system combination parameters trained for the previous 8way system combination except the system weights which are reset by the following heuristics first the total system weight mass 10 is evenly divided among the three groups of single systems sys18 sys912 and sys1315each group receives a total system weight mass of 13then the weight mass is further divided in each group in the first group the original weights of systems 18 are multiplied by 13 in the second and third groups the weight mass is evenly distributed within the group ie 112 for each system in group 2 and 19 for each system in group 35length adaptation is applied to control 
the final output length where the same expected length ratio of the previous 8way system combination is adoptedthe results of the 15way system combination are presented in table 3it shows that the ihmmbased method is still about 1 bleu point better than the terbased methodmoreover combining 15 single systems gives an output that has a nist bleu score of 3482 which is 39 points better than the best submission to the nist mt08 constrained training track to our knowledge this is the best result reported on this taskin this section we evaluate the effect of the semantic similarity model and the surface similarity model by varying the interpolation weight α of the results on both the dev and test sets are reported in table 4in one extreme case α 1 the overall similarity model is based only on semantic similaritythis gives a case insensitive bleu score of 4170 and a case sensitive bleu score of 2892 on the dev and test set respectivelythe accuracy is significantly improved to 4362 on the dev set and 3089 on test set when α 03in another extreme case α 0 in which only the surface similarity model is used for the overall similarity model the performance degrades by about 02 pointtherefore the surface similarity information seems more important for monolingual hypothesis alignment but both submodels are usefulwe investigate the effect of the distancebased distortion model by varying the controlling factor k in for example setting k10 gives a lineardecay distortion model and setting k20 gives a quadratic smoothed distancebased distortion modelas shown in table 5 the optimal result can be achieved using a properly smoothed distancebased distortion modelsynonym matching and word ordering are two central issues for hypothesis alignment in confusionnetworkbased mt system combinationin this paper an ihmmbased method is proposed for hypothesis alignmentit uses a similarity model for synonym matching and a distortion model for word orderingin contrast to previous methods the similarity model explicitly incorporates both semantic and surface word similarity which is critical to monolingual word alignment and a smoothed distancebased distortion model is used to model the firstorder dependency of word ordering which is shown to be better than simpler approachesour experimental results show that the ihmmbased hypothesis alignment method gave superior results on the nist mt08 c2e test set compared to the terbased methodmoreover we show that our system combination method can scale up to combining more systems and produce a better output that has a case sensitive bleu score of 3482 which is 39 bleu points better than the best official submission of mt08the authors are grateful to chris quirk arul menezes kristina toutanova william dolan mu li chiho li dongdong zhang long jiang ming zhou george foster roland kuhn jing zheng wen wang necip fazil ayan dimitra vergyri nicolas scheffer andreas stolcke kevin knight jenssoenke voeckler spyros matsoukas and anttiveikko rosti for assistance with the mt systems andor for the valuable suggestions and discussions
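The two IHMM components that drive hypothesis alignment can be written down compactly. The sketch below reconstructs the smoothed surface similarity model (an exponential mapping of the longest-matched-prefix ratio) and the distance-based distortion weight that peaks at the monotonic jump d = 1. Parameter names, the default values (rho = 3, alpha = 0.3, K = 2 as reported for the dev set), and the way a semantic similarity score would be passed in are assumptions made for illustration rather than the released implementation.

    from math import exp

    def longest_matched_prefix(a, b):
        """Length of the longest common prefix of the two words."""
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def surface_similarity(hyp_word, backbone_word, rho=3.0):
        """Smoothed surface similarity exp(rho * (s - 1)) with
        s = LMP / max(word lengths); equals 1.0 on an exact match and
        approaches the hard exact-match model as rho grows."""
        m = longest_matched_prefix(hyp_word, backbone_word)
        s = m / max(len(hyp_word), len(backbone_word))
        return exp(rho * (s - 1.0))

    def similarity(hyp_word, backbone_word, semantic_score=0.0, alpha=0.3):
        """Overall similarity: linear interpolation of a semantic score (derived
        from bilingual word-alignment models, supplied by the caller) and the
        surface similarity above."""
        return alpha * semantic_score + (1.0 - alpha) * surface_similarity(hyp_word, backbone_word)

    def distortion_weight(prev_pos, pos, k=2.0):
        """Unnormalized transition weight (1 + |d - 1|) ** -K, peaking at the
        monotonic jump d = 1 and decaying for longer jumps in either direction."""
        d = pos - prev_pos
        return (1.0 + abs(d - 1)) ** (-k)

    # identical words score 1.0; near-matches decay smoothly instead of dropping to 0
    print(surface_similarity("translation", "translation"))   # 1.0
    print(surface_similarity("translation", "translations"))  # ~0.78

In the full model these weights feed a Viterbi alignment between the backbone and each hypothesis, followed by the normalization step that resolves 1-to-n links and inserts nulls so the result can be folded into a confusion network.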
D08-1011
indirecthmmbased hypothesis alignment for combining outputs from machine translation systems. this paper presents a new hypothesis alignment method for combining outputs of multiple machine translation systems. an indirect hidden markov model is proposed to address the synonym matching and word ordering issues in hypothesis alignment. unlike traditional hmms whose parameters are trained via maximum likelihood estimation the parameters of the ihmm are estimated indirectly from a variety of sources including word semantic similarity word surface similarity and a distancebased distortion penalty. the ihmmbased method significantly outperforms the stateoftheart terbased alignment model in our experiments on nist benchmark datasets. our combined smt system using the proposed method achieved the best chinesetoenglish translation result in the constrained training track of the 2008 nist open mt evaluation. we propose using an indirect hidden markov model for pairwise alignment of system outputs
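For the confusion-network decoding step described in the experiments, the rank-based word posteriors can be sketched as follows, assuming the per-hypothesis score takes the form 1/(1+r)**eta (the exponent is only partially legible in the text above) and that each system's scores are normalized within a slot before being combined with system weights. The data layout is invented for the example.

    def word_posteriors(slot_votes, system_weights, eta=0.1):
        """slot_votes: {system: [(word, rank), ...]} for one confusion-network slot.
        Returns {word: posterior}, combining rank-based, per-system-normalized
        scores with tunable system weights."""
        posteriors = {}
        for system, votes in slot_votes.items():
            scores = {}
            for word, rank in votes:                       # rank 0 for the 1-best hypothesis
                scores[word] = scores.get(word, 0.0) + 1.0 / (1.0 + rank) ** eta
            z = sum(scores.values())                       # normalize within this system
            for word, score in scores.items():
                posteriors[word] = posteriors.get(word, 0.0) + \
                    system_weights[system] * score / z
        return posteriors

    votes = {"sys1": [("good", 0), ("nice", 1)], "sys2": [("good", 0), ("good", 1)]}
    print(word_posteriors(votes, {"sys1": 0.5, "sys2": 0.5}))

In the full system these posteriors are combined with language model scores and a word count feature, and the feature weights are tuned with Powell's method to maximize BLEU on the development set.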
multilingual subjectivity analysis using machine translation although research in other languages is increasing much of the work in subjectivity analysis has been applied to english data mainly due to the large body of electronic resources and tools that are available for this language in this paper we propose and evaluate methods that can be employed to transfer a repository of subjectivity resources across languages specifically we attempt to leverage on the resources available for english and by employing machine translation generate resources for subjectivity analysis in other languages through comparative evaluations on two different languages we show that automatic translation is a viable alternative for the construction of resources and tools for subjectivity analysis in a new target language we have seen a surge in interest towards the application of automatic tools and techniques for the extraction of opinions emotions and sentiments in text a large number of text processing applications have already employed techniques for automatic subjectivity analysis including automatic expressive texttospeech synthesis text semantic analysis tracking sentiment timelines in online forums and news mining opinions from product reviews and question answering a significant fraction of the research work to date in subjectivity analysis has been applied to english which led to several resources and tools available for this languagein this paper we explore multiple paths that employ machine translation while leveraging on the resources and tools available for english to automatically generate resources for subjectivity analysis for a new target languagethrough experiments carried out with automatic translation and crosslingual projections of subjectivity annotations we try to answer the following questionsfirst assuming an english corpus manually annotated for subjectivity can we use machine translation to generate a subjectivityannotated corpus in the target languagesecond assuming the availability of a tool for automatic subjectivity analysis in english can we generate a corpus annotated for subjectivity in the target language by using automatic subjectivity annotations of english text and machine translationfinally third can these automatically generated resources be used to effectively train tools for subjectivity analysis in the target languagesince our methods are particularly useful for languages with only a few electronic tools and resources we chose to conduct our initial experiments on romanian a language with limited text processing resources developed to datefurthermore to validate our results we carried a second set of experiments on spanishnote however that our methods do not make use of any target language specific knowledge and thus they are applicable to any other language as long as a machine translation engine exists between the selected language and englishresearch in sentiment and subjectivity analysis has received increasingly growing interest from the natural language processing community particularly motivated by the widespread need for opinionbased applications including product and movie reviews entity tracking and analysis opinion summarization and othersmuch of the work in subjectivity analysis has been applied to english data though work on other languages is growing eg japanese data are used in chinese data are used in and german data are used in in addition several participants in the chinese and japanese opinion extraction tasks of ntcir6 performed subjectivity and 
sentiment analysis in languages other than englishin general efforts on building subjectivity analysis tools for other languages have been hampered by the high cost involved in creating corpora and lexical resources for a new languageto address this gap we focus on leveraging resources already developed for one language to derive subjectivity analysis tools for a new languagethis motivates the direction of our research in which we use machine translation coupled with crosslingual annotation projections to generate the resources and tools required to perform subjectivity classification in the target languagethe work closest to ours is the one reported in where a bilingual lexicon and a manually translated parallel text are used to generate the resources required to build a subjectivity classifier in a new languagein that work we found that the projection of annotations across parallel texts can be successfully used to build a corpus annotated for subjectivity in the target languagehowever parallel texts are not always available for a given language pairtherefore in this paper we explore a different approach where instead of relying on manually translated parallel corpora we use machine translation to produce a corpus in the new languagewe explore the possibility of using machine translation to generate the resources required to build subjectivity annotation tools in a given target languagewe focus on two main scenariosfirst assuming a corpus manually annotated for subjectivity exists in the source language we can use machine translation to create a corpus annotated for subjectivity in the target languagesecond assuming a tool for automatic subjectivity analysis exists in the source language we can use this tool together with machine translation to create a corpus annotated for subjectivity in the target languagein order to perform a comprehensive investigation we propose three experiments as described belowthe first scenario based on a corpus manually annotated for subjectivity is exemplified by the first experimentthe second scenario based on a corpus automatically annotated with a tool for subjectivity analysis is subsequently divided into two experiments depending on the direction of the translation and on the dataset that is translatedin all three experiments we use english as a source language given that it has both a corpus manually annotated for subjectivity and a tool for subjectivity analysis in this experiment we use a corpus in the source language manually annotated for subjectivitythe corpus is automatically translated into the target language followed by a projection of the subjectivity labels from the source to the target languagethe experiment is illustrated in figure 1we use the mpqa corpus which is a collection of 535 englishlanguage news articles from a variety of news sources manually annotated for subjectivityalthough the corpus was originally annotated at clause and phrase level we use the sentencelevel annotations associated with the dataset from the total of 9700 sentences in this corpus 55 of the sentences are labeled as subjective while the rest are objectiveafter the automatic translation of the corpus and the projection of the annotations we obtain a large corpus of 9700 subjectivityannotated sentences in the target language which can be used to train a subjectivity classifierin the second experiment we assume that the only resources available are a tool for subjectivity annotation in the source language and a collection of raw texts also in the source languagethe 
source language text is automatically annotated for subjectivity and then translated into the target languagein this way we produce a subjectivity annotated corpus that we can use to train a subjectivity annotation tool for the target languagefigure 2 illustrates this experimentin order to generate automatic subjectivity annotations we use the opinionfinder tool developed by opinionfinder includes two classifiersthe first one is a rulebased highprecision classifier that labels sentences based on the presence of subjective clues obtained from a large lexiconthe second one is a highcoverage classifier that starts with an initial corpus annotated using the highprecision classifier followed by several bootstrapping steps that increase the size of the lexicon and the coverage of the classifierfor most of our experiments we use the highcoverage classifiertable 1 shows the performance of the two opinionfinder classifiers as measured on the mpqa corpus as a raw corpus we use a subset of the semcor corpus consisting of 107 documents with roughly 11000 sentencesthis is a balanced corpus covering a number of topics in sports politics fashion education and othersthe reason for working with this collection is the fact that we also have a manual translation of the semcor documents from english into one of the target languages used in the experiments which enables comparative evaluations of different scenarios note that in this experiment the annotation of subjectivity is carried out on the original source language text and thus expected to be more accurate than if it were applied on automatically translated texthowever the training data in the target language is produced by automatic translation and thus likely to contain errorsthe third experiment is similar to the second one except that we reverse the direction of the translationwe translate raw text that is available in the target language into the source language and then use a subjectivity annotation tool to label the automatically translated source language textafter the annotation the labels are projected back into the target language and the resulting annotated corpus is used to train a subjectivity classifierfigure 3 illustrates this experimentas before we use the highcoverage classifier available in opinionfinder and the semcor corpuswe use a manual translation of this corpus available in the target languagein this experiment the subjectivity annotations are carried out on automatically generated source text and thus expected to be less accuratehowever since the training data was originally written in the target language it is free of translation errors and thus training carried out on this data should be more robustfor comparison purposes we also propose an experiment which plays the role of an upper bound on the methods described so farthis experiment involves the automatic translation of the test data from the target language into the source languagethe source language text is then annotated for subjectivity using opinionfinder followed by a projection of the resulting labels back into the target languageunlike the previous three experiments in this experiment we only generate subjectivityannotated resources and we do not build and evaluate a standalone subjectivity analysis tool for the target languagefurther training of a machine learning algorithm as in experiments two and three is required in order to build a subjectivity analysis toolthus this fourth experiment is an evaluation of the resources generated in the target language which 
represents an upper bound on the performance of any machine learning algorithm that would be trained on these resourcesfigure 4 illustrates this experimentour initial evaluations are carried out on romanianthe performance of each of the three methods is evaluated using a dataset manually annotated for subjectivityto evaluate our methods we generate a romanian training corpus annotated for subjectivity on which we train a subjectivity classifier which is then used to label the test datawe evaluate the results against a goldstandard corpus consisting of 504 romanian sentences manually annotated for subjectivitythese sentences represent the manual translation into romanian of a small subset of the semcor corpus which was removed from the training corpora used in experiments two and threethis is the same evaluation dataset as used in two romanian native speakers annotated the sentences individually and the differences were adjudicated through discussionsthe agreement of the two annotators is 083 when the uncertain annotations are removed the agreement rises to 089 the two annotators reached consensus on all sentences for which they disagreed resulting in a gold standard dataset with 272 subjective sentences and 232 objective sentencesmore details about this dataset are available in in order to learn from our annotated data we experiment with two different classifiers naive bayes and support vector machines selected for their performance and diversity of learning methodologyfor naive bayes we use the multinomial model with a threshold of 03for svm we use the libsvm implementation with a linear kernelthe automatic translation of the mpqa and of the semcor corpus was performed using language weaver1 a commercial statistical machine translation softwarethe resulting text was postprocessed by removing diacritics stopwords and numbersfor training we experimented with a series of weighting schemes yet we only report the results obtained for binary weighting as it had the most consistent behaviorthe results obtained by running the three experiments on romanian are shown in table 2the baseline on this data set is 5416 represented by the percentage of sentences in the corpus that are subjective and the upper bound is 7183 which is the accuracy obtained under the scenario where the test data is translated into the source language and then annotated using the highcoverage opinionfinder toolperhaps not surprisingly the svm classifier outperforms naive bayes by 2 to 6 implying that svm may be better fitted to lessen the amount of noise embedded in the dataset and provide more accurate classificationsthe first experiment involving the automatic translation of the mpqa corpus enhanced with manual annotations for subjectivity at sentence level does not seem to perform well when compared to the experiments in which automatic subjectivity classification is usedthis could imply that a classifier cannot be so easily trained on the cues that humans use to express subjectivity especially when they are not overtly expressed in the sentence and thus can be lost in the translationinstead the automatic annotations produced with a rulebased tool relying on overt mentions of words in a subjectivity lexicon seems to be more robust to translation further resulting in better classification resultsto exemplify consider the following subjective sentence from the mpqa corpus which does not include overt clues of subjectivity but was annotated as subjective by the human judges because of the structure of the sentence it is the 
palestinians that are calling for the implementation of the agreements understandings and recommendations pertaining to the palestinianisraeli conflictwe compare our results with those obtained by a previously proposed method that was based on the manual translation of the semcor subjectivityannotated corpusin we used the manual translation of the semcor corpus into romanian to form an englishromanian parallel data setthe english side was annotated using the opinion finder tool and the subjectivity labels were projected on the romanian texta naive bayes classifier was then trained on the subjectivity annotated romanian corpus and tested on the same gold standard as used in our experimentstable 3 shows the results obtained in those experiments by using the highcoverage opinionfinder classifieramong our experiments experiments two and three are closest to those proposed in by using machine translation from english into romanian or romanian into english and annotating this dataset with the highcoverage opinionfinder classifier we obtain an fmeasure of 6369 and 6587 respectively using naive bayes this implies that at most 4 in fmeasure can be gained by using a parallel corpus as compared to an automatically translated corpus further suggesting that machine translation is a viable alternative to devising subjectivity classification in a target language leveraged on the tools existent in a source languageas english is a language with fewer inflections when compared to romanian which accommodates for gender and case as a suffix to the base form of a word the automatic translation into english is closer to a human translation therefore labeling this data using the opinionfinder tool and projecting the labels onto a fully inflected humangenerated romanian text provides more accurate classification results as compared to a setup where the training is carried out on machinetranslated romanian text we also wanted to explore the impact that the corpus size may have on the accuracy of the classifierswe reran experiments two and three with 20 corpus size increments at a time it is interesting to note that a corpus of approximately 6000 sentences is able to achieve a high enough fmeasure to be considered viable for training a subjectivity classifieralso at a corpus size over 10000 sentences the naive bayes classifier performs worse than svm which displays a directly proportional trend between the number of sentences in the data set and the observed fmeasurethis trend could be explained by the fact that the svm classifier is more robust with regard to noisy data when compared to naive bayesto test the validity of the results on other languages we ran a portability experiment on spanishto build a test dataset a native speaker of spanish translated the gold standard of 504 sentences into spanishwe maintain the same subjectivity annotations as for the romanian datasetto create the training data required by the first two experiments we translate both the mpqa corpus and the semcor corpus into spanish using the google translation service2 a publicly available machine translation engine also based on statistical machine translationwe were therefore able to implement all the experiments but the third which would have required a manually translated version of the semcor corpusalthough we could have used a spanish text to carry out a similar experiment due to the fact that the dataset would have been different the results would not have been directly comparablethe results of the two experiments exploring the 
portability to spanish are shown in table 4interestingly all the figures are higher than those obtained for romanianwe assume this occurs because spanish is one of the six official united nations languages and the google translation engine is using the united nations parallel corpus to train their translation engine therefore implying that a better quality translation is achieved as compared to the one available for romanianwe can therefore conclude that the more accurate the translation engine the more accurately the subjective content is translated and therefore the better the resultsas it was the case for romanian the svm classifier produces the best results with absolute improvements over the naive bayes classifier ranging from 02 to 35since the spanish automatic translation seems to be closer to a humanquality translation we are not surprised that this time the first experiment is able to generate a more accurate training corpus as compared to the second experimentthe mpqa corpus since it is manually annotated and of better quality has a higher chance of generating a more reliable data set in the target languageas in the experiments on romanian when performing automatic translation of the test data we obtain the best results with an fmeasure of 7341 which is also the upper bound on our proposed experimentsbased on our experiments we can conclude that machine translation offers a viable approach to generating resources for subjectivity annotation in a given target languagethe results suggest that either a manually annotated dataset or an automatically annotated one can provide sufficient leverage towards building a tool for subjectivity analysissince the use of parallel corpora requires a large amount of manual labor one of the reasons behind our experiments was to asses the ability of machine translation to transfer subjective content into a target language with minimal effortas demonstrated by our experiments machine translation offers a viable alternative in the construction of resources and tools for subjectivity classification in a new target language with only a small decrease in performance as compared to the case when a parallel corpus is available and usedto gain further insights two additional experiments were performedfirst we tried to isolate the role played by the quality of the subjectivity annotations in the sourcelanguage for the crosslingual projections of subjectivityto this end we used the highprecision opinionfinder classifier to annotate the english datasetsas shown in table 1 this classifier has higher precision but lower recall as compared to the highcoverage classifier we used in our previous experimentswe reran the second experiment this time trained on the 3700 sentences that were classified by the opinionfinder highprecision classifier as either subjective or objectivefor romanian we obtained an fmeasure of 6905 while for spanish we obtained an fmeasure of 6647second we tried to isolate the role played by languagespecific clues of subjectivityto this end we decided to set up an experiment which by comparison can suggest the degree to which the languages are able to accommodate specific markers for subjectivityfirst we trained an english classifier using the semcor training data automatically annotated for subjectivity with the opinionfinder highcoverage toolthe classifier was then applied to the english version of the manually labeled test data set next we ran a similar experiment on romanian using a classifier trained on the romanian version of the same 
semcor training data set annotated with subjectivity labels projected from englishthe classifier was tested on the same gold standard data setthus the two classifiers used the same training data the same test data and the same subjectivity annotations the only difference being the language used the results for these experiments are compiled in table 5interestingly the experiment conducted on romanian shows an improvement of 35 to 95 over the results obtained on english which indicates that subjective content may be easier to learn in romanian versus englishthe fact that romanian verbs are inflected for mood enables an automatic classifier to identify additional subjective markers in textsome moods such as conditional and presumptive entail human judgment and therefore allow for clear subjectivity annotationmoreover romanian is a highly inflected language accommodating for forms of various words based on number gender case and offering an explicit lexicalization of formality and politenessall these features may have a cumulative effect in allowing for better classificationat the same time english entails minimal inflection when compared to other indoeuropean languages as it lacks both gender and adjective agreement verb moods are composed with the aid of modals while tenses and expressions are built with the aid of auxiliary verbsfor this reason a machine learning algorithm may not be able to identify the same amount of information on subjective content in an english versus a romanian textit is also interesting to note that the labeling of the training set was performed using a subjectivity classifier developed for english which takes into account a large humanannotated subjectivity lexicon also developed for englishone would have presumed that any classifier trained on this annotated text would therefore provide the best results in englishyet as explained earlier this was not the casein this paper we explored the use of machine translation for creating resources and tools for subjectivity analysis in other languages by leveraging on the resources available in englishwe introduced and evaluated three different approaches to generate subjectivity annotated corpora in a given target language and exemplified the technique on romanian and spanishthe experiments show promising results as they are comparable to those obtained using manually translated corporawhile the quality of the translation is a factor machine translation offers an efficient and effective alternative in capturing the subjective semantics of a text coming within 4 fmeasure as compared to the results obtained using human translated corporain the future we plan to explore additional languagespecific clues and integrate them into the subjectivity classifiersas shown by some of our experiments romanian seems to entail more subjectivity markers compared to english and this factor motivates us to further pursue the use of languagespecific clues of subjectivityour experiments have generated corpora of about 20000 sentences annotated for subjectivity in romanian and spanish which are available for download at httplitcsciunteduindexphpdownloads along with the manually annotated data setsthe authors are grateful to daniel marcu and languageweaver for kindly providing access to their romanianenglish and englishromanian machine translation enginesthis work was partially supported by a national science foundation grant iis0840608
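The training setup used throughout the experiments (binary bag-of-words features over the translated, label-projected sentences, fed to a linear-kernel SVM or a multinomial Naive Bayes classifier) can be sketched with off-the-shelf tools. scikit-learn is used below purely as a stand-in for the libsvm and Naive Bayes implementations mentioned in the paper.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    def train_subjectivity_classifier(sentences, labels):
        """sentences: target-language strings (already translated and cleaned of
        diacritics, stopwords and numbers); labels: 'subjective' / 'objective'
        tags projected from the English side."""
        model = make_pipeline(
            CountVectorizer(binary=True, lowercase=True),  # binary term weighting
            LinearSVC(),                                   # linear-kernel SVM
        )
        model.fit(sentences, labels)
        return model

    # usage: classify new target-language sentences against the gold standard
    # clf = train_subjectivity_classifier(train_sentences, train_labels)
    # predictions = clf.predict(test_sentences)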
D08-1014
multilingual subjectivity analysis using machine translation. although research in other languages is increasing much of the work in subjectivity analysis has been applied to english data mainly due to the large body of electronic resources and tools that are available for this language. in this paper we propose and evaluate methods that can be employed to transfer a repository of subjectivity resources across languages. specifically we attempt to leverage the resources available for english and by employing machine translation generate resources for subjectivity analysis in other languages. through comparative evaluations on two different languages we show that automatic translation is a viable alternative for the construction of resources and tools for subjectivity analysis in a new target language. we demonstrate that machine translation can perform quite well when extending subjectivity analysis to a multilingual environment. we hypothesize that subjectivity is expressed differently in various languages due to lexicalization formal versus informal markers etc
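As a companion to the summary above, a minimal sketch of the projection step it relies on: because a sentence-by-sentence machine translation keeps source and target aligned one-to-one, English subjectivity labels can simply be carried over to the translated sentences. The optional filtering mirrors the high-precision OpinionFinder experiment mentioned earlier; the function and label names are assumptions.

    def build_target_language_corpus(english_labels, translated_sents,
                                     keep=("subjective", "objective")):
        # the translation is sentence-aligned with the source, so labels transfer directly
        assert len(english_labels) == len(translated_sents)
        corpus = []
        for label, sent in zip(english_labels, translated_sents):
            if label in keep:        # optionally drop "unknown" / low-confidence sentences
                corpus.append((sent, label))
        return corpus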
dependency parsing by belief propagation we formulate dependency parsing as a graphical model with the novel ingredient of global constraints we show how to apply loopy belief propagation a simple and tool for and inference as a parsing algorithm bp is both asymptotically and empirically efficient even with secondorder features or latent variables which would make exact parsing considerslower or nphard bp needs only with a small constant factor furthermore such features significantly improve parse accuracy over exact firstorder methods incorporating additional features would increase the runtime additively rather than multiplicatively computational linguists worry constantly about runtimesometimes we oversimplify our models trading linguistic nuance for fast dynamic programmingalternatively we write down a better but intractable model and then use approximationsthe cl community has often approximated using heavy pruning or reranking but is beginning to adopt other methods from the machine learning community such as gibbs sampling rejection sampling and certain variational approximationswe propose borrowing a different approximation technique from machine learning namely loopy belief propagation in this paper we show that bp can be used to train and decode complex parsing modelsour approach calls a simpler parser as a subroutine so it still exploits the useful wellstudied combinatorial structure of the parsing problem1we wish to make a dependency parses score depend on higherorder features which consider arbitrary interactions among two or more edges in the parse such features can help accuracyas we showalas they raise the polynomial runtime of projective parsing and render nonprojective parsing nphardhence we seek approximationswe will show how bps messagepassing discipline offers a principled way for higherorder features to incrementally adjust the numerical edge weights that are fed to a fast firstorder parserthus the firstorder parser is influenced by higherorder interactions among edgesbut not asymptotically slowed down by considering the interactions itselfbps behavior in our setup can be understood intuitively as followsinasmuch as the firstorder parser finds that edge e is probable the higherorder features will kick in and discourage other edges e to the extent that they prefer not to coexist with e2 thus the next call to the firstorder parser assigns lower probabilities to parses that contain these e this behavior is somewhat similar to parser stacking in which a firstorder parser derives some of its input features from the full 1best output of another parserin our method a firstorder parser derives such input features from its own previous full output this circular process is iterated to convergenceour method also permits the parse to interact cheaply with other variablesthus firstorder parsing partofspeech tagging and other tasks on a common input could mutually influence one anotherour method and its numerical details emerge naturally as an instance of the wellstudied loopy bp algorithm suggesting several potential future improvements to accuracy and efficiency loopy bp has occasionally been used before in nlp with good results to handle nonlocal features or joint decoding however our application to parsing requires an innovation to bp that we explain in 5a global constraint to enforce that the parse is a treethe tractability of some such global constraints points the way toward applying bp to other computationally intensive nlp problems such as syntaxbased alignment of parallel 
textto apply bp we must formulate dependency parsing as a search for an optimal assignment to the variables of a graphical modelwe encode a parse using the following variables sentencethe nword input sentence w is fully observed let w w0w1 wn where w0 is always the special symbol roottagsif desired the variables t t1t2 tn may specify tags on the n words drawn from some tagset t these variables are needed iff the tags are to be inferred jointly with the parselinksthe o boolean variables lij 0 0 is a temperatureas t 0 the distribution retains the same mode but becomes more sharply peaked at the mode and sumproduct bp approaches maxproduct bpdeterministic annealing runs sumproduct bp while gradually reducing t toward 0 as it iteratesby starting at a high t and reducing t slowly it often manages in practice to find a good local optimumwe may then extract an assignment just as we do for maxproduct under equation regularizing only by early stoppingif all variables are observed in training this objective function is convex the difficult step in computing the gradient of our objective is finding vθ log z where z in equation is the normalizing constant that sums over all assignments aas usual for loglinear models since vθfm only depends on the assignment as values for variables that are connected to fm in the factor graph its expectation under p depends only on the marginalization of p to those variables jointlyfortunately bp provides an estimate of that marginal distribution namely its belief about the factor fm given w and 0 25 note that the hard constraints do not depend on 0 at all so their summands in equation will be 0we employ stochastic gradient descent since this does not require us to compute the objective function itself but only to estimate its gradient as explained abovealternatively given any of the map decoding procedures from 6 we could use an errordriven learning method such as the perceptron or mira26we asked for projective parsing where higherorder factors have traditionally been incorporated into slow but exact dynamic programming what are the comparative speed and quality of the bp approximation how helpful are such higherorder factorsparticularly for nonprojective parsing where bp is needed to make them tractable do our global constraints contribute to the goodness of bps approximationwe built a firstorder projective parserone that uses only factors ptree and linkand then compared the cost of incorporating secondorder factors grand and childseq by bp versus dp28 under dp the firstorder runtime of o is increased to o with grand and to o when we add childseq as wellbp keeps runtime down to oalthough with a higher constant factor since it takes several rounds to converge and since it computes more than just the best parse29 figures 23 compare the empirical runtimes for various input sentence lengthswith only the grand factor exact dp can still find the viterbi parse faster than ten iterations of the asymptotically better bp at least for sentences with n 75however once we add the childseq factor bp is always faster dramatically so for longer sentences more complex models would widen bps advantagefig4 shows the tradeoff between runtime and search error of bp in the former case to determine bps search error at finding the mbr parse we measured its dependency accuracy not against the gold standard but against the optimal mbr parse under the model which dp is able to findafter 10 iterations the overall macroaveraged search error compared to o dp mbr is 04 compared to o 24more bp 
iterations may help accuracyin future work we plan to compare bps speedaccuracy curve on more complex projective models with the speedaccuracy curve of pruned or reranked dpthe bp approximation can be used to improve the accuracy of nonprojective parsing by adding higherorder featuresthese would be nphard to incorporate exactly dp cannot be usedwe used bp with a nonprojective tree factor to train conditional loglinear parsing models of two highly nonprojective languages danish and dutch as well as slightly nonprojective english in all three languages the firstorder nonprojective parser greatly overpredicts the number of crossing linkswe thus added nocross factors as well as grand and childseq as beforeall of these significantly improve the firstorder baseline though not necessarily cumulatively finally table 2 compares loopy bp to a previously proposed hillclimbing method for approximate inference in nonprojective parsing mcdonald and pereira hillclimbing decodes our richest nonprojective model by finding the best projective parse under that modelusing slow higherorder dpand then greedily modifies words parents until the parse score stops improving with tree decoding it with weaker constraints is asymptotically faster but usually harmfuldecoding the tree model with the even stronger ptree constraint can actually be helpful for a more projective languageall results use 5 iterations of bpbp for nonprojective languages is much faster and more accurate than the hillclimbing methodalso hillclimbing only produces an 1best parse but bp also obtains marginals of the distribution over all parsesgiven the bp architecture do we even need the hard tree constraintor would it suffice for more local hard constraints to negotiate locally via bpwe investigated this for nonprojective firstorder parsingtable 3 shows that global constraints are indeed important and that it is essential to use tree during trainingat test time the weaker but still global exactly1 may suffice for total time otable 3 includes not2 which takes o time merely to demonstrate how the bp approximation becomes more accurate for training and decoding when we join the simple not2 constraints into more global atmost1 constraintsthis does not change the distribution but makes bp enforce stronger local consistency requirements at the factors relying less on independence assumptionsin general one can get better bp approximations by replacing a group of factors ft with their product30 the above experiments concern goldstandard 30in the limit one could replace the product with a single allpurpose factor then bp would be exactbut slow accuracy under a given firstorder nonprojective modelflipping all three of these parameters for danish we confirmed the pattern by instead measuring search error under a higherorder projective model when ptree was weakened during decodingcompared to the mbr parse under that model the search errors from decoding with weaker hard constraints were 22 for not2 21 for exactly1 17 for exactly1 no2cycle and 00 for ptreebelief propagation improves nonprojective dependency parsing with features that would make exact inference intractablefor projective parsing it is significantly faster than exact dynamic programming at the cost of small amounts of search error we are interested in extending these ideas to phrasestructure and lattice parsing and in trying other higherorder features such as those used in parse reranking and historybased parsing we could also introduce new variables eg nonterminal refinements or secondary 
links mid that augment the parse with representations of control binding etcother parsinglike problems that could be attacked with bp appear in syntaxbased machine translationdecoding is very expensive with a synchronous grammar composed with an ngram language model but our footnote 10 suggests that bp might incorporate a language model rapidlystring alignment with synchronous grammars is quite expensive even for simple synchronous formalisms like itg but duchi et al show how to incorporate bipartite matching into maxproduct bpfinally we can take advantage of improvements to bp proposed in the context of other applicationsfor example instead of updating all messages in parallel at every iteration it is empirically faster to serialize updates using a priority queue 31 31these methods need alteration to handle our global propagators which do update all their outgoing messages at once
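To make the message-passing discipline discussed in this paper concrete, below is a small generic sum-product loopy BP sketch over binary variables. It only illustrates the algorithm's skeleton and is not the parser described above; the parsing-specific factors (TREE, GRAND, CHILDSEQ and so on) are not implemented.

    import itertools

    def loopy_bp(variables, factors, iterations=10):
        # variables: list of variable names; every variable is binary (values 0/1),
        #            like the link variables in the parsing model above
        # factors:   list of (scope, table) pairs, where scope is a tuple of variable
        #            names and table maps each 0/1 assignment tuple over scope to a
        #            nonnegative potential
        msgs_vf = {(v, i): [1.0, 1.0] for i, (scope, _) in enumerate(factors) for v in scope}
        msgs_fv = {(i, v): [1.0, 1.0] for i, (scope, _) in enumerate(factors) for v in scope}
        for _ in range(iterations):
            # factor -> variable messages (sum-product update)
            for i, (scope, table) in enumerate(factors):
                for v in scope:
                    out = [0.0, 0.0]
                    for assign in itertools.product((0, 1), repeat=len(scope)):
                        p = table[assign]
                        for u, val in zip(scope, assign):
                            if u != v:
                                p *= msgs_vf[(u, i)][val]
                        out[assign[scope.index(v)]] += p
                    z = sum(out) or 1.0
                    msgs_fv[(i, v)] = [x / z for x in out]
            # variable -> factor messages: product of the other factors' messages
            for i, (scope, _) in enumerate(factors):
                for v in scope:
                    out = [1.0, 1.0]
                    for j, (scope_j, _) in enumerate(factors):
                        if j != i and v in scope_j:
                            out = [a * b for a, b in zip(out, msgs_fv[(j, v)])]
                    z = sum(out) or 1.0
                    msgs_vf[(v, i)] = [x / z for x in out]
        # beliefs approximate the marginals p(v = 0), p(v = 1)
        beliefs = {}
        for v in variables:
            b = [1.0, 1.0]
            for i, (scope, _) in enumerate(factors):
                if v in scope:
                    b = [x * y for x, y in zip(b, msgs_fv[(i, v)])]
            z = sum(b) or 1.0
            beliefs[v] = [x / z for x in b]
        return beliefs

Note that this naive version enumerates each factor's table exhaustively, which is exactly what the paper avoids: its global constraint factors compute their outgoing messages with specialized combinatorial algorithms rather than by enumeration.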
D08-1016
dependency parsing by belief propagation. we formulate dependency parsing as a graphical model with the novel ingredient of global constraints. we show how to apply loopy belief propagation a simple and effective tool for approximate learning and inference. as a parsing algorithm bp is both asymptotically and empirically efficient. even with secondorder features or latent variables which would make exact parsing considerably slower or nphard bp needs only o time with a small constant factor. furthermore such features significantly improve parse accuracy over exact firstorder methods. incorporating additional features would increase the runtime additively rather than multiplicatively. we can encapsulate common dynamic programming algorithms within specialpurpose factors to efficiently globally constrain variable configurations. deptree is a global combinatorial factor which attaches to all link variables and similarly contributes a factor of 1 iff the configuration of link variables forms a valid projective dependency graph
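The global tree factor in this summary has to tell each link variable (approximately) how probable its edge is under a legal tree. For the nonprojective case discussed in the paper, one standard way to obtain such edge marginals is the matrix-tree theorem; the sketch below (assuming numpy, and not taken from the paper) computes them for a weighted arborescence distribution rooted at token 0.

    import numpy as np

    def tree_edge_marginals(w):
        # w[h, m]: nonnegative weight of a dependency from head h to modifier m,
        # with index 0 playing the role of the artificial root; shape (n+1, n+1)
        w = np.asarray(w, dtype=float)
        n = w.shape[0] - 1
        L = np.zeros((n, n))
        for m in range(1, n + 1):
            L[m - 1, m - 1] = sum(w[h, m] for h in range(n + 1) if h != m)
            for h in range(1, n + 1):
                if h != m:
                    L[h - 1, m - 1] = -w[h, m]
        Linv = np.linalg.inv(L)
        mu = np.zeros_like(w)
        for m in range(1, n + 1):
            mu[0, m] = w[0, m] * Linv[m - 1, m - 1]
            for h in range(1, n + 1):
                if h != m:
                    mu[h, m] = w[h, m] * (Linv[m - 1, m - 1] - Linv[m - 1, h - 1])
        return mu   # mu[h, m] is the probability that edge h -> m is in the tree

Roughly speaking, within one BP iteration w would be assembled from the messages arriving on the link variables, and the tree factor's outgoing message to a given link can then be recovered by dividing that link's marginal above by its incoming message.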
revisiting readability a unified framework for predicting text quality we combine lexical syntactic and discourse features to produce a highly predictive model of human readers judgments of text readability this is the first study to take into account such a variety of linguistic factors and the first to empirically demonstrate that discourse relations are strongly associated with the perceived quality of text we show that various surface metrics generally expected to be related to readability are not very good predictors of readability judgments in our wall street journal corpus we also establish that readability predictors behave differently depending on the task predicting text readability or ranking the readability our experiments indicate that discourse relations are the one class of features that exhibits robustness across these two tasks the quest for a precise definition of text quality pinpointing the factors that make text flow and easy to readhas a long history and traditionway back in 1944 robert gunning associates was set up offering newspapers magazines and business firms consultations on clear writing in education teaching good writing technique and grading student writing has always been of key importance linguists have also studied various aspects of text flow with cohesionbuilding devices in english rhetorical structure theory and centering theory among the most influential contributionsstill we do not have unified computational models that capture the interplay between various aspects of readabilitymost studies focus on a single factor contributing to readability for a given intended audiencethe use of rare words or technical terminology for example can make text difficult to read for certain audience types syntactic complexity is associated with delayed processing time in understanding and is another factor that can decrease readabilitytext organization topic development and the form of referring expressions also determine readabilitybut we know little about the relative importance of each factor and how they combine in determining perceived text qualityin our work we use texts from the wall street journal intended for an educated adult audience to analyze readability factors including vocabulary syntax cohesion entity coherence and discoursewe study the association between these features and reader assigned readability ratings showing that discourse and vocabulary are the factors most strongly linked to text qualityin the easier task of text quality ranking entity coherence and syntax features also become significant and the combination of features allows for ranking prediction accuracy of 88our study is novel in the use of goldstandard discourse features for predicting readability and the simultaneous analysis of various readability factorsthe definition of what one might consider to be a wellwritten and readable text heavily depends on the intended audience obviously even a superbly written scientific paper will not be perceived as very readable by a lay person and a great novel might not be appreciated by a third graderas a result the vast majority of prior work on readability deals with labeling texts with the appropriate school grade levela key observation in even the oldest work in this area is that the vocabulary used in a text largely determines its readabilitymore common words are easier so some metrics measured text readability by the percentage of words that were not among the n most frequent in the languageit was also observed that frequently occurring 
words are often short so word length was used to approximate readability more robustly than using a predefined word frequency liststandard indices were developed based on the link between word frequencylength and readability such as fleschkincaid automated readability index gunning fog smog and colemanliau they use only a few simple factors that are designed to be easy to calculate and are rough approximations to the linguistic factors that determine readabilityfor example fleschkincaid uses the average number of syllables per word to approximate vocabulary difficulty and the average number of words per sentence to approximate syntactic difficultyin recent work the idea of linking word frequency and text readability has been explored for making medical information more accessible to the general public classified words in medical texts as familiar or unfamiliar to a general audience based on their frequencies in corporawhen a description of the unfamiliar terms was provided the perceived readability of the texts almost doubleda more general and principled approach to using vocabulary information for readability decisions has been the use of language modelsfor any given text it is easy to compute its likelihood under a given language model ie one for text meant for children or for text meant for adults or for a given grade level and used language models to predict the suitability of texts for a given school grade levelbut even for this type of task other factors besides vocabulary use are at play in determining readabilitysyntactic complexity is an obvious factor indeed and also used syntactic features such as parse tree height or the number of passive sentences to predict reading grade levelsfor the task of deciding whether a text is written for an adult or child reader found that adding entity coherence to s list of features improves classification accuracy by 10in linguistics and natural language processing the text properties rather than those of the reader are emphasizedtext coherence is defined as the ease with which a person understands a textcoherent text is characterized by various types of cohesive links that facilitate text comprehension in recent work considerable attention has been devoted to entity coherence in text quality especially in relation to information orderingin many applications such as text generation and summarization systems need to decide the order in which selected sentences or generated clauses should be presented to the usermost models attempting to capture local coherence between sentences were based on or inspired by centering theory which postulated strong links between the center of attention in comprehension of adjacent sentences and syntactic position and form of referencein a detailed study of information ordering in three very different corpora assessed the performance of various formulations of centeringtheir results were somewhat unexpected showing that while centering transition preferences were useful the most successful strategy for information ordering was based on avoiding rough shifts that is sequences of sentences that share no entities in commonthis supports previous findings that such types of transitions are associated with poorly written text and can be used to improve the accuracy of automatic grading of essays based on various nondiscourse features in a more powerful generalization of centering barzilay and lapata developed a novel approach which does not postulate a preference for any type of transition but rather computes a set of 
features that capture transitions of all kinds in the text and their relative proportiontheir entity coherence features prove to be very suitable for various tasks notably for information ordering and reading difficulty levelform of reference is also important in wellwritten text and appropriate choices lead to improved readabilityuse of pronouns for reference to highly salient entities is perceived as more desirable than the use of definite noun phrases the syntactic forms of first mentionwhen an entity is first introduced in a textdiffer from those of subsequent mentions and can be exploited for improving and predicting text coherence the objective of our study is to analyze various readability factors including discourse relations because few empirical studies exist that directly link discourse structure with text qualityin the past subsections of the penn treebank have been annotated for discourse relations for our study we chose to work with the newly released penn discourse treebank which is the largest annotated resource which focuses exclusively on implicit local relations between adjacent sentences and explicit discourse connectivesthe penn discourse treebank is a new resource with annotations of discourse connectives and their senses in the wall street journal portion of the penn treebank all explicit relations are annotatedin addition each adjacent pair of sentences within a paragraph is annotatedif there is a discourse relation then it is marked implicit and annotated with one or more connectivesif there is a relation between the sentences but adding a connective would be inappropriate it is marked altlexif the consecutive sentences are only related by entitybased coherence they are annotated with entrelotherwise they are annotated with norelbesides labeling the connective the pdtb also annotates the sense of each relationthe relations are organized into a hierarchythe top level relations are expansion comparison contingency and temporalbriefly an expansion relation means that the second clause continues the theme of the first clause a comparison relation indicates that something in the two clauses is being compared contingency means that there is a causal relation between the clauses and temporal means they occur either at the same time or sequentiallywe randomly selected thirty articles from the wall street journal corpus that was used in both the penn treebank and the penn discourse treebank1 each article was read by at least three college students each of whom was given unlimited time to read the texts and perform the ratings2 subjects were asked the following questions for each question they provided a rating between 1 and 5 with 5 being the best and 1 being the worst1one of the selected articles was missing from the penn treebankthus results that do not require syntactic information are over all thirty articles while tables 3 5 and 7 report results for the twentynine articles with treebank parse trees2 found that human ratings are significantly correlated with selfpaced reading times a more direct measure of processing effort which we plan to explore in future workafter collecting the data it turned out that most of the time subjects gave the same rating to all questionsfor competent language users we view text readability and text coherence as equivalent properties measuring the extent to which a text is well writtenthus for all subsequent analysis we will use only the first question the score of an article was then the average of all the ratings it receivedthe article 
scores ranged from 15 to 433 with a mean of 32008 and a standard deviation of 7242the median score was 3286we define our task as predicting this average rating for each articlenote that this task may be more difficult than predicting reading level as each of these articles appeared in the wall street journal and thus is aimed at the same target audiencewe suspected that in classifying adult text more subtle features might be necessarywe first computed the pearson correlation coefficients between the simple metrics that most traditional readability formulas use and the average human ratingsthese results are shown in table 1we tested the average number of characters per word average number of words per sentence maximum number of words per sentence and article length 3 article length was the only significant baseline factor with correlation of 037longer articles are perceived as less wellwritten and harder to read than shorter onesnone of the other baseline metrics were close to being significant predictors of readabilitywe use a unigram language model where the probability of an article is p is the probability of wordtype w according to a background corpus m and c is the number of times w appears in the articlethe log likelihood of an article is then note that this model will be biased in favor of shorter articlessince each word has probability less than 1 the log probability of each word is less than 0 and hence including additional words decreases the log likelihoodwe compensate for this by performing linear regressions with the unigram log likelihood and with the number of words in the article as an additional variablethe question then arises as to what to use as a background corpuswe chose to experiment with two corpora the entire wall street journal corpus and a collection of general ap news which is generally more diverse than the financial news found in the wsjwe predicted that the news vocabulary would be more representative of the types of words our readers would be familiar within both cases we used laplace smoothing over the word frequencies and a stoplistthe vocabulary features we used are article likelihood estimated from a language model from wsj and article likelihood according to a unigram language model from news we also combine the two likelihood features with article length in order to get a better estimate of the language models influence on readability independent of the length of the articleboth vocabularybased features are significantly correlated with the readability judgments with pvalues smaller than 005 the correlations are positive the more probable an article was based on its vocabulary the higher it was generally ratedas expected the news model that included more general news stories had a higher correlation with peoples judgmentswhen combined with the length of the article the unigram language model from the news corpus becomes very predictive of readability with the correlation between the two as high as 063syntactic constructions affect processing difficulty and so might also affect readability judgmentswe examined the four syntactic features used in average parse tree height average number of noun phrases per sentence average number of verb phrases per sentence and average number of subordinate clauses per sentence the sentence we are talking about years ago sbar before anyone heard of asbestos having any questionable properties contains an example of an sbar clausehaving multiple noun phrases in each sentence requires the reader to remember more items but 
may make the article more interesting found that articles written for adults tended to contain many more entities than articles written for childrenwhile including more verb phrases in each sentence increases the sentence complexity adults might prefer to have related clauses explicitly grouped togetherthe correlations between readability and syntactic features is shown in table 3the strongest correlation is that between readability and number of verb phrases this finding is in line with prescriptive clear writing advice but is to our knowledge novel in the computational linguistics literatureas point out the sentences in are easier to comprehend than the sentences in even though they are longermultiple verb phrases in one sentence may be indicative of explicit discourse relations which we will discuss further in section 46surprisingly the use of clauses introduced by a subordinating conjunction are actually positively correlated with readabilityso while for children or less educated adults these constructions might pose difficulties they were favored by our assessorson the other hand the average parse tree height negatively correlated with readability as expected but surprisingly the correlation is very weak in their classic study of cohesion in english discuss the various aspects of well written discourse including the use of cohesive devices such as pronouns definite descriptions and topic continuity from sentence to sentence4 to measure the association between these features and readability rankings we compute the number ofpronouns per sentence and the number of definite articles per sentence in order to qualify topic continuity from sentence to sentence in the articles we compute average cosine similarity word overlap and word overlap over just nouns and pronouns between pairs of adjacent sentences5each sentence is turned into a vector of wordtypes where each types value is its tfidf the cosine similarity metric is then none of these features correlate significantly with readability as can be seen from the results in table 4the overlap features are particularly bad predictors of readability with average wordcosine overlap in fact being negatively correlated with readabilitythe form of referenceuse of pronouns and definite descriptionsexhibit a higher correlation with readability but these values are not significant for the size of our corpuswe use the brown coherence toolkit6 to compute entity grids for each articlein each sentence an entity is identified as the subject object other or not present the probability of each transition type is computedfor example an so transition occurs when an entity is the subject in one sentence then an object in the next xn transition occurs when an entity appears in nonsubject or object position in one sentence and not present in the next etc7 the entity coherence features are the probability of each of these pairs of transitions for a total of 16 features none of the entity grid features are significantly correlated with the readability ratingsone very interesting result is that the proportion of ss transitions in which the same entity was mentioned in subject position in two adjacent sentences is negatively correlated with readabilityin centering theory this is considered the most coherent type of transition keeping the same center of attentionmoreover the feature most strongly correlated with readability is the sn transition in which the subject of one sentence does not appear at all in the following sentenceof course it is difficult to interpret 
the entity grid features one by one since they are interdependent and probably it is the interaction of features that capture overall readability patternsdiscourse relations are believed to be a major factor in text coherencewe computed another language model which is over discourse relations instead of wordswe treat each text as a bag of relations rather than a bag of wordseach relation is annotated for both its sense and how it is realized for example one text might contain implicit comparison explicit temporal norelwe computed the probability of each of our articles according to a multinomial model where the probability of a text with n relation tokens and k relation types is p is the probability of an article having length n xi is the number of times relation i appeared and pi is the probability of relation i based on the penn discourse treebankp is the maximum likelihood estimation of an article having n discourse relations based on the entire penn discourse treebank the log likelihood of an article based on its discourse relations feature is defined as the multinomial distribution is particularly suitable because it directly incorporates length which significantly affects readability as we discussed earlierit also captures patterns of relative frequency of relations unlike the simpler unigram modelnote also that this equation has an advantage over the unigram model that was not present for vocabularywhile every article contains at least one word some articles do not contain any discourse relationssince the pdtb annotated all explicit relations and relations between adjacent sentences in a paragraph an article with no discourse connectives and only single sentence paragraphs would not contain any annotated discourse relationsunder the unigram model these articles probabilities cannot be computedunder the multinomial model the probability of an article with zero relations is estimated as pr which can be calculated from the corpusas in the case of vocabulary features the presence of more relations will lead to overall lower probabilities so we also consider the number of discourse relations and the log likelihood combined with the number of relations as featuresin order to isolate the effect of the type of discourse relation we also compute multinomial model features for the explicit discourse relations and over just the implicit discourse relations the likelihood of discourse relations in the text under a multinomial model is very highly and significantly correlated with readability ratings especially after text length is taken into accountcorrelations are 048 and 054 respectivelythe probability of the explicit relations alone is not a sufficiently strong indicator of readabilitythis fact is disappointing as the explicit relations can be identified much more easily in unannotated text note that the sequence of just the implicit relations is also not sufficientthis observation implies that the proportion of explicit and implicit relations may be meaningful but we leave the exploration of this issue for later workso far we introduced six classes of factors that have been discussed in the literature as readability correlatesthrough statistical tests of associations we identified the individual factors significantly correlated with readability ratingsthese are in decreasing order of association strength vocabulary and discourse relations are the strongest predictors of readability followed by average number of verb phrases and length of the textthis empirical confirmation of the 
significance of discourse relations as a readability factor is novel for the computational linguistics literaturenote though that for our work we use oracle discourse annotations directly from the pdtb and no robust systems for automatic discourse annotation exist todaythe significance of the average number of verb phrases as a readability predictor is somewhat surprising but intriguingit would lead to reexamination of the role of verbspredicates in written text which we also plan to address in future worknone of the other factors showed significant association with readability ratings even though some correlations had relatively large positive valuesin this section we turn to the question of how the combination of various factors improves the prediction of readabilitywe use the leaps package in r to find the best subset of features for linear regression for subsets of size one to eightwe use the squared multiple correlation coefficient to assess the effectiveness of predictionsr2 is the proportion of variance in readability ratings explained by the modelif the model predicts readability perfectly r2 1 and if the model has no predictive capability r2 0the linear regression results confirm the expectation that the combination of different factors is a rather complex issueas expected discourse vocabulary and length which were the significant individual factors appear in the best model for each feature set sizetheir combination gives the best result for regression with three predictors and they explain half of the variance in readability ratings r2 05029but the other individually significant feature average number of verb phrases per sentence never appears in the best modelsinstead f1the depth of the parse treeappears in the best model with more than four featuresalso unexpectedly two of the superficial cohesion features appear in the larger models f10 is the average word overlap over nouns and pronouns and f11 is the average number of pronouns per sentenceentity grid features also make their way into the best models when more features are used for prediction sx oo ox no transitions in this section we consider the problem of pairwise ranking of text readabilitythat is rather than trying to predict the readability of a single document we consider pairs of documents and predict which one is betterthis task may in fact be the more natural one since in most applications the main concern is with the relative quality of articles rather than their absolute scoresthis setting is also beneficial in terms of data use because each pair of articles with different average readability scores now becomes a data point for the classification taskwe thus create a classification problem given two articles is article 1 more readable than article 2for each pair of texts whose readability ratings on the 1 to 5 scale differed by at least 05 we form one data point for the ranking problem resulting in 243 examplesthe predictors are the differences between the two articles featuresfor classification we used wekas linear support vector implementation and performance was evaluated using 10fold crossvalidationthe classification results are shown in table 7when all features are used for prediction the accuracy is high 8888the length of the article can serve as a baseline featurelonger articles are ranked lower by the assessors so this feature can be taken as baseline indicator of readabilityonly six features used by themselves lead to accuracies higher than the length baselinethese results indicate that the most important 
individual factors in the readability ranking task in decreasing order of importance are log likelihood of discourse relations number of discourse relations no transitions on transitions average number of vps per sentence and text probability under a general language modelin terms of classes of features the 16 entity grid features perform the best leading to an accuracy of 7941 followed by the combination of the four discourse features and syntax features this is evidence for the fact that there is a complex interplay between readability factors the entity grid factors which individually have very weak correlation with readability combine well while adding the three additional discourse features to the likelihood of discourses relations actually worsens performance slightlysimilar indication for interplay between features is provided by the class ablation classification results in which classes of features are removedsurprisingly removing syntactic features causes the biggest deterioration in performance a drop in accuracy from 8888 to 8271the removal of vocabulary length or discourse features has a minimal negative impact on performance while removing the cohesion features actually boosts performancewe have investigated which linguistic features correlate best with readability judgmentswhile surface measures such as the average number of words per sentence or the average number of characters per word are not good predictors there exist syntactic semantic and discourse features that do correlate highlythe average number of verb phrases in each sentence the number of words in the article the likelihood of the vocabulary and the likelihood of the discourse relations all are highly correlated with humans judgments of how well an article is writtenwhile using any one out of syntactic lexical coherence or discourse features is substantally better than the baseline surface features on the discrimination task using a combination of entity coherence and discourse relations produces the best performancethis work was partially supported by an integrative graduate education and research traineeship grant from national science foundation and by nsf grant iis0705671we thank aravind joshi bonnie webber and the anonymous reviewers for their many helpful comments
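Before moving on, here is a sketch of the pairwise ranking experiment described above, with scikit-learn standing in for the Weka linear SVM used in the paper. Feature extraction is assumed to have produced one numeric vector per article; the 0.5 rating gap and 10-fold cross-validation follow the text, while the symmetrization of each pair into two balanced examples is a simplification of this sketch.

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    def pairwise_ranking_accuracy(features, ratings, min_gap=0.5, folds=10):
        # features: per-article feature vectors; ratings: average readability scores
        diffs, labels = [], []
        for i in range(len(ratings)):
            for j in range(len(ratings)):
                if i != j and ratings[i] - ratings[j] >= min_gap:
                    # predictor = difference of the two articles' feature vectors
                    diffs.append(np.asarray(features[i]) - np.asarray(features[j]))
                    labels.append(1)                      # article i was rated higher
                    diffs.append(np.asarray(features[j]) - np.asarray(features[i]))
                    labels.append(0)
        x, y = np.vstack(diffs), np.array(labels)
        return cross_val_score(LinearSVC(), x, y, cv=folds).mean()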
D08-1020
revisiting readability a unified framework for predicting text quality. we combine lexical syntactic and discourse features to produce a highly predictive model of human readers judgments of text readability. this is the first study to take into account such a variety of linguistic factors and the first to empirically demonstrate that discourse relations are strongly associated with the perceived quality of text. we show that various surface metrics generally expected to be related to readability are not very good predictors of readability judgments in our wall street journal corpus. we also establish that readability predictors behave differently depending on the task predicting text readability or ranking the readability. our experiments indicate that discourse relations are the one class of features that exhibits robustness across these two tasks. we propose a unified framework composed of vocabulary syntactic elements of lexical cohesion entity coherence and discourse relations to measure text quality. when readability is targeted towards adult competent language users a more prominent role is played by discourse features. five annotators have assessed the overall text quality of each article on a scale from 1 to 5. we find nonsignificant correlation for the mean number of words per sentence and the mean number of characters per word
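The discourse feature highlighted in this summary is the likelihood of an article's discourse relations under a multinomial model. A small sketch of that computation follows; the relation-type strings, the add-0.5 smoothing, and the separate length model are illustrative assumptions rather than the paper's exact choices.

    import math
    from collections import Counter

    def relation_log_likelihood(article_relations, corpus_relation_counts, length_prob):
        # article_relations:     e.g. ["implicit comparison", "explicit temporal", "entrel"]
        # corpus_relation_counts: Counter of relation types over the training corpus (PDTB)
        # length_prob:           function n -> P(article contains n relations)
        total = float(sum(corpus_relation_counts.values()))
        counts = Counter(article_relations)
        n = len(article_relations)
        # multinomial model: P(n) * n! / prod(x_i!) * prod(p_i ** x_i), computed in log space
        ll = math.log(length_prob(n)) + math.lgamma(n + 1)
        for rel, x in counts.items():
            p = (corpus_relation_counts.get(rel, 0) + 0.5) / (total + 0.5 * len(corpus_relation_counts))
            ll += x * math.log(p) - math.lgamma(x + 1)
        # an article with zero annotated relations still receives a score, log P(0),
        # consistent with the remark above about relation-free articles
        return ll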
syntactic constraints on paraphrases extracted from parallel corpora we improve the quality of paraphrases extracted from parallel corpora by requiring that phrases and their paraphrases be the same syntactic type this is achieved by parsing the english side of a parallel corpus and altering the phrase extraction algorithm to extract phrase labels alongside bilingual phrase pairs in order to retain broad coverage of nonconstituent phrases complex syntactic labels are introduced a manual evaluation indicates a 19 absolute improvement in paraphrase quality over the baseline method paraphrases are alternative ways of expressing the same informationbeing able to identify or generate paraphrases automatically is useful in a wide range of natural language applicationsrecent work has shown how paraphrases can improve question answering through query expansion automatic evaluation of translation and summarization by modeling alternative lexicalization and machine translation both by dealing with out of vocabulary words and phrases and by expanding the set of reference translations for minimum error rate training while all applications require the preservation of meaning when a phrase is replaced by its paraphrase some additionally require the resulting sentence to be grammaticalin this paper we examine the effectiveness of placing syntactic constraints on a commonly used paraphrasing technique that extracts paraphrases from parallel corpora the paraphrasing technique employs various aspects of phrasebased statistical machine translation including phrase extraction heuristics to obtain bilingual phrase pairs from word alignmentsenglish phrases are considered to be potential paraphrases of each other if they share a common foreign language phrase among their translationsmultiple paraphrases are frequently extracted for each phrase and can be ranked using a paraphrase probability based on phrase translation probabilitieswe find that the quality of the paraphrases that are generated in this fashion improves significantly when they are required to be the same syntactic type as the phrase that they are paraphrasingthis constraint a thorough manual evaluation of the refined paraphrasing technique finds a 19 absolute improvement in the number of paraphrases that are judged to be correctthis paper is structured as follows section 2 describes related work in syntactic constraints on phrasebased smt and work utilizing syntax in paraphrase discoverysection 3 details the problems with extracting paraphrases from parallel corpora and our improvements to the techniquesection 4 describes our experimental design and evaluation methodologysection 5 gives the results of our experiments and section 6 discusses their implicationsa number of research efforts have focused on employing syntactic constraints in statistical machine translationwu introduced the inversion transduction grammar formalism which treats translation as a process of parallel parsing of the source and target language via a synchronized grammarthe synchronized grammar places constraints on which words can be aligned across bilingual sentence pairsto achieve computational efficiency the original proposal used only a single nonterminal label rather than a linguistic grammarsubsequent work used more articulated parses to improve alignment quality by applying cohesion constraints if two english phrases are in disjoint subtrees in the parse then the phrasal cohesion constraint prevents them from being aligned to overlapping sequences in the foreign 
sentenceother recent work has incorporated constituent and dependency subtrees into the translation rules used by phrasebased systems phrasebased rules have also been replaced with synchronous context free grammars and with tree fragments a number of techniques for generating paraphrases have employed syntactic information either in the process of extracting paraphrases from monolingual texts or in the extracted patterns themselveslin and pantel derived paraphrases based on the distributional similarity of paths in dependency treesbarzilay and mckeown incorporated partofspeech information and other morphosyntactic clues into their cotraining algorithmthey extracted paraphrase patterns that incorporate this informationibrahim et al generated structural paraphrases capable of capturing longdistance dependenciespang et al employed a syntaxbased algorithm to align equivalent english sentences by merging corresponding nodes in parse trees and compressing them down into a word latticeperhaps the most closely related work is a recent extension to bannard and callisonburchs paraphrasing methodzhao et al extended the method so that it is capable of generating richer paraphrase patterns that include partofspeech slots rather than simple lexical and phrasal paraphrasesfor example they extracted patterns such as consider nn take nn into considerationto accomplish this zhao el al used dependency parses on the english side of the parallel corpustheir work differs from the work presented in this paper because their syntactic constraints applied to slots within paraphrase patters and our constraints apply to the paraphrases themselvesbannard and callisonburch extract paraphrases from bilingual parallel corporathey give a probabilistic formation of paraphrasing which naturally falls out of the fact that they use techniques from phrasebased statistical machine translation phrase translation probabilities p and p are commonly calculated using maximum likelihood estimation where the counts are collected by enumerating all bilingual phrase pairs that are consistent with the word alignments for sentence pairs in a bilingual parallel corpusvarious phrase extraction heuristics are possibleoch and ney defined consistent bilingual phrase pairs as follows where f1 is a foreign sentence ei is an english sentence and a is a set of word alignment pointsthe heuristic allows unaligned words to be included at the boundaries of the source or target language phrasesfor example when enumerating the consistent phrase pairs for the sentence pair given in figure 1 la igualdad would align not only to equal but also to create equal and to create equalin smt these alternative translations are ranked by the translation probabilities and other feature functions during decodingthe interaction between the phrase extraction heuristic and unaligned words results in an undesirable effect for paraphrasingby bannard and callisonburchs definition equal create equal and to create equal would be considered paraphrases because they are aligned to the same foreign phrasetables 1 and 2 show how sub and superphrases can creep into the paraphrases equal can be paraphrased as equal rights and create equal can be paraphrased as equalobviously when e2 is substituted for e1 the resulting sentence will generally be ungrammaticalthe first case could result in equal equal rights and the second would drop the verbthis problem is pervasiveto test its extent we attempted to generate paraphrases for 900000 phrases using bannard and callisonburchs method 
trained on the europarl corpora it generated a total of 37 million paraphrases for 400000 phrases in the list1 we observed that 34 of the paraphrases were super or substrings of the original phrasethe most probable paraphrase was a super or substring of the phrase 73 of the timethere are a number of strategies that might be adopted to alleviate this problem we could change the phrase extraction heuristics treatment of unaligned words or we could attempt to ensure that we have fewer unaligned items in our word alignments the paraphrase criterion could be changed from being e2 e1 to specifying that e2 is not sub or superstring of e1in this paper we adopt a different strategythe essence of our strategy is to constrain paraphrases to be the same syntactic type as the phrases that they are paraphrasingsyntactic constraints can apply in two places during phrase extraction and when substituting paraphrases into sentencesthese are described in sections 31 and 32when we apply syntactic constraints to the phrase extraction heuristic we change how bilingual phrase pairs are enumerated and how the component probabilities of the paraphrase probability are calculatedwe use the syntactic type s of e1 in a refined version of the paraphrase probability where p can be approximated as we define a new phrase extraction algorithm that operates on an english parse tree p along with foreign sentence f1 english sentence ei and word alignment awe dub this sbp for syntactic bilingual phrases the sbp phrase extraction algorithm produces tuples containing a foreign phrase an english phrase and a syntactic label after enumerating these for all phrase pairs in a parallel corpus we can calculate p and p as by redefining the probabilities in this way we partition the space of possible paraphrases by their syntactic categoriesin order to enumerate all phrase pairs with their syntactic labels we need to parse the english side of the parallel corpus this limits the potential applicability of our refined paraphrasing method to languages which have parserstable 3 gives an example of the refined paraphrases for equal when it occurs as an adjective or adjectival phrasenote that most of the paraphrases that were possible under the baseline model are now excludedwe no longer get the noun equality the verb equals the adverb equally the determier the or the np equal rightsthe paraphrases seem to be higher quality especially if one considers their fidelity when they replace the original phrase in the context of some sentencewe tested the rate of paraphrases that were suband superstrings when we constrain paraphrases based on nonterminal nodes in parse treesthe percent of the best paraphrases being substrings dropped from 73 to 24 and the overall percent of paraphrases subsuming or being subsumed by the original phrase dropped from 34 to 12however the number of phrases for which we were able to generated paraphrases dropped from 400000 to 90000 since we limited ourselves to phrases that were valid syntactic constituentsthe number of unique paraphrases dropped from several million to 800000the fact that we are able to produce paraphrases for a much smaller set of phrases is a downside to using syntactic constraints as we have initially proposedit means that we would not be able to generate paraphrases for phrases such as create equalmany nlp tasks such as smt which could benefit from paraphrases require broad coverage and may need to paraphrases for phrases which are not syntactic constituentsto generate paraphrases for a wider set of 
phrases we change our phrase extraction heuristic again so that it produces phrase pairs for arbitrary spans in the sentence including spans that are not syntactic constituentswe assign every span in a sentence a syntactic label using ccgstyle notation which gives a syntactic role with elements missing on the left andor right hand sidesthe function ccglabels describes the set of ccglabels for the phrase spanning positions i to i n in a parse tree p it generates three complex syntactic labels for the nonsyntactic constituent phrase create equal in the parse tree given in figure 2 we can use these complex labels instead of atomic nonterminal symbols to handle nonconstituent phrasesfor example table 4 shows the paraphrases and syntactic labels that are generated for the nonconstituent phrase create equalthe paraphrases are significantly better than the paraphrases generated for the phrase by the baseline method the labels shown in the figure are a fraction of those that can be derived for the phrase in the parallel corpuseach of these corresponds to a different syntactic context and each has its own set of associated paraphraseswe increase the number of phrases that are paraphrasable from the 90000 in our initial definition of sbp to 250000 when we use complex ccg labelsthe number of unique paraphrases increases from 800000 to 35 million which is nearly as many paraphrases that were produced by the baseline method for the samplein addition to applying syntactic constraints to our phrase extraction algorithm we can also apply them when we substitute a paraphrase into a sentenceto do so we limit the paraphrases to be the same syntactic type as the phrase that it is replacing based on the syntactic labels that are derived from the phrase tree for a test sentencesince each phrase normally has a set of different ccg labels we need a way of choosing which label to use when applying the constraintthere are several different possibilities for choosing among labelswe could simultaneously choose the best paraphrase and the best label for the phrase in the parse tree of the test sentence alternately we could average over all of the labels that are generated for the phrase in the parse tree the potential drawback of using equations 8 and 9 is that the ccg labels for a particular sentence significantly reduces the paraphrases that can be usedfor instance vp is the only label for the paraphrases in table 4 that is compatible with the parse tree given in figure 2because the ccg labels for a given sentence are so specific many times there are no matchestherefore we also investigated a looser constraintwe choose the highest probability paraphrase with any label equation 10 only applies syntactic constraints during phrase extraction and ignores them during substitutionin our experiments we evaluate the quality of the paraphrases that are generated using equations 8 9 and 10we compare their quality against the bannard and callisonburch baselinewe conducted a manual evaluation to evaluate paraphrase qualitywe evaluated whether paraphrases retained the meaning of their original phrases and whether they remained grammatical when they replaced the original phrase in a sentenceour paraphrase model was trained using the europarl corpus we used ten parallel corpora between english and danish dutch finnish french german greek italian portuguese spanish and swedish with approximately 30 million words per language for a total of 315 million english wordsautomatic word alignments were created for these using giza the 
english side of each parallel corpus was parsed using the bikel parser a total of 16 million unique sentences were parseda trigram language model was trained on these english sentences using the sri language modeling toolkit the paraphrase model and language model for the bannard and callisonburch baseline were trained on the same data to ensure a fair comparisonthe test set was the english portion of test sets used in the shared translation task of the acl2007 workshop on statistical machine translation the test sentences were also parsed with the bikel parserthe phrases to be evaluated were selected such that there was an even balance of phrase lengths with half of the phrases being valid syntactic constituents and half being arbitrary sequences of words410 phrases were selected at random for evaluation30 items were excluded from our results subsequent to evaluation on the grounds that they consisted solely of punctuation and stop words like determiners prepositions and pronounsthis left a total of 380 unique phraseswe produced paraphrases under the following eight conditions 1baseline the paraphrase probability defined by bannard and callisonburch calculated over multiple parallel corpora as given in equation 5note that under this condition the best paraphrase is the same for each occurrence of the phrase irrespective of which sentence it occurs in2baseline lm the paraphrase probability combined with the language model probability calculated for the sentence with the phrase replaced with the paraphrase3extraction constraints this condition selected the best paraphrase according to equation 10it chooses the single best paraphrase over all labelsconditions 3 and 5 only apply the syntactic constraints at the phrase extraction stage and do not require that the paraphrase have the same syntactic label as the phrase in the sentence that it is being subtituted into corresponds to equation 8 which selects the highest probability paraphrase which matches at least one of the syntactic labels of the phrase in the test sentenceconditions 58 apply the syntactic constraints both and the phrase extraction and at the substitution stages condition corresponds to equation 9 which averages over all of the syntactic labels for the phrase in the sentence instead of choosing the single one which maximizes the probability8averaged substitution constraints lm as above but including a language model probabilitywe evaluated the paraphrase quality through a substitution testwe retrieved a number of sentences which contained each test phrase and substituted the phrase with automaticallygenerated paraphrasesannotators judged whether the paraphrases had the same meaning as the original and whether the resulting sentences were grammaticalthey assigned two values to each sentence using the 5point scales given in table 5we considered an item to have the same meaning if it was assigned a score of 3 or greater and to be grammatical if it was assigned a score of 4 or 5we evaluated several instances of a phrase when it occurred multiple times in the test corpus since paraphrase quality can vary based on context there were an average of 31 instances for each phrase with a maximum of 6there were a total of 1195 sentences that paraphrases were substituted into with a total of 8422 judgements collectednote that 7 different paraphrases were judged on average for every instancethis is because annotators judged paraphrases for eight conditions and because we collected judgments for the 5best paraphrases for many of the 
conditionswe measured interannotator agreement with the kappa statistic using the 1391 items that two annotators scored in commonthe two annotators assigned the same absolute score 47 of the timeif we consider chance agreement to be 20 for 5point scales then k 033 which is commonly interpreted as fair if we instead measure agreement in terms of how often the annotators both judged an item to be above or below the thresholds that we set then their rate of agreement was 80in this case chance agreement would be 50 so k 061 which is substantialin order to allow other researchers to recreate our results or extend our work we have prepared the following materials for download2 the complete set of paraphrases generated for the test setthis includes the 37 million paraphrases generated by the baseline method and the 35 million paraphrases generated with syntactic constraints the code that we used to produce these paraphrases and the complete data sets so that researchers can extract paraphrases for new sets of phrases the manual judgments about paraphrase qualitythese may be useful as development material for setting the weights of a loglinear formulation of paraphrasing as suggested in zhao et al table 6 summarizes the results of the manual evaluationwe can observe a strong trend in the syntactically constrained approaches performing better of the eight conditionscorrect meaning is the percent of time that a condition was assigned a 3 4 or 5 and correct grammar is the percent of time that it was given a 4 or 5 using the scales from table 5 than the baselinethey retain the correct meaning more often they are judged to be grammatical far more frequently they perform nearly 20 better when both meaning and grammaticality are used as criteria3 another trend that can be observed is that incorporating a language model probability tends to result in more grammatical output but meaning suffers as a result in some caseswhen the lm is applied there is a drop of 12 in correct meaning for the baseline but only a slight dip of 12 for the syntacticallyconstrained phrasesnote that for the conditions where the paraphrases were required to have the same syntactic type as the phrase in the parse tree there was a reduction in the number of paraphrases that could be appliedfor the first two conditions paraphrases were posited for 1194 sentences conditions 3 and 4 could be applied to 1142 of those sentences but conditions 58 could only be applied to 876 sentencesthe substitution constraints reduce coverage to 73 of the test sentencesgiven that the extraction constraints have better coverage and nearly identical performance on 3our results show a significantly lower score for the baseline than reported in bannard and callisonburch this is potentially due to the facts that in this work we evaluated on outofdomain news commentary data and we randomly selected phrasesin the pervious work the test phrases were drawn from wordnet and they were evaluated solely on indomain european parliament data the meaning criterion they might be more suitable in some circumstancesin this paper we have presented a novel refinement to paraphrasing with bilingual parallel corporawe illustrated that a significantly higher performance can be achieved by constraining paraphrases to have the same syntactic type as the original phrasea thorough manual evaluation found an absolute improvement in quality of 19 using strict criteria about paraphrase accuracy when comparing against a strong baselinethe syntactically enhanced paraphrases are judged 
to be grammatically correct over two thirds of the time as opposed to the baseline method which was grammatically correct under half of the timethis paper proposed constraints on paraphrases at two stages when deriving them from parsed parallel corpora and when substituting them into parsed test sentencesthese constraints produce paraphrases that are better than the baseline and which are less commonly affected by problems due to unaligned wordsfurthermore by introducing complex syntactic labels instead of solely relying on nonterminal symbols in the parse trees we are able to keep the broad coverage of the baseline methodsyntactic constraints significantly improve the quality of this paraphrasing method and their use opens the question about whether analogous constraints can be usefully applied to paraphrases generated from purely monolingual corporaour improvements to the extraction of paraphrases from parallel corpora suggests that it may be usefully applied to other nlp applications such as generation which require grammatical outputthanks go to sally blatz emily hinchcliff and michelle bland for conducting the manual evaluation and to michelle bland and omar zaidan for proofreading and commenting on a draft of this paperthis work was supported by the national science foundation under grant no0713448the views and findings are the authors alone
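The substitution constraints discussed above (equations 8 and 10) reduce to a simple selection rule over a syntactically labelled paraphrase table: prefer the highest-probability paraphrase whose extraction label also labels the phrase's span in the test sentence's parse, and optionally fall back to ignoring labels when no label matches. The sketch below illustrates that selection; the table layout, the function name, and the toy entries (including the paraphrase strings and probabilities for "create equal") are illustrative assumptions, not the paper's actual data structures.

```python
def best_paraphrase(table, phrase, sentence_labels, require_label_match=True):
    """Pick the highest-probability paraphrase for `phrase`.

    table               : {(phrase, syntactic_label): [(paraphrase, prob), ...]}
    sentence_labels     : labels assigned to the phrase's span in the parse of
                          the test sentence
    require_label_match : True applies the substitution constraint (eq. 8);
                          False applies only the extraction constraint (eq. 10)
    """
    candidates = []
    for (p, label), options in table.items():
        if p != phrase:
            continue
        if require_label_match and label not in sentence_labels:
            continue
        candidates.extend(options)
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])

# Toy table: the non-constituent phrase "create equal" has different
# paraphrase distributions under different complex labels.
table = {
    ("create equal", "VP"): [("ensure equality", 0.4), ("make equal", 0.3)],
    ("create equal", "NP/NN"): [("equal creation", 0.5)],
}
print(best_paraphrase(table, "create equal", {"VP"}))           # ('ensure equality', 0.4)
print(best_paraphrase(table, "create equal", {"SBAR"}))          # None: no compatible label
print(best_paraphrase(table, "create equal", {"SBAR"}, False))   # looser fallback (eq. 10)
```

Equation 9, not shown here, would instead average a paraphrase's probability over all labels generated for the span rather than committing to a single best label.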
D08-1021
syntactic constraints on paraphrases extracted from parallel corporawe improve the quality of paraphrases extracted from parallel corpora by requiring thatphrases and their paraphrases be the same syntactic typethis is achieved by parsing the english side of a parallel corpus and altering the phrase extraction algorithm to extract phrase labels alongside bilingual phrase pairsin order to retain broad coverage of nonconstituent phrases complex syntactic labels are introduceda manual evaluation indicates a 19 absolute improvement in paraphrase quality over the baseline methodwe show how the monolingual context of a sentence to paraphrase can be used to improve the quality of the acquired paraphraseshuman evaluators are asked to score each pair of an original sentence and a paraphrased sentence with the following two 5point scale grades grammaticality whether the paraphrased sentence is grammatical meaning whether the meaning of the original sentence is properly retained by the paraphrased sentencea problem of phrasebased methods to paraphrase or term variation acquisition is the fact that a large proportion of the term variations or paraphrases proposed by the system are superior substrings of the original termwe automatically acquire paraphrase dictionary
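The inter-annotator agreement figures reported above for the paraphrase evaluation follow the usual kappa correction for chance agreement, k = (P_a − P_e) / (1 − P_e), using a fixed chance estimate rather than one derived from annotator marginals. The check below reproduces the two reported settings approximately; the small differences from the published 0.33 and 0.61 presumably come from unrounded agreement rates.

```python
def kappa(observed_agreement, chance_agreement):
    """Kappa with a fixed chance-agreement estimate: (P_a - P_e) / (1 - P_e)."""
    return (observed_agreement - chance_agreement) / (1.0 - chance_agreement)

# 47% identical absolute scores on the 5-point scales, chance taken as 1/5.
print(round(kappa(0.47, 0.20), 2))   # 0.34 -> "fair" agreement
# 80% agreement on the binary above/below-threshold judgements, chance 1/2.
print(round(kappa(0.80, 0.50), 2))   # 0.6  -> "substantial" agreement
```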
forestbased translation rule extraction examples target treetotree ding and palmer translation rule extraction is a fundamental problem in machine translation especially for syntaxbased that need parse trees from either or both sides of the bitext the current dominant practice only uses 1best trees which adversely affects the rule set quality due to parsing errors so we propose a novel approach which extracts rules a forest compactly encodes exponentially many parses experiments show that this method improves translation quality by over 1 bleu point on a stateoftheart treetostring system and is 05 points better than extracting on 30best parses when combined with our previous work on forestbased decoding it achieves a 25 bleu points improvement over the baseline and even outperforms the hierarchical system of hiero by 07 points examples abstract source target treetotree ding and palmer translation rule extraction is a fundamental problem in machine translation especially for linguistically syntaxbased systems that need parse trees from either or both sides of the bitextthe current dominant practice only uses 1best trees which adversely affects the rule set quality due to parsing errorsso we propose a novel approach which extracts rules from a packed forest that compactly encodes exponentially many parsesexperiments show that this method improves translation quality by over 1 bleu point on a stateoftheart treetostring system and is 05 points better than extracting on 30best parseswhen combined with our previous work on forestbased decoding it achieves a 25 bleu points improvement over the baseline and even outperforms the hierarchical system of hiero by 07 pointsautomatic extraction of translation rules is a fundamental problem in statistical machine translation especially for many syntaxbased models where translation rules directly encode linguistic knowledgetypically these models extract rules using parse trees from both or either side of the bitextthe former case with trees on both sides is often called treetotree models while the latter case with trees on either source or target side include both treetostring and stringtotree models leveraging from structural and linguistic information from parse trees these models are believed to be better than their phrasebased counterparts in treetostring stringtotree handling nonlocal reorderings and have achieved promising translation results1 however these systems suffer from a major limitation that the rule extractor only uses 1best parse tree which adversely affects the rule set quality due to parsing errorsto make things worse modern statistical parsers are often trained on domains quite different from those used in mtby contrast formally syntaxbased models do not rely on parse trees yet usually perform better than these linguistically sophisticated counterpartsto alleviate this problem an obvious idea is to extract rules from kbest parses insteadhowever a kbest list with its limited scope has too few variations and too many redundancies this situation worsens with longer sentences as the number of possible parses grows exponentially with the sentence length and a kbest list will only capture a tiny fraction of the whole spacein addition many subtrees are repeated across different parses so it is also inefficient to extract rules separately from each of these very similar trees we instead propose a novel approach that extracts rules from packed forests which compactly encodes many more alternatives than kbest listsexperiments show that 
forestbased extraction improves bleu score by over 1 point on a stateoftheart treetostring system which is also 05 points better than extracting on 30best parseswhen combined with our previous orthogonal work on forestbased decoding the forestforest approach achieves a 25 bleu points improvement over the baseline and even outperforms the hierarchical system of hiero one of the bestperforming systems to datebesides treetostring systems our method is also applicable to other paradigms such as the stringtotree models where the rules are in the reverse order and easily generalizable to pairs of forests in treetotree modelswe review in this section the treebased approach to machine translation and its rule extraction algorithm current treebased systems perform translation in two separate steps parsing and decodingthe input string is first parsed by a parser into a 1best tree which will then be converted to a target language string by applying a set of treetostring transformation rulesfor example consider the following example translating from chinese to english bush held a meeting2 with sharon1 figure 2 shows how this process worksthe chinese sentence is first parsed into a parse tree which will be converted into an english string in 5 stepsfirst at the root node we apply rule r1 shown in figure 1 which translates the chinese coordination construction into an english prepositional phrasethen from step we continue applying rules to untranslated chinese subtrees until we get the complete english translation in 2 shalong bush held a meeting with sharon more formally a translation rule is a tuple rhs 0 where lhs is the sourceside tree fragment whose internal nodes are labeled by nonterminal symbols and whose frontier nodes are labeled by sourcelanguage words or variables from a set x x1 x2 rhs is the targetside string expressed in targetlanguage words and variables and 0 is a mapping from x to nonterminalseach variable xi e x occurs exactly once in lhs and exactly once in rhsfor example for rule r1 in figure 1 these rules are being used in the reverse direction of the stringtotree transducers in galley et al we now briefly explain the algorithm of galley et al that can extract these translation rules from a wordaligned bitext with sourceside parsesconsider the example in figure 3the basic idea is to decompose the source parse into a series of tree fragments each of which will form a rule with its corresponding english translationhowever not every fragmentation can be used for rule extraction since it may or may not respect the alignment and reordering between the two languagesso we say a fragmentation is wellformed with respect to an alignment if the root node of every tree fragment corresponds to a contiguous span on the target side the intuition is that there is a translational equivalence between the subtree rooted at the node and the corresponding target spanfor example in figure 3 each node is annotated with its corresponding english span where the np node maps to a noncontiguous one bush you with sharonmore formally we need a precise formulation to handle the cases of onetomany manytoone and manytomany alignment linksgiven a sourcetarget sentence pair with alignment a the span of node v is the set of target words aligned to leaf nodes yield under node v for example in figure 3 every node in the parse tree is annotated with its corresponding span below the node where most nodes have contiguous spans except for the np node which maps to a gapped phrase bush you with sharonbut contiguity alone 
is not enough to ensure wellformedness since there might be words within the span aligned to source words uncovered by the nodeso we also define a span s to be faithful to node v if every word in it is only aligned to nodes dominated by v ie for example sibling nodes vv and as in the tree have nonfaithful spans because they both map to held thus neither of them can be translated to held alonein this case a larger tree fragment rooted at vpb has to be extractednodes with nonempty contiguous and faithful spans form the admissible set which serve as potential cutpoints for rule extraction3 with the admissible set computed rule extraction is as simple as a depthfirst traversal from the root we cut the tree at all admissible nodes to form tree fragments and extract a rule for each fragment with variables matching the admissible descendant nodesfor example the tree in figure 3 is cut into 6 pieces each of which corresponds to a rule on the rightthese extracted rules are called minimal rules which can be glued together to form composed rules with larger tree fragments our experiments use composed ruleswe now extend treebased extraction algorithm from the previous section to work with a packed forest representing exponentially many parse treesinformally a packed parse forest or forest in short is a compact representation of all the derivations for a given sentence under a contextfree grammar for example consider again the chinese sentence in example above which has two readings depending on the partofspeech of the word yˇyou it can be either a conjunction as shown in figure 3 or a preposition as shown in figure 5 with only pp and vpb swapped from the english word orderthese two parse trees can be represented as a single forest by sharing common subtrees such as npb0 1 and vpb3 6 as shown in figure 4such a forest has a structure of a hypergraph where items like np0 3 are called nodes whose indices denote the source span and combinations like we call hyperedgeswe denote head and tails to be the consequent and antecedant items of hyperedge e respectivelyfor example we also denote bs to be the set of incoming hyperedges of node v being different ways of deriving itfor example in figure 4 bs e1 e2like in treebased extraction we extract rules from a packed forest f in two steps it turns out that the exact formulation developed for admissible set in the treebased case can be applied to a forest without any changethe fragmentation step however becomes much more involved since we now face a choice of multiple parse hyperedges at each nodein other words it becomes nondeterministic how to cut a forest into tree fragments which is analogous to the nondeterministic patternmatch in forestbased decoding for example there are two parse hyperedges e1 and e2 at the root node in figure 4when we follow one of them to grow a fragment there again will be multiple choices at each of its tail nodeslike in treebased case a fragment is said to be complete if all its leaf nodes are admissibleotherwise an incomplete fragment can grow at any nonadmissible frontier node v where following each parse hyperedge at v will split off a new fragmentfor example following e2 at the root node will immediately lead us to two admissible nodes npb0 1 and vp1 6 so this fragment frag1 e2 is now complete and we can extract a rule ip x1 x2however following the other hyperedge e1 ip0 6 np0 3 vpb3 6 will leave the new fragment frag2 e1 incomplete with one nonadmissible node np0 3we then grow frag2 at this node by choosing hyperedge e3 np0 3 npb0 
1 cc1 2 npb2 3 and spin off anew fragment frag3 e1 e3 which is now complete since all its four leaf nodes are admissiblewe then extract a rule with four variables this procedure is formalized by a breadthfirst search in pseudocode 1the basic idea is to visit each frontier node v and keep a queue open of actively growing fragments rooted at v we keep expanding incomplete fragments from open and extract a rule if a complete fragment is found each fragment is associated with a frontier being the subset of nonadmissible leaf nodes so each initial fragment along hyperedge e is associated with an initial frontier front tails admseta fragment is complete if its frontier is empty otherwise we pop one frontier node you to expand spin off new fragments by following hyperedges of you and update the frontier until all active fragments are complete and open queue is empty a single parse tree can also be viewed as a trivial forest where each node has only one incoming hyperedgeso the galley et al algorithm for treebased rule extraction can be considered a special case of our algorithm where the queue open always contains one single active fragmentin treebased extraction for each sentence pair each rule extracted naturally has a count of one which will be used in maximumlikelihood estimation of rule probabilitieshowever a forest is an implicit collection of many more trees each of which when enumerated has its own probability accumulated from of the parse hyperedges involvedin other words a forest can be viewed as a virtual weighted kbest list with a huge k so a rule extracted from a non 1best parse ie using non 1best hyperedges should be penalized accordingly and should have a fractional count instead of a unit one similar to the estep in them algorithmsinspired by the parsing literature on pruning we penalize a rule r by the posterior probability of its tree fragment frag lhsthis posterior probability notated αβ can be computed in an insideoutside fashion as the product of three parts the outside probability of its root node the probabilities of parse hyperedges involved in the fragment and the inside probabilities of its leaf nodes where α and β denote the outside and inside probabilities of tree nodes respectivelyfor example in figure 4 where top denotes the root node of the forestlike in the mstep in them algorithm we now extend the maximum likelihood estimation to fractional counts for three conditional probabilities regarding a rule which will be used in the experimentsthe concept of packed forest has been previously used in translation rule extraction for example in rule composition and tree binarization however both of these efforts only use 1best parses with the second one packing different binarizations of the same tree in a forestnevertheless we suspect that their extraction algorithm is in principle similar to ours although they do not provide details of forestbased fragmentation which we think is nontrivialthe forest concept is also used in machine translation decoding for example to characterize the search space of decoding with integrated language models the first direct application of parse forest in translation is our previous work which translates a packed forest from a parser it is also the base system in our experiments this work on the other hand is in the orthogonal direction where we utilize forests in rule extraction instead of decodingbleu score our experiments will use both default 1best decoding and forestbased decodingas we will see in the next section the best result comes 
when we combine the merits of both ie using forests in both rule extraction and decodingthere is also a parallel work on extracting rules from kbest parses and kbest alignments but both their experiments and our own below confirm that extraction on kbest parses is neither efficient nor effectiveour experiments are on chinesetoenglish translation based on a treetostring system similar to given a 1best tree t the decoder searches for the best derivation d among the set of all possible derivations d where the first two terms are translation and language model probabilities τ is the target string for derivation d and the last two terms are derivation and translation length penalties respectivelythe conditional probability p decomposes into the product of rule probabilities where the first three are conditional probabilities based on fractional counts of rules defined in section 33 and the last two are lexical probabilitiesthese parameters al as are tuned by minimum error rate training on the dev setswe refer readers to mi et al for details of the decoding algorithmwe use the chinese parser of xiong et al to parse the source side of the bitextfollowing huang we also modify this parser to output a packed forest for each sentence which can be pruned by the marginal probabilitybased insideoutside algorithm we will first report results trained on a smallscaled dataset with detailed analysis and then scale to a larger one where we also combine the technique of forestbased decoding to test the effect of forestbased rule extraction we parse the training set into parse forests and use three levels of pruning thresholds pe 2 5 8figure 6 plots the extraction speed and translation quality of forestbased extraction with various pruning thresholds compared to 1best and 30best baselinesusing more than one parse tree apparently improves the bleu score but at the cost of much slower extraction since each of the topk trees has to be processed individually although they share many common subtreesforest extraction by contrast is much faster thanks to packing and produces consistently better bleu scoreswith pruning threshold pe 8 forestbased extraction achieves a bleu score of 02533 which is an absolute improvement of 10 points over the 1best baseline and is statistically significant using the signtest of collins et al this is also 05 points better than extracting on 30best parsesthese bleu score results are summarized in table 2 which also shows that decoding with forestextracted rules is less than twice as slow as with 1best rules and only fractionally slower than with 30best ruleswe also investigate the question of how often rules extracted from non 1best parses are used by the decodertable 3 shows the numbers of rules extracted from 1best 30best and forestbased extractions and the numbers that survive after filtering on the dev setbasically in the forestbased case we can use about twice as many rules as in the 1best case or about 15 times of 30best extractionbut the real question is are these extra rules really useful in generating the final translationthe last row shows that 163 of the rules used in 1best derivations are indeed only extracted from non 1best parses in the forestsnote that this is a stronger condition than changing the distribution of rules by considering more parses here we introduce new rules never seen on any 1best parseswe also conduct experiments on a larger training dataset fbis which contains 239k sentence pairs with about 69m89m words in chineseenglish respectivelywe also use a bigger 
trigram model trained on the first 13 of the xinhua portion of gigaword corpusto integrate with forestbased decoding we use both 1best trees and packed forests during both rule extraction and decoding phasessince the data scale is larger than the small data we are forced to use harsher pruning thresholds with pe 5 for extraction and pd 10 for decodingthe final bleu score results are shown in table 4with both treebased and forestbased decoding rules extracted from forests significantly outperform those extracted from 1best trees the final result with both forestbased extraction and forestbased decoding reaches a bleu score of 02816 outperforming that of hiero one of the best performing systems to datethese results confirm that our novel forestbased rule extraction approach is a promising direction for syntaxbased machine translationin this paper we have presented a novel approach that extracts translation rules from a packed forest encoding exponentially many trees rather than from 1best or kbest parsesexperiments on a stateoftheart treetostring system show that this method improves bleu score significantly with reasonable extraction speedwhen combined with our previous work on forestbased decoding the final result is even better than the hierarchical system hierofor future work we would like to apply this approach to other types of syntaxbased translation systems namely the stringtotree systems and treetotree systemsthis work was funded by national natural science foundation of china contracts 60736014 and 60573188 and 863 state key project no2006aa010108 and by nsf itr eia0205456 we would also like to thank qun liu for supporting this work and the three anonymous reviewers for improving the earlier version
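The fragmentation step at the heart of the forest-based extraction algorithm (Pseudocode 1 above) is a breadth-first search over growing fragments, each tracking a frontier of non-admissible leaf nodes. The sketch below enumerates complete (minimal) fragments over a simple dict-based hypergraph; that encoding is an assumption made for illustration, and building the actual translation rule (attaching variables and the target side, composing rules) is omitted.

```python
from collections import deque

def extract_fragments(incoming, admissible, roots):
    """Enumerate complete tree fragments of a packed forest.

    incoming[v] : list of hyperedges entering node v, each a tuple of tail
                  nodes (one way of deriving v); source words have no entry.
    admissible  : nodes with contiguous, faithful target spans (cut points).
    roots       : nodes at which fragments may be rooted.
    Yields (root, frozenset of (head, tails) hyperedges) for every fragment
    whose frontier is empty, i.e. all expandable leaves are admissible.
    """
    for v in roots:
        open_queue = deque()
        for e in incoming.get(v, []):
            frontier = [u for u in e if u not in admissible and u in incoming]
            open_queue.append((frozenset([(v, e)]), frontier))
        while open_queue:
            edges, frontier = open_queue.popleft()
            if not frontier:                     # complete: extract a rule here
                yield v, edges
                continue
            u, rest = frontier[0], frontier[1:]  # grow at one non-admissible leaf
            for e in incoming.get(u, []):
                grown = rest + [w for w in e if w not in admissible and w in incoming]
                open_queue.append((edges | {(u, e)}, grown))

# Toy forest mirroring the example above: the root has two incoming
# hyperedges; following e2 gives an immediately complete fragment, while
# following e1 must also expand the non-admissible NP[0,3] node via e3.
incoming = {
    "IP[0,6]": [("NP[0,3]", "VPB[3,6]"), ("NPB[0,1]", "VP[1,6]")],
    "NP[0,3]": [("NPB[0,1]", "CC[1,2]", "NPB[2,3]")],
}
admissible = {"IP[0,6]", "NPB[0,1]", "VP[1,6]", "VPB[3,6]", "CC[1,2]", "NPB[2,3]"}
for root, frag in extract_fragments(incoming, admissible, ["IP[0,6]"]):
    print(root, sorted(frag))
```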
D08-1022
forestbased translation rule extractiontranslation rule extraction is a fundamental problem in machine translation especially for linguistically syntaxbased systems that need parse trees from either or both sides of the bitextthe current dominant practice only uses 1best trees which adversely affects the rule set quality due to parsing errorsso we propose a novel approach which extracts rules from a packed forest that compactly encodes exponentially many parsesexperiments show that this method improves translation quality by over 1 bleu point on a stateoftheart treetostring system and is 05 points better than extracting on 30best parseswhen combined with our previous work on forestbased decoding it achieves a 25 bleu points improvement over the baseline and even outperforms the hierarchical system of hiero by 07 pointswe use viterbi algorithm to prune the forest
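The fractional counts used above to estimate rule probabilities come from an inside-outside decomposition: a fragment's posterior is the outside probability of its root, times the probabilities of the hyperedges it uses, times the inside probabilities of its leaves. The sketch below assumes precomputed alpha/beta tables keyed by forest node; normalising by the inside probability of the forest root to obtain a count in [0, 1] is an assumption consistent with, but not spelled out in, the description above.

```python
def fragment_posterior(frag_edges, frag_root, frag_leaves, alpha, beta, edge_prob):
    """alpha(root) * prod P(e) * prod beta(leaf) for one tree fragment."""
    score = alpha[frag_root]
    for e in frag_edges:
        score *= edge_prob[e]
    for leaf in frag_leaves:
        score *= beta[leaf]
    return score

def fractional_count(frag_edges, frag_root, frag_leaves, alpha, beta, edge_prob, top):
    # Dividing by the inside probability of the forest root scales the
    # posterior so that rules supported by the 1-best parse receive counts
    # near one, while rules extracted from unlikely parses are penalised.
    posterior = fragment_posterior(frag_edges, frag_root, frag_leaves,
                                   alpha, beta, edge_prob)
    return posterior / beta[top]
```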
online largemargin training of syntactic and structural translation features minimumerrorrate training is a bottleneck for current development in statistical machine translation because it is limited in the number of weights it can reliably optimize building on the work of watanabe et al we explore the use of the mira algorithm of crammer et al as an alternative to mert we first show that by parallel processing and exploiting more of the parse forest we can obtain results using mira that match or surpass mert in terms of both translation quality and computational cost we then test the method on two classes of features that address deficiencies in the hiero hierarchical phrasebased model first we simultaneously train a large number of marton and resniks soft syntactic constraints and second we introduce a novel structural distortion model in both cases we obtain significant improvements in translation performance optimizing them in combination for a total of 56 feature weights improve performance by 26 a subset of the nist 2006 arabicenglish evaluation data since its introduction by och minimum error rate training has been widely adopted for training statistical machine translation systemshowever mert is limited in the number of feature weights that it can optimize reliably with folk estimates of the limit ranging from 15 to 30 featuresone recent example of this limitation is a series of experiments by marton and resnik in which they added syntactic features to hiero which ordinarily uses no linguistically motivated syntactic informationeach of their new features rewards or punishes a derivation depending on how similar or dissimilar it is to a syntactic parse of the input sentencethey found that in order to obtain the greatest improvement these features had to be specialized for particular syntactic categories and weighted independentlynot being able to optimize them all at once using mert they resorted to running mert many times in order to test different combinations of featuresbut it would have been preferable to use a training method that can optimize the features all at oncethere has been much work on improving merts performance or on replacing mert wholesale this paper continues a line of research on online discriminative training extending that of watanabe et al who use the margin infused relaxed algorithm due to crammer et alour guiding principle is practicality like watanabe et al we train on a small tuning set comparable in size to that used by mert but by parallel processing and exploiting more of the parse forest we obtain results using mira that match or surpass mert in terms of both translation quality and computational cost on a largescale translation tasktaking this further we test mira on two classes of features that make use of syntactic information and hierarchical structurefirst we generalize marton and resniks soft syntactic constraints by training all of them simultaneously and second we introduce a novel structural distortion modelwe obtain significant improvements in both cases and further large improvements when the two feature sets are combinedthe paper proceeds as followswe describe our training algorithm in section 2 our generalization of marton and resniks soft syntactic constraints in section 3 our novel structural distortion features in section 4 and experimental results in section 5the translation model is a standard linear model which we train using mira following watanabe et al we describe the basic algorithm first and then progressively refine itlet e by 
abuse of notation stand for both output strings and their derivationswe represent the feature vector for derivation e as hinitialize the feature weights w then repeatedly passes through the training data are made we only average the weight vectors from the last passthe technique of averaging was introduced in the context of perceptrons as an approximation to taking a vote among all the models traversed during training and has been shown to work well in practice we follow mcdonald et al in applying this technique to miranote that the objective is not the same as that used by watanabe et al rather it is the same as that used by crammer and singer and related to that of taskar et alwe solve this optimization problem using a variant of sequential minimal optimization for each i initialize αij c for a single value of j such that eij ei and initialize αij 0 for all other values of jthen repeatedly choose a sentence i and a pair of hypotheses j j0 and let where where we set c 001the first term means that we want w0 to be close to w and second term means that we want w0 to score ei higher than each eij by a margin at least as wide as the loss ijwhen training is finished the weight vectors from all iterations are averaged together gives the closest number to z in the interval x yassuming bleu as the evaluation criterion the loss ij of ei j relative to ei should be related somehow to the difference between their bleu scoreshowever bleu was not designed to be used on individual sentences in general the highestbleu translation of a sentence depends on what the other sentences in the test set aresentencelevel approximations to bleu exist but we found it most effective to perform bleu computations in the context of a set o of previouslytranslated sentences following watanabe et al however we do not try to accumulate translations for the entire dataset but simply maintain an exponentiallyweighted moving average of previous translationsmore precisely for an input sentence f let e be some hypothesis translation and let rk be the set of reference translations for f let c or simply c for short be the vector of the following counts e the effective reference length mink rk and for 1 n 4 the number of ngrams in e and the number of ngram matches between e and rkthese counts are sufficient to calculate a bleu score which we write as bleuthe pseudodocument o is an exponentiallyweighted moving average of these vectorsthat is for each training sentence let eˆ be the 1best translation after processing the sentence we update o and its input length of we can then calculate the bleu score of hypotheses e in the context of obut the larger o is the smaller the impact the current sentence will have on the bleu scoreto correct for this and to bring the loss function roughly into the same range as typical margins we scale the bleu score by the size of the input which we also simply write as bfinally the loss function is defined to be we now describe the selection of ewe know of three approaches in previous workthe first is to force the decoder to output the reference sentence exactly and select the derivation with the highest model score which liang et al call bold updatingthe second uses the decoder to search for the highestbleu translation which arun and koehn call maxbleu updatingliang et al and arun and koehn experiment with these methods and both opt for a third method which liang et al call local updating generate an nbest list of translations and select the highestbleu translation from itthe intuition is that due to 
noise in the training data or reference translations a highbleu translation may actually use peculiar rules which it would be undesirable to encourage the model to usehence in local updating the search for the highestbleu translation is limited to the n translations with the highest model score where n must be determined experimentallyhere we introduce a new oracletranslation selection method formulating the intuition behind local updating as an optimization problem instead of choosing the highestbleu translation from an nbest list we choose the translation that maximizes a combination of bleu and the modelwe can also interpret in the following way we want e to be the maxbleu translation but we also want to minimize so we balance these two criteria against each other where h w is that part of that depends on e and µ is a parameter that controls how much we are willing to allow some translations to have higher bleu than e if we can better minimize setting µ 0 would reduce to maxbleu updating setting µ would never update w at allsetting µ 05 reduces to equation figure 1 shows the 10best unique translations for a single input sentence according to equation under various settings of µthe points at far right are the translations that are scored highest according to the modelthe p 0 points in the upperleft corner are typical of oracle translations that would be selected under the maxbleu policy they indeed have a very high bleu score but are far removed from the translations preferred by the model thus they would cause violent updates to w local updating would select the topmost point labeled p 1our scheme would select one of the p 05 points which have bleu scores almost as high as the maxbleu translations yet are not very far from the translations preferred by the modelwhat is the set eij of translation hypothesesideally we would let it be the set of all possible translations and let the objective function take all of them into accountthis is the approach taken by taskar et al but their approach assumes that the loss function can be decomposed into local loss functionssince our loss function cannot be so decomposed we select the last case is what crammer et al call maxloss updating and taskar et al call lossaugmented inferencethe rationale here is that since the objective tries to minimize maxj we should include the translations that have the highest in order to approximate the effect of using the whole forestsee figure 1 again for an illustration of the hypotheses selected for a single sentencethe maxbleu points in the upper left are not included the p oo points in the lowerright are the negative examples they are poor translations that are scored too high by the model and the learning algorithm attempts to shift them to the leftto perform the forest rescoring we need to use several approximations since an exact search for bleuoptimal translations is nphard for every derivation e in the forest we calculate a vector c of counts as in section 22 except using unclipped counts of ngram matches that is the number of matches for an ngram can be greater than the number of occurrences of the ngram in any reference translationthis can be done efficiently by calculating c for every hyperedge in the forest we keep track of ngrams using the same scheme used to incorporate an ngram language model into the decoder to find the best derivation in the forest we traverse it bottomup as usual and for every set of alternative subtranslations we select the one with the highest scorebut here a rough approximation 
lurks because we need to calculate b on the nodes of the forest but b does not have the optimal substructure property ie the optimal score of a parent node cannot necessarily be calculated from the optimal scores of its childrennevertheless we find that this rescoring method is good enough for generating highbleu oracle translations and lowbleu negative examplesone convenient property of mert is that it is embarrassingly parallel we decode the entire tuning set sending different sentences to different processors and during optimization of feature weights different random restarts can be sent to different processorsin order to make mira comparable in efficiency to mert we must parallelize itbut with an online learning algorithm parallelization requires a little more coordinationwe run mira on each processor simultaneously with each maintaining its own weight vectora master process distributes different sentences from the tuning set to each of the processors when each processor finishes decoding a sentence it transmits the resulting hypotheses with their losses to all the other processors and receives any hypotheses waiting from other processorsthose hypotheses were generated from different weight vectors but can still provide useful informationthe sets of hypotheses thus collected are then processed as one batchwhen the whole training process is finished we simply average all the weight vectors from all the processorshaving described our training algorithm which includes several practical improvements to watanabe et als usage of mira we proceed in the remainder of the paper to demonstrate the utility of the our training algorithm on models with large numbers of structurally sensitive featuresthe first features we explore are based on a line of research introduced by chiang and improved on by marton and resnik a hierarchical phrasebased translation model is based on synchronous contextfree grammar but does not normally use any syntactic information derived from linguistic knowledge or treebank data it uses translation rules that span any string of words in the input sentence without regard for parserdefined syntactic constituency boundarieschiang experimented with a constituency feature that rewarded rules whose source language side exactly spans a syntactic constituent according to the output of an external sourcelanguage parserthis feature can be viewed as a soft syntactic constraint it biases the model toward translations that respect syntactic structure but does not force it to use themhowever this more syntactically aware model when tested in chineseenglish translation did not improve translation performancerecently marton and resnik revisited the idea of constituency features and succeeded in showing that finergrained soft syntactic constraints yield substantial improvements in bleu score for both chineseenglish and arabicenglish translationin addition to adding separate features for different syntactic nonterminals they introduced a new type of constraint that penalizes rules when the source language side crosses the boundaries of a source syntactic constituent as opposed to simply rewarding rules when they are consistent with the sourcelanguage parse treemarton and resnik optimized their features weights using mertbut since mert does not scale well to large numbers of feature weights they were forced to test individual features and manually selected feature combinations each in a separate modelalthough they showed gains in translation performance for several such models many larger 
potentially better feature combinations remained unexploredmoreover the bestperforming feature subset was different for the two language pairs suggesting that this laborintensive feature selection process would have to be repeated for each new language pairhere we use mira to optimize marton and resniks finergrained singlecategory features all at oncewe define below two sets of features a coarsegrained class that combines several constituency categories and a finegrained class that puts different categories into different featuresboth kinds of features were used by marton and resnik but only a few at a timecrucially our training algorithm provides the ability to train all the finegrained features a total of 34 feature weights simultaneouslycoarsegrained features as the basis for coarsegrained syntactic features we selected the following nonterminal labels based on their frequency in the tuning data whether they frequently cover a span of more than one word and whether they represent linguistically relevant constituents np pp s vp sbar adjp advp and qpwe define two new features one which fires when a rules source side span in the input sentence matches any of the abovementioned labels in the input parse and another which fires when a rules source side span crosses a boundary of one of these labels these two features are equivalent to marton and resniks xp and xp feature combinations respectivelyfinegrained features we selected the following nonterminal labels that appear more than 100 times in the tuning data np pp s vp sbar adjp whnp prt advp prn and qpthe labels that were excluded were parts of speech nonconstituent labels like frag or labels that occurred only two or three timesfor each of these labels x we added a separate feature that fires when a rules source side span in the input sentence matches x and a second feature that fires when a span crosses a boundary of xthese features are similar to marton and resniks x and x except that our set includes features for whnp prt and prnin addition to parserbased syntactic constraints which were introduced in prior work we introduce a completely new set of features aimed at improving the modeling of reordering within hieroagain the feature definition gives rise to a larger number of features than one would expect to train successfully using mertin a phrasebased model reordering is performed both within phrase pairs and by the phrasereordering modelboth mechanisms are able to learn that longerdistance reorderings are more costly than shorterdistance reorderings phrase pairs because phrases that involve more extreme reorderings will have a lower count in the data and phrase reordering because models are usually explicitly dependent on distanceby contrast in a hierarchical model all reordering is performed by a single mechanism the rules of the grammarin some cases the model will be able to learn a preference for shorterdistance reorderings as in a phrasebased system but in the case of a word being reordered across a nonterminal or two nonterminals being reordered there is no dependence in the model on the size of the nonterminal or nonterminals involved in reorderingso for example if we have rules we might expect that rule is more common in general but that rule becomes more and more rare as x1 gets largerthe default hiero features have no way to learn thisto address this defect we can classify every nonterminal pair occurring on the righthand side of each grammar rule as reordered or not reordered that is whether it intersects any other word 
alignment link or nonterminal pair we then define coarse and finegrained versions of the structural distortion modelcoarsegrained features let r be a binaryvalued random variable that indicates whether a nonterminal occurrence is reordered and let s be an integervalued random variable that indicates how many source words are spanned by the nonterminal occurrencewe can estimate p via relativefrequency estimation from the rules as they are extracted from the parallel text and incorporate this probability as a new feature of the modelfinegrained features a difficulty with the coarsegrained reordering features is that the grammar extraction process finds overlapping rules in the training data and might not give a sensible probability estimate moreover reordering statistics from the training data might not carry over perfectly into the translation task as an alternative we introduce a finegrained version of our distortion model that can be trained directly in the translation task as follows define a separate binary feature for each value of where r is as above and s e j 1 9 _101 and means any sizefor example if a nonterminal with span 11 has its contents reordered then the features and would both firegrouping all sizes of 10 or more into a single feature is designed to avoid overfittingagain using mira makes it practical to train with the full finegrained feature setcoincidentally also a total of 34 featureswe now describe our experiments to test mira and our features the softsyntactic constraints and the structural distortion features on an arabicenglish translation taskit is worth noting that this experimentation is on a larger scale than watanabe et als and considerably larger than marton and resniks the baseline model was hiero with the following baseline features the probability features are base100 logprobabilitiesthe rules were extracted from all the allowable parallel text from the nist 2008 evaluation aligned by ibm model 4 using giza hierarchical rules were extracted from the most indomain corpora and phrases were extracted from the remainderwe trained the coarsegrained distortion model on 10000 sentences of the training datatwo language models were trained one on data similar to the english side of the parallel text and one on 2 billion words of englishboth were 5gram models with modified kneserney smoothing lossily compressed using a perfecthashing scheme similar to that of talbot and brants but using minimal perfect hashing we partitioned the documents of the nist 2004 and 2005 arabicenglish evaluation data into a tuning set and a development set the test data was the nist 2006 arabicenglish evaluation data to obtain syntactic parses for this data we tokenized it according to the arabic treebank standard using amira parsed it with the stanford parser and then forced the trees back into the mt systems tokenization1 we ran both mert and mira on the tuning set using 20 parallel processorswe stopped mert when the score on the tuning set stopped increasing as is common practice and for mira we used the development set to decide when to stop training2 in our runs mert took an average of 9 passes through the tuning set and mira took an average of 8 passestable 1 shows the results of our experiments with the training methods and features described aboveall significance testing was performed against the first line using paired bootstrap resampling first of all we find that mira is competitive with mert when both use the baseline feature setindeed the mira system scores significantly higher 
on the test set but if we break the test set down by genre we see that the mira system does slightly worse on newswire and better on newsgroupswhen we add more features to the model the two training methods diverge more sharplywhen training with mert the coarsegrained pair of syntax features yields a small improvement but the finegrained syntax features do not yield any further improvementby contrast when the finegrained features are trained using mira they yield substantial improvementswe observe similar behavior for the structural distortion features mert is not able to take advantage of the finergrained features but mira isfinally using mira to combine both classes of features 56 in all produces the largest improvement 26 bleu points over the mert baseline on the full test setwe also tested some of the differences between our training method and watanabe et als the results are shown in table 2compared with local updating our method of selecting the oracle translation and negative examples does better by 05 bleu points on the development datausing lossaugmented inference to add negative examples to local updating does not appear to helpnevertheless the negative examples are important for if setting dev full 536 local updating no lai local updating lai p 05 oracle no lai no sharing of updates 531 we use our method for selecting the oracle translation without the additional negative examples the algorithm fails generating very long translations and unable to find a weight setting to shorten themit appears then that the additional negative examples enable the algorithm to reliably learn from the enhanced oracle translationsfinally we compared our parallelization method against a simpler method in which all processors learn independently and their weight vectors are all averaged together we see that sharing information among the processors makes a significant differencein this paper we have brought together two existing lines of work the training method of watanabe et al and the models of chiang and marton and resnik watanabe et als work showed that largemargin training with mira can be made feasible for stateoftheart mt systems by using a manageable tuning set we have demonstrated that parallel processing and exploiting more of the parse forest improves miras performance and that even using the same set of features miras performance compares favorably to mert in terms of both translation quality and computational costmarton and resnik showed that it is possible to improve translation in a datadriven framework by incorporating sourceside syntactic analysis in the form of soft syntactic constraintsthis work joins a growing body of work demonstrating the utility of syntactic information in statistical mtin the area of sourceside syntax recent research has continued to improve treetostring translation models soften the constraints of the input tree in various ways and extend phrasebased translation with sourceside soft syntactic constraints all this work shows strong promise but marton and resniks soft syntactic constraint approach is particularly appealing because it can be used unobtrusively with any hierarchicallystructured translation modelhere we have shown that using mira to weight all the constraints at once removes the crucial drawback of the approach the problem of feature selectionfinally we have introduced novel structural distortion features to fill a notable gap in the hierarchical phrasebased approachby capturing how reordering depends on constituent length these features improve 
translation quality significantlyin sum we have shown that removing the bottleneck of mert opens the door to many possibilities for better translationthanks to michael bloodgood for performing initial simulations of parallelized perceptron trainingthanks also to john denero kevin knight daniel marcu and fei sha for valuable discussions and suggestionsthis research was supported in part by darpa contract hr001106c0022 under subcontract to bbn technologies and hr00110602001 under subcontract to ibm
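The oracle-selection scheme described in the training section balances BLEU gain against model score, and the negative examples are drawn from hypotheses the model scores too highly (the loss-augmented, "max-loss" candidates). The sketch below assumes an n-best list already annotated with model scores and pseudo-document BLEU gains; the combination B(e) + mu * (w . h(e)) is a reading of the limiting behaviour described above (mu = 0 recovers max-BLEU updating, large mu approaches the model-best hypothesis), not a verbatim reproduction of the paper's equation, and the text's other candidate lists are collapsed into two here.

```python
def select_hypotheses(candidates, mu=0.5, k_neg=10):
    """Pick an oracle translation and negative examples from scored hypotheses.

    candidates : list of (hypothesis, model_score, bleu_gain) triples, where
                 model_score = w . h(e) and bleu_gain = B(e) in the text.
    Returns (oracle, negatives); negatives mix the model-best hypotheses with
    the loss-augmented ones (high model score, low BLEU).
    """
    oracle = max(candidates, key=lambda c: c[2] + mu * c[1])
    by_model = sorted(candidates, key=lambda c: c[1], reverse=True)[:k_neg]
    by_loss_aug = sorted(candidates, key=lambda c: c[1] - c[2], reverse=True)[:k_neg]
    negatives = {c[0]: c for c in by_model + by_loss_aug}
    negatives.pop(oracle[0], None)        # do not use the oracle against itself
    return oracle, list(negatives.values())

# Toy usage: three hypotheses with (model score, scaled BLEU gain).
cands = [("good translation", 1.2, 0.9),
         ("model favourite",  2.0, 0.3),
         ("max-bleu outlier", -1.0, 1.0)]
oracle, negs = select_hypotheses(cands, mu=0.5, k_neg=2)
print(oracle[0])                  # 'good translation': high BLEU, close to the model
print([n[0] for n in negs])       # ['model favourite']
```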
D08-1024
online largemargin training of syntactic and structural translation featuresminimumerrorrate training is a bottleneck for current development in statistical machine translation because it is limited in the number of weights it can reliably optimizebuilding on the work of watanabe et al we explore the use of the mira algorithm of crammer et al as an alternative to mertwe first show that by parallel processing and exploiting more of the parse forest we can obtain results using mira that match or surpass mert in terms of both translation quality and computational costwe then test the method on two classes of features that address deficiencies in the hiero hierarchical phrasebased model first we simultaneously train a large number of marton and resnik soft syntactic constraints and second we introduce a novel structural distortion modelin both cases we obtain significant improvements in translation performanceoptimizing them in combination for a total of 56 feature weights we improve performance by 26 blue on a subset of the nist 2006 arabicenglish evaluation datawe introduce structural distortion features into a hierarchical phrasebased model aimed at modeling nonterminal reordering given source span lengthwe show that mert is competitive with small numbers of features compared to highdimensional optimizers such as miraour feature explicitly counters overestimates of rule counts or rules with bad overlap points bad rewrites or with undesired insertions of targetside terminals
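The fine-grained structural distortion features summarised above are binary indicators over (reordered?, bucketed span size) pairs, with all spans of ten or more source words sharing one bucket and an additional size-independent feature. The sketch below fires the features for a single nonterminal occurrence; the feature-name strings are illustrative, not the system's internal representation.

```python
def distortion_features(reordered, span_size, max_bucket=10):
    """Fine-grained structural distortion features for one nonterminal
    occurrence on a rule's right-hand side.

    reordered : whether the nonterminal's contents are reordered relative to
                the rest of the rule
    span_size : number of source words covered by the nonterminal
    Returns the binary features that fire: one for the bucketed size (all
    sizes >= 10 share a bucket to avoid overfitting) and one ignoring size.
    """
    size_bucket = str(span_size) if span_size < max_bucket else f">={max_bucket}"
    r = "R" if reordered else "noR"
    return [f"{r}:S={size_bucket}", f"{r}:S=*"]

# A reordered nonterminal spanning 11 source words fires the >=10 bucket and
# the size-independent feature, mirroring the example in the text.
print(distortion_features(True, 11))   # ['R:S=>=10', 'R:S=*']
```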
cheap and fast â but is it good evaluating nonexpert annotations for natural language tasks human linguistic annotation is crucial for many natural language processing tasks but can be expensive and timeconsuming we explore the use of amazons mechanical turk system a significantly cheaper and faster method for collecting annotations from a broad base of paid nonexpert contributors over the web we investigate five tasks affect recognition word similarity recognizing textual entailment event temporal ordering and word sense disambiguation for all five we show high agreement between mechanical turk nonexpert annotations and existing gold standard labels provided by expert labelers for the task of affect recognition we also show that using nonexpert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts we propose a technique for bias correction that significantly improves annotation quality on two tasks we conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense large scale annotation projects such as treebank propbank timebank framenet semcor and others play an important role in natural language processing research encouraging the development of novel ideas tasks and algorithmsthe construction of these datasets however is extremely expensive in both annotatorhours and financial costsince the performance of many natural language processing tasks is limited by the amount and quality of data available to them one promising alternative for some tasks is the collection of nonexpert annotationsin this work we explore the use of amazon mechanical turk1 to determine whether nonexpert labelers can provide reliable natural language annotationswe chose five natural language understanding tasks that we felt would be sufficiently natural and learnable for nonexperts and for which we had gold standard labels from expert labelers as well as expert labeler agreement informationthe tasks are affect recognition word similarity recognizing textual entailment event temporal ordering and word sense disambiguationfor each task we used amt to annotate data and measured the quality of the annotations by comparing them with the gold standard labels on the same datafurther we compare machine learning classifiers trained on expert annotations vs nonexpert annotationsin the next sections of the paper we introduce the five tasks and the evaluation metrics and offer methodological insights including a technique for bias correction that improves annotation quality2the idea of collecting annotations from volunteer contributors has been used for a variety of tasksluis von ahn pioneered the collection of data via online annotation tasks in the form of games including the espgame for labeling images and verbosity for annotating word relations the open mind initiative has taken a similar approach attempting to make such tasks as annotating word sense and commonsense word relations sufficiently easy and fun to entice users into freely labeling datathere have been an increasing number of experiments using mechanical turk for annotationin workers provided annotations for the tasks of hotel name entity resolution and attribute extraction of age product brand and product model and were found to have high accuracy compared to goldstandard labelskittur et al compared amt evaluations of wikipedia article quality against experts finding validation tests were important to ensure good resultszaenen 
studied the agreement of annotators on the problem of recognizing textual entailment at least several studies have already used amt without external gold standard comparisonsin workers generated paraphrases of 250 nounnoun compounds which were then used as the gold standard dataset for evaluating an automatic method of noun compound paraphrasingkaisser and lowe use amt to help build a dataset for question answering annotating the answers to 8107 questions with the sentence containing the answerkaisser et al examines the task of customizing the summary length of qa output nonexperts from amt chose a summary length that suited their information needs for varying query typesdakka and ipeirotis evaluate a document facet generation system against amtsupplied facets and also use workers for user studies of the systemsorokin and forsyth collect data for machine vision tasks and report speed and costs similar to our findings their summaries of worker behavior also corroborate with what we have foundin general volunteersupplied or amtsupplied data is more plentiful but noisier than expert datait is powerful because independent annotations can be aggregated to achieve high reliabilitysheng et al explore several methods for using many noisy labels to create labeled data how to choose which examples should get more labels and how to include labels uncertainty information when training classifierssince we focus on empirically validating amt as a data source we tend to stick to simple aggregation methodsin this section we describe amazon mechanical turk and the general design of our experimentswe employ the amazon mechanical turk system in order to elicit annotations from nonexpert labelersamt is an online labor market where workers are paid small amounts of money to complete small tasksthe design of the system is as follows one is required to have an amazon account to either submit tasks for annotations or to annotate submitted tasksthese amazon accounts are anonymous but are referenced by a unique amazon ida requester can create a group of human intelligence tasks each of which is a form composed of an arbitrary number of questionsthe user requesting annotations for the group of hits can specify the number of unique annotations per hit they are willing to pay for as well as the reward payment for each individual hitwhile this does not guarantee that unique people will annotate the task this does guarantee that annotations will be collected from unique accountsamt also allows a requester to restrict which workers are allowed to annotate a task by requiring that all workers have a particular set of qualifications such as sufficient accuracy on a small test set or a minimum percentage of previously accepted submissionsannotators may then annotate the tasks of their choosingfinally after each hit has been annotated the requester has the option of approving the work and optionally giving a bonus to individual workersthere is a twoway communication channel between the task designer and the workers mediated by amazon and amazon handles all financial transactionsin general we follow a few simple design principles we attempt to keep our task descriptions as succinct as possible and we attempt to give demonstrative examples for each class wherever possiblewe have published the full experimental design and the data we have collected for each task online3we have restricted our study to tasks where we require only a multiplechoice response or numeric input within a fixed rangefor every task we collect ten 
independent annotations for each unique item this redundancy allows us to perform an indepth study of how data quality improves with the number of independent annotationswe analyze the quality of nonexpert annotations on five tasks affect recognition word similarity recognizing textual entailment temporal event recognition and word sense disambiguationin this section we define each annotation task and the parameters of the annotations we request using amtadditionally we give an initial analysis of the task results and summarize the cost of the experimentsthis experiment is based on the affective text annotation task proposed in strapparava and mihalcea wherein each annotator is presented with a list of short headlines and is asked to give numeric judgments in the interval 0100 rating the headline for six emotions anger disgust fear joy sadness and surprise and a single numeric rating in the interval 100100 to denote the overall positive or negative valence of the emotional content of the headline as in this sample headlineannotation pair for our experiment we select a 100headline sample from the original semeval test set and collect 10 affect annotations for each of the seven label types for a total of 7000 affect labelswe then performed two comparisons to evaluate the quality of the amt annotationsfirst we asked how well the nonexperts agreed with the expertswe did this by comparing the interannotator agreement of individual expert annotations to that of single nonexpert and averaged nonexpert annotationsin the original experiment ita is measured by calculating the pearson correlation of one annotators labels with the average of the labels of the other five annotatorsfor each expert labeler we computed this ita score of the expert against the other five we then average these ita scores across all expert annotators to compute the average expert ita we then calculate the ita for each expert vs the averaged labels from all other experts and nonexperts and for each nonexpert vs the pool of other nonexperts and all experts we compute these ita scores for each emotion task separately averaging the six emotion tasks as averageemo and the average of all tasks as averageallthe results in table 1 conform to the expectation that experts are better labelers experts agree with experts more than nonexperts agree with experts although the itas are in many cases quite closebut we also found that adding nonexperts to the gold standard improves agreement suggesting that nonexpert annotations are good enough to increase the overall quality of the gold labelsour first comparison showed that individual experts were better than individual nonexpertsin our next comparison we ask how many averaged nonexperts it would take to rival the performance of a single expertwe did this by averaging the labels of each possible subset of n nonexpert annotations for value of n in 1 2 10we then treat this average as though it is the output of a single metalabeler and compute the ita with respect to each subset of five of the six expert annotatorswe then average the results of these studies across each subset size the results of this experiment are given in table 2 and in figure 1in addition to the single metalabeler we ask what is the minimum number of nonexpert annotations k from which we can create a metalabeler that has equal or better ita than an expert annotatorin table 2 we give the minimum k for each emotion and the averaged ita for that metalabeler consisting of k nonexperts in figure 1 we plot the expert ita 
correlation as the horizontal dashed linethese results show that for all tasks except fear we are able to achieve expertlevel ita with the heldout set of experts within 9 labelers and frequently within only 2 labelerspooling judgments across all 7 tasks we find that on average it requires only 4 nonexpert annotations per example to achieve the equivalent ita as a single expert annotatorgiven that we paid us200 in order to collect the 7000 nonexpert annotations we may interpret our rate of 3500 nonexpert labels per usd as at least 875 expertequivalent labels per usdthis task replicates the word similarity task used in following a previous task initially proposed by specifically we ask for numeric judgments of word similarity for 30 word pairs on a scale of 010 allowing fractional responses4these word pairs range from highly similar to unrelated numerous expert and nonexpert studies have shown that this task typically yields very high interannotator agreement as measured by pearson correlation found a 097 correlation of the annotations of 38 subjects with the annotations given by 51 subjects in and a following study with 10 subjects found a 0958 correlation with in our experiment we ask for 10 annotations each of the full 30 word pairs at an offered price of 002 for each set of 30 annotations the most surprising aspect of this study was the speed with which it was completed the task of 300 annotations was completed by 10 annotators in less than 11 min4 and others originally used a numerical score of 04 utes from the time of submission of our task to amt at the rate of 1724 annotations houras in the previous task we evaluate our nonexpert annotations by averaging the numeric responses from each possible subset of n annotators and computing the interannotator agreement with respect to the gold scores reported in our results are displayed in figure 2 with resniks 0958 correlation plotted as the horizontal line we find that at 10 annotators we achieve a correlation of 0952 well within the range of other studies of expert and nonexpert annotationsthis task replicates the recognizing textual entailment task originally proposed in the pascal recognizing textual entailment task here for each question the annotator is presented with two sentences and given a binary choice of whether the second hypothesis sentence can be inferred from the firstfor example the hypothesis sentence oil prices drop would constitute a true entailment from the text crude oil prices slump but a false entailment from the government announced last week that it plans to raise oil priceswe gather 10 annotations each for all 800 sentence pairs in the pascal rte1 datasetfor this dataset expert interannotator agreement studies have been reported as achieving 91 and 96 agreement over various subsections of the corpuswhen considering multiple nonexpert annotations for a sentence pair we use simple majority voting breaking ties randomly and averaging performance over all possible ways to break tieswe collect 10 annotations for each of 100 rte sentence pairs as displayed in figure 3 we achieve a maximum accuracy of 897 averaging over the annotations of 10 workers52 4 6 8 10 annotations this task is inspired by the timebank corpus which includes among its annotations a label for eventpairs that represents the temporal relation between them from a set of fourteen relations we implement temporal ordering as a simplified version of the timebank event temporal annotation task rather than annotating all fourteen event types we restrict our 
consideration to the two simplest labels strictly before and strictly afterfurthermore rather than marking both nouns and verbs in the text as possible events we only consider possible verb eventswe extract the 462 verb event pairs labeled as strictly before or strictly after in the timebank corpus and we present these pairs to annotators with a forced binary choice on whether the event described by the first verb occurs before or after the secondfor example in a dialogue about a plane explosion we have the utterance it just blew up in the air and then we saw two fireballs go down to the to the water and there was a big small ah smoke from ah coming up from thathere for each annotation we highlight the specific verb pair of interest and ask which event occurs first the results of this task are presented in figure 4we achieve high agreement for this task at a rate of 094 with simple voting over 10 annotators while an expert ita of 077 was reported for the more general task involving all fourteen labels on both noun and verb events no expert ita numbers have been reported for this simplified temporal ordering taskin this task we consider a simple problem on which machine learning algorithms have been shown to produce extremely good results here we annotate part of the semeval word sense disambiguation lexical sample task specifically we present the labeler with a paragraph of text containing the word president and ask the labeler which one of the following three sense labels is most appropriate we collect 10 annotations for each of 177 examples of the noun president for the three senses given in semevalas shown in figure 5 performing simple majority voting over annotators results in a rapid accuracy plateau at a very high rate of 0994 accuracyin fact further analysis reveals that there was only a single disagreement between the averaged nonexpert vote and the gold standard on inspection it was observed that the annotators voted strongly against the original gold label and that it was in fact found to be an error in the original gold standard annotation6 after correcting this error the nonexpert accuracy rate is 100 on the 177 examples in this taskthis is a specific example where nonexpert annotations can be used to correct expert annotationssince expert ita was not reported per word on this dataset we compare instead to the performance of the best automatic system performance for disambiguating president in semeval task 17 with an accuracy of 0982 4 6 8 10 annotators in table 3 we give a summary of the costs associated with obtaining the nonexpert annotations for each of our 5 taskshere time is given as the total amount of time in hours elapsed from submitting the group of hits to amt until the last assignment is submitted by the last workerthe reliability of individual workers variessome are very accurate while others are more careless and make mistakes and a small few give very noisy responsesfurthermore for most amt data collection experiments a relatively small number of workers do a large portion of the task since workers may do as much or as little as they pleasefigure 6 shows accuracy rates for individual workers on one taskboth the overall variability as well as the prospect of identifying highvolume but lowquality workers suggest that controlling for individual worker quality could yield higher quality overall judgmentsin general there are at least three ways to enhance quality in the face of worker errormore workers can be used as described in previous sectionsanother method is to 
use amazons compensation mechanisms to give monetary bonuses to highlyperforming workers and deny payments to unreliable ones this is useful but beyond the scope of this paperin this section we explore a third alternative to model the reliability and biases of individual workers and correct for thema wide number of methods have been explored to correct for the bias of annotatorsdawid and skene are the first to consider the case of having multiple annotators per example but unknown true labelsthey introduce an them algorithm to simultaneously estimate annotator biases and latent label classeswiebe et al analyze linguistic annotator agreement statistics to find bias and use a similar model to correct labelsa large literature in biostatistics addresses this same problem for medical diagnosisalbert and dodd review several related models but argue they have various shortcomings and emphasize instead the importance of having a gold standardhere we take an approach based on gold standard labels using a small amount of expertlabeled training data in order to correct for the individual biases of different nonexpert annotatorsthe idea is to recalibrate workers responses to more closely match expert behaviorwe focus on categorical examples though a similar method can be used with numeric datafollowing dawid and skene we model labels and workers with a multinomial model similar to naive bayesevery example i has a true label xifor simplicity assume two labels y nseveral different workers give labels yi1 yi2 yiwa workers conditional probability of response is modeled as multinomial and we model each workers judgment as conditionally independent of other workers given the true label xi ie to infer the posterior probability of the true label for a new example worker judgments are integrated via bayes rule yielding the posteri the worker response likelihoods p and p can be directly estimated from frequencies of worker performance on gold standard examplesfor map label estimation the above equation describes a weighted voting rule each workers vote is weighted by their log likelihood ratio for their given responseintuitively workers who are more than 50 accurate have positive votes workers whose judgments are pure noise have zero votes and anticorrelated workers have negative votes511 example tasks rte1 and event annotation we used this model to improve accuracy on the rte1 and event annotation tasksfirst we took a sample of annotations giving k responses per examplewithin this sample we trained and tested via 20fold crossvalidation across examplesworker models were fit using laplace smoothing of 1 pseudocount label priors were uniform which was reasonably similar to the empirical distribution for both tasks curacy increase averaged across 2 through 10 annotatorswe find a 34 gain on event annotationfinally we experimented with a similar calibration method for numeric data using a gaussian noise model for each worker ywjx non the affect task this yielded a small but consistent increases in pearson correlation at all numbers of annotators averaging a 06 gainin this section we train a supervised affect recognition system with expert vs nonexpert annotationsfor the purpose of this experiment we create a simple bagofwords unigram model for predicting affect and valence similar to the swat system one of the topperforming systems on the semeval affective text task7 for each token t in our training set we assign t a weight for each emotion e equal to the average emotion score observed in each headline h that t 
participates in ie if ht is the set of headlines containing the token t then with these weights of the individual tokens we may then compute the score for an emotion e of a new headline h as the average score over the set of tokens t e h that we have observed in the training set ie annotators annotators numbers of annotatorsthe lowest line is for the naive 50 majority voting ruleeach point is the data sets accuracy against the gold labels averaged across resamplings each of which obtains k annotations per examplerte has an average 40 acwhere jhj is simply the number of tokens in headline h ignoring tokens not observed in the training setwe use 100 headlines as a training set and we use the remaining 900 headlines as our test setsince we are fortunate to have the six separate expert annotations in this task we can perform an extended systematic comparison of the performance of the classifier trained with expert vs nonexpert datafor this evaluation we compare the performance of systems trained on expert and nonexpert annotationsfor each expert annotator we train a system using only the judgments provided by that annotator and then create a gold standard test set using the average of the responses of the remaining five labelers on that setin this way we create six independent experttrained systems and compute the average across their performance calculated as pearson correlation to the gold standard this is reported in the 1expert column of table 4next we train systems using nonexpert labels for each possible subset of n annotators for n e 11 2 10 we train a system and evaluate by calculating pearson correlation with the same set of gold standard datasets used in the experttrained system evaluationaveraging the results of these studies yields the results in table 4as in table 2 we calculate the minimum number of nonexpert annotations per example k required on average to achieve similar performance to the expert annotations surprisingly we find that for five of the seven tasks the average system trained with a single set of nonexpert annotations outperforms the average system trained with the labels from a single expertone possible hypothesis for the because of this nonintuitive result is that individual labelers tend to have a strong bias and since multiple nonexpert labelers may contribute to a single set of nonexpert annotations the annotator diversity within the single set of labels may have the effect of reducing annotator bias and thus increasing system performancewe demonstrate the effectiveness of using amazon mechanical turk for a variety of natural language annotation tasksour evaluation of nonexpert labeler data vs expert annotations for five tasks found that for many tasks only a small number of nonexpert annotations per item are necessary to equal the performance of an expert annotatorin a detailed study of expert and nonexpert agreement for an affect recognition task we find that we require an average of 4 nonexpert labels per item in order to emulate expertlevel label qualityfinally we demonstrate significant improvement by controlling for labeler biasthanks to nathanael chambers annie zaenen rada mihalcea qi su panos ipeirotis bob carpenter david vickrey william morgan and lukas biewald for useful discussions and for the generous support of dolores labsthis work was supported in part by the disruptive technology office s advanced question answering for intelligence phase iii program
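The worker-calibration scheme described in this record (Dawid-and-Skene-style response likelihoods estimated on gold examples, combined by log-likelihood-ratio voting) can be sketched as follows. This is a minimal binary-label version assuming Laplace smoothing of 1 and a uniform label prior; the data structures and function names are illustrative, not the authors' implementation.

```python
import math
from collections import defaultdict

# Minimal sketch of gold-calibrated weighted voting: each worker's response
# likelihoods P(response | true label) are estimated from gold-labeled
# examples with Laplace smoothing of 1, and votes on a new example are
# combined by log-likelihood ratios under a uniform prior.
LABELS = ("Y", "N")

def fit_worker_models(gold_items):
    """gold_items: list of (true_label, {worker_id: response})."""
    counts = defaultdict(lambda: {t: {r: 1.0 for r in LABELS} for t in LABELS})  # +1 smoothing
    for true, votes in gold_items:
        for worker, resp in votes.items():
            counts[worker][true][resp] += 1.0
    models = {}
    for worker, per_label in counts.items():
        models[worker] = {t: {r: per_label[t][r] / sum(per_label[t].values())
                              for r in LABELS}
                          for t in LABELS}
    return models

def posterior_label(votes, models):
    """votes: {worker_id: response}; returns the MAP label under a uniform prior."""
    score = 0.0  # log P(x = Y | votes) - log P(x = N | votes)
    for worker, resp in votes.items():
        if worker in models:  # workers unseen in the gold data contribute nothing
            score += math.log(models[worker]["Y"][resp]) - math.log(models[worker]["N"][resp])
    return "Y" if score >= 0 else "N"
```

As the record notes, this reduces to a weighted vote: accurate workers get large positive weights, pure-noise workers get weights near zero, and anti-correlated workers effectively vote against their own response.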
D08-1027
cheap and fast - but is it good? evaluating nonexpert annotations for natural language tasks. human linguistic annotation is crucial for many natural language processing tasks but can be expensive and timeconsuming. we explore the use of amazons mechanical turk system a significantly cheaper and faster method for collecting annotations from a broad base of paid nonexpert contributors over the web. we investigate five tasks: affect recognition word similarity recognizing textual entailment event temporal ordering and word sense disambiguation. for all five we show high agreement between mechanical turk nonexpert annotations and existing gold standard labels provided by expert labelers. for the task of affect recognition we also show that using nonexpert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. we propose a technique for bias correction that significantly improves annotation quality on two tasks. we conclude that many large labeling tasks can be effectively designed and carried out in this method at a fraction of the usual expense. we compare the quality of labels produced by nonexpert turkers against those made by experts for a variety of nlp tasks and find that they required only four responses per item to emulate expert annotations. we show that obtaining multiple low quality labels can approach highquality editorial labels. we work with a majority rule where ties are broken uniformly at random and report an observed agreement between the majority rule and the gold standard of 89.7
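A minimal sketch of the majority rule referred to in this summary: each item takes its most frequent non-expert label, ties are broken uniformly at random, and agreement with the gold label is averaged over several random runs. The data layout and the number of runs are assumptions for illustration, not the authors' evaluation code.

```python
import random

def majority_vote(labels, rng):
    """Most frequent label in `labels`, with ties broken uniformly at random."""
    best = max(set(labels), key=labels.count)
    tied = [lab for lab in set(labels) if labels.count(lab) == labels.count(best)]
    return rng.choice(tied)

def agreement_with_gold(items, runs=100, seed=0):
    """items: list of (gold_label, [nonexpert_labels]); returns mean accuracy over runs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        correct = sum(1 for gold, labels in items if majority_vote(labels, rng) == gold)
        total += correct / len(items)
    return total / runs
```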
understanding the value of features for coreference resolution in recent years there has been substantial work on the important problem of coreference resolution most of which has concentrated on the development of new models and algorithmic techniques these works often show that complex models improve over a weak pairwise baseline however less attention has been given to the importance of selecting strong features to support learning a coreference model this paper describes a rather simple pairwise classification model for coreference resolution developed with a welldesigned set of features we show that this produces a stateoftheart system that outperforms systems built with complex models we suggest that our system can be used as a baseline for the development of more complex models which may have less impact when a more robust set of features is used the paper also presents an ablation study and discusses the relative contributions of various features coreference resolution is the task of grouping all the mentions of entities1 in a document into equivalence classes so that all the mentions in a given class refer to the same discourse entityfor example given the sentence the task is to group the mentions so that those referring to the same entity are placed together into an equivalence classmany nlp tasks detect attributes actions and relations between discourse entitiesin order to discover all information about a given entity textual mentions of that entity must be grouped togetherthus coreference is an important prerequisite to such tasks as textual entailment and information extraction among othersalthough coreference resolution has received much attention that attention has not focused on the relative impact of highquality featuresthus while many structural innovations in the modeling approach have been made those innovations have generally been tested on systems with features whose strength has not been established and compared to weak pairwise baselinesas a result it is possible that some modeling innovations may have less impact or applicability when applied to a stronger baseline systemthis paper introduces a rather simple but stateoftheart system which we intend to be used as a strong baseline to evaluate the impact of structural innovationsto this end we combine an effective coreference classification model with a strong set of features and present an ablation study to show the relative impact of a variety of featuresas we show this combination of a pairwise model and strong features produces a 15 percentage point increase in bcubed fscore over a complex model in the stateoftheart system by culotta et al although their system uses a complex nonpairwise model computing features over partial clusters of mentionsgiven a document and a set of mentions coreference resolution is the task of grouping the mentions into equivalence classes so that each equivalence class contains exactly those mentions that refer to the same discourse entitythe number of equivalence classes is not specified in advance but is bounded by the number of mentionsin this paper we view coreference resolution as a graph problem given a set of mentions and their context as nodes generate a set of edges such that any two mentions that belong in the same equivalence class are connected by some path in the graphwe construct this entitymention graph by learning to decide for each mention which preceding mention if any belongs in the same equivalence class this approach is commonly called the pairwise coreference 
model to decide whether two mentions should be linked in the graph we learn a pairwise coreference function pc that produces a value indicating the probability that the two mentions should be placed in the same equivalence classthe remainder of this section first discusses how this function is used as part of a documentlevel coreference decision model and then describes how we learn the pc functiongiven a document d and a pairwise coreference scoring function pc that maps an ordered pair of mentions to a value indicating the probability that they are coreferential we generate a coreference graph gd according to the bestlink decision model as follows for each mention m in document d let bt be the set of mentions appearing before m in d let a be the highest scoring antecedent in section 44 we add the edge to the coreference graph gdthe resulting graph contains connected components each representing one equivalence class with all the mentions in the component referring to the same entitythis technique permits us to learn to detect some links between mentions while being agnostic about whether other mentions are linked and yet via the transitive closure of all links we can still determine the equivalence classeswe also require that no nonpronoun can refer back to a pronoun if m is not a pronoun we do not consider pronouns as candidate antecedentsfor pairwise models it is common to choose the best antecedent for a given mention however the method of deciding which is the best antecedent variessoon et al use the closestlink method they select as an antecedent the closest preceding mention that is predicted coreferential by a pairwise coreference module this is equivalent to choosing the closest mention whose pc value is above a thresholdbestlink was shown to outperform closestlink in an experiment by ng and cardie our model differs from that of ng and cardie in that we impose the constraint that nonpronouns cannot refer back to pronouns and in that we use as training examples all ordered pairs of mentions subject to the constraint aboveculotta et al introduced a model that predicts whether a pair of equivalence classes should be merged using features computed over all the mentions in both classessince the number of possible classes is exponential in the number of mentions they use heuristics to select training examplesour method does not require determining which equivalence classes should be considered as exampleslearning the pairwise scoring function pc is a crucial issue for the pairwise coreference modelwe apply machine learning techniques to learn from examples a function pc that takes as input an ordered pair of mentions such that a precedes m in the document and produces as output a value that is interpreted as the conditional probability that m and a belong in the same equivalence classthe ace training data provides the equivalence classes for mentionshowever for some pairs of mentions from an equivalence class there is little or no direct evidence in the text that the mentions are coreferentialtherefore training pc on all pairs of mentions within an equivalence class may not lead to a good predictorthus for each mention m we select from ms equivalence class the closest preceding mention a and present the pair as a positive training example under the assumption that there is more direct evidence in the text for the existence of this edge than for other edgesthis is similar to the technique of ng and cardie for each m we generate negative examples for all mentions a that precede m and are 
not in the same equivalence classnote that in doing so we generate more negative examples than positive onessince we never apply pc to a pair where the first mention is a pronoun and the second is not a pronoun we do not train on examples of this formwe learn the pairwise coreference function using an averaged perceptron learning algorithm we use the regularized version in learning based java2 the performance of the documentlevel coreference model depends on the quality of the pairwise coreference function pcbeyond the training paradigm described earlier the quality of pc depends on the features usedwe divide the features into categories based on their functiona full list of features and their categories is given in table 2in addition to these boolean features we also use the conjunctions of all pairs of features3 in the following description the term head means the head noun phrase of a mention the extent is the largest noun phrase headed by the head noun phrasethe type of a mention indicates whether it is a proper noun a common noun or a pronounthis feature when conjoined with others allows us to give different weight to a feature depending on whether it is being applied to a proper name or a pronounfor our experiments in section 5 we use gold mention types as is done by culotta et al and luo and zitouni note that in the experiments described in section 6 we predict the mention types as described there and do not use any gold datathe mention type feature is used in all experimentsstring relation features indicate whether two strings share some property such as one being the substring of another or both sharing a modifier wordfeatures are listed in table 1modifiers are limited to those occurring before the headfeature definition headi substring of headj modi alias acronym headj or lastnamei lastnamej another class of features captures the semantic relation between two wordsspecifically we check whether gender or number match or whether the mentions are synonyms antonyms or hypernymswe also check the relationship of modifiers that share a hypernymdescriptions of the methods for computing these features are described nextgender match we determine the gender of the two phrases and report whether they match for head match extent match substring modifiers match a proper name gender is determined by the existence of mr ms mrs or the gender of the first nameif only a last name is found the phrase is considered to refer to a personif the name is found in a comprehensive list of cities or countries or ends with an organization ending such as inc then the gender is neuterin the case of a common noun phrase the phrase is looked up in wordnet and it is assigned a gender according to whether male female person artifact location or group is found in the hypernym treethe gender of a pronoun is looked up in a tablenumber match number is determined as follows phrases starting with the words a an or this are singular those these or some indicate pluralnames not containing and are singularcommon nouns are checked against extensive lists of singular and plural nouns words found in neither or both lists have unknown numberfinally if the number is unknown yet the two mentions have the same spelling they are assumed to have the same numberwordnet features we check whether any sense of one head noun phrase is a synonym antonym or hypernym of any sense of the otherwe also check whether any sense of the phrases share a hypernym after dropping entity abstraction physical entity object whole artifact and group from 
the senses since they are close to the root of the hypernym treemodifiers match determines whether the text before the head of a mention matches the head or the text before the head of the other mentionboth mentions speak true if both mentions appear within two words of a verb meaning to saybeing in a window of size two is an approximation to being a syntactic subject of such a verbthis feature is a proxy for having similar semantic typesadditional evidence is derived from the relative location of the two mentionswe thus measure distance for all i up to the distance and less than some maximum using units of compatible mentions and whether the mentions are in the same sentencewe also detect apposition for details see table 3we determine the relationship of any pair of modifiers that share a hypernymeach aligned pair may have one of the following relations match substring synonyms hypernyms antonyms or mismatchmismatch is defined as none of the abovewe restrict modifiers to single nouns and adjectives occurring before the head noun phrasemodifier names if the mentions are both modified by other proper names use a basic coreference classifier to determine whether the modifiers are coreferentialthis basic classifier is trained using mention types string relations semantic features apposition relative pronoun and both speakfor each mention m examples are generated with the closest antecedent a to form a positive example and every mention between a and m to form negative examplesanaphoricity ng and cardie and denis and baldridge show that when used effectively explicitly predicting anaphoricity can be helpfulthus we learn a separate classifier to detect whether a mention is anaphoric and use that classifiers output as a feature for the coreference modelfeatures for the anaphoricity classifier include the mention type whether the mention appears in a quotation the text of the first word of the extent the text of the first word after the head whether there is a longer mention preceding this mention and having the same head text whether any preceding mention has the same extent text and whether any preceding mention has the same text from beginning of the extent to end of the headconjunctions of all pairs of these features are also usedthis classifier predicts anaphoricity with about 82 accuracywe allow our system to learn which pairs of nouns tend to be used to mention the same entityfor example president and he often refer to bush but she and prime minister rarely do if everto enable the system to learn such patterns we treat the presence or absence of each pair of final head nouns one from each mention of an example as a featurewe predict the entity type as follows if a proper name we check a list of personal first names and a short list of honorary titles to determine if the mention is a personotherwise we look in lists of personal last names drawn from us census data and in lists of cities states countries organizations corporations sports teams universities political parties and organization endings if found in exactly one list we return the appropriate typewe return unknown if found in multiple lists because the lists are quite comprehensive and may have significant overlapfor common nouns we look at the hypernym tree for one of the following person political unit location organization weapon vehicle industrial plant and facilityif any is found we return the appropriate typeif multiple are found we sort as in the above listfor personal pronouns we recognize the entity as a person otherwise 
we specify unknownthis computation is used as part of the following two featuresentity type match this feature checks to see whether the predicted entity types matchthe result is true if the types are identical false if they are different and unknown if at least one type is unknownentity type conjunctions this feature indicates the presence of the pair of predicted entity types for the two mentions except that if either word is a pronoun the word token replaces the type in the pairsince we do this replacement for entity types we also add a similar feature for mention types herethese features are boolean for any given pair a feature is active if that pair describes the examplemany of our features are similar to those described in culotta et al this includes mention types string relation features gender and number match wordnet features alias apposition relative pronoun and both mentions speakthe implementations of those features may vary from those of other systemsanaphoricity has been proposed as a part of the model in several systems including ng and cardie but we are not aware of it being used as a feature for a learning algorithmdistances have been used in egluo et al however we are not aware of any system using the number of compatible mentions as a distancewe use the official ace 2004 english training data much work has been done on coreference in several languages but for this work we focus on english textwe split the corpus into three sets train dev and testour test set contains the same 107 documents as culotta et al our training set is a random 80 of the 336 documents in their training set and our dev set is the remaining 20for our ablation study we further randomly split our development set into two evenly sized parts devtune and devevalfor each experiment we set the parameters of our algorithm to optimize bcubed fscore using devtune and use those parameters to evaluate on the deveval datafor the experiments in section 5 following culotta et al to make experiments more comparable across systems we assume that perfect mention boundaries and mention type labels are givenwe do not use any other gold annotated input at evaluation timein section 6 experiments we do not use any gold annotated input and do not assume mention types or boundaries are givenin all experiments we automatically split words and sentences using our preprocessing tools4 bcubed fscore we evaluate over the commonly used bcubed fscore which is a measure of the overlap of predicted clusters and true clustersit is computed as the harmonic mean of precision where cm is the number of mentions appearing both in ms predicted cluster and in ms true cluster pm is the size of the predicted cluster containing m and tm is the size of ms true clusterfinally d represents a document from the set d and n is the total number of mentions in d bcubed fscore has the advantage of being able to measure the impact of singleton entities and of giving more weight to the splitting or merging of larger entitiesit also gives equal weight to all types of entities and mentionsfor these reasons we report our results using bcubed fscoremuc fscore we also provide results using the official muc scoring algorithm the muc fscore is also the harmonic mean of precision and recallhowever the muc precision counts precision errors by computing the minimum number of links that must be added to ensure that all mentions referring to a given entity are connected in the graphrecall errors are the number of links that must be removed to ensure that no two 
mentions referring to different entities are connected in the graphwe train a regularized average perceptron using examples selected as described in section 221the learning rate is 01 and the regularization parameter is 35at training time we use a threshold of 00 but when evaluating we select parameters to optimize bcubed fscore on a heldout development setwe sample all even integer thresholds from 16 to 8we choose the number of rounds of training similarly allowing any number from one to twentyin table 4 we compare our performance against a system that is comparable to ours both use gold mention boundaries and types evaluate using bcubed fscore and have the same training and test data splitculotta et al is the best comparable system of which we are awareour results show that a pairwise model with strong features outperforms a stateoftheart system with a more complex modelmuc score we evaluate the performance of our system using the official muc score in table 5in table 6 we show the relative impact of various featureswe report data on deveval to avoid the possibility of overfitting by feature selectionthe parameters of the algorithm are chosen to maximize the bcubed fscore on the devtune datanote that since we report results on deveval the results in table 6 are not directly comparable with culotta et al for comparable results see table 4 and the discussion aboveour ablation study shows the impact of various classes of features indicating that almost all the features help although some more than othersit also illustrates that some features contribute more to precision others more to recallfor example aligned modifiers contribute primarily to precision whereas our learned features and our apposition features contribute to recallthis information can be useful when designing a coreference system in an application where recall is more important than precision or vice versawe examine the effect of some important features selecting those that provide a substantial improvement in precision recall or bothfor each such feature we examine the rate of coreference amongst mention pairs for which the feature is active compared with the overall rate of coreferencewe also show examples on which the coreference systems differ depending on the presence or absence of a featureapposition this feature checks whether two mentions are separated by only a comma and it increases bcubed fscore by about one percentage pointwe hypothesize that proper names and common noun phrases link primarily through apposition and that apposition is thus a significant feature for good coreference resolutionwhen this feature is active 36 of the examples are coreferential whereas only 6 of all examples are coreferentiallooking at some examples our system begins to get right when apposition is added we find the phrase israels deputy defense minister ephraim snehupon adding apposition our system begins to correctly associate israels deputy defense minister with ephraim snehlikewise in the phrase the court president ronald sutherland the system correctly associates the court president with ronald sutherland when they appear in an appositive relation in the textin addition our system previous line at the p 005 level according to a paired nonparametric bootstrapping percentile testthese results show the importance of distance entity type and apposition features begins correctly associating relative pronouns such as who with their referents in phrases like sheikh abbad who died 500 years ago although an explicit relative pronoun feature 
is added only lateralthough this feature may lead the system to link comma separated lists of entities due to misinterpretation of the comma for example wyoming and western south dakota in a list of locations we believe this can be avoided by refining the apposition feature to ignore listsrelative pronoun next we investigate the relative pronoun featurewith this feature active 93 of examples were positive indicating the precision of this featurelooking to examples we find who in the official who wished to remain anonymous is properly linked as is that in nuclear warheads that can befitted to missilesdistances our distance features measure separation of two mentions in number of compatible mentions and whether the mentions are in the same sentencedistance features are important for a system that makes links based on the best pairwise coreference value rather than implicitly incorporating distance by linking only the closest pair whose score is above a threshold as done by egsoon et al looking at examples we find that adding distances allows the system to associate the pronoun it with this missile not separated by any mentions rather than tehran which is separated from it by many mentionspredicted entity types since no two mentions can have different entity types and be coreferential this feature has strong discriminative powerwhen the entity types match 13 of examples are positive compared to only 6 of examples in generalqualitatively the entity type prediction correctly recognizes the gulf region as a geopolitical entity and he as a person and thus prevents linking the twolikewise the system discerns baghdad from ambassador due to the entity typehowever in some cases an identity type match can cause the system to be overly confident in a bad match as in the case of a palestinian state identified with holy jerusalem on the basis of proximity and shared entity typethis type of example may require some additional world knowledge or deeper comprehension of the documentthe ultimate goal for a coreference system is to process unannotated textwe use the term endtoend coreference for a system capable of determining coreference on plain textwe describe the challenges associated with an endtoend system describe our approach and report results belowdeveloping an endtoend system requires detecting and classifying mentions which may degrade coreference resultsone challenge in detecting mentions is that they are often heavily nestedadditionally there are issues with evaluating an endtoend system against a gold standard corpus resulting from the possibility of mismatches in mention boundaries missing mentions and additional mentions detected along with the need to align detected mentions to their counterparts in the annotated datawe resolve coreference on unannotated text as follows first we detect mention heads following a state of the art chunking approach using standard featuresthis results in a 90 f1 head detectornext we detect the extent boundaries for each head using a learned classifierthis is followed by determining whether a mention is a proper name common noun phrase prenominal modifier or pronoun using a learned mention type classifier thatfinally we apply our coreference algorithm described aboveto evaluate we align the heads of the detected mentions to the gold standard heads greedily based on number of overlapping wordswe choose not to impute errors to the coreference system for mentions that were not detected or for spuriously detected mentions and othersalthough this evaluation is lenient 
given that the mention detection component performs at over 90 f1 we believe it provides a realistic measure for the performance of the endtoend system and focuses the evaluation on the coreference componentthe results of our endtoend coreference system are shown in table 7we described and evaluated a stateoftheart coreference system based on a pairwise model and strong featureswhile previous work showed the impact of complex models on a weak pairwise baseline the applicability and impact of such models on a strong baseline system such as ours remains uncertainwe also studied and demonstrated the relative value of various types of features showing in particular the importance of distance and apposition features and showing which features impact precision or recall morefinally we showed an endtoend system capable of determining coreference in a plain text documentwe would like to thank mingwei chang michael connor alexandre klementiev nick rizzolo kevin small and the anonymous reviewers for their insightful commentsthis work is partly supported by nsf grant sodhcer0613885 and a grant from boeing
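A minimal sketch of the best-link decision model described in this record: each mention links to its highest-scoring preceding candidate when that score clears a threshold, non-pronouns never take pronoun antecedents, and entities are read off as connected components of the resulting graph. The pairwise scorer pc and the mention representation are placeholders, not the paper's learned classifier.

```python
# Best-link clustering sketch: add one edge per mention (to its best-scoring
# antecedent above `threshold`) and take connected components via union-find.
def best_link_clusters(mentions, pc, threshold=0.0):
    """mentions: list of dicts with an 'is_pronoun' flag, in document order.
    pc(a, m): probability that antecedent a and mention m corefer."""
    parent = list(range(len(mentions)))

    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for m_idx, m in enumerate(mentions):
        # non-pronouns may not refer back to pronouns
        candidates = [a for a in range(m_idx)
                      if m["is_pronoun"] or not mentions[a]["is_pronoun"]]
        if not candidates:
            continue
        best = max(candidates, key=lambda a: pc(mentions[a], m))
        if pc(mentions[best], m) > threshold:
            parent[find(best)] = find(m_idx)  # add edge (best, m) to the graph

    clusters = {}
    for i in range(len(mentions)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```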
D08-1031
understanding the value of features for coreference resolution. in recent years there has been substantial work on the important problem of coreference resolution most of which has concentrated on the development of new models and algorithmic techniques. these works often show that complex models improve over a weak pairwise baseline. however less attention has been given to the importance of selecting strong features to support learning a coreference model. this paper describes a rather simple pairwise classification model for coreference resolution developed with a welldesigned set of features. we show that this produces a stateoftheart system that outperforms systems built with complex models. we suggest that our system can be used as a baseline for the development of more complex models which may have less impact when a more robust set of features is used. the paper also presents an ablation study and discusses the relative contributions of various features. our algorithm runs in time quadratic in the number of mentions
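The bcubed metric used for evaluation in the record above can be computed as in the sketch below, where both clusterings are given as lists of sets over the same mention identifiers and every mention is weighted equally, matching the description of c_m, p_m and t_m; this is illustrative code, not the official scorer.

```python
# B-cubed precision/recall/F1 sketch: for each mention m, c_m is the overlap of
# its predicted and true clusters, p_m and t_m their sizes; scores are averaged
# uniformly over mentions and combined by the harmonic mean.
def b_cubed(predicted, gold):
    pred_of = {m: cluster for cluster in predicted for m in cluster}
    gold_of = {m: cluster for cluster in gold for m in cluster}
    mentions = list(gold_of)
    precision = recall = 0.0
    for m in mentions:
        c_m = len(pred_of[m] & gold_of[m])
        precision += c_m / len(pred_of[m])  # c_m / p_m
        recall += c_m / len(gold_of[m])     # c_m / t_m
    precision /= len(mentions)
    recall /= len(mentions)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: a system that wrongly merges three mentions into one entity.
# b_cubed([{0, 1, 2}], [{0, 1}, {2}])  -> high recall, reduced precision
```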
bayesian unsupervised topic segmentation this paper describes a novel bayesian approach to unsupervised topic segmentation unsupervised systems for this task are driven the tendency of wellformed segments to induce a compact and consistent lexical distribution we show that lexical cohesion can be placed in a bayesian context by modeling the words in each topic segment as draws from a multinomial language model associated with the segment maximizing the observation likelihood in such a model yields a lexicallycohesive segmentation this contrasts with previous approaches which relied on handcrafted cohesion metrics the bayesian framework provides a principled way to incorporate additional features such as cue phrases a powerful indicator of discourse structure that has not been previously used in unsupervised segmentation systems our model yields consistent improvements over an array of stateoftheart systems on both text and speech datasets we also show that both an entropybased analysis and a wellknown previous technique can be de topic segmentation is one of the fundamental problems in discourse analysis where the task is to divide a text into a linear sequence of topicallycoherent segmentshearsts texttiling introduced the idea that unsupervised segmentation can be driven by lexical cohesion as highquality segmentations feature homogeneous lexical distributions within each topic segmentlexical cohesion has provided the inspiration for several successful systems and is currently the dominant approach to unsupervised topic segmentationbut despite the effectiveness of lexical cohesion for unsupervised topic segmentation it is clear that there are other important indicators that are ignored by the current generation of unsupervised systemsfor example consider cue phrases which are explicit discourse markers such as now or however cue phrases have been shown to be a useful feature for supervised topic segmentation but cannot be incorporated by current unsupervised modelsone reason for this is that existing unsupervised methods use arbitrary handcrafted metrics for quantifying lexical cohesion such as weighted cosine similarity without supervision it is not possible to combine such metrics with additional sources of informationmoreover such handcrafted metrics may not generalize well across multiple datasets and often include parameters which must be tuned on development sets in this paper we situate lexical cohesion in a bayesian framework allowing other sources of information to be incorporated without the need for labeled datawe formalize lexical cohesion in a generative model in which the text for each segment is produced by a distinct lexical distributionlexicallyconsistent segments are favored by this model because probability mass is conserved for a narrow subset of wordsthus lexical cohesion arises naturally through the generative process and other sources of information such as cue words can easily be incorporated as emissions from the segment boundariesmore formally we treat the words in each sentence as draws from a language model associated with the topic segmentthis is related to topicmodeling methods such as latent dirichlet allocation but here the induced topics are tied to a linear discourse structurethis property enables a dynamic programming solution to find the exact maximumlikelihood segmentationwe consider two approaches to handling the language models estimating them explicitly and integrating them out using the dirichlet compound multinomial distribution we model cue 
phrases as generated from a separate multinomial that is shared across all topics and documents in the dataset a highlikelihood model will obtain a compact set of cue phrasesthe addition of cue phrases renders our dynamic programmingbased inference inapplicable so we design a samplingbased inference techniquethis algorithm can learn in a completely unsupervised fashion but it also provides a principled mechanism to improve search through the addition of declarative linguistic knowledgethis is achieved by biasing the selection of samples towards boundaries with known cue phrases this does not change the underlying probabilistic model but guides search in the direction of linguisticallyplausible segmentationswe evaluate our algorithm on corpora of spoken and written language including the benchmark icsi meeting dataset and a new textual corpus constructed from the contents of a medical textbookin both cases our model achieves performance surpassing multiple stateoftheart baselinesmoreover we demonstrate that the addition of cue phrases can further improve segmentation performance over cohesionbased methodsin addition to the practical advantages demonstrated by these experimental results our model reveals interesting theoretical propertiesother researchers have observed relationships between discourse structure and entropy we show that in a special case of our model the segmentation objective is equal to a weighted sum of the negative entropies for each topic segmentthis finding demonstrates that a relationship between discourse segmentation and entropy is a natural consequence of modeling topic structure in a generative bayesian frameworkin addition we show that the benchmark segmentation system of utiyama and isahara can be viewed as another special case of our bayesian modelexisting unsupervised cohesionbased approaches can be characterized in terms of the metric used to quantify cohesion and the search techniquegalley et al characterize cohesion in terms of lexical chains repetitions of a given lexical item over some fixedlength window of sentencesin their unsupervised model inference is performed by selecting segmentation points at the local maxima of the cohesion functionmalioutov and barzilay optimize a normalized minimumcut criteria based on a variation of the cosine similarity between sentencesmost similar to our work is the approach of utiyama and isahara who search for segmentations with compact language models as shown in section 311 this can be viewed as a special case of our modelboth of these last two systems use dynamic programming to search the space of segmentationsan alternative bayesian approach to segmentation was proposed by purver et al they assume a set of documents that is characterized by some number of hidden topics that are shared across multiple documentsthey then build a linear segmentation by adding a switching variable to indicate whether the topic distribution for each sentence is identical to that of its predecessorunlike purver et al we do not assume a dataset in which topics are shared across multiple documents indeed our model can be applied to single documents individuallyadditionally the inference procedure of purver et al requires sampling multiple layers of hidden variablesin contrast our inference procedure leverages the nature of linear segmentation to search only in the space of segmentation pointsthe relationship between discourse structure and cue phrases has been studied extensively for an early example of computational work on this topic see 
passonneau and litman were the first to investigate the relationship between cue phrases and linear segmentationmore recently cue phrases have been applied to topic segmentation in the supervised settingin a supervised system that is distinct from the unsupervised model described above galley et al automatically identify candidate cue phrases by mining labeled data for words that are especially likely to appear at segment boundaries the presence of cue phrases is then used as a feature in a rulebased classifier for linear topic segmentationelsner and charniak specify a list of cue phrases by hand the cue phrases are used as a feature in a maximumentropy classifier for conversation disentanglementunlike these approaches we identify candidate cue phrases automatically from unlabeled data and incorporate them in the topic segmentation task without supervisionthe core idea of lexical cohesion is that topicallycoherent segments demonstrate compact and consistent lexical distributions lexical cohesion can be placed in a probabilistic context by modeling the words in each topic segment as draws from a multinomial language model associated with the segmentformally if sentence t is in segment j then the bag of words xt is drawn from the multinomial language model θjthis is similar in spirit to hidden topic models such as latent dirichlet allocation but rather than assigning a hidden topic to each word we constrain the topics to yield a linear segmentation of the documentwe will assume that topic breaks occur at sentence boundaries and write zt to indicate the topic assignment for sentence t the observation likelihood is where x is the set of all t sentences z is the vector of segment assignments for each sentence and θ is the set of all k language models2 a linear segmentation is ensured by the additional constraint that zt must be equal to either zt1 or zt1 1 to obtain a high likelihood the language models associated with each segment should concentrate their probability mass on a compact subset of wordslanguage models that spread their probability mass over a broad set of words will induce a lower likelihoodthis is consistent with the principle of lexical cohesionthus far we have described a segmentation in terms of two parameters the segment indices z and the set of language models θfor the task of segmenting documents we are interested only in the segment indices and would prefer not to have to search in the space of language models as wellwe consider two alternatives taking point estimates of the language models and analytically marginalizing them out one way to handle the language models is to choose a single point estimate for each set of segmentation points zsuppose that each language model is drawn from a symmetric dirichlet prior θj dirlet nj be a vector in which each element is the sum of the lexical counts over all the sentences in segment j nji etztj mti where mti is the count of word i in sentence t assuming that each xt θj then the posterior distribution for θj is dirichlet with vector parameter nj θ0 the expected value of this distribution is the multinomial distribution ˆθj where in this equation w indicates the number of words in the vocabularyhaving obtained an estimate for the language model ˆθj the observed data likelihood for segment j is a product over each sentence in the segment 2our experiments will assume that the number of topics k is knownthis is common practice for this task as the desired number of segments may be determined by the user by viewing the likelihood as a 
product over all terms in the vocabulary we observe interesting connections with prior work on segmentation and information theoryin this section we explain how our model generalizes the wellknown method of utiyama and isahara as in our work utiyama and isahara propose a probabilistic framework based on maximizing the compactness of the language models induced for each segmenttheir likelihood equation is identical to our equations 35they then define the language models for each segment as bji njiw1 without rigorous justifiwei nji cationthis form is equivalent to laplacian smoothing and is a special case of our equation 2 with b0 1thus the language models in youi can be viewed as the expectation of the posterior distribution p in the special case that b0 1our approach generalizes youi and provides a bayesian justification for the language models that they applythe remainder of the paper further extends this work by marginalizing out the language model and by adding cue phraseswe empirically demonstrate that these extensions substantially improve performanceour model also has a connection to entropy and situates entropybased segmentation within a bayesian frameworkequation 1 defines the objective function as a product across sentences using equations 35 we can decompose this across segments insteadworking in logarithms the last line substitutes in the logarithm of equation 5setting b0 0 and rearranging equation 2 we obtain nji njbji with nj pw i nji the total number of words in segment jsubstituting this into equation 6 we obtain where h is the negative entropy of the multinomial bjthus with b0 0 the log conditional probability in equation 6 is optimized by a segmentation that minimizes the weighted sum of entropies per segment where the weights are equal to the segment lengthsthis result suggests intriguing connections with prior work on the relationship between entropy and discourse structure the previous subsection uses point estimates of the language models to reveal connections to entropy and prior work on segmentationhowever point estimates are theoretically unsatisfying from a bayesian perspective and better performance may be obtained by marginalizing over all possible laneach segment so the overall likelihood for the pointestimate version also decomposes across segmentsany objective function that can be decomposed into a product across segments can be maximized using dynamic programmingwe define b as the value of the objective function for the optimal segmentation up to sentence t the contribution to the objective function from a single segment between sentences t and t is written b p where pdcm refers to the dirichlet compound multinomial distribution also known as the multivariate polya distribution the dcm distribution expresses the expectation over all multinomial language models when conditioning on the dirichlet prior θ0when θ0 is a symmetric dirichlet prior where nji is the count of word i in segment j and nj pwi nji the total number of words in the segmentthe symbol f refers to the gamma function an extension of the factorial function to real numbersusing the dcm distribution we can compute the data likelihood for each segment from the lexical counts over the entire segmentthe overall observation likelihood is a product across the likelihoods for each segmentthe optimal segmentation maximizes the joint probability p ppwe assume that p is a uniform distribution over valid segmentations and assigns no probability mass to invalid segmentationsthe data likelihood is defined for 
point estimate language models in equation 5 and for marginalized language models in equation 7note that equation 7 is written as a product over segmentsthe point estimates for the language models depend only on the counts within the maximum value of the objective function is then given by the recurrence relation b maxt 0 equal the distance that the selected segmentation point is moved when we transform the segmentation from z to zwe can write the proposal distribution q a cda where a 0 sets the rate of exponential decay and c is an indicator function enforcing the constraint that the moves do not reach or cross existing segmentation points6 we can also incorporate declarative linguistic knowledge by biasing the proposal distribution in favor of moves that place boundaries near known cue phrase markerswe multiply the unnormalized chance of proposing a move to location z z by a term equal to one plus the number of candidate cue phrases in the segmentinitial sentences in the new configuration z written numcueformally qling a qwe use a list of cue phrases identified by hirschberg and litman we evaluate our model with both the basic and linguisticallyenhanced proposal distributionsas in section 34 we set the priors 00 and 00 using gradientbased searchin this case we perform gradientbased optimization after epochs of 1000 maxmove where maxmove is the maximum movelength set to 5 in our experimentsthese parameters affect the rate of convergence but are unrelated to the underlying probability modelin the limit of enough samples all nonpathological settings will yield the same segmentation resultsmetropolishasting stepsinterleaving samplingbased inference with direct optimization of parameters can be considered a form of monte carlo expectationmaximization corpora we evaluate our approach on corpora from two different domains transcribed meetings and written textfor multispeaker meetings we use the icsi corpus of meeting transcripts which is becoming a standard for speech segmentation this dataset includes transcripts of 75 multiparty meetings of which 25 are annotated for segment boundariesfor text we introduce a dataset in which each document is a chapter selected from a medical textbook 7 the task is to divide each chapter into the sections indicated by the authorthis dataset contains 227 chapters with 1136 sections each chapter contains an average of 140 sentences giving an average of 28 sentences per segmentmetrics all experiments are evaluated in terms of the commonlyused pk and windowdiff scoresboth metrics pass a window through the document and assess whether the sentences on the edges of the window are properly segmented with respect to each otherwindowdiff is stricter in that it requires that the number of intervening segments between the two sentences be identical in the hypothesized and the reference segmentations while pk only asks whether the two sentences are in the same segment or notpk and windowdiff are penalties so lower values indicate better segmentationswe use the evaluation source code provided by malioutov and barzilay system configuration we evaluate our bayesian approach both with and without cue phraseswithout cue phrases we use the dynamic programming inference described in section 33this system is referred to as bayesseg in table 1when adding cue phrases we use the metropolishastings model described in 41both basic and linguisticallymotivated proposal distributions are evaluated these are referred to as bayessegcue and bayessegcueprop in the tablefor the samplingbased 
systems results are averaged over five runsthe initial configuration is obtained from the dynamic programming inference and then 100000 sampling iterations are performedthe final segmentation is obtained by annealing the last 25000 iterations to a temperature of zerothe use of annealing to obtain a maximum a posteriori configuration from samplingbased inference is common the total running time of our system is on the order of three minutes per documentdue to memory constraints we divide the textbook dataset into ten parts and perform inference in each part separatelywe may achieve better results by performing inference over the entire dataset simultaneously due to pooling counts for cue phrases across all documentsbaselines we compare against three competitive alternative systems from the literature youi lcseg mcs all three systems are described in the related work in all cases we use the publicly available executables provided by the authorsparameter settings for lcseg we use the parameter values specified in the paper mcs requires parameter settings to be tuned on a development setour corpora do not include development sets so tuning was performed using the lecture transcript corpus described by malioutov and barzilay our system does not require parameter tuning priors are reestimated as described in sections 34 and 43youi requires no parameter tuning and is used out of the box in all experiments we assume that the number of desired segments is providedpreprocessing standard preprocessing techniques are applied to the text for all comparisonsthe porter stemming algorithm is applied to group equivalent lexical itemsa set of stopwords is also removed using the same list originally employed by several competitive systems table 1 presents the performance results for three instantiations of our bayesian framework and three competitive alternative systemsas shown in the table the bayesian models achieve the best results on both metrics for both corporaon the medical textbook corpus the bayesian systems achieve a raw performance gain of 23 with respect to all baselines on both metricson the icsi meeting corpus the bayesian systems perform 45 better than the best baseline on the pk metric and achieve smaller improvement on the windowdiff metricthe results on the meeting corpus also compare favorably with the topicmodeling method of purver et al who report a pk of 289 and a windowdiff of 329another observation from table 1 is that the contribution of cue phrases depends on the datasetcue phrases improve performance on the meeting corpus but not on the textbook corpusthe effectiveness of cue phrases as a feature depends on whether the writer or speaker uses them consistentlyat the same time the addition of cue phrases prevents the use of exact inference techniques which may explain the decline in results for the meetings datasetto investigate the quality of the cue phrases that our model extracts we list its top ten cue phrases for each dataset in table 2cue phrases are ranked by their chisquared value which is computed based on the number of occurrences for each word at the beginning of a hypothesized segment as compared to the expectationfor cue phrases listed in bold the chisquared value is statistically significant at the level of p 01 indicating that the frequency with which the cue phrase appears at the beginning of segments is unlikely to be a chance phenomenonas shown in the left column of the table our model has identified several strong cue phrases from the meeting dataset which appear 
to be linguistically plausiblegalley et al performed a similar chisquared analysis but used the true segment boundaries in the labeled data this can be thought of as a sort of ground truthfour of the ten cue phrases identified by our system overlap with their analysis these are indicated with asterisksin contrast to our models success at extracting cue phrases from the meeting dataset only very common words are selected for the textbook datasetthis may help to explain why cue phrases improve performance for meeting transcripts but not for the textbookthis paper presents a novel bayesian approach to unsupervised topic segmentationour algorithm is capable of incorporating both lexical cohesion and cue phrase features in a principled manner and outperforms stateoftheart baselines on text and transcribed speech corporawe have developed exact and samplingbased inference techniques both of which search only over the space of segmentations and marginalize out the associated language modelsfinally we have shown that our model provides a theoretical framework with connections to information theory while also generalizing and justifying prior workin the future we hope to explore the use of similar bayesian techniques for hierarchical segmentation and to incorporate additional features such as prosody and speaker change informationthe authors acknowledge the support of the national science foundation and the microsoft research faculty fellowshipthanks to aaron adler s r k branavan harr chen michael collins randall davis dan roy david sontag and the anonymous reviewers for helpful comments and suggestionswe also thank michel galley igor malioutov and masao utiyama for making their topic segmentation code publically availableany opinions findings and conclusions or recommendations expressed above are those of the authors and do not necessarily reflect the views of the nsf
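The segmentation model described above scores each segment's bag of words with the Dirichlet compound multinomial (the per-segment multinomial language model integrated out under a symmetric Dirichlet prior) and finds the maximum-likelihood linear segmentation by dynamic programming. Below is a minimal, hedged sketch of that idea; the function names, the fixed prior value, and the toy example are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def dcm_log_likelihood(counts, vocab_size, theta0=0.1):
    """log P(segment) with the segment's multinomial language model
    integrated out under a symmetric Dirichlet(theta0) prior."""
    n = sum(counts.values())
    ll = math.lgamma(vocab_size * theta0) - math.lgamma(vocab_size * theta0 + n)
    for c in counts.values():
        ll += math.lgamma(c + theta0) - math.lgamma(theta0)
    return ll

def segment(sentences, num_segments, theta0=0.1):
    """Exact maximum-likelihood linear segmentation by dynamic programming.
    sentences is a list of token lists; returns the end index of each segment."""
    vocab = {w for sent in sentences for w in sent}
    T, V = len(sentences), len(vocab)

    # seg_ll[i][j] = log-likelihood of one segment spanning sentences i .. j-1
    seg_ll = [[0.0] * (T + 1) for _ in range(T)]
    for i in range(T):
        counts = Counter()
        for j in range(i, T):
            counts.update(sentences[j])
            seg_ll[i][j + 1] = dcm_log_likelihood(counts, V, theta0)

    NEG = float("-inf")
    # best[k][t] = best score covering the first t sentences with k segments
    best = [[NEG] * (T + 1) for _ in range(num_segments + 1)]
    back = [[0] * (T + 1) for _ in range(num_segments + 1)]
    best[0][0] = 0.0
    for k in range(1, num_segments + 1):
        for t in range(k, T + 1):
            for t0 in range(k - 1, t):
                score = best[k - 1][t0] + seg_ll[t0][t]
                if score > best[k][t]:
                    best[k][t], back[k][t] = score, t0

    # recover the segment end indices by backtracing
    bounds, t = [], T
    for k in range(num_segments, 0, -1):
        bounds.append(t)
        t = back[k][t]
    return list(reversed(bounds))

if __name__ == "__main__":
    sents = [["the", "cell", "membrane"], ["cell", "walls", "and", "membranes"],
             ["stock", "markets", "fell"], ["markets", "rallied", "today"]]
    print(segment(sents, 2))  # lexical cohesion favours a boundary after sentence 2: [2, 4]
```

Because the objective decomposes into a product (here, a sum of logs) over segments, this cubic-time dynamic program is exact, which is the property the paper relies on for its lattice of segmentations.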
D08-1035
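The experiments in the paper above are reported with the Pk and WindowDiff penalties. The sketch below shows Pk as it is commonly defined: a window of width k slides over the text and the score is the fraction of positions where reference and hypothesis disagree about whether the two window endpoints lie in the same segment (WindowDiff additionally compares the number of intervening boundaries). The label-based representation and the default choice of k are conventions assumed here, not details from the paper.

```python
# Hedged sketch of the Pk segmentation penalty (lower is better).
# Segmentations are lists of segment labels, one per sentence; k defaults
# to half the mean reference segment length, the usual convention.
def pk(reference, hypothesis, k=None):
    assert len(reference) == len(hypothesis)
    n = len(reference)
    if k is None:
        num_ref_segments = len(set(reference))
        k = max(2, round(n / num_ref_segments / 2))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]      # same segment in reference?
        same_hyp = hypothesis[i] == hypothesis[i + k]    # same segment in hypothesis?
        if same_ref != same_hyp:
            errors += 1
    return errors / (n - k)

if __name__ == "__main__":
    ref = [0, 0, 0, 1, 1, 1, 2, 2, 2]
    hyp = [0, 0, 0, 0, 1, 1, 2, 2, 2]   # one boundary misplaced by one sentence
    print(round(pk(ref, hyp), 3))
```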
bayesian unsupervised topic segmentationthis paper describes a novel bayesian approach to unsupervised topic segmentationunsupervised systems for this task are driven by lexical cohesion the tendency of wellformed segments to induce a compact and consistent lexical distributionwe show that lexical cohesion can be placed in a bayesian context by modeling the words in each topic segment as draws from a multinomial language model associated with the segment maximizing the observation likelihood in such a model yields a lexicallycohesive segmentationthis contrasts with previous approaches which relied on handcrafted cohesion metricsthe bayesian framework provides a principled way to incorporate additional features such as cue phrases a powerful indicator of discourse structure that has not been previously used in unsupervised segmentation systemsour model yields consistent improvements over an array of stateoftheart systems on both text and speech datasetswe also show that both an entropybased analysis and a wellknown previous technique can be derived as special cases of the bayesian frameworkwe present a dynamic program for linear segmentationif the actual number of segments is known and only a linear discourse structure is acceptable then a single move shift of the segment border is sufficientwe find the richer model beneficial for a meetings corpus but not for a textbook
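The cue-phrase analysis in the paper above ranks words by a chi-squared statistic comparing how often they occur in segment-initial sentences with what their overall frequency would predict. A hedged sketch of one way to compute such a ranking follows; the exact bookkeeping in the paper may differ, and all names are illustrative.

```python
from collections import Counter

def chi_squared_cues(sentences, segment_initial_flags):
    """Rank words by a 2x2 chi-squared score for appearing in
    segment-initial sentences more often than chance would predict.
    sentences: list of token lists; segment_initial_flags: parallel list of bools."""
    initial_counts, other_counts = Counter(), Counter()
    for tokens, is_initial in zip(sentences, segment_initial_flags):
        (initial_counts if is_initial else other_counts).update(tokens)
    n_initial = sum(initial_counts.values())
    n_other = sum(other_counts.values())
    total = n_initial + n_other

    scores = {}
    for w in set(initial_counts) | set(other_counts):
        a = initial_counts[w]          # w in segment-initial sentences
        b = other_counts[w]            # w in all other sentences
        c = n_initial - a              # other words, segment-initial
        d = n_other - b                # other words, elsewhere
        denom = (a + b) * (c + d) * (a + c) * (b + d)
        if denom == 0:
            continue
        # standard 2x2 chi-squared statistic
        scores[w] = total * (a * d - b * c) ** 2 / denom
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Running this over hypothesized segmentations (as the paper does) rather than gold boundaries is what makes the analysis fully unsupervised.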
a comparison of bayesian estimators for unsupervised hidden markov model pos taggers there is growing interest in applying bayesian techniques to nlp problems there are a number of different estimators for bayesian models and it is useful to know what kinds of tasks each does well on this paper compares a variety of different bayesian estimators for hidden markov model pos taggers with various numbers of hidden states on data sets of different sizes recent papers have given contradictory results when comparing bayesian estimators to expectation maximization for unsupervised hmm pos tagging and we show that the difference in reported results is largely due to differences in the size of the training data and the number of states in the hmm we invesigate a variety of samplers for hmms including some that these earlier papers did not study we find that all of gibbs samplers do well with small data sets and few states and that variational bayes does well on large data sets and is competitive with the gibbs samplers in terms of times of convergence we find that variational bayes was the fastest of all the estimators especially on large data sets and that explicit gibbs sampler were generally faster than their collapsed counterparts on large data sets probabilistic models now play a central role in computational linguisticsthese models define a probability distribution p over structures or analyses xfor example in the partofspeech tagging application described in this paper which involves predicting the partofspeech tag ti of each word wi in the sentence w the structure x consists of the words w in a sentence together with their corresponding partsofspeech t in general the probabilistic models used in computational linguistics have adjustable parameters 0 which determine the distribution pin this paper we focus on bitag hidden markov models since our goal here is to compare algorithms rather than achieve the best performance we keep the models simple by ignoring morphology and capitalization and treat each word as an atomic entitythis means that the model parameters 0 consist of the hmm statetostate transition probabilities and the statetoword emission probabilitiesin virtually all statistical approaches the parameters 0 are chosen or estimated on the basis of training data d this paper studies unsupervised estimation so d w consists of a sequence of words wi containing all of the words of training corpus appended into a single string as explained belowmaximum likelihood is the most common estimation method in computational linguisticsa maximum likelihood estimator sets the parameters to the value 0 that makes the likelihood ld of the data d as large as possible in this paper we use the insideoutside algorithm which is a specialized form of expectationmaximization to find hmm parameters which maximize the likelihood function ldrecently there is increasing interest in bayesian methods in computational linguistics and the primary goal of this paper is to compare the performance of various bayesian estimators with each other and with thema bayesian approach uses bayes theorem to factorize the posterior distribution p into the priors can be useful because they can express preferences for certain types of modelsto take an example from our postagging application most words belong to relatively few partsofspeech one might express this using a prior which prefers hmms in which the statetoword emissions are sparse ie each state emits few wordsan appropriate dirichlet prior can express this preferencewhile 
it is possible to use bayesian inference to find a single model such as the maximum a posteriori or map value of 0 which maximizes the posterior p this is not necessarily the best approach instead rather than commiting to a single value for the parameters 0 many bayesians often prefer to work with the full posterior distribution p as this naturally reflects the uncertainty in 0s valuein all but the simplest models there is no known closed form for the posterior distributionhowever the bayesian literature describes a number of methods for approximating the posterior pmonte carlo sampling methods and variational bayes are two kinds of approximate inference methods that have been applied to bayesian inference of unsupervised hmm pos taggers these methods can also be used to approximate other distributions that are important to us such as the conditional distribution p of pos tags t given words w this recent literature reports contradictory results about these bayesian inference methodsjohnson compared two bayesian inference algorithms variational bayes and what we call here a pointwise collapsed gibbs sampler and found that variational bayes produced the best solution and that the gibbs sampler was extremely slow to converge and produced a worse solution than themon the other hand goldwater and griffiths reported that the same kind of gibbs sampler produced much better results than them on their unsupervised pos tagging taskone of the primary motivations for this paper was to understand and resolve the difference in these resultswe replicate the results of both papers and show that the difference in their results stems from differences in the sizes of the training data and numbers of states in their modelsit turns out that the gibbs sampler used in these earlier papers is not the only kind of sampler for hmmsthis paper compares the performance of four different kinds of gibbs samplers variational bayes and expectation maximization on unsupervised pos tagging problems of various sizesour goal here is to try to learn how the performance of these different estimators varies as we change the number of hidden states in the hmms and the size of the training datain theory the gibbs samplers produce streams of samples that eventually converge on the true posterior distribution while the variational bayes estimator only produces an approximation to the posteriorhowever as the size of the training data distribution increases the likelihood function and therefore the posterior distribution becomes increasingly peaked so one would expect this variational approximation to become increasingly accuratefurther the gibbs samplers used in this paper should exhibit reduced mobility as the size of training data increases so as the size of the training data increases eventually the variational bayes estimator should prove to be superiorhowever the two pointwise gibbs samplers investigated here which resample the label of each word conditioned on the labels of its neighbours only require o steps per sample while them vb and the sentenceblocked gibbs samplers require o steps per samplethus for hmms with many states it is possible to perform one or two orders of magnitude more iterations of the pointwise gibbs samplers in the same runtime as the other samplers so it is plausible that they would yield better resultsthere are a number of excellent textbook presentations of hidden markov models so we do not present them in detail hereconceptually a hidden markov model uses a markov model to generate the sequence of 
states t and then generates each word wi conditioned on the corresponding state tiwe insert endmarkers at the beginning and end of the corpus and between sentence boundaries and constrain the estimators to associate endmarkers with a special hmm state that never appears elsewhere in the corpus this means that we can formally treat the training corpus as one long string yet each sentence can be processed independently by a firstorder hmmin more detail the hmm is specified by a pair of multinomials θt and φt associated with each state t where θt specifies the distribution over states t0 following t and φt specifies the distribution over words w given state t the bayesian model we consider here puts a fixed uniform dirichlet prior on these multinomialsbecause dirichlets are conjugate to multinomials this greatly simplifies inferencea multinomial θ is distributed according to the dirichlet distribution dir iff in our experiments we set α and α0 to the uniform values but it is possible to estimate these as well informally α controls the sparsity of the statetostate transition probabilities while α0 controls the sparsity of the statetoword emission probabilitiesas α0 approaches zero the prior strongly prefers models in which each state emits as few words as possible capturing the intuition that most word types only belong to one pos mentioned earlierexpectationmaximization is a procedure that iteratively reestimates the model parameters converging on a local maximum of the likelihoodspecifically if the parameter estimate at iteration is φ then the reestimated parameters at itwhere n0wt is the number of times word w occurs with state t ntt is the number of times state t0 follows t and nt is the number of occurences of state t all expectations are taken with respect to the model φthe experiments below used the forwardbackward algorithm which is a dynamic programming algorithm for calculating the likelihood and the expectations in in o time where n is the number of words in the training corpus and m is the number of hmm statesvariational bayesian inference attempts to find a function q that minimizes an upper bound to the negative log likelihoodthe upper bound is called the variational free energywe make a meanfield assumption that the posterior can be well approximated by a factorized model q in which the state sequence t does not covary with the model parameters θ φ the calculus of variations is used to minimize the kl divergence between the desired posterior distribution and the factorized approximationit turns out that if the likelihood and conjugate prior belong to exponential families then the optimal q1 and q2 do too and there is an themlike iterative procedure that finds locallyoptimal model parameters this procedure is especially attractive for hmm inference since it involves only a minor modification to the mstep of the forwardbackward algorithmmackay and beal describe variational bayesian inference for hmmsin general the estep for vb inference for hmms is the same as in them while the mstep is as follows where m0 and m are the number of word types and states respectively if is the digamma function and the remaining quantities are as in this means that a single iteration can be performed in o time just as for the them algorithmthe goal of markov chain monte carlo algorithms is to produce a stream of samples from the posterior distribution pbesag provides a tutorial on mcmc techniques for hmm inferencea gibbs sampler is a simple kind of mcmc algorithm that is wellsuited to sampling 
highdimensional spacesa gibbs sampler for p where z proceeds by sampling and updating each zi in turn from p where zi ie all of the z except zi we evaluate four different gibbs samplers in this paper which vary along two dimensionsfirst the sampler can either be pointwise or blockeda pointwise sampler resamples a single state ti at each step while a blocked sampler resamples the labels for all of the words in a sentence at a single step using a dynamic programming algorithm based on the forwardbackward algorithma pointwise sampler requires o time per iteration while a blocked sampler requires o time per iteration where m is the number of hmm states and n is the length of the training corpussecond the sampler can either be explicit or collapsedan explicit sampler represents and samples the hmm parameters 0 and 0 in addition to the states t while in a collapsed sampler the hmm parameters are integrated out and only the states t are sampledthe difference between explicit and collapsed samplers corresponds exactly to the difference between the two pcfg sampling algorithms presented in johnson et al an iteration of the pointwise explicit gibbs sampler consists of resampling 0 and 0 given the statetostate transition counts n and statetoword emission counts n0 using and then resampling each state ti given the corresponding word wi and the neighboring states ti1 and ti1 using the dirichlet distributions in are nonuniform nt is the vector of statetostate transition counts in t leaving state t in the current state vector t while n0t is the vector of statetoword emission counts for state t see johnson et al for a more detailed explanation as well as an algorithm for sampling from the dirichlet distributions in the samplers that goldwater and griffiths and johnson describe are pointwise collapsed gibbs samplersfigure 1 gives the sampling distribution for this sampleras johnson et al explains samples of the hmm parameters 0 and 0 can be obtained using if requiredthe blocked gibbs samplers differ from the pointwise gibbs samplers in that they resample the pos tags for an entire sentence at a timebesag describes the wellknown dynamic programming algorithm for sampling a state sequence t given the words w and the transition and emission probabilities 0 and 0at each iteration the explicit blocked gibbs sampler resamples 0 and 0 using just as the explicit pointwise sampler doesthen it uses the new hmm parameters to resample the states t for the training corpus using the algorithm just mentionedthis can be done in parallel for each sentence in the training corpusthe collapsed blocked gibbs sampler is a straightforward application of the metropoliswithingibbs approach proposed by johnson et al for pcfgs so we only sketch it herewe iterate through the sentences of the training data resampling the states for each sentence conditioned on the statetostate transition counts n and statetoword emission counts n0 for the other sentences in the corpusthis is done by first computing the parameters 0 and 0 of a proposal hmm using scribed above to produce a proposal state sequence t for the words in the sentencefinally we use a metropolishastings acceptreject step to decide whether to update the current state sequence for the sentence with the proposal t or whether to keep the current state sequencein practice with all but the very smallest training corpora the acceptance rate is very high the acceptance rate for all of our collapsed blocked gibbs samplers was over 99the previous section described six different 
unsupervised estimators for hmmsin this section we compare their performance for english partofspeech taggingone of the difficulties in evaluating unsupervised taggers such as these is mapping the systems states to the goldstandard partsofspeechgoldwater and griffiths proposed an informationtheoretic measure known as the variation ofinformation described by meilˇa as an evaluation of an unsupervised tagginghowever as goldwater points out this may not be an ideal evaluation measure eg a tagger which assigns all words the same single partofspeech tag does disturbingly well under variation of information suggesting that a poor tagger may score well under viin order to avoid this problem we focus here on evaluation measures that construct an explicit mapping between the goldstandard partofspeech tags and the hmms statesperhaps the most straightforward approach is to map each hmm state to the partofspeech tag it cooccurs with most frequently and use this mapping to map each hmm state sequence t to a sequence of partofspeech tagsbut as clark observes this approach has several defectsif a system is permitted to posit an unbounded number of states it can achieve a perfect score on by assigning each word token its own unique statewe can partially address this by crossvalidationwe divide the corpus into two equal parts and from the first part we extract a mapping from hmm states to the partsofspeech they cooccur with most frequently and use that mapping to map the states of the second part of the corpus to partsofspeechwe call the accuracy of the resulting tagging the crossvalidation accuracyfinally following haghighi and klein and johnson we can instead insist that at most one hmm state can be mapped to any partofspeech tagfollowing these authors we used a greedy algorithm to associate states with pos tags the accuracy of the resulting tagging is called the greedy 1to1 then we use the dynamic programming sampler deaccuracythe studies presented by goldwater and griffiths and johnson differed in the number of states that they usedgoldwater and griffiths evaluated against the reduced tag set of 17 tags developed by smith and eisner while johnson evaluated against the full penn treebank tag setwe ran all our estimators in both conditions here also the studies differed in the size of the corpora usedthe largest corpus that goldwater and griffiths studied contained 96000 words while johnson used all of the 1173766 words in the full penn wsj treebankfor that reason we ran all our estimators on corpora containing 24000 words and 120000 words as well as the full treebankwe ran each estimator with the eight different combinations of values for the hyperparameters α and α listed below which include the optimal values for the hyperparameters found by johnson and report results for the best combination for each estimator below 1further we ran each setting of each estimator at least 10 times for at least 1000 iterations as johnson showed that some estimators require many iterations to convergethe results of our experiments are summarized in figures 251we found that on some data sets the results are sensitive to the values of the hyperparametersso there is a bit uncertainty in our comparison results because it is possible that the values we tried were good for one estimator and bad for othersunfortunately we do not know any efficient way of searching the optimal hyperparameters in a much wider and more finegrained spacewe leave it to future workas might be expected our evaluation measures disagree somewhat but 
the following broad tendancies seem clearon small data sets all of the bayesian estimators strongly outperform them with respect to all of our evaluation measures confirming the results reported in goldwater and griffiths this is perhaps not too surprising as the bayesian prior plays a comparatively stronger role with a smaller training corpus and the approximation used by variational bayes is likely to be less accurate on smaller data setsbut on larger data sets which goldwater et al did not study the results are much less clear and depend on which evaluation measure is usedexpectation maximization does surprisingly well on larger data sets and is competitive with the bayesian estimators at least in terms of crossvalidation accuracy confirming the results reported by johnson variational bayes converges faster than all of the other estimators we examined herewe found that the speed of convergence of our samplers depends to a large degree upon the values of the hyperparameters α and α with larger values leading to much faster convergencethis is not surprising as the α and α specify how likely the samplers are to consider novel tags and therefore directly influence the samplers mobilityhowever in our experiments the best results are obtained in most settings with small values for α and α usually between 01 and 00001in terms of time to convergence on larger data sets we found that the blocked samplers were generally faster than the pointwise samplers and that the explicit samplers were faster than the collapsed samplers largely because the time saved in not computing probabilities on the fly overwhelmed the time spent resampling the parametersof course these experiments only scratch the surface of what is possiblefigure 6 shows that pointwisesamplers initially converge faster but are overtaken later by the blocked samplersinspired by this one can devise hybrid strategies that interleave blocked and pointwise sampling these might perform better than both the blocked and pointwise samplers described here
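The variational Bayes estimator described above changes only the M-step of the forward-backward procedure: normalized expected counts are replaced by exponentiated digamma expressions, which shrinks the estimated multinomials toward sparsity. A minimal sketch of that M-step follows; the array names are mine and scipy is assumed to be available for the digamma function.

```python
import numpy as np
from scipy.special import digamma

def vb_m_step(exp_trans, exp_emit, alpha, alpha_prime):
    """
    exp_trans[t, t2]: expected count of state t2 following state t (from the E-step)
    exp_emit[t, w]:   expected count of word w emitted from state t
    Returns "pseudo-probability" matrices to plug into the next
    forward-backward E-step (rows sum to less than one, which is expected).
    """
    m_states = exp_trans.shape[1]
    m_words = exp_emit.shape[1]
    trans = np.exp(digamma(exp_trans + alpha)
                   - digamma(exp_trans.sum(axis=1, keepdims=True) + m_states * alpha))
    emit = np.exp(digamma(exp_emit + alpha_prime)
                  - digamma(exp_emit.sum(axis=1, keepdims=True) + m_words * alpha_prime))
    return trans, emit

# For comparison, the ordinary EM M-step would simply normalize the counts:
#   trans = exp_trans / exp_trans.sum(axis=1, keepdims=True)
#   emit  = exp_emit  / exp_emit.sum(axis=1, keepdims=True)
```

Since only this renormalization differs, a single VB iteration has the same O(n m^2) cost as an EM iteration, which matches the convergence-speed comparison reported above.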
D08-1036
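The pointwise collapsed Gibbs sampler discussed in the paper above resamples one tag at a time from its conditional distribution given all other tags, with the HMM parameters integrated out. The sketch below shows the shape of one such update; for readability it drops the small correction terms that arise when the tags at positions i-1, i and i+1 interact, and it assumes position i is sentence-internal, so it is an approximation of the exact sampler rather than a faithful reimplementation.

```python
import random

def resample_tag(i, tags, words, emit_counts, trans_counts, tag_totals,
                 num_tags, vocab_size, alpha, alpha_prime):
    """One (approximate) collapsed Gibbs update for the tag of token i.
    tags, words: token-level lists; emit_counts[t] and trans_counts[t] are
    dicts of counts for tag t; tag_totals[t] is the token count of tag t."""
    w, prev, nxt = words[i], tags[i - 1], tags[i + 1]
    old = tags[i]

    # remove token i's current tag from all counts
    emit_counts[old][w] -= 1
    trans_counts[prev][old] -= 1
    trans_counts[old][nxt] -= 1
    tag_totals[old] -= 1

    # conditional weight for each candidate tag t:
    #   P(w_i | t) * P(t | t_{i-1}) * P(t_{i+1} | t), each term Dirichlet-smoothed
    weights = []
    for t in range(num_tags):
        p_emit = (emit_counts[t].get(w, 0) + alpha_prime) / (tag_totals[t] + vocab_size * alpha_prime)
        p_in = (trans_counts[prev].get(t, 0) + alpha) / (tag_totals[prev] + num_tags * alpha)
        p_out = (trans_counts[t].get(nxt, 0) + alpha) / (tag_totals[t] + num_tags * alpha)
        weights.append(p_emit * p_in * p_out)

    new = random.choices(range(num_tags), weights=weights)[0]

    # add the sampled tag back into the counts
    tags[i] = new
    emit_counts[new][w] = emit_counts[new].get(w, 0) + 1
    trans_counts[prev][new] = trans_counts[prev].get(new, 0) + 1
    trans_counts[new][nxt] = trans_counts[new].get(nxt, 0) + 1
    tag_totals[new] += 1
    return new
```

Each such update touches only one position, which is why a full pass over the corpus costs O(n m) here versus O(n m^2) for the blocked samplers, EM and VB, as noted above.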
a comparison of bayesian estimators for unsupervised hidden markov model pos taggersthere is growing interest in applying bayesian techniques to nlp problemsthere are a number of different estimators for bayesian models and it is useful to know what kinds of tasks each does well onthis paper compares a variety of different bayesian estimators for hidden markov model pos taggers with various numbers of hidden states on data sets of different sizesrecent papers have given contradictory results when comparing bayesian estimators to expectation maximization for unsupervised hmm pos tagging and we show that the difference in reported results is largely due to differences in the size of the training data and the number of states in the hmmwe investigate a variety of samplers for hmms including some that these earlier papers did not studywe find that all of gibbs samplers do well with small data sets and few states and that variational bayes does well on large data sets and is competitive with the gibbs samplersin terms of times of convergence we find that variational bayes was the fastest of all the estimators especially on large data sets and that explicit gibbs sampler were generally faster than their collapsed counterparts on large data setswe consider three evaluation criteria mto1 and 1to1 are the tagging accuracies under the best manytoone map and the greedy onetoone map respectively vi is a mapfree information theoretic criterionwe induce a manytoone mapping of state identifiers to pos tags from one half of the corpus and evaluate on the second half which is referred to as crossvalidation accuracywe show that sparse priors can gain 4 in crossvalidated many toone accuracy
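The many-to-one and cross-validation accuracies used above map each HMM state to the gold tag it co-occurs with most often and then score the induced tagging under that mapping. A small sketch, assuming states and gold tags are given as parallel token-level sequences:

```python
from collections import Counter, defaultdict

def many_to_one_map(states, gold_tags):
    """Map each induced state to the gold POS tag it co-occurs with most often."""
    cooc = defaultdict(Counter)
    for s, g in zip(states, gold_tags):
        cooc[s][g] += 1
    return {s: c.most_common(1)[0][0] for s, c in cooc.items()}

def accuracy(states, gold_tags, mapping):
    correct = sum(mapping.get(s) == g for s, g in zip(states, gold_tags))
    return correct / len(states)

def many_to_one_accuracy(states, gold_tags):
    return accuracy(states, gold_tags, many_to_one_map(states, gold_tags))

def cross_validation_accuracy(states, gold_tags):
    # learn the mapping on the first half, score the second half,
    # which blunts the reward for positing very many states
    half = len(states) // 2
    mapping = many_to_one_map(states[:half], gold_tags[:half])
    return accuracy(states[half:], gold_tags[half:], mapping)
```

The greedy one-to-one variant mentioned above would additionally forbid two states from mapping to the same tag, assigning state-tag pairs greedily by co-occurrence count.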
a tale of two parsers investigating and combining graphbased and transitionbased dependency parsing graphbased and transitionbased approaches to dependency parsing adopt very different views of the problem each view having its own strengths and limitations we study both approaches under the framework of beamsearch by developing a graphbased and a transitionbased dependency parser we show that a beamsearch decoder is a competitive choice for both methods more importantly we propose a beamsearchbased parser that combines both graphbased and transitionbased parsing into a single system for training and decoding showing that it outperforms both the pure graphbased and the pure transitionbased parsers testing on the english and chinese penn treebank data the combined system gave stateoftheart accuraof respectively graphbased and transitionbased parsing algorithms offer two different approaches to datadriven dependency parsinggiven an input sentence a graphbased algorithm finds the highest scoring parse tree from all possible outputs scoring each complete tree while a transitionbased algorithm builds a parse by a sequence of actions scoring each action individuallythe terms graphbased and transitionbased were used by mcdonald and nivre to describe the difference between mstparser which is a graphbased parser with an exhaustive search decoder and maltparser which is a transitionbased parser with a greedy search decoderin this paper we do not differentiate graphbased and transitionbased parsers by their search algorithms a graphbased parser can use an approximate decoder while a transitionbased parser is not necessarily deterministicto make the concepts clear we classify the two types of parser by the following two criteria by this classification beamsearch can be applied to both graphbased and transitionbased parsersrepresentative of each method mstparser and maltparser gave comparable accuracies in the conllx shared task however they make different types of errors which can be seen as a reflection of their theoretical differences mstparser has the strength of exact inference but its choice of features is constrained by the requirement of efficient dynamic programmingmaltparser is deterministic yet its comparatively larger feature range is an advantageby comparing the two three interesting research questions arise how to increase the flexibility in defining features for graphbased parsing how to add search to transitionbased parsing and how to combine the two parsing approaches so that the strengths of each are utilizedin this paper we study these questions under one framework beamsearchbeamsearch has been successful in many nlp tasks and can achieve accuracy that is close to exact inferencemoreover a beamsearch decoder does not impose restrictions on the search problem in the way that an exact inference decoder typically does such as requiring the optimal subproblem property for dynamic programming and therefore enables a comparatively wider range of features for a statistical systemwe develop three parsersfirstly using the same features as mstparser we develop a graphbased parser to examine the accuracy loss from beamsearch compared to exactsearch and the accuracy gain from extra features that are hard to encode for exact inferenceour conclusion is that beamsearch is a competitive choice for graphbased parsingsecondly using the transition actions from maltparser we build a transitionbased parser and show that search has a positive effect on its accuracy compared to deterministic parsingfinally 
we show that by using a beamsearch decoder we are able to combine graphbased and transitionbased parsing into a single system with the combined system significantly outperforming each individual systemin experiments with the english and chinese penn treebank data the combined parser gave 921 and 862 accuracy respectively which are comparable to the best parsing results for these data sets while the chinese accuracy outperforms the previous best reported by 18in line with previous work on dependency parsing using the penn treebank we focus on projective dependency parsingfollowing mstparser we define the graphvariables agenda the beam for state items item partial parse tree output a set of output items index prev word indexes input x postagged input sentenceinitialization agenda put the best items from output to agenda output the best item in agenda based parsing problem as finding the highest scoring tree y from all possible outputs given an input x where gen denotes the set of possible parses for the input xto repeat our earlier comments in this paper we do not consider the method of finding the arg max to be part of the definition of graphbased parsing only the fact that the dependency graph itself is being scored and factored into scores attached to the dependency linksthe score of an output parse y is given by a linear model where 4b is the global feature vector from y and w is the weight vector of the modelwe use the discriminative perceptron learning algorithm to train the values of w the algorithm is shown in figure 1averaging parameters is a way to reduce overfitting for perceptron training and is applied to all our experimentswhile the mstparser uses exactinference we apply beamsearch to decodingthis is done by extending the deterministic covington algorithm for projective dependency parsing as shown in figure 2 the decoder works incrementally building a state item word by wordwhen each word is processed links are added between the current word and its predecessorsbeamsearch is applied by keeping the b best items in the agenda at each processing stage while partial candidates are compared by scores from the graphbased model according to partial graph up to the current wordbefore decoding starts the agenda contains an empty sentenceat each processing stage existing partial candidates from the agenda are extended in all possible ways according to the covington algorithmthe top b newly generated candidates are then put to the agendaafter all input words are processed the best candidate output from the agenda is taken as the final outputthe projectivity of the output dependency trees is guaranteed by the incremental covington processthe time complexity of this algorithm is o where n is the length of the input sentenceduring training the early update strategy of collins and roark is used when the correct state item falls out of the beam at any stage parsing is stopped immediately and the model is updated using the current best partial itemthe intuition is to improve learning by avoiding irrelevant information when all the items in the current agenda are incorrect further parsing steps will be irrelevant because the correct partial output no longer exists in the candidate rankingtable 1 shows the feature templates from the mstparser which are defined in terms of the context of a word its parent and its siblingto give more templates features from templates 1 5 are also conjoined with the link direction and distance while features from template 6 are also conjoined with the direction and 
distance between the child and its siblinghere distance refers to the difference between word indexeswe apply all these feature templates to the graphbased parserin addition we define two extra feature templates that capture information about grandchildren and arity these features are not conjoined with information about direction and distancethey are difficult to include in an efficient dynamic programming decoder but easy to include in a beamsearch decoderwe develop our transitionbased parser using the transition model of the maltparser which is characterized by the use of a stack and four transition actions shift arcright arcleft and reducean input sentence is processed from left to right with an index maintained for the current wordinitially empty the stack is used throughout the parsing process to store unfinished words which are the words before the current word that may still be linked with the current or a future wordthe shift action pushes the current word to the stack and moves the current index to the next wordthe arcright action adds a dependency link from the stack top to the current word pushes the current word on to the stack and moves the current index to the next wordthe arcleft action adds a dependency link from the current word to the stack top and pops the stackthe reduce action pops the stackamong the four transition actions shift and arcright push a word on to the stack while arcleft and reduce pop the stack shift and arcright read the next input word while arcleft and arcright add a link to the outputby repeated application of these actions the parser reads through the input and builds a parse treethe maltparser works deterministicallyat each step it makes a single decision and chooses one of the four transition actions according to the current context including the next input words the stack and the existing linksas illustrated in figure 3 the contextual information consists of the top of stack the parent of st the leftmost and rightmost child of st the current word the next three words from the input and the leftmost child of n0 given the context s the next action t is decided as follows where action shift arcright arcleft reduceone drawback of deterministic parsing is error propagation since once an incorrect action is made the output parse will be incorrect regardless of the subsequent actionsto reduce such error propagation a parser can keep track of multiple candidate outputs and avoid making decisions too earlysuppose that the parser builds a set of candidates gen for the input x the best output f can be decided by considering all actions here t0 represents one action in the sequence by which y is built and st represents the corresponding context when t0 is takenour transitionbased algorithm keeps b different sequences of actions in the agenda and chooses the one having the overall best score as the final parsepseudo code for the decoding algorithm is shown in figure 4here each state item contains a partial parse tree as well as a stack configuration and state items are built incrementally by transition actionsinitially the stack is empty and the agenda contains an empty sentenceat each processing stage one transition action is applied to existing state items as a step to build the final parseunlike the maltparser which makes a decision at each stage our transitionbased parser applies all possible actions to each existing state item in the agenda to generate new items then from all the newly generated items it takes the b with the highest overall score and 
puts them onto the agendain this way some ambiguity is retained for future resolutionnote that the number of transition actions needed to build different parse trees can varyfor example the threeword sentence a b c can be parsed by the sequence of three actions shift arcright arcright or the sequence of four actions shift arcleft shift arcright to ensure that all final state items are built by the same number of transition actions we require that the final state transfer the best items from output to agenda output the best item in agenda items must 1 have fullybuilt parse trees and 2 have only one root word left on the stackin this way popping actions should be made even after a complete parse tree is built if the stack still contains more than one wordnow because each word excluding the root must be pushed to the stack once and popped off once during the parsing process the number of actions inputs training examples initialization set w 0 needed to parse a sentence is always 2n 1 where n is the length of the sentencetherefore the decoder has linear time complexity given a fixed beam sizebecause the same transition actions as the maltparser are used to build each item the projectivity of the output dependency tree is ensuredwe use a linear model to score each transition action given a context n0t but not stwt or stwn0w we combine features manuallyas with the graphbased parser we use the discriminative perceptron to train the transitionbased model it is worth noticing that in contrast to maltparser which trains each action decision individually our training algorithm globally optimizes all action decisions for a parseagain early update and averaging parameters are applied to the training processthe graphbased and transitionbased approaches adopt very different views of dependency parsingmcdonald and nivre showed that the mstparser and maltparser produce different errorsthis observation suggests a combined approach by using both graphbased information and transitionbased information parsing accuracy can be improvedthe beamsearch framework we have developed facilitates such a combinationour graphbased and transitionbased parsers share many similaritiesboth build a parse tree incrementally keeping an agenda of comparable state itemsboth rank state items by their current scores and use the averaged perceptron with early update for trainingthe key differences are the scoring models and incremental parsing processes they use which must be addressed when combining the parsersfirstly we combine the graphbased and the transitionbased score models simply by summationthis is possible because both models are global and linearin particular the transitionbased model can be written as if we take et0act φ as the global feature vector φt we have which has the same form as the graphbased model scoreg φg wg we therefore combine the two models to give concatenating the feature vectors φg and φt to give a global feature vector φc and the weight vectors wg and wt to give a weight vector wc the combined model can be written as which is a linear model with exactly the same form as both submodels and can be trained with the perceptron algorithm in figure 1because the global feature vectors from the sub models are concatenated the feature set for the combined model is the union of the sub model feature setssecond the transitionbased decoder can be used for the combined systemboth the graphbased decoder in figure 2 and the transitionbased decoder in figure 4 construct a parse tree incrementallyhowever the graphbased 
decoder works on a perword basis adding links without using transition actions and so is not appropriate for the combined modelthe transitionbased algorithm on the other hand uses state items which contain partial parse trees and so provides all the information needed by the graphbased parser and hence the combined systemin summary we build the combined parser by using a global linear model the union of feature templates and the decoder from the transitionbased parserwe evaluate the parsers using the english and chinese penn treebank corporathe english data is prepared by following mcdonald et al bracketed sentences from the penn treebank 3 are split into training development and test sets as shown in table 4 and then translated into dependency structures using the headfinding rules from yamada and matsumoto before parsing pos tags are assigned to the input sentence using our reimplementation of the postagger from collins like mcdonald et al we evaluate the parsing accuracy by the precision of lexical heads and by the percentage of complete matches in which all words excluding punctuation have been assigned the correct parentsince the beam size affects all three parsers we study its influence first here we show the effect on the transitionbased parserfigure 6 shows different accuracy curves using the development data each with a different beam size bthe xaxis represents the number of training iterations and the yaxis the precision of lexical headsthe parsing accuracy generally increases as the beam size increases while the quantity of increase becomes very small when b becomes large enoughthe decoding times after the first training iteration are 102s 273s 455s 790s 1454s 2613s and 4695s respectively when b 1 2 4 8 16 32 64in the rest of the experiments we set b 64 in order to obtain the highest possible accuracywhen b 1 the transitionbased parser becomes a deterministic parserby comparing the curves when b 1 and b 2 we can see that while the use of search reduces the parsing speed it improves the quality of the output parsestherefore beamsearch is a reasonable choice for transitionbased parsingthe test accuracies are shown in table 5 where each row represents a parsing modelrows mstparser 12 show the firstorder and secondorder mstparsers as reported by the corresponding papersrows graph m and graph ma represent our graphbased parser using features from table 1 and table 1 table 2 respectively row transition represents our transitionbased parser and rows combined tm and combined tma represent our combined parser using features from table 3 table 1 and table 3 table 1 table 2 respectivelycolumns word and complete show the precision of lexical heads and complete matches respectivelyas can be seen from the table beamsearch reduced the head word accuracy from 915421 to 912408 with the same features as exactinferencehowever with only two extra feature templates from table 2 which are not conjoined with direction or distance information the accuracy is improved to 914425 this improvement can be seen as a benefit of beamsearch which allows the definition of more global featuresthe combined parser is tested with various sets of featuresusing only graphbased features in table 1 it gave 886 accuracy which is much lower than 912 from the graphbased parser using the same features this can be explained by the difference between the decodersin particular the graphbased model is unable to score the actions reduce and shift since they do not modify the parse treenevertheless the score serves as a reference 
for the effect of additional features in the combined parserusing both transitionbased features and graphbased features from the mstparser the combined parser achieved 920 perword accuracy which is significantly higher than the pure graphbased and transitionbased parsersadditional graphbased features further improved the accuracy to 921455 which is the best among all the parsers compared1 we use the penn chinese treebank 5 for experimental datafollowing duan et al we 1a recent paper koo et al reported parentprediction accuracy of 920 using a graphbased parser with a different set of features by applying separate word cluster information koo et al improved the accuracy to 932 which is the best known accuracy on the ptb datawe excluded these from table 5 because our work is not concerned with the use of such additional knowledge split the corpus into training development and test data as shown in table 6 and use the headfinding rules in table 8 in the appendix to turn the bracketed sentences into dependency structuresmost of the headfinding rules are from sun and jurafsky while we added rules to handle nn and frag and a default rule to use the rightmost node as the head for the constituent that are not listedlike duan et al we use goldstandard postags for the inputthe parsing accuracy is evaluated by the percentage of nonroot words that have been assigned the correct head the percentage of correctly identified root words and the percentage of complete matches all excluding punctuationthe accuracies are shown in table 7rows graph ma transition combined tm and combined tma show our models in the same way as for the english experiments from section 52row duan 2007 represents the transitionbased model from duan et al which applies beamsearch to the deterministic model from yamada and matsumoto and achieved the previous best accuracy on the dataour observations on parsing chinese are essentially the same as for englishour combined parser outperforms both the pure graphbased and the pure transitionbased parsersit gave the best accuracy we are aware of for dependency parsing using ctbour graphbased parser is derived from the work of mcdonald and pereira instead of performing exact inference by dynamic programming we incorporated the linear model and feature templates from mcdonald and pereira into our beamsearch framework while adding new global featuresnakagawa and hall also showed the effectiveness of global features in improving the accuracy of graphbased parsing using the approximate gibbs sampling method and a reranking approach respectivelyour transitionbased parser is derived from the deterministic parser of nivre et al we incorporated the transition process into our beamsearch framework in order to study the influence of search on this algorithmexisting efforts to add search to deterministic parsing include sagae and lavie which applied bestfirst search to constituent parsing and johansson and nugues and duan et al which applied beamsearch to dependency parsingall three methods estimate the probability of each transition action and score a state item by the product of the probabilities of all its corresponding actionsbut different from our transitionbased parser which trains all transitions for a parse globally these models train the probability of each action separatelybased on the work of johansson and nugues johansson and nugues studied global training with an approximated largemargin algorithmthis model is the most similar to our transitionbased model while the differences include the 
choice of learning and decoding algorithms the definition of feature templates and our application of the early update strategyour combined parser makes the biggest contribution of this paperin contrast to the models above it includes both graphbased and transitionbased componentsan existing method to combine multiple parsing algorithms is the ensemble approach which was reported to be useful in improving dependency parsing a more recent approach combined mstparser and maltparser by using the output of one parser for features in the otherboth hall et al and nivre and mcdonald can be seen as methods to combine separately defined modelsin contrast our parser combines two components in a single model in which all parameters are trained consistentlywe developed a graphbased and a transitionbased projective dependency parser using beamsearch demonstrating that beamsearch is a competitive choice for both parsing approacheswe then combined the two parsers into a single system using discriminative perceptron training and beamsearch decodingthe appealing aspect of the combined parser is the incorporation of two largely different views of the parsing problem thus increasing the information available to a single statistical parser and thereby significantly increasing the accuracywhen tested using both english and chinese dependency data the combined parser was highly competitive compared to the best systems in the literaturethe idea of combining different approaches to the same problem using beamsearch and a global model could be applied to other parsing tasks such as constituent parsing and possibly other nlp tasksthis work is supported by the ors and clarendon fundwe thank the anonymous reviewers for their detailed comments
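The decoder described in the parsing experiments above keeps a beam of state items, each holding a partial parse, and scores whole transition sequences with one global linear model over the union of feature templates. The sketch below illustrates that decoding loop under simplifying assumptions: an arc-standard-style transition system with unlabeled arcs (a simplification — the parser above also uses separate shift/reduce actions), a toy `extract_features` function standing in for the feature templates of Tables 1–3 (not reproduced in the text), and a plain weight dictionary.

```python
# Minimal beam-search decoder for a transition-based dependency parser.
# Arc-standard-style transitions and toy features are stand-ins for the
# paper's transition system and feature templates.
from collections import defaultdict

SHIFT, LEFT_ARC, RIGHT_ARC = "SH", "LA", "RA"

def legal_actions(stack, buf):
    acts = []
    if buf:
        acts.append(SHIFT)
    if len(stack) >= 2:
        acts.append(RIGHT_ARC)
        if stack[-2][0] != 0:          # never give the pseudo-root a head
            acts.append(LEFT_ARC)
    return acts

def apply_action(stack, buf, arcs, action):
    stack, buf, arcs = list(stack), list(buf), list(arcs)
    if action == SHIFT:
        stack.append(buf.pop(0))
    elif action == LEFT_ARC:           # top of stack heads the item below it
        dep = stack.pop(-2)
        arcs.append((stack[-1][0], dep[0]))
    else:                              # RIGHT_ARC: item below heads the top
        dep = stack.pop()
        arcs.append((stack[-1][0], dep[0]))
    return stack, buf, arcs

def extract_features(stack, buf, action):
    s0 = stack[-1][1] if stack else "<NONE>"
    b0 = buf[0][1] if buf else "<NONE>"
    return [f"s0={s0}~{action}", f"b0={b0}~{action}", f"s0b0={s0}_{b0}~{action}"]

def beam_parse(words, weights, beam_size=64):
    # A state item is (score, stack, buffer, arcs); the score is accumulated
    # over the whole action sequence, matching the global linear model above.
    start = (0.0, [(0, "<ROOT>")], list(enumerate(words, start=1)), [])
    beam = [start]
    for _ in range(2 * len(words)):    # every complete parse takes 2n actions
        expanded = []
        for score, stack, buf, arcs in beam:
            for act in legal_actions(stack, buf):
                gain = sum(weights[f] for f in extract_features(stack, buf, act))
                expanded.append((score + gain,) + apply_action(stack, buf, arcs, act))
        beam = sorted(expanded, key=lambda item: item[0], reverse=True)[:beam_size]
    return beam[0]                     # (score, [ROOT], [], arcs) of the best parse

weights = defaultdict(float)
weights["s0=saw~RA"] = 1.0             # toy weight
print(beam_parse(["I", "saw", "her"], weights, beam_size=4))
```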
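The combined and transition-based parsers above are trained globally — all transitions of a parse are scored and updated together — with the early-update strategy mentioned in the comparison with action-local models. Below is a minimal structured-perceptron sketch of that idea; `candidate_actions` and `phi` are hypothetical placeholders for the transition system and feature templates, and action prefixes are re-scored from scratch for clarity rather than incrementally as a real decoder would.

```python
# Globally trained structured perceptron with early update (sketch).
from collections import defaultdict

def prefix_features(x, actions, phi):
    feats = defaultdict(float)
    prefix = []
    for a in actions:
        for f in phi(x, prefix, a):
            feats[f] += 1.0
        prefix.append(a)
    return feats

def perceptron_update(weights, gold_feats, pred_feats):
    for f, v in gold_feats.items():
        weights[f] += v
    for f, v in pred_feats.items():
        weights[f] -= v

def train(data, candidate_actions, phi, beam_size=64, iterations=10):
    weights = defaultdict(float)
    for _ in range(iterations):
        for x, gold in data:                     # gold is the oracle action sequence
            def score(prefix):
                feats = prefix_features(x, prefix, phi)
                return sum(weights[f] * v for f, v in feats.items())
            beam = [[]]
            for step in range(len(gold)):
                expanded = [p + [a] for p in beam for a in candidate_actions(x, p)]
                beam = sorted(expanded, key=score, reverse=True)[:beam_size]
                gold_prefix = gold[:step + 1]
                if gold_prefix not in beam:      # early update: correct immediately
                    perceptron_update(weights,
                                      prefix_features(x, gold_prefix, phi),
                                      prefix_features(x, beam[0], phi))
                    break
            else:                                # gold survived the whole derivation
                if beam[0] != gold:
                    perceptron_update(weights,
                                      prefix_features(x, gold, phi),
                                      prefix_features(x, beam[0], phi))
    return weights
```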
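The Chinese experiments above convert bracketed CTB trees into dependencies with head-finding rules, adding rules for NN and FRAG and defaulting to the rightmost child for unlisted constituents. The sketch below shows that conversion on a toy tree representation; `HEAD_RULES` contains two illustrative entries only, not the rules of Table 8.

```python
# Turning a bracketed constituent tree into unlabeled dependencies via head
# rules. Trees are (label, children) tuples; a leaf is (POS, word).
HEAD_RULES = {
    "NP": ("right", ["NN", "NR", "NP"]),
    "VP": ("left", ["VV", "VA", "VP"]),
}

def find_head_child(label, children):
    direction, priorities = HEAD_RULES.get(label, ("right", []))
    order = children if direction == "left" else list(reversed(children))
    for wanted in priorities:
        for child in order:
            if child[0] == wanted:
                return child
    return children[-1]                  # default: rightmost child is the head

def to_dependencies(tree):
    arcs = []                            # (head_index, dependent_index), 1-based
    counter = [0]

    def visit(node):
        label, children = node
        if isinstance(children, str):    # leaf: (POS, word)
            counter[0] += 1
            return counter[0]
        child_heads = [visit(child) for child in children]
        head = child_heads[children.index(find_head_child(label, children))]
        for h in child_heads:
            if h != head:
                arcs.append((head, h))
        return head

    arcs.append((0, visit(tree)))        # the sentence head attaches to 0
    return arcs

tree = ("S", [("NP", [("NR", "Microsoft")]),
              ("VP", [("VV", "bought"), ("NP", [("NN", "shares")])])])
print(to_dependencies(tree))             # [(2, 3), (2, 1), (0, 2)]
```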
D08-1059
A Tale of Two Parsers: Investigating and Combining Graph-based and Transition-based Dependency Parsing. Graph-based and transition-based approaches to dependency parsing adopt very different views of the problem, each view having its own strengths and limitations. We study both approaches under the framework of beam-search. By developing a graph-based and a transition-based dependency parser, we show that a beam-search decoder is a competitive choice for both methods. More importantly, we propose a beam-search-based parser that combines both graph-based and transition-based parsing into a single system for training and decoding, showing that it outperforms both the pure graph-based and the pure transition-based parsers. Testing on the English and Chinese Penn Treebank data, the combined system gave state-of-the-art accuracies of 92.1 and 86.2, respectively. We define head rules to convert phrase structures into dependency structures. We combine beam search with a globally normalized discriminative model, using structured perceptron learning and the early update strategy of Collins and Roark, and also explore the addition of graph-based features to a transition-based parser.
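The parsing accuracies in the record above are reported as head (parent-prediction) accuracy, root accuracy, and complete-match rate, all excluding punctuation. A small sketch of that bookkeeping follows, assuming gold and predicted head indices per word and a hypothetical tag-based punctuation test in place of whatever test the evaluation scripts actually use.

```python
# Head accuracy over non-root words, root accuracy and complete-match rate,
# with punctuation excluded. `is_punct` is an illustrative stand-in.
def is_punct(tag):
    return tag in {",", ".", ":", "``", "''", "PU"}

def evaluate(sentences):
    """Each sentence is a list of (tag, gold_head, predicted_head) per word."""
    head_correct = head_total = 0
    root_correct = root_total = 0
    complete = 0
    for sent in sentences:
        scored = [(g, p) for tag, g, p in sent if not is_punct(tag)]
        complete += all(g == p for g, p in scored)
        for g, p in scored:
            if g == 0:                       # gold root word
                root_total += 1
                root_correct += (g == p)
            else:
                head_total += 1
                head_correct += (g == p)
    return {
        "head_accuracy": head_correct / max(head_total, 1),
        "root_accuracy": root_correct / max(root_total, 1),
        "complete_match": complete / max(len(sentences), 1),
    }

gold_and_pred = [[("NN", 2, 2), ("VV", 0, 0), ("NN", 2, 3), ("PU", 2, 2)]]
print(evaluate(gold_and_pred))
```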
lattice minimum bayesrisk decoding for statistical machine translation we present minimum bayesrisk decoding over translation lattices that compactly encode a huge number of translation hypotheses we describe conditions on the loss function that will enable efficient implementation of mbr decoders on lattices we introduce an approximation to the bleu score that satisfies these conditions the mbr decoding under this approximate bleu is realized using weighted finite state automata our experiments show that the lattice mbr decoder yields moderate consistent gains in translation performance over nbest mbr decoding on arabictoenglish chinesetoenglish and englishtochinese translation tasks we conduct a range of experiments to understand why lattice mbr improves upon nbest mbr and study the impact of various parameters on mbr performance statistical language processing systems for speech recognition machine translation or parsing typically employ the maximum a posteriori decision rule which optimizes the 01 loss functionin contrast these systems are evaluated using metrics based on stringedit distance ngram overlap or precisionrecall relative to human annotationsminimum bayesrisk decoding aims to address this mismatch by selecting the hypothesis that minimizes the expected error in classificationthus it directly incorporates the loss function into the decision criterionthe approach has been shown to give improvements over the map classifier in many areas of natural language processing including automatic speech recognition machine translation bilingual word alignment and parsing in statistical machine translation mbr decoding is generally implemented by reranking an nbest list of translations produced by a firstpass decoder this list typically contains between 100 and 10 000 hypotheseskumar and byrne show that mbr decoding gives optimal performance when the loss function is matched to the evaluation criterion in particular mbr under the sentencelevel bleu loss function gives gains on bleuthis is despite the fact that the sentencelevel bleu loss function is an approximation to the exact corpuslevel bleua different mbr inspired decoding approach is pursued in zhang and gildea for machine translation using synchronous context free grammarsa forest generated by an initial decoding pass is rescored using dynamic programming to maximize the expected count of synchronous constituents in the tree that corresponds to the translationsince each constituent adds a new 4gram to the existing translation this approach approximately maximizes the expected bleuin this paper we explore a different strategy to perform mbr decoding over translation lattices that compactly encode a huge number of translation alternatives relative to an nbest listthis is a modelindependent approach in that the lattices could be produced by any statistical mt system both phrasebased and syntaxbased systems would work in this frameworkwe will introduce conditions on the loss functions that can be incorporated in lattice mbr decodingwe describe an approximation to the bleu score that will satisfy these conditionsour lattice mbr decoding is realized using weighted finite state automatawe expect lattice mbr decoding to improve upon nbest mbr primarily because lattices contain many more candidate translations than the nbest listthis has been demonstrated in speech recognition we conduct a range of translation experiments to analyze lattice mbr and compare it with nbest mbran important aspect of our lattice mbr is the linear approximation 
to the bleu scorewe will show that mbr decoding under this score achieves a performance that is at least as good as the performance obtained under sentencelevel bleu scorethe rest of the paper is organized as followswe review mbr decoding in section 2 and give the formulation in terms of a gain functionin section 3 we describe the conditions on the gain function for efficient decoding over a latticethe implementation of lattice mbr with weighted finite state automata is presented in section 4in section 5 we introduce the corpus bleu approximation that makes it possible to perform efficient lattice mbr decodingan example of lattice mbr with a toy lattice is presented in section 6we present lattice mbr experiments in section 7a final discussion is presented in section 8minimum bayesrisk decoding aims to find the candidate hypothesis that has the least expected loss under the probability model we begin with a review of mbr decoding for statistical machine translation statistical mt can be described as a mapping of a word sequence f in the source language to a word sequence e in the target language this mapping is produced by the mt decoder sif the reference translation e is known the decoder performance can be measured by the loss function lgiven such a loss function l between an automatic translation e and the reference e and an underlying probability model p the mbr decoder has the following form where r denotes the bayes risk of candidate translation e under the loss function l if the loss function between any two hypotheses can be bounded l g lmax the mbr decoder can be rewritten in terms of a gain function g lmax l we are interested in performing mbr decoding under a sentencelevel bleu score which behaves like a gain function it varies between 0 and 1 and a larger value reflects a higher similaritywe will therefore use equation 1 as the mbr decoderwe note that represents the space of translationsfor nbest mbr this space is the nbest list produced by a baseline decoderwe will investigate the use of a translation lattice for mbr decoding in this case will represent the set of candidates encoded in the latticein general mbr decoding can use different spaces for hypothesis selection and risk computation argmax and the sum in equation 1 as an example the hypothesis could be selected from the nbest list while the risk is computed based on the entire latticetherefore the mbr decoder can be more generally written as follows where h refers to the hypothesis space from where the translations are chosen and e refers to the evidence space that is used for computing the bayesriskwe will present experiments to show the relative importance of these two spaceswe now present mbr decoding on translation latticesa translation word lattice is a compact representation for very large nbest lists of translation hypotheses and their likelihoodsformally it is an acyclic weighted finite state acceptor consisting of states and arcs representing transitions between stateseach arc is labeled with a word and a weighteach path in the lattice consisting of consecutive transitions beginning at the distinguished initial state and ending at a final state expresses a candidate translationaggregation of the weights along the path1 produces the weight of the paths candidate h according to the modelin our setting this weight will imply the posterior probability of the translation e given the source sentence f the scaling factor α e 0 oc flattens the distribution when α 1because a lattice may represent a number of candidates 
exponential in the size of its state set it is often impractical to compute the mbr decoder directlyhowever if we can express the gain function g as a sum of local gain functions gi then we now show that equation 1 can be refactored and the mbr decoder can be computed efficientlywe loosely call a gain function local if it can be applied to all paths in the lattice via wfsa intersection without significantly multiplying the number of statesin this paper we are primarily concerned with local gain functions that weight ngramslet n w1 wn be the set of ngrams and let a local gain function gw x r for w e n be as follows where θw is a constant w is the number of times that w occurs in e0 and δw is 1 if w e e and 0 otherwisethat is gw is θw times the number of occurrences of w in e0 or zero if w does not occur in e we first assume that the overall gain function g can then be written as a sum of local gain functions and a constant θ0 times the length of the hypothesis e01 g θ0e0 given a gain function of this form we can rewrite the risk as follows where w e e δw 01 represents the paths of the lattice containing the ngram w at least oncethe mbr decoder on lattices can therefore be written as here p eeew p is the posterior probability of the ngram w in the latticewe have thus replaced a summation over a possibly exponential number of items with a summation over the number of ngrams that occur in which is at worst polynomial in the number of edges in the lattice that defines we compute the posterior probability of each ngram w as where z eee exp and z eeew expz and z represent the sums2 of weights of all paths in the lattices w and respectivelywe now show how the lattice mbr decision rule can be implemented using weighted finite state automata there are four steps involved in decoding starting from weighted finitestate automata representing the candidate outputs of a translation systemwe will describe these 2in the log semiring where log log is the collect operator steps in the setting where the evidence lattice e may be different from the hypothesis lattice h computing the set of ngrams n that occur in a finite automaton requires a traversal in topological order of all the arcs in the automatonbecause the lattice is acyclic this is possibleeach state q in the automaton has a corresponding set of ngrams nq ending theregiven an ngram w we construct an automaton matching any path containing the ngram and intersect that automaton with the lattice to find the set of paths containing the ngram suppose represent the weighted lattice we compute3 w n where w is the language that contains all strings that do not contain the ngram w the posterior probability p of ngram w can be computed as a ratio of the total weights of paths in w to the total weights of paths in the original lattice for each ngram w e n we then construct an automaton that accepts an input e with weight equal to the product of the number of times the ngram occurs in the input the ngram factor θw from equation 6 and the posterior probability pthe automaton corresponds to the weighted regular expression w wwe successively intersect each of these automata with an automaton that begins as an unweighted copy of the lattice hthis automaton must also incorporate the factor θ0 of each wordthis can be accomplished by intersecting the unweighted lattice with the automaton accepting the resulting mbr automaton computes the total expected gain of each patha path in this automaton that corresponds to the word sequence e has cost θ0jejewcar θwwp finally we 
extract the best path from the resulting automaton4 giving the lattice mbr candidate translation according to the gain function our lattice mbr formulation relies on the decomposition of the overall gain function as a sum of local gain functions we here describe a linear approximation to the log which allows such a decompositionthis will enable us to rewrite the log as a linear function of ngram matches and the hypothesis lengthour strategy will be to use a first order taylorseries approximation to what we call the corpus log gain the change in corpus log contributed by the sentence relative to not including that sentence in the corpuslet r be the reference length of the corpus c0 the candidate length and cnj1 n 41 the number of ngram matchesthen the corpus bleu score b can be defined as follows where we have ignored on the difference between the number of words in the candidate and the numthe semiri ber of ngramsif l is the average sentence length in the corpus δn pz cl the corpus log gain is defined as the change in log when a new sentences statistics are added to the corpus statistics where the counts in b0 are those of b plus those for the current sentencewe will assume that the brevity penalty does not change when adding the new sentencein experiments not reported here we found that taking into account the brevity penalty at the sentence level can cause large fluctuations in lattice mbr performance on different test setswe therefore treat only cns as variablesthe corpus log bleu gain is approximated by a firstorder vector taylor series expansion about the initial values of cn where the partial derivatives are given by where each δcn c0n cn counts the statistic in the sentence of interest rather than the corpus as a wholethis score is therefore a linear function in counts of words δc0 and ngram matches δcnour approach ignores the count clipping present in the exact bleu score where a correct ngram present once in the reference but several times in the hypothesis will be counted only once as correctsuch an approach is also followed in dreyer et al using the above firstorder approximation to gain in log corpus bleu equation 9 implies that θ0 θw from section 3 would have the following values we now describe how the ngram factors are computedthe factors depend on a set of ngram matches and counts these factors could be obtained from a decoding run on a development sethowever doing so could make the performance of lattice mbr very sensitive to the actual bleu scores on a particular runwe would like to avoid such a dependence and instead obtain a set of parameters which can be estimated from multiple decoding runs without mbrto achieve this we make use of the properties of ngram matchesit is known that the average ngram precisions decay approximately exponentially with n we now assume that the number of matches of each ngram is a constant ratio r times the matches of the corresponding n 1 gramif the unigram precision is p we can obtain the ngram factors as a function of the parameters p and r and the number of unigram tokens t we set p and r to the average values of unigram precision and precision ratio across multiple development setssubstituting the above factors in equation 6 we find that the mbr decision does not depend on t therefore any value of t can be usedfigure 1 shows a toy lattice and the final mbr automaton for bleu with a maximum ngram order of 2we note that the mbr hypothesis has a higher decoder cost relative to the map hypothesis however bcde gets a higher expected gain than 
abde since it shares more ngrams with the rank3 hypothesis this illustrates how a lattice can help select mbr translations that can differ from the map translationsubstituting the derivatives in equation 8 giveswe now present experiments to evaluate mbr decoding on lattices under the linear corpus bleu gainwe start with a description of the data sets and the smt systemwe present our experiments on the constrained data track of the nist 2008 arabictoenglish chinesetoenglish and englishtochinese machine translation tasks5 in all language pairs the parallel and monolingual data consists of all the allowed training sets in the constrained trackfor each language pair we use two development sets one for minimum error rate training and the other for tuning the scale factor for mbr decodingour development sets consists of the nist 20042003 evaluation sets for both aren and zhen and nist 2006 2003 evaluation sets for enzhwe report results on nist 2008 which is our blind test setstatistics computed over these data sets are reported in table 1our phrasebased statistical mt system is similar to the alignment template system described in och and ney the system is trained on parallel corpora allowed in the constrained trackwe first perform sentence and subsentence chunk alignment on the parallel documentswe then train word alignment models using 6 model1 iterations and 6 hmm iterationsan additional 2 iterations of model4 are performed for zhen and enzh pairsword alignments in both sourcetotarget and targettosource directions are obtained using the maximum aposteriori framework an inventory of phrasepairs up to length 5 is then extracted from the union of sourcetarget and targetsource alignmentsseveral feature functions are then computed over the phrasepairs5gram word language models are trained on the allowed monolingual corporaminimum error rate training under bleu is used for estimating approximately 20 feature function weights over the dev1 development settranslation is performed using a standard dynamic programming beamsearch decoder using two decoding passesthe first decoder pass generates either a lattice or an nbest listmbr decoding is performed in the second passthe mbr scaling parameter is tuned on the dev2 development setwe next report translation results from lattice mbr decodingall results will be presented on the nist 2008 evaluation setswe report results using the nist implementation of the bleu score which computes the brevity penalty using the shortest reference translation for each segment the bleu scores are reported at the wordlevel for aren and zhen but at the character level for enzhwe measure statistical significance using 95 confidence intervals computed with paired bootstrap resampling in all tables systems in a column show statistically significant differences unless marked with an asteriskwe first compare lattice mbr to nbest mbr decoding and map decoding in these experiments we hold the likelihood scaling factor α a constant it is set to 02 for aren and enzh and 01 for zhenthe translation lattices are pruned using forwardbackward pruning so that the average numbers of arcs per word is 30for nbest mbr we use nbest lists of size 1000to match the loss function lattice mbr is performed at the word level for arenzhen and at the character level for enzhour lattice mbr is implemented using the google openfst library6 in our experiments p are have values of 085072 080062 and 063048 for aren zhen and enzh respectivelywe note that lattice mbr provides gains of 0210 bleu points over 
nbest mbr which in turn gives 0206 bleu points over mapthese gains are obtained on top of a baseline system that has competitive performance relative to the results reported in the nist 2008 evaluation7 this demonstrates the effectiveness of lattice mbr decoding as a realization of mbr decoding which yields substantial gains over the nbest implementationthe gains from lattice mbr over nbest mbr could be due to a combination of factorsthese include 1 better approximation of the corpus bleu score 2 larger hypothesis space and 3 larger evidence spacewe now present experiments to tease apart these factorsour first experiment restricts both the hypothesis and evidence spaces in lattice mbr to the 1000best list we compare this to nbest mbr with a sentencelevel bleu and b sentencelevel log bleuthe results show that when restricted to the 1000best list lattice mbr performs slightly better than nbest mbr on arenenzh while nbest mbr is better on zhenwe hypothesize that on arenenzh the linear corpus bleu gain is better correlated to the actual corpus bleu than sentencelevel bleu while the opposite is true on zhennbest mbr gives similar results with either sentence bleu or sentence log bleuthis confirms that using a log bleu score does not change the outcome of mbr decoding and further justifies our taylorseries approximation of the log bleu scorewe next attempt to understand factors 2 and 3to do that we carry out lattice mbr when either the hypothesis or the evidence space in equation 2 is restricted to 1000best hypotheses for comparison we also include results from lattice mbr when both hypothesis and evidence spaces are identical either the full lattice or the 1000best list these results show that lattice mbr results are almost unchanged when the hypothesis space is restricted to a 1000best listhowever when the evidence space is shrunk to a 1000best list there is a significant degradation in performance these latter results are almost identical to the scenario when both evidence and hypothesis spaces are restricted to the 1000best listthis experiment throws light on what makes lattice mbr effective over nbest mbrrelative to the nbest list the translation lattice provides a better estimate of the expected bleu scoreon the other hand there are few hypotheses outside the 1000best list which are selected by lattice mbrfinally we show how the performance of lattice mbr changes as a function of the lattice densitythe lattice density is the average number of arcs per word and can be varied using forwardbackward pruning figure 2 reports the average number of lattice paths and bleu scores as a function of lattice densitythe results show that lattice mbr performance generally improves when the size of the lattice is increasedhowever on zhen there is a small drop beyond a density of 10this could be due to low quality hypotheses that get included at the larger densities and result in a poorer estimate of the expected bleu scoreon aren and enzh there are some gains beyond a lattice density of 30these gains are relatively small and come at the expense of higher memory usage we therefore work with a lattice density of 30 in all our experimentswe note that lattice mbr is operating over lattices which are gigantic in comparison to the number of paths in an nbest listat a lattice density of 30 the lattices in aren contain on an average about 1081 hypotheseswe next examine the role of the scale factor α in lattice mbr decodingthe mbr scale factor determines the flatness of the posterior distribution it is chosen 
using a grid search on the dev2 set figure 3 shows the variation in bleu scores on eval08 as this parameter is variedthe results show that it is important to tune this factorthe optimal scale factor is identical for all three language pairsin experiments not reported in this paper we have found that the optimal scaling factor on a moderately sized development set carries over to unseen test setslattice mbr decoding involves computing a posterior probability for each ngram in the latticewe would like to speed up the lattice mbr computation by restricting the maximum order of the ngrams in the procedurethe results show that on aren there is no degradation if we limit the maximum order of the ngrams to 3however on zhenenzh there is improvement by considering 4gramswe can therefore reduce lattice mbr computations in arenwe have presented a procedure for performing minimum bayesrisk decoding on translation latticesthis is a significant development in that the mbr decoder operates over a very large number of translationsin contrast the current nbest implementation of mbr can be scaled to at most a few thousands of hypothesesif the number of hypotheses is greater than say 20000 the nbest mbr becomes computationally expensivethe lattice mbr technique is efficient when performed over enormous number of hypotheses since it takes advantage of the compact structure of the latticelattice mbr gives consistent improvements in translation performance over nbest mbr decoding which is used in many stateoftheart research translation systemsmoreover we see gains on three different language pairsthere are two potential reasons why lattice mbr decoding could outperform nbest mbr a larger hypothesis space from which translations could be selected or a larger evidence space for computing the expected lossour experiments show that the main improvement comes from the larger evidence space a larger set of translations in the lattice provides a better estimate of the expected bleu scorein other words the lattice provides a better posterior distribution over translation hypotheses relative to an nbest listthis is a novel insight into the workings of mbr decodingwe believe this could be possibly employed when designing discriminative training approaches for machine translationmore generally we have found a component in machine translation where the posterior distribution over hypotheses plays a crucial rolewe have shown the effect of the mbr scaling factor on the performance of lattice mbrthe scale factor determines the flatness of the posterior distribution over translation hypothesesa scale of 00 means a uniform distribution while 10 implies that there is no scalingthis is an important parameter that needs to be tuned on a development setthere has been prior work in mbr speech recognition and machine translation which has shown the need for tuning this factorour mt system parameters are trained with minimum error rate training which assigns a very high posterior probability to the map translationas a result it is necessary to flatten the probability distribution so that mbr decoding can select hypotheses other than the map hypothesisour lattice mbr implementation is made possible due to the linear approximation of the bleu scorethis linearization technique has been applied elsewhere when working with bleu smith and eisner approximate the expectation of log bleu scorein both cases a linear metric makes it easier to compute the expectationwhile we have applied lattice mbr decoding to the approximate bleu score we note 
that our procedure is applicable to other gain functions which can be decomposed as a sum of local gain functionsin particular our framework might be useful with translation metrics such as ter or meteor in contrast to a phrasebased smt system a syntax based smt system can generate a hypergraph that represents a generalized translation lattice with words and hidden tree structureswe believe that our lattice mbr framework can be extended to such hypergraphs with loss functions that take into account both bleu scores as well as parse tree structureslattice and forest based search and training procedures are not yet common in statistical machine translationhowever they are promising because the search space of translations is much larger than the typical nbest list we hope that our approach will provide some insight into the design of latticebased search procedures along with the use of nonlinear global loss functions such as bleu
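Section 2 of the lattice MBR paper above formulates the decoder as choosing, from a hypothesis space, the candidate with the highest gain expected under the model posterior computed over a possibly different evidence space. A minimal N-best sketch of that rule follows; the unigram-overlap gain is a crude stand-in for sentence-level BLEU, and `alpha` plays the role of the posterior scaling factor tuned in the experiments.

```python
# Minimal N-best MBR decoder: pick the hypothesis with the highest expected
# gain under the scaled model posterior.
import math

def posteriors(scores, alpha=0.2):
    m = max(scores)
    exps = [math.exp(alpha * (s - m)) for s in scores]   # subtract max for stability
    z = sum(exps)
    return [e / z for e in exps]

def gain(e, e_prime):
    """Toy gain: clipped unigram overlap ratio (placeholder for sentence BLEU)."""
    ref, hyp = e.split(), e_prime.split()
    if not hyp:
        return 0.0
    matches = sum(min(hyp.count(w), ref.count(w)) for w in set(hyp))
    return matches / len(hyp)

def mbr_decode(hypotheses, evidence, evidence_scores, alpha=0.2):
    """hypotheses and evidence may differ (hypothesis space vs. evidence space)."""
    p = posteriors(evidence_scores, alpha)
    best, best_gain = None, float("-inf")
    for e_prime in hypotheses:
        expected = sum(pi * gain(e, e_prime) for e, pi in zip(evidence, p))
        if expected > best_gain:
            best, best_gain = e_prime, expected
    return best

nbest = ["the cat sat", "a cat sat", "the cat sat down"]
scores = [2.0, 1.5, 1.0]                                 # toy decoder scores
print(mbr_decode(nbest, nbest, scores, alpha=0.2))
```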
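The key refactoring above is that, under a local (n-gram) gain, the expected gain of a candidate depends only on n-gram posterior probabilities and per-order factors: score(e') = theta_0·|e'| + sum over n-grams w of theta_w·#_w(e')·p(w). The sketch below shows that decomposition with the evidence lattice replaced by an explicit list of (translation, posterior) pairs, so none of the WFSA intersections of Section 4 are needed; it illustrates the scoring rule, not the automaton construction.

```python
# Refactored lattice-MBR score with an enumerated evidence set standing in
# for the lattice: score(e') = theta_0*|e'| + sum_w theta_w * #_w(e') * p(w).
from collections import Counter

def ngrams(tokens, max_order=4):
    for n in range(1, max_order + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def ngram_posteriors(evidence, max_order=4):
    """p(w) = posterior mass of evidence paths containing n-gram w at least once."""
    p = Counter()
    for sent, prob in evidence:
        for w in set(ngrams(sent.split(), max_order)):
            p[w] += prob
    return p

def lattice_mbr(hypotheses, evidence, theta_0, theta, max_order=4):
    p = ngram_posteriors(evidence, max_order)
    def score(e):
        toks = e.split()
        counts = Counter(ngrams(toks, max_order))
        return theta_0 * len(toks) + sum(
            theta[len(w) - 1] * c * p[w] for w, c in counts.items())
    return max(hypotheses, key=score)

evidence = [("the cat sat", 0.5), ("the cat sat down", 0.3), ("a cat sat", 0.2)]
hyps = [sent for sent, _ in evidence]
print(lattice_mbr(hyps, evidence, theta_0=-0.5, theta=[1.0, 2.0, 3.0, 4.0]))
```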
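Section 5 above derives theta_0 and the theta_n from a first-order Taylor expansion of corpus log-BLEU and then re-parameterises them with a unigram precision p, a precision decay ratio r, and a token count T, noting that the MBR decision does not depend on T. The exact closed form is not legible in the text here, so the parameterisation below is an assumption chosen to match those stated properties, not a quotation of the paper's equation; the p and r values are the Arabic-to-English averages reported in the experiments above.

```python
# Hedged sketch of n-gram factors for the linear corpus-BLEU gain.
# Assumed form (consistent with the description, not quoted from the paper):
#   theta_0 = -1 / T,   theta_n = 1 / (4 * T * p * r**(n - 1)),
# so that scaling by T leaves the argmax unchanged.
def ngram_factors(p, r, T=1.0, max_order=4):
    theta_0 = -1.0 / T
    theta = [1.0 / (4.0 * T * p * r ** (n - 1)) for n in range(1, max_order + 1)]
    return theta_0, theta

theta_0, theta = ngram_factors(p=0.85, r=0.72)   # Arabic-to-English dev averages
print(theta_0, theta)
```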
D08-1065
Lattice Minimum Bayes-Risk Decoding for Statistical Machine Translation. We present minimum Bayes-risk (MBR) decoding over translation lattices that compactly encode a huge number of translation hypotheses. We describe conditions on the loss function that will enable efficient implementation of MBR decoders on lattices. We introduce an approximation to the BLEU score that satisfies these conditions. MBR decoding under this approximate BLEU is realized using weighted finite-state automata. Our experiments show that the lattice MBR decoder yields moderate, consistent gains in translation performance over N-best MBR decoding on Arabic-to-English, Chinese-to-English, and English-to-Chinese translation tasks. We conduct a range of experiments to understand why lattice MBR improves upon N-best MBR, and study the impact of various parameters on MBR performance. We consider Taylor approximations to the logarithm of BLEU. We extend MBR to word lattices, which improves performance over k-best list MBR. The log-BLEU function must be modified slightly to yield a linear Taylor approximation: we replace the clipped n-gram count with the product of an n-gram count and an n-gram indicator function. We compute expected feature values by intersecting the translation lattice with a lattice for each n-gram t.
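The linear gain used in the lattice MBR record above approximates corpus BLEU, which Section 5 defines as a brevity penalty times the geometric mean of four n-gram precisions, ignoring the small difference between candidate length and n-gram count. A direct sketch of that definition, under the same simplification of using the candidate length as the denominator for every order:

```python
# Corpus BLEU as defined in Section 5 above: brevity penalty times the
# geometric mean of n-gram precisions, with c0 used as every denominator.
import math

def corpus_bleu(r, c0, matches):
    """r: reference length, c0: candidate length, matches: [c1, c2, c3, c4]."""
    bp = min(1.0, math.exp(1.0 - r / c0))
    log_prec = sum(math.log(cn / c0) for cn in matches) / len(matches)
    return bp * math.exp(log_prec)

print(corpus_bleu(r=1000, c0=980, matches=[800, 550, 400, 300]))
```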
joint unsupervised coreference resolution with markov logic machine learning approaches to coreference resolution are typically supervised and require expensive labeled data some unsupervised approaches have been proposed but they are less accurate in this paper we present the first unsupervised approach that is competitive with supervised ones this is made possible by performing joint inference across mentions in contrast to the pairwise classification typically used in supervised methods and by using markov logic as a representation language which enables us to easily express relations like apposition and predicate nominals on muc and ace datasets our model outperforms haghigi and kleins one using only a fraction of the training data and often matches or exceeds the accuracy of stateoftheart supervised models the goal of coreference resolution is to identify mentions that refer to the same entitiesthis is a key subtask in many nlp applications including information extraction question answering machine translation and otherssupervised learning approaches treat the problem as one of classification for each pair of mentions predict whether they corefer or not while successful these approaches require labeled training data consisting of mention pairs and the correct decisions for themthis limits their applicabilityunsupervised approaches are attractive due to the availability of large quantities of unlabeled texthowever unsupervised coreference resolution is much more difficulthaghighi and kleins model the most sophisticated to date still lags supervised ones by a substantial marginextending it appears difficult due to the limitations of its dirichlet processbased representationthe lack of label information in unsupervised coreference resolution can potentially be overcome by performing joint inference which leverages the easy decisions to help make related hard onesrelations that have been exploited in supervised coreference resolution include transitivity and anaphoricity however there is little work to date on joint inference for unsupervised resolutionwe address this problem using markov logic a powerful and flexible language that combines probabilistic graphical models and firstorder logic markov logic allows us to easily build models involving relations among mentions like apposition and predicate nominalsby extending the stateoftheart algorithms for inference and learning we developed the first generalpurpose unsupervised learning algorithm for markov logic and applied it to unsupervised coreference resolutionwe test our approach on standard muc and ace datasetsour basic model trained on a minimum of data suffices to outperform haghighi and kleins oneour full model using apposition and other relations for joint inference is often as accurate as the best supervised models or morewe begin by reviewing the necessary background on markov logicwe then describe our markov logic network for joint unsupervised coreference resolution and the learning and inference algorithms we usedfinally we present our experiments and resultsmost existing supervised learning approaches for coreference resolution are suboptimal since they resolve each mention pair independently only imposing transitivity in postprocessing moreover many of them break up the resolution step into subtasks which further forsakes opportunities for joint inference that have been shown to be helpful using graph partitioning mccallum wellner incorporated transitivity into pairwise classification and achieved the stateoftheart result 
on the muc6 dataset but their approach can only leverage one binary relation at a time not arbitrary relations among mentionsdenis baldridge determined anaphoricity and pairwise classification jointly using integer programming but they did not incorporate transitivity or other relationswhile potentially more appealing unsupervised learning is very challenging and unsupervised coreference resolution systems are still rare to this dateprior to our work the best performance in unsupervised coreference resolution was achieved by haghighi klein using a nonparametric bayesian model based on hierarchical dirichlet processesat the heart of their system is a mixture model with a few linguistically motivated features such as head words entity properties and saliencetheir approach is a major step forward in unsupervised coreference resolution but extending it is challengingthe main advantage of dirichlet processes is that they are exchangeable allowing parameters to be integrated out but haghighi and klein forgo this when they introduce saliencetheir model thus requires gibbs sampling over both assignments and parameters which can be very expensivehaghighi and klein circumvent this by making approximations that potentially hurt accuracyat the same time the dirichlet process prior favors skewed cluster sizes and a number of clusters that grows logarithmically with the number of data points neither of which seems generally appropriate for coreference resolutionfurther deterministic or strong nondeterministic dependencies because gibbs sampling to break down making it difficult to leverage many linguistic regularitiesfor example apposition suggests coreference and thus the two mentions it relates should always be placed in the same clusterhowever gibbs sampling can only move one mention at a time from one cluster to another and this is unlikely to happen because it would require breaking the apposition ruleblocked sampling can alleviate this problem by sampling multiple mentions together but it requires that the block size be predetermined to a small fixed numberwhen we incorporate apposition and other regularities the blocks can become arbitrarily large making this infeasiblefor example suppose we also want to leverage predicate nominals then a sentence like he is bill gates the chairman of microsoft requires a block of four mentions he bill gates the chairman of microsoft and bill gates the chairman of microsoftsimilar difficulties occur with other inference methodsthus extending haghighi and kleins model to include richer linguistic features is a challenging problemour approach is instead based on markov logic a powerful representation for joint inference with uncertainty like haghighi and kleins our model is clusterbased rather than pairwise and implicitly imposes transitivitywe do not predetermine anaphoricity of a mention but rather fuse it into the integrated resolution processas a result our model is inherently joint among mentions and subtasksit shares several features with haghighi kleins model but removes or refines features where we believe it is appropriate tomost importantly our model leverages apposition and predicate nominals which haghighi klein did not usewe show that this can be done very easily in our framework and yet results in very substantial accuracy gainsit is worth noticing that markov logic is also well suited for joint inference in supervised systems nontrivial effort to incorporate can be handled in markov logic with the addition of a single formula in many nlp applications 
there exist rich relations among objects and recent work in statistical relational learning and structured prediction has shown that leveraging these can greatly improve accuracyone of the most powerful representations for joint inference is markov logic a probabilistic extension of firstorder logic a markov logic network is a set of weighted firstorder clausestogether with a set of constants it defines a markov network with one node per ground atom and one feature per ground clausethe weight of a feature is the weight of the firstorder clause that originated itthe probability of a state x in such a network is given by p exp where z is a normalization constant wz is the weight of the ith clause fz 1 if the ith clause is true and fz 0 otherwisemarkov logic makes it possible to compactly specify probability distributions over complex relational domainsefficient inference can be performed using mcsat mcsat is a slice sampling markov chain monte carlo algorithmslice sampling introduces auxiliary variables you that decouple the original ones x and alternately samples you conditioned on x and viceversato sample from the slice mcsat calls samplesat which uses a combination of satisfiability testing and simulated annealingthe advantage of using a satisfiability solver is that it efficiently finds isolated modes in the distribution and as a result the markov chain mixes very rapidlythe slice sampling scheme ensures that detailed balance is preservedmcsat is orders of magnitude faster than previous mcmc algorithms like gibbs sampling making efficient sampling possible on a scale that was previalgorithm 1 mcsat x satisfy for i 1 to num samples do m 0 for all ck e clauses satisfied by x do with probability 1 elk add ck to m ously out of reachalgorithm 1 gives pseudocode for mcsatat iteration i 1 the factor 0k for clause ck is either elk if ck is satisfied in x or 1 otherwisemcsat first samples the auxiliary variable uk uniformly from then samples a new state uniformly from the set of states that satisfy 0k uk for all k equivalently for each k with probability 1 elk the next state must satisfy ckin general we can factorize the probability distribution in any way that facilitates inference sample the uks and make sure that the next state is drawn uniformly from solutions that satisfy 0k uk for all factorsmcsat like most existing relational inference algorithms grounds all predicates and clauses thus requiring memory and time exponential in the predicate and clause aritieswe developed a general method for producing a lazy version of relational inference algorithms which carries exactly the same inference steps as the original algorithm but only maintains a small subset of active predicatesclauses grounding more as neededwe showed that lazymcsat the lazy version of mcsat reduced memory and time by orders of magnitude in several domainswe use lazymcsat in this papersupervised learning for markov logic maximizes the conditional loglikelihood l log p where y represents the nonevidence predicates x the evidence predicates and x y their values in the training datafor simplicity from now on we omit x whose values are fixed and always conditioned onthe optimization problem is convex and a global optimum can be found using gradient descent with the gradient being where ni is the number of true groundings of clause ithe expected count can be approximated as where yk are samples generated by mcsatto combat overfitting a gaussian prior is imposed on all weightsin practice it is difficult to tune the learning rate 
for gradient descent especially when the number of groundings varies widely among clauseslowd domingos used a preconditioned scaled conjugate gradient algorithm to address this problemthis estimates the optimal step size in each step as where g is the gradient d the conjugate update direction and a a parameter that is automatically tuned to trade off secondorder information with gradient descenth is the hessian matrix with the th entry being the hessian can be approximated with the same samples used for the gradientits negative inverse diagonal is used as the preconditioner1 the opensource alchemy package provides implementations of existing algorithms for markov logicin section 5 we develop the first generalpurpose unsupervised learning algorithm for markov logic by extending the existing algorithms to handle hidden predicates2 1lowd domingos showed that α can be computed more efficiently without explicitly approximating or storing the hessianreaders are referred to their paper for details2alchemy includes a discriminative them algorithm but it assumes that only a few values are missing and cannot handle completely hidden predicateskok domingos applied markov logic to relational clustering but they used hard themin this section we present our mln for joint unsupervised coreference resolutionour model deviates from haghighi kleins in several important waysfirst our mln does not model saliences for proper nouns or nominals as their influence is marginal compared to other features for pronoun salience it uses a more intuitive and simpler definition based on distance and incorporated it as a prioranother difference is in identifying headsfor the ace datasets haghighi and klein used the gold heads for the muc6 dataset where labels are not available they crudely picked the rightmost token in a mentionwe show that a better way is to determine the heads using head rules in a parserthis improves resolution accuracy and is always applicablecrucially our mln leverages syntactic relations such as apposition and predicate nominals which are not used by haghighi and kleinin our approach what it takes is just adding two formulas to the mlnas common in previous work we assume that true mention boundaries are givenwe do not assume any other labeled informationin particular we do not assume gold name entity recognition labels and unlike haghighi klein we do not assume gold mention types we determined the head of a mention either by taking its rightmost token or by using the head rules in a parserwe detected pronouns using a listthe main query predicate is inclust which is true iff mention m is in cluster c the t notation signifies that for each m this predicate is true for a unique value of c the main evidence predicate is head where m is a mention and t a token and which is true iff t is the head of m a key component in our mln is a simple head mixture model where the mixture component priors are represented by the unit clause inclust and the head distribution is represented by the head prediction rule all free variables are implicitly universally quantifiedthe notation signifies that the mln contains an instance of the rule with a separate weight for each value combination of the variables with a plus signby convention at each inference step we name each nonempty cluster after the earliest mention it containsthis helps break the symmetry among mentions which otherwise produces multiple optima and makes learning unnecessarily harderto encourage clustering we impose an exponential prior on the number of 
nonempty clusters with weight 1the above model only clusters mentions with the same head and does not work well for pronounsto address this we introduce the predicate isprn which is true iff the mention m is a pronoun and adapt the head prediction rule as follows this is always false when m is a pronoun and thus applies only to nonpronounspronouns tend to resolve with mentions that are semantically compatible with themthus we introduce predicates that represent entity type number and gender type number gender where x can be either a cluster or mention e e personorganizationlocationother n e singularplural and g e male female neutermany of these are known for pronouns and some can be inferred from simple linguistic cues 3 entity type assignment is represented by the unit clause type and similarly for number and gendera mention should agree with its cluster in entity typethis is ensured by the hard rule inclust type 3we used the following cues mr ms jr inc corp corporation companythe proportions of known properties range from 14 to 26there are similar hard rules for number and genderdifferent pronouns prefer different entity types as represented by which only applies to pronouns and whose weight is positive if pronoun t is likely to assume entity type e and negative otherwisethere are similar rules for number and genderaside from semantic compatibility pronouns tend to resolve with nearby mentionsto model this we impose an exponential prior on the distance between a pronoun and its antecedent with weight 14 this is similar to haghighi and kleins treatment of salience but simplersyntactic relations among mentions often suggest coreferenceincorporating such relations into our mln is straightforwardwe illustrate this with two examples apposition and predicate nominalswe introduce a predicate for apposition appo where x y are mentions and which is true iff y is an appositive of xwe then add the rule appo inclust which ensures that x y are in the same cluster if y is an appositive of xsimilarly we introduce a predicate for predicate nominals prednom and the corresponding rule5 the weights of both rules can be learned from data with a positive prior meanfor simplicity in this paper we treat them as hard constraintswe also consider a rulebased system that clusters nonpronouns by their heads and attaches a pronoun to the cluster which has no known conflicting type number or gender and contains the closest antecedent for the pronounthis system can be encoded in an mln with just four rulesthree of them are the ones for enforcing agreement in type number and gender between a cluster and its members as defined in the base mlnthe fourth rule is with a large but not infinite weight this rule has the effect of clustering nonpronouns by their heads except when it violates the hard rulesthe mln can also include the apposition and predicatenominal rulesas in the base mln we impose the same exponential prior on the number of nonempty clusters and that on the distance between a pronoun and its antecedentthis simple mln is remarkably competitive as we will see in the experiment sectionunsupervised learning in markov logic maximizes the conditional loglikelihood where z are unknown predicatesin our coreference resolution mln y includes head and known groundings of type number and gender z includes inclust and unknown groundings of type number gender and x includes isprn appo and prednomwith z the optimization problem is no longer convexhowever we can still find a local optimum using gradient descent with the 
gradient being where ni is the number of true groundings of the ith clausewe extended pscg for unsupervised learningthe gradient is the difference of two expectations each of which can be approximated using samples generated by mcsatthe th entry of and the step size can be computed accordinglysince our problem is no longer convex the negative diagonal hessian may contain zero or negative entries so we first took the absolute values of the diagonal and added 1 then used the inverse as the preconditionerwe also adjusted a more conservatively than lowd domingos notice that when the objects form independent subsets we can process them in parallel and then gather sufficient statistics for learningwe developed an efficient parallelized implementation of our unsupervised learning algorithm using the messagepassing interface learning in muc6 took only one hour and in ace2004 two and a halfto reduce burnin time we initialized mcsat with the state returned by maxwalksat rather than a random solution to the hard clausesin the existing implementation in alchemy samplesat flips only one atom in each step which is inefficient for predicates with uniquevalue constraints such predicates can be viewed as multivalued predicates with value ranging over all cs and are prevalent in nlp applicationswe adapted samplesat to flip two or more atoms in each step so that the uniquevalue constraints are automatically satisfiedby default mcsat treats each ground clause as a separate factor while determining the slicethis can be very inefficient for highly correlated clausesfor example given a nonpronoun mention m currently in cluster c and with head t among the mixture prior rules involving m inclust is the only one that is satisfied and among those headprediction rules involving m isprninclusthead is the only one that is satisfied the factors for these rules multiply to exp where wmr is the weight for inclust and wmt is the weight for isprn inclust head since an unsatisfied rule contributes a factor of e0 1we extended mcsat to treat each set of mutually exclusive and exhaustive rules as a single factoreg for the above m mcsat now samples you uniformly from and requires that in the next state 0 be no less than you equivalently the new cluster and head for m should satisfy wmc wmct logwe extended samplesat so that when it considers flipping any variable involved in such constraints it ensures that their new values still satisfy these constraintsthe final clustering is found using the maxwalksat weighted satisfiability solver with the appropriate extensionswe first ran a maxwalksat pass with only finiteweight formulas then ran another pass with all formulaswe found that this significantly improved the quality of the results that maxwalksat returnedwe implemented our method as an extension to the alchemy system since our learning uses sampling all results are the average of five runs using different random seedsour optimization problem is not convex so initialization is importantthe core of our model tends to cluster nonpronouns with the same headtherefore we initialized by setting all weights to zero and running the same learning algorithm on the base mln while assuming that in the ground truth nonpronouns are clustered by their headswe used 30 iterations of pscg for learningwe generated 100 samples using mcsat for each expectation approximation6 we conducted experiments on muc6 ace2004 and ace phrase2 we evaluated our systems using two commonlyused scoring programs muc and b3 to gain more insight we also report pairwise 
resolution scores and mean absolute error in the number of clustersthe muc6 dataset consists of 30 documents for testing and 221 for trainingto evaluate the contribution of the major components in our model we conducted five experiments each differing from the previous one in a single aspectwe emphasize that our approach is unsupervised and thus the data only contains raw text plus true mention boundariesmln1 in this experiment the base mln was used and the head was chosen crudely as the rightmost token in a mentionour system was run on each test document separately using a minimum of training data mln30 our system was trained on all 30 test documents togetherthis tests how much can be gained by pooling informationmlnh the heads were determined using the head rules in the stanford parser plus simple heuristics to handle suffixes such as corp and inc mlnha the apposition rule was addedmlnhan the predicatenominal rule was addedthis is our full modelwe also compared with two rulebased mlns rule chose the head crudely as the rightmost token in a mention and did not include the apposition rule and predicatenominal rule rulehan chose the head using the head rules in the stanford parser and included the apposition rule and predicatenominal rulepast results on ace were obtained on different releases of the datasets eg haghighi and klein used the ace2004 training corpus ng and denis and baldridge used ace phrase2 and culotta et al used the ace2004 formal test setin this paper we used the ace2004 training corpus and ace phrase2 to enable direct comparisons with haghighi klein ng and denis and baldridge due to license restrictions we were not able to obtain the ace2004 formal test set and so cannot compare directly to culotta et al the english version of the ace2004 training corpus contains two sections bnews and nwire with 220 and 128 documents respectivelyace2 contains a training set and a test setin our experiments we only used the test set which contains three sections bnews nwire and npaper with 51 29 and 17 documents respectivelytable 1 compares our system with previous approaches on the muc6 dataset in muc scoresour approach greatly outperformed haghighi klein the stateoftheart unsupervised systemour system trained on individual documents achieved an f1 score more than 7 higher than theirs trained on 60 documents and still outperformed it trained on 381 documentstraining on the 30 test documents together resulted in a significant gainbetter head identification led to a large improvement in accuracy which is expected since for mentions with a right modifier the rightmost tokens confuse rather than help coreference notice that with this improvement our system already outperforms a stateoftheart supervised system leveraging apposition resulted in another large improvement and predicate nominals also helpedour full model scores about 9 higher than haghighi klein and about 6 higher than mccallum wellner to our knowledge this is the best coreference accuracy reported on muc6 to date7 the b3 scores of mlnhan on the muc6 dataset are 774 676 and 722 interestingly the rulebased mln sufficed to outperform haghighi klein and by using better heads and the apposition and predicatenominal rules it outperformed mccallum wellner the supervised systemthe mlns with learning on the other hand substantially outperformed the corresponding rulebased onestable 2 compares our system to haghighi klein on the ace2004 training set in muc scoresagain our system outperformed theirs by a large marginthe b3 scores of mlnhan 
on the ace2004 dataset are 716 684 and 700 for bnews and 757 692 and 723 for nwire did not report b3due to license restrictions we could not compare directly to culotta et al who reported overall b3f1 of 793 on the formal test settables 3 and 4 compare our system to two recent supervised systems ng and denis baldridge our approach significantly outperformed ng it tied with denis baldridge on nwire and was somewhat less accurate on bnews and npaperluo et al pointed out that one can obtain a very high muc score simply by lumping all mentions togetherb3 suffers less from this problem but is not perfectthus we also report pairwise resolution scores the gold number of clusters and our mean absolute error in the number of clusters systems that simply merge all mentions will have exceedingly low pairwise precision and very large errors in the number of clustersour system has fairly good pairwise precisions and small mean error in the number of clusters which verifies that our results are soundmany of our systems remaining errors involve nominalsadditional features should be considered to distinguish mentions that have the same head but are different entitiesfor pronouns many remaining errors can be corrected using linguistic knowledge like binding theory and salience hierarchyour heuristics for identifying appositives and predicate nominals also make many errors which often can be fixed with additional name entity recognition capabilities the most challenging case involves phrases with different heads that are both proper nouns handling these cases requires domain knowledge andor more powerful joint inferencethis paper introduces the first unsupervised coreference resolution system that is as accurate as supervised systemsit performs joint inference among mentions using relations like apposition and predicate nominalsit uses markov logic as a representation language which allows it to be easily extended to incorporate additional linguistic and world knowledgefuture directions include incorporating additional knowledge conducting joint entity detection and coreference resolution and combining coreference resolution with other nlp taskswe thank the anonymous reviewers for their commentsthis research was funded by darpa contracts nbchd03001002000225 fa875007d0185 and hr001107c0060 darpa grant fa87500520283 nsf grant iis0534881 and onr grant n000140510313 and n000140810670the views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies either expressed or implied of darpa nsf onr or the united states government
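A Markov logic network, as used in the coreference record above, defines P(x) proportional to the exponential of the summed weights of the ground clauses that x satisfies. The brute-force sketch below enumerates all worlds of a two-atom toy problem to make that distribution concrete; the clauses are illustrative, not the coreference MLN of the paper, and real inference uses MC-SAT rather than enumeration.

```python
# Brute-force illustration of the Markov logic distribution
# P(x) = (1/Z) exp(sum_i w_i f_i(x)) over a two-atom toy problem.
import itertools
import math

def mln_distribution(atoms, weighted_clauses):
    worlds = [dict(zip(atoms, values))
              for values in itertools.product([False, True], repeat=len(atoms))]
    scores = [math.exp(sum(w * test(x) for w, test in weighted_clauses))
              for x in worlds]
    z = sum(scores)
    return [(x, s / z) for x, s in zip(worlds, scores)]

atoms = ["InClust(m1,c1)", "InClust(m2,c1)"]
clauses = [
    # apposition-style clause: the two mentions should share a cluster
    (4.0, lambda x: x["InClust(m1,c1)"] == x["InClust(m2,c1)"]),
    # unit clause acting as a mixture-prior term
    (0.5, lambda x: x["InClust(m1,c1)"]),
]
for world, prob in mln_distribution(atoms, clauses):
    print(world, round(prob, 3))
```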
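Algorithm 1 above (MC-SAT) starts from a solution of the hard clauses and, at each step, samples uniformly from the assignments that satisfy a random subset M of the currently satisfied clauses, each added with probability 1 − e^(−w_k). In the sketch below the SampleSAT call is replaced by brute-force enumeration of satisfying assignments, which only works for tiny toy problems but keeps the algorithmic structure visible.

```python
# Toy MC-SAT sampler following Algorithm 1 above, with SampleSAT replaced by
# brute-force enumeration (feasible only for tiny illustrative problems).
import itertools
import math
import random

def satisfying_worlds(atoms, tests):
    for values in itertools.product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        if all(test(world) for test in tests):
            yield world

def mc_sat(atoms, weighted_clauses, hard_clauses, num_samples=100, rng=random):
    x = next(satisfying_worlds(atoms, hard_clauses))   # x(0) <- Satisfy(hard clauses)
    samples = []
    for _ in range(num_samples):
        m = list(hard_clauses)                         # hard clauses must always hold
        for w, test in weighted_clauses:
            if test(x) and rng.random() < 1.0 - math.exp(-w):
                m.append(test)                         # keep this clause satisfied
        x = rng.choice(list(satisfying_worlds(atoms, m)))
        samples.append(dict(x))
    return samples

atoms = ["InClust(m1,c1)", "InClust(m2,c1)"]
soft = [(2.0, lambda x: x["InClust(m1,c1)"] == x["InClust(m2,c1)"])]
hard = [lambda x: True]                                # no hard clauses in this toy
samples = mc_sat(atoms, soft, hard, num_samples=200)
agree = sum(s["InClust(m1,c1)"] == s["InClust(m2,c1)"] for s in samples)
print(agree / len(samples))                            # close to e^2 / (e^2 + 1)
```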
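The unsupervised gradient above is, for each first-order clause, the difference between its expected number of true groundings with the evidence clamped and without it, both approximated from MC-SAT samples. A small sketch of that estimate and a plain gradient-ascent step follows (the paper itself uses preconditioned scaled conjugate gradient, which is not reproduced here).

```python
# Unsupervised gradient from two MC-SAT sample sets: one drawn with the
# evidence y clamped, one drawn over y and the hidden predicates jointly.
def clause_count(world, groundings):
    """groundings: the ground clauses of one first-order clause (0/1 tests)."""
    return sum(test(world) for test in groundings)

def unsupervised_gradient(clamped_samples, free_samples, clause_groundings):
    gradient = []
    for groundings in clause_groundings:
        e_clamped = sum(clause_count(x, groundings)
                        for x in clamped_samples) / len(clamped_samples)
        e_free = sum(clause_count(x, groundings)
                     for x in free_samples) / len(free_samples)
        gradient.append(e_clamped - e_free)
    return gradient

def ascent_step(weights, gradient, learning_rate=0.01):
    # plain gradient ascent stands in for the PSCG updates of the paper
    return [w + learning_rate * g for w, g in zip(weights, gradient)]
```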
D08-1068
joint unsupervised coreference resolution with markov logicmachine learning approaches to coreference resolution are typically supervised and require expensive labeled datasome unsupervised approaches have been proposed but they are less accuratein this paper we present the first unsupervised approach that is competitive with supervised onesthis is made possible by performing joint inference across mentions in contrast to the pairwise classification typically used in supervised methods and by using markov logic as a representation language which enables us to easily express relations like apposition and predicate nominalson muc and ace datasets our model outperforms haghighi and kleins model while using only a fraction of the training data and often matches or exceeds the accuracy of stateoftheart supervised modelswe empirically find that global approaches perform better than approaches that process a text incrementallyour method is based on the entitymention modelin the predicate nominative construction the object of a copular verb is constrained to corefer with its subject
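The summary above leans on two syntactic relations, apposition and predicate nominals, which the paper expresses as Markov logic formulas over mentions. The sketch below only illustrates the kind of shallow candidate heuristics such relations could be grounded in; the token-span mention format, the copula list, and every name here are assumptions, and the paper's own heuristics (plus the learned MLN weights on top of them) are richer than this.

```python
COPULAS = {"is", "are", "was", "were", "am", "be", "been", "being"}

def candidate_appositions(mentions, sentences):
    """Mention pairs that look appositive: same sentence, in order, and
    separated by nothing but a comma (e.g. "Bill Gates, the chairman").

    mentions:  list of (sent_id, start, end) token spans, end exclusive
    sentences: dict mapping sent_id -> list of tokens"""
    pairs = []
    for m1 in mentions:
        for m2 in mentions:
            if m1 == m2 or m1[0] != m2[0] or m1[2] > m2[1]:
                continue
            between = sentences[m1[0]][m1[2]:m2[1]]
            if between == [","]:
                pairs.append((m1, m2))
    return pairs

def candidate_predicate_nominals(mentions, sentences):
    """Mention pairs linked by a bare copula (e.g. "Microsoft is the
    world's largest software maker"): the subject and the predicate
    nominal are constrained to corefer."""
    pairs = []
    for m1 in mentions:
        for m2 in mentions:
            if m1 == m2 or m1[0] != m2[0] or m1[2] > m2[1]:
                continue
            between = [t.lower() for t in sentences[m1[0]][m1[2]:m2[1]]]
            if len(between) == 1 and between[0] in COPULAS:
                pairs.append((m1, m2))
    return pairs
```

In the full model such candidate pairs only supply evidence; whether two mentions actually corefer is still decided jointly with all other mentions by MLN inference.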
latticebased minimum error rate training for statistical machine translation minimum error rate training is an effective means to estimate the feature function weights of a linear model such that an automated evaluation criterion for measuring system performance can directly be optimized in training to accomplish this the training procedure determines for each feature function its exact error surface on a given set of candidate translations the feature function weights are then adjusted by traversing the error surface combined over all sentences and picking those values for which the resulting error count reaches a minimum typically in mert are represented as lists which contain the probable translation hypotheses produced by a decoder in this paper we present a novel algorithm that allows for efficiently constructing and reprethe exact error surface of translations that are encoded in a phrase lattice to mert the number of candidate translations thus taken into account increases by several orders of magnitudes the proposed method is used to train the feature function weights of a phrasebased statistical machine translation system experiments conducted on the nist 2008 translation tasks show significant runtime improvements moderate bleu score gains over mert many statistical methods in natural language processing aim at minimizing the probability of sentence errorsin practice however system quality is often measured based on error metrics that assign nonuniform costs to classification errors and thus go far beyond counting the number of wrong decisionsexamples are the mean average precision for ranked retrieval the fmeasure for parsing and the bleu score for statistical machine translation a class of training criteria that provides a tighter connection between the decision rule and the final error metric is known as minimum error rate training and has been suggested for smt in mert aims at estimating the model parameters such that the decision under the zeroone loss function maximizes some endtoend performance measure on a development corpusin combination with loglinear models the training procedure allows for a direct optimization of the unsmoothed error countthe criterion can be derived from bayes decision rule as follows let ff1 fi denote a source sentence which is to be translated into a target sentence ee1 eiunder the zeroone loss function the translation which maximizes the a posteriori probability is chosen earg max prpefq e since the true posterior distribution is unknown prpefqis modeled via a loglinear translation model which combines some feature functions hmpe fq with feature function weights am m1 m the feature function weights are the parameters of the model and the objective of the mert criterion is to find a parameter set am that minimizes the error count on a representative set of training sentencesmore precisely let f denote the source sentences of a training corpus with given reference translations vidual sentences ie eprs 1 and let cstes1 eskudenote a set of k candidate translationsassuming that the corpusbased error count for some translations es 1 is additively decomposable into the error counts of the indiin it was shown that linear models can effectively be trained under the mert criterion using a special line optimization algorithmthis line optimization determines for each feature function hm and sentence fs the exact error surface on a set of candidate translations csthe feature function weights are then adjusted by traversing the error surface combined over all 
sentences in the training corpus and moving the weights to a point where the resulting error reaches a minimumcandidate translations in mert are typically represented as nbest lists which contain the n most probable translation hypothesesa downside of this approach is however that nbest lists can only capture a very small fraction of the search spaceas a consequence the line optimization algorithm needs to repeatedly translate the development corpus and enlarge the candidate repositories with newly found hypotheses in order to avoid overfitting on cs and preventing the optimization procedure from stopping in a poor local optimumin this paper we present a novel algorithm that allows for efficiently constructing and representing the unsmoothed error surface for all translations that are encoded in a phrase latticethe number of candidate translations thus taken into account increases by several orders of magnitudes compared to nbest mertlattice mert is shown to yield significantly faster convergence rates while it explores a much larger space of candidate translations which is exponential in the lattice sizedespite this vast search space we show that the suggested algorithm is always efficient in both running time and memorythe remainder of this paper is organized as followssection 2 briefly reviews nbest mert and introduces some basic concepts that are used in order to develop the line optimization algorithm for phrase lattices in section 3section 4 presents an upper bound on the complexity of the unsmoothed error surface for the translation hypotheses represented in a phrase latticethis upper bound is used to prove the space and runtime efficiency of the suggested algorithmsection 5 lists some best practices for mertsection 6 discusses related worksection 7 reports on experiments conducted on the nist 2008 translation tasksthe paper concludes with a summary in section 8the goal of mert is to find a weights set that minimizes the unsmoothed error count on a representative training corpus this can be accomplished through a sequence of line minimizations along some vector directionstdm starting from an initial point λm 1 computing the most probable sentence hypothesis out of a set of k line λm1rydm1 results in the following optimization problem hence the total scorepqfor any candidate translation corresponds to a line in the plane with γ as the independent variablefor any particular choice of γ the decoder seeks that translation which yields the largest score and therefore corresponds to the topmost line segmentoverall the candidate repository cs defines k lines where each line may be divided into at most k line segments due to possible intersections with the other k1 linesthe sequence of the topmost line segments constitute the upper envelope which is the pointwise maximum over all lines induced by csthe upper envelope is a convex hull and can be inscribed with a convex polygon whose edges are the segments of a piecewise linear function in γ the importance of the upper envelope is that it provides a compact encoding of all possible outcomes that a rescoring of cs may yield if the parameter set λm1 is moved along the chosen directiononce the upper envelope has been determined we can project its constituent line segments onto the error counts of the corresponding candidate translations this projection is independent of how the envelope is generated and can therefore be applied to any set of line segments1an effective means to compute the upper envelope is a sweep line algorithm which is often 
used in computational geometry to determine the intersection points of a sequence of lines or line segments the idea is to shift a vertical ray from8to8over the plane while keeping track of those points where two or more lines intersectsince the upper envelope is fully specified by the topmost line segments it suffices to store the following components for each line object ℓ the xintercept ℓx with the leftadjacent line the slope ℓm and the yintercept ℓy a fourth component ℓt is used to store the candidate translationalgorithm 1 shows the pseudo code for a sweep line algorithm which reduces an input array a0k1 consisting of the k line objects of the candidate repository cs to its upper envelopeby construction the upper envelope consists of at most k line segmentsthe endpoints of each line segment define the interval boundaries at which the decision made by the decoder will changehence as γ increases from8to8 we will see that the most probable translation hypothesis will change whenever γ passes an intersection pointlet γis γ2s γfsn denote the sequence of interval boundaries and let aefs1 ae2s aefsn denote the corresponding sequence of changes in the error count where aens is the amount by which the error count will change if γ is moved from a point in ns1 γns to a point inr γns γns1qboth sequences together provide an exhaustive representation of the unsmoothed error surface for the sentence fs along the line λm ry dmthe error surface for the whole training corpus is obtained by merging the interval boundaries over all sentences in the training corpusthe optimal γ can then be found by traversing the merged error surface and choosing a point from the interval where the total error reaches its minimumafter the parameter update λm λm γopt dm1 the decoder may find new translation hypotheses which are merged into the candidate repositories if they are ranked among the top n candidatesthe relation k n holds therefore only in the first iterationfrom the second iteration on k is usually larger than n the sequence of line optimizations and decodings is repeated until the candidate repositories remain unchanged and γopt0in this section the algorithm for computing the upper envelope on nbest lists is extended to phrase latticesfor a description on how to generate lattices see formally a phrase lattice for a source sentence f is defined as a connected directed acyclic graph gfp with vertice set vf unique source and sink nodes s tp c vf and a set of arcs ef c vf x vfeach arc is labeled with a phrase ϕij ei1 eij and the feature function values hma path 7r in gf defines a partial translation eπ of f which is the concatenation of all phrases along this paththe corresponding feature function values are obtained by summing over the arcspecific feature function values in the following we use the notation inp and outp to refer to the set of incoming and outgoing arcs for a node vp c vfsimilarly headp and tailp denote the head and tail of e c efto develop the algorithm for computing the upper envelope of all translation hypotheses that are encoded in a phrase lattice we first consider a node vpvf with some incoming and outgoing arcs each path that starts at the source node s and ends in v defines a partial translation hypothesis which can be represented as a line we now assume that the upper envelope for these partial translation hypotheses is knownthe lines that constitute this envelope shall be denoted by f1 fnnext we consider continuations of these partial translation candidates by following one of the 
outgoing arcs algorithm 2 lattice envelope input a phrase lattice 9fpvf efq output upper envelope of 9f eoutpvqeach such arc defines another line denoted by gpeif we add the slope and yintercept of gpeto each line in the setf1 fnu then the upper envelope will be constituted by segments of fi gp fn gpthis operation neither changes the number of line segments nor their relative order in the envelope and therefore it preserves the structure of the convex hullas a consequence we can propagate the resulting envelope over an outgoing arc e to a successor node v1 headpother incoming arcs for v1may be associated with different upper envelopes and all that remains is to merge these envelopes into a single combined envelopethis is however easy to accomplish since the combined envelope is simply the convex hull of the union over the line sets which constitute the individual envelopesthus by merging the arrays that store the line segments for the incoming arcs and applying algorithm 1 to the resulting array we obtain the combined upper envelope for all partial translation candidates that are associated with paths starting at the source node s and ending in v1the correctness of this procedure is based on the following two observations againproof suppose that such a continuation exists then this continuation can be represented as a line g and since f has been discarded from the envelope the path associated with g must also be a valid continuation for the line segments f1 fn that constitute the envelopethus it follows that maxpfig fngqmaxpf1 fnqg fg for some yrthis however is in contra diction with the premise that for all y c r to keep track of the phrase expansions when propagating an envelope over an outgoing arc e c tailp the phrase label ϕv hwdp has to be appended from the right to all partial translation hypotheses in the envelopethe complete algorithm then works as follows first all nodes in the phrase lattice are sorted in topological orderstarting with the source node we combine for each node v the upper envelopes that are associated with vs incoming arcs by merging their respective line arrays and reducing the merged array into a combined upper envelope using algorithm 1the combined envelope is then propagated over the outgoing arcs by associating each e c outp with a copy of the combined envelopethis copy is modified by adding the parameters of the line gp to the envelopes constituent line segmentsthe envelopes of the incoming arcs are no longer needed and can be deleted in order to release memorythe envelope computed at the sink node is by construction the convex hull over all translation hypotheses represented in the lattice and it compactly encodes those candidates which maximize the decision rule eq for any point along the line λmydm algorithm 2 shows the pseudo codenote that the component ex does not change and therefore requires no updateit remains to verify that the suggested algorithm is efficient in both running time and memoryfor this purpose we first analyze the complexity of algorithm 1 and derive from it the running time of algorithm 2after sorting each line object in algorithm 1 is visited at most three timesthe first time is when it is picked by the outer loopthe second time is when it either gets discarded or when it terminates the inner loopwhenever a line object is visited for the third time it is irrevocably removed from the envelopethe runtime complexity is therefore dominated by the initial sorting and amounts to op topological sort on a phrase lattice _ can be performed 
in time o as will be shown in section 4 the size of the upper envelope for g can never exceed the size of the arc set e the same holds for any subgraph grv of g which is induced by the paths that connect the source node s with vpvsince the envelopes propagated from the source to the sink node can only increase linearly in the number of previously processed arcs the total running time amounts to a worst case complexity of opvelogeqthe memory efficiency of the suggested algorithm results from the following theorem which provides a novel upper bound for the number of cost minimizing paths in a directed acyclic graph with arcspecific affine cost functionsthe bound is not only meaningful for proving the space efficiency of lattice mert but it also provides deeper insight into the structure and complexity of the unsmoothed error surface induced by loglinear modelssince we are examining a special class of shortest paths problems we will invert the sign of each local feature function value in order to turn the feature scores into corresponding costshence the objective of finding the best translation hypotheses in a phrase lattice becomes the problem of finding all costminimizing paths in a graph with affine cost functionstheorem let g be a connected directed acyclic graph with vertex set v unique source and sink nodes s tp c v and an arc set e c v x v in which each arc e c e is associated with an affine cost function ce ae y be ae be c r counting ties only once the cardinality of the union over the sets of all costminimizing paths for all y c r is then upperbounded by 1e 1 proof the proposition holds for the empty graph as well as for the case that v is tu with all arcs e c e joining the source and sink nodelet g therefore be a larger graphthen we perform an st cut and split g into two subgraphs g1 and g2 arcs spanning the section boundary are duplicated and connected with a newly added head or tail node the zerocost arcs in g2 that emerged from the duplication process are contracted which can be done without loss of generality because zerocost arcs do not affect the total costs of paths in the latticethe contraction essentially amounts to a removal of arcs and is required in order to ensure that the sum of edges in both subgraphs does not exceed the number of edges in g all nodes in g1 with outdegree zero are then combined into a single sink node t1similarly nodes in g2 whose indegree is zero are combined into a single source node s2let n1 and n2 denote the number of arcs in g1 and g2 respectivelyby construction nin2eboth subgraphs are smaller than g and thus due to the induction hypothesis their lower envelopes consist of at most n1 and n2 line segments respectivelywe further notice that either envelope is a convex hull whose constituent line segments inscribe a convex polygon in the following denoted by p1 and p2now we combine both subgraphs into a single graph g1by merging the sink node t1 in g1 with the source node s2 in g2the merged node is an articulation point whose removal would disconnect both subgraphs and hence all paths in g1that start at the source node s and stop in the sink node t lead through this articulation pointthe graph g1has at least as many cost minimizing paths as g although these paths as well as their associated costs might be different from those in g the additivity of the cost function and the articulation point allow us to split the costs for any path from s to t into two portions the first portion can be attributed to g1 and must be a line inside p1 the remainder can 
be attributed to g2 and must therefore be a line inside p2hence the total costs for any path in g1can be bounded by the convex hull of the superposition of p1 and p2this convex hull is again a convex polygon which consists of at most n1 n2 edges and therefore the number of cost minimizing paths in g1 is upper bounded by nin2 corollary the upper envelope for a phrase lattice gfp consists of at most ief i line segmentsthis bound can even be refined and one obtains 19 1 i v i 2both bounds are tightthis result may seem somewhat surprising as it states that independent of the choice of the direction along which the line optimization is performed the structure of the error surface is far less complex than one might expect based on the huge number of alternative translation candidates that are represented in the lattice and thus contribute to the error surfacein fact this result is a consequence of using a loglinear model which constrains how costs can evolve due to hypothesis expansionif instead quadratic cost functions were used the size of the envelopes could not be limited in the same waythe above theorem does not however provide any additional guidance that would help to choose more promising directions in the line optimization algorithm to find better local optimato alleviate this problem the following section lists some best practices that we found to be useful in the context of mertthis section addresses some techniques that we found to be beneficial in order to improve the performance of mert function weights by defining weights restrictionsthe weight restriction for a feature function hm is specified as an interval rmare lm rms lm rmp e lr you oo oo which defines the admissible region from which the feature function weight λm can be chosenif the line optimization is performed under the presence of weights restrictions γ needs to be chosen such that the following constraint holds itive or negative boost ω on the objective function if the new weight is chosen such that it matches a certain target value a a zeroweights prior provides a means of doing feature selection since the weight of a feature function which is not discriminative will be set to zeroan initialweights prior can be used to confine changes in the parameter update with the consequence that the new parameter may be closer to the initial weights setinitial weights priors are useful in cases where the starting weights already yield a decent baseline interval merging the intervalr ryzsryzs1q of a translation hypothesis can be merged with the intervalr ryz s1 ryz s of its leftadjacent translation hypothesis if the corresponding change in the error count dezs 0the resulting intervalr 1ry41s1 ryzs1q has a larger range and the choice of ryopt may be more reliable iteration will result in the update of a single feature function onlywhile this update scheme provides a ranking of the feature functions according to their discriminative power it does not take possible correlations between the feature functions into accountas a consequence the optimization procedure may stop in a poor local optimumon the other hand it is difficult to compute a direction that decorrelates two or more correlated feature functionsthis problem can be alleviated by exploring a large number of random directions which update many feature weights simultaneouslythe random directions are chosen as the lines which connect some randomly distributed points on the surface of an mdimensional hypersphere with the hyperspheres centerthe center of the hypersphere is 
defined as the initial parameter setas suggested in an alternative method for the optimization of the unsmoothed error count is powells algorithm combined with a gridbased line optimization in the mert criterion is optimized on nbest lists using the downhill simplex algorithm the optimization procedure allows for optimizing other objective function as eg the expected bleu scorea weakness of the downhill simplex algorithm is however its decreasing robustness for optimization problems in more than 10 dimensionsa different approach to minimize the expected bleu score is suggested in who use deterministic annealing to gradually turn the objective function from a convex entropy surface into the more complex risk surfacea large variety of different search strategies for mert are investigated in which provides many fruitful insights into the optimization processin mert is used to boost the bleu score on nbest reranking tasksthe incorporation of a large number of sparse feature functions is described in the paper investigates a perceptronlike online largemargin training for statistical machine translationthe described approach is reported to yield significant improvements on top of a baseline system which employs a small number of feature functions whose weights are optimized under the mert criteriona study which is complementary to the upper bound on the size of envelopes derived in section 4 is provided in which shows that the number of inference functions of any graphical model as for instance bayesian networks and markov random fields is polynomial in the size of the model if the number of parameters is fixedexperiments were conducted on the nist 2008 translation tasks under the conditions of the constrained data track for the language pairs arabictoenglish englishtochinese and chinesetoenglish the development corpora were compiled from test data used in the 2002 and 2004 nist evaluationseach corpus set provides 4 reference translations per source sentencetable 1 summarizes some corpus statisticstranslation results were evaluated using the mixedcase bleu score metric in the implementation as suggested by translation results were produced with a stateoftheart phrasebased smt system which uses emtrained word alignment models and a 5gram language model built from the web1t collection2translation hypotheses produced on the blind test data were reranked using the minimumbayes risk decision rule each system uses a loglinear combination of 20 to 30 feature functionsin a first experiment we investigated the convergence speed of lattice mert and nbest mertfigure 2 shows the evolution of the bleu score in the course of the iteration index on the zhendev1 corpus for either methodin each iteration the training procedure translates the development corpus using the most recent weights set and merges the top ranked candidate translations into the candidate repositories before the line optimization is performedfor nbest mert we used n50 which yielded the best resultsin contrast to lattice mert nbest mert optimizes all dimensions in each iteration and in addition it also explores a large number of random starting points before it redecodes and expands the hypothesis setas is typical for nbest mert the first iteration causes a dramatic performance loss caused by overadapting the candidate repositories which amounts to more than 273 bleu pointsalthough this performance loss is recouped after the 5th iteration the initial decline makes the line optimization under nbest mert more fragile since the optimum found at 
the end of the training procedure is affected by the initial performance drop rather than by the choice of the initial start weightslattice mert on the other hand results in a significantly faster convergence speed and reaches its optimum already in the 5th iterationfor lattice mert we used a graph density of 40 arcs per phrase which corresponds to an nbest size of more than two octillionp21027qentriesthis huge number of alternative candidate translations makes updating the weights under lattice mert more reliable and robust and compared to nbest mert it becomes less likely that the same feature weight needs to be picked again and adjusted in subsequent iterationsfigure 4 shows the evolution of the bleu score on the zhendev1 corpus using lattice mert with 5 weights updates per iterationthe performance drop in iteration 1 is also attributed to overfitting the candidate repositorythe decline of less than 05 in terms of bleu is however almost negligible compared to the performance drop of more than 27 in case of nbest mertthe vast number of alternative translation hypotheses represented in a lattice also increases the number of phase transitions in the error surface and thus prevents mert from selecting a low performing feature weights set at early stages in the optimization procedurethis is illustrated in figure 3 where lattice mert and nbest mert find different optima for the weight of the phrase penalty feature function after the first iterationtable 2 shows the bleu score results on the nist 2008 blind test using the combined dev1dev2 corpus as training datawhile only the aren task shows improvements on the development data lattice mert provides consistent gains over nbest mert on all three blind test setsthe reduced performance for nbest mert is a consequence of the performance drop in the first iteration which causes the final weights to be far off from the initial parameter setthis can impair the ability of nbest mert to generalize to unseen data if the initial weights are already capable of producing a decent baselinelattice mert on the other hand can produce weights sets which are closer to the initial weights and thus more likely to retain the ability to generalize to unseen datait could therefore be worthwhile to investigate whether a more elaborated version of an initialweights prior allows for alleviating this effect in case of nbest merttable 3 shows the effect of optimizing the feature function weights along some randomly chosen directions in addition to the coordinate axesthe different local optima found on the development set by using random directions result in additional gains on the blind test sets and range from 01 to 06 absolute in terms of bleuwe presented a novel algorithm that allows for efficiently constructing and representing the unsmoothed error surface over all sentence hypotheses that are represented in a phrase latticethe proposed algorithm was used to train the feature function weights of a loglinear model for a statistical machine translation system under the minimum error rate training criterionlattice mert was shown analytically and experimentally to be superior over nbest mert resulting in significantly faster convergence speed and a reduced number of decoding stepswhile the approach was used to optimize the model parameters of a single machine translation system there are many other applications in which this framework can be useful tooone possible usecase is the computation of consensus translations from the outputs of multiple machine translation 
systems where this framework allows us to estimate the system prior weights directly on confusion networks it is also straightforward to extend the suggested method to hypergraphs and forests as they are used eg in hierarchical and syntaxaugmented systems our future work will therefore focus on how much system combination and syntaxaugmented machine translation can benefit from lattice mert and to what extent feature function weights can robustly be estimated using the suggested method
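Since the passage above can only describe Algorithm 1 in prose, here is a small, self-contained sketch of the sweep-line reduction of k candidate lines to their upper envelope. The namedtuple layout, names, and tie handling are illustrative assumptions rather than the authors' implementation; the algorithm itself follows the description in the text (sort lines by slope, discard any line that is never topmost, and record for each surviving line the left boundary of the gamma interval on which it is maximal).

```python
import math
from collections import namedtuple

# slope, y-intercept, hypothesis payload, and the left boundary of the
# gamma interval on which this line is topmost (filled in by the sweep)
Line = namedtuple("Line", "m y t x", defaults=[-math.inf])

def upper_envelope(lines):
    """Sweep-line reduction of candidate lines to their upper envelope.

    Returns the lines that are pointwise maximal on some gamma interval,
    ordered by slope; the .x field of each returned line is the left
    boundary of that interval."""
    hull = []
    for line in sorted(lines, key=lambda l: (l.m, l.y)):
        line = line._replace(x=-math.inf)
        keep = True
        while hull:
            top = hull[-1]
            if line.m == top.m:
                # parallel lines: only the one with the larger intercept
                # can ever be topmost
                keep = line.y > top.y
                if not keep:
                    break
                hull.pop()
                continue
            x = (line.y - top.y) / (top.m - line.m)  # intersection with top
            if x <= top.x:
                hull.pop()                 # top is nowhere maximal: discard it
                continue
            line = line._replace(x=x)      # new line takes over from x onwards
            break
        if keep:
            hull.append(line)
    return hull

# tiny example: three hypotheses, one line each
envelope = upper_envelope([
    Line(m=-1.0, y=2.0, t="hyp A"),
    Line(m=0.5, y=0.0, t="hyp B"),
    Line(m=2.0, y=-3.0, t="hyp C"),
])
# -> hyp A is optimal for gamma < 4/3, hyp B on [4/3, 2), hyp C beyond 2
```

Projecting each surviving segment onto the error count of its hypothesis then gives the interval boundaries and error deltas that make up one sentence's error surface; for lattice MERT the same reduction is applied to the merged envelopes propagated along the topologically sorted lattice nodes, as described above.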
D08-1076
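One of the practices recommended in the text above is to explore random search directions obtained by connecting points on the surface of an M-dimensional hypersphere with its center, so that many feature weights are updated in a single line search. A minimal sketch of that sampling step (function name and seeding are assumptions):

```python
import math
import random

def random_directions(num_directions, num_features, seed=0):
    """Sample unit-length search directions uniformly on an M-dimensional
    hypersphere by normalizing i.i.d. Gaussian draws; each direction moves
    many feature weights at once, unlike a single coordinate axis."""
    rng = random.Random(seed)
    directions = []
    for _ in range(num_directions):
        d = [rng.gauss(0.0, 1.0) for _ in range(num_features)]
        norm = math.sqrt(sum(x * x for x in d)) or 1.0
        directions.append([x / norm for x in d])
    return directions
```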
latticebased minimum error rate training for statistical machine translationminimum error rate training is an effective means to estimate the feature function weights of a linear model such that an automated evaluation criterion for measuring system performance can directly be optimized in trainingto accomplish this the training procedure determines for each feature function its exact error surface on a given set of candidate translationsthe feature function weights are then adjusted by traversing the error surface combined over all sentences and picking those values for which the resulting error count reaches a minimumtypically candidates in mert are represented as nbest lists which contain the n most probable translation hypotheses produced by a decoderin this paper we present a novel algorithm that allows for efficiently constructing and representing the exact error surface of all translations that are encoded in a phrase latticecompared to nbest mert the number of candidate translations thus taken into account increases by several orders of magnitudesthe proposed method is used to train the feature function weights of a phrasebased statistical machine translation systemexperiments conducted on the nist 2008 translation tasks show significant runtime improvements and moderate bleu score gains over nbest mertwe find that first iterations of the tuning process produces very bad weights this exceptional performance drop is attributed to an overfitting on the candidate repositorywe present a procedure for conducting line optimisation directly over a word lattice encoding the hypotheses in cswe apply the sweepline algorithm to the union to discard redundant linear functions and their associated hypotheseswe theorize that an upper bound for the number of linear functions in the upper envelope at the final state is equal to the number of edges in the latticein our mert algorithm we compute the error statistics from the target sentences from the translation search space that are exactly those that are minimally discriminable by changing the feature weights along a single vector in the dimensions of the feature spacewe extend the mert algorithm so as to use the whole set of candidate translations compactly represented in the search lattice produced by the decoder instead of only a nbest list of candidates extracted from itwe find that the down hill simplex algorithm loses its robustness as the dimension goes up by more than 10
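Both the paper and the summary end with the same final step of the line search: the per-sentence error surfaces are merged and gamma is taken from the interval where the corpus-level error count is minimal. The sketch below assumes each sentence surface is given as a list of (boundary, error_delta) pairs together with the corpus error count for gamma approaching minus infinity; this input format and all names are illustrative, not the authors' code.

```python
import math
from itertools import groupby

def optimal_gamma(surfaces, base_error):
    """Traverse the merged error surface and pick the best gamma.

    surfaces:   one list per sentence of (boundary, delta) pairs, meaning
                that sentence's error count changes by delta when gamma
                crosses boundary from left to right
    base_error: corpus error count for gamma -> -infinity
    Returns (gamma, error) for the interval with the lowest total error."""
    events = sorted((b, d) for surface in surfaces for b, d in surface)
    # merge changes that occur at identical boundaries
    merged = [(b, sum(d for _, d in group))
              for b, group in groupby(events, key=lambda e: e[0])]
    if not merged:
        return 0.0, base_error
    best_error = error = base_error
    best_interval = (-math.inf, merged[0][0])
    for i, (boundary, delta) in enumerate(merged):
        error += delta
        if error < best_error:
            right = merged[i + 1][0] if i + 1 < len(merged) else math.inf
            best_error, best_interval = error, (boundary, right)
    left, right = best_interval
    if left == -math.inf:
        gamma = right - 1.0              # any point left of the first boundary
    elif right == math.inf:
        gamma = left + 1.0               # any point right of the last boundary
    else:
        gamma = 0.5 * (left + right)     # midpoint of the minimizing interval
    return gamma, best_error
```

In the full procedure this update is repeated per direction, the development corpus is redecoded with the new weights, and the loop stops once the candidate repositories (or lattices) remain unchanged and the optimal gamma is zero, as described above.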